[2024-07-20 12:29:27] INFO 📝 Pipeline data will be written to '/home/runner/.cache/distilabel/pipelines/embedding-queries/0beaa44a146caf40e3953a9ed6e6263c267f3c58/data'
[2024-07-20 12:29:27] INFO ⏳ Waiting for all the steps to load...
[2024-07-20 12:29:27] WARNING Since the `base_url=https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct` is available and either one of `model_id` or `endpoint_name` is also provided, the `base_url` will either be ignored or overwritten with the one generated from either of those args, for serverless or dedicated inference endpoints, respectively.
[last message repeated 1 more time]
[2024-07-20 12:29:29] INFO ⏳ Steps loaded: 5/5
 * 'load_data' workers: 1
 * 'generate_sentence_pair' workers: 1
 * 'multiply_queries' workers: 1
 * 'merge_columns' workers: 1
 * 'expand_columns_0' workers: 1
[2024-07-20 12:29:29] INFO ✅ All the steps have been loaded!
[2024-07-20 12:29:29] INFO 🧬 Starting yielding batches from generator step 'load_data'. Offset: 0
[2024-07-20 12:29:29] INFO 📨 Step 'load_data' sending batch 0 to output queue
[2024-07-20 12:29:29] INFO 📦 Processing batch 0 in 'generate_sentence_pair'
[2024-07-20 12:29:29] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
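The two `base_url` warnings above come from the pipeline's LLM wrapper (the pipeline code itself is not shown in the log; from the message it is presumably distilabel's `InferenceEndpointsLLM`). A minimal sketch, assuming the distilabel 1.x constructor, of a configuration that avoids the ambiguity by passing exactly one of `model_id` (serverless) or `base_url`/`endpoint_name` (dedicated endpoint):

```python
# Minimal sketch, assuming distilabel 1.x. Passing only `model_id` (or only
# `base_url`) means the client has nothing to ignore or overwrite, which is
# what the warning above is complaining about.
from distilabel.llms import InferenceEndpointsLLM

llm = InferenceEndpointsLLM(
    model_id="meta-llama/Meta-Llama-3-70B-Instruct",  # serverless Inference API
    # base_url="https://api-inference.huggingface.co/models/...",  # alternative: set this instead
)
```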
[2024-07-20 12:29:29] INFO 📨 Step 'generate_sentence_pair' sending batch 0 to output queue
[2024-07-20 12:29:29] INFO 📦 Processing batch 0 in 'multiply_queries'
[2024-07-20 12:29:29] INFO 📨 Step 'load_data' sending batch 1 to output queue
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 9 more times]
[2024-07-20 12:29:30] INFO 📨 Step 'multiply_queries' sending batch 0 to output queue
[2024-07-20 12:29:30] INFO 📦 Processing batch 1 in 'generate_sentence_pair'
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
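The `'NoneType' object has no attribute 'split'` warnings are a direct consequence of the failed calls: when the Inference Client returns no response, the raw generation handed to the task's output formatter is `None`, and the formatter calls `.split()` on it anyway. A hypothetical guard (the real 'multiply_queries' task is not shown in the log, and the `queries` column name is an assumption) would check for `None` first:

```python
# Hypothetical sketch of a None-safe output formatter, mirroring the shape of
# a distilabel Task.format_output. Not the pipeline's actual code.
from typing import Any, Dict, Optional

def format_output(output: Optional[str], input: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # `output` is None whenever the LLM call failed (the 400s above).
    if output is None:
        return {"queries": None}
    # Otherwise split the generation into one query per non-empty line.
    return {"queries": [line.strip() for line in output.split("\n") if line.strip()]}
```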
[2024-07-20 12:29:30] INFO 📨 Step 'generate_sentence_pair' sending batch 1 to output queue
[2024-07-20 12:29:30] INFO 📦 Processing batch 1 in 'multiply_queries'
[2024-07-20 12:29:30] INFO 📨 Step 'load_data' sending batch 2 to output queue
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 9 more times]
[2024-07-20 12:29:30] INFO 📨 Step 'multiply_queries' sending batch 1 to output queue
[2024-07-20 12:29:30] INFO 📦 Processing batch 2 in 'generate_sentence_pair'
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
[2024-07-20 12:29:30] INFO 📨 Step 'generate_sentence_pair' sending batch 2 to output queue
[2024-07-20 12:29:30] INFO 📨 Step 'load_data' sending batch 3 to output queue
[2024-07-20 12:29:30] INFO 📦 Processing batch 2 in 'multiply_queries'
[2024-07-20 12:29:30] INFO 📦 Processing batch 3 in 'generate_sentence_pair'
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 3 more times]
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 5 more times]
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 1 more time]
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:30] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:30] INFO 📨 Step 'generate_sentence_pair' sending batch 3 to output queue
[2024-07-20 12:29:31] INFO 📨 Step 'load_data' sending batch 4 to output queue
[2024-07-20 12:29:30] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:30] INFO 📨 Step 'multiply_queries' sending batch 2 to output queue
[2024-07-20 12:29:31] INFO 📦 Processing batch 3 in 'multiply_queries'
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 2 more times]
[2024-07-20 12:29:31] INFO 📦 Processing batch 4 in 'generate_sentence_pair'
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 9 more times]
[2024-07-20 12:29:31] INFO 📨 Step 'load_data' sending batch 5 to output queue
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 6 more times]
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:31] INFO 📦 Processing batch 0 in 'merge_columns'
[2024-07-20 12:29:31] INFO 📨 Step 'generate_sentence_pair' sending batch 4 to output queue
[2024-07-20 12:29:31] WARNING ⚠️ Processing batch 0 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:31] INFO 📦 Processing batch 5 in 'generate_sentence_pair'
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 1 more time]
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:31] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys

[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 1 more time]
[2024-07-20 12:29:31] INFO 📦 Processing batch 0 in 'expand_columns_0'
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
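The `ValueError` above is distilabel's `combine_dicts` enforcing that the merge step declares one output name per input column. Assuming 'merge_columns' is a distilabel 1.x `CombineColumns` step (the log's traceback points at `steps/combine.py`), a sketch of the contract; the column names here are hypothetical, not taken from the pipeline:

```python
# Sketch of the length contract enforced by combine_dicts, assuming the
# distilabel 1.x CombineColumns step. Column names are illustrative only.
from distilabel.steps import CombineColumns

merge_columns = CombineColumns(
    name="merge_columns",
    columns=["query", "queries"],              # two input columns...
    output_columns=["positive", "positives"],  # ...require exactly two output names
)
```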
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:31] INFO 📨 Step 'load_data' sending batch 6 to output queue
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:31] WARNING ⚠️ Processing batch 0 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:31] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable

[2024-07-20 12:29:31] INFO 📨 Step 'multiply_queries' sending batch 3 to output queue
[2024-07-20 12:29:31] INFO 📨 Step 'expand_columns_0' sending batch 0 to output queue
[2024-07-20 12:29:31] INFO 📦 Processing batch 4 in 'multiply_queries'
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 6 more times]
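This `TypeError` is the failure cascading: 'merge_columns' sent an empty batch filled with `None`s, and `ExpandColumns` then tried to iterate a `None` column. A hypothetical intermediate step (not part of the original pipeline; the `queries` column name and the distilabel 1.x custom-step API are assumptions) could drop the `None` rows before they reach 'expand_columns_0':

```python
# Hypothetical filter step, sketched against the distilabel 1.x Step API,
# that drops rows whose target column is None so ExpandColumns never
# receives a non-iterable value.
from typing import List
from distilabel.steps import Step, StepInput

class DropNoneRows(Step):
    column: str = "queries"  # assumed column name

    @property
    def inputs(self) -> List[str]:
        return [self.column]

    @property
    def outputs(self) -> List[str]:
        return [self.column]

    def process(self, inputs: StepInput):
        # Keep only rows where the column holds an actual (iterable) value.
        yield [row for row in inputs if row.get(self.column) is not None]
```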
[2024-07-20 12:29:31] INFO 📨 Step 'merge_columns' sending batch 0 to output queue
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 2 more times]
[2024-07-20 12:29:31] INFO 📨 Step 'generate_sentence_pair' sending batch 5 to output queue
[2024-07-20 12:29:31] INFO 📦 Processing batch 6 in 'generate_sentence_pair'
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 8 more times]
[2024-07-20 12:29:32] INFO 📨 Step 'load_data' sending batch 7 to output queue
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 3 more times]
[2024-07-20 12:29:32] INFO 📨 Step 'generate_sentence_pair' sending batch 6 to output queue
[2024-07-20 12:29:32] INFO 📦 Processing batch 7 in 'generate_sentence_pair'
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 6 more times]
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:32] INFO 📨 Step 'load_data' sending batch 8 to output queue
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 2 more times]
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:32] INFO 📨 Step 'generate_sentence_pair' sending batch 7 to output queue
[2024-07-20 12:29:32] INFO 📦 Processing batch 8 in 'generate_sentence_pair'
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 5 more times]
[2024-07-20 12:29:32] INFO 📨 Step 'load_data' sending batch 9 to output queue
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 2 more times]
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[last message repeated 1 more time]
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 1 more time]
[2024-07-20 12:29:31] INFO 📨 Step 'multiply_queries' sending batch 4 to output queue
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:31] INFO 📦 Processing batch 5 in 'multiply_queries'
[2024-07-20 12:29:32] INFO 📨 Step 'generate_sentence_pair' sending batch 8 to output queue
[2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:32] INFO 📦 Processing batch 9 in 'generate_sentence_pair'
[2024-07-20 12:29:33] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[last message repeated 13 more times]
Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:31] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:33] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:33] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:33] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:31] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:33] INFO 📨 Step 'generate_sentence_pair' sending batch 9 to output queue [2024-07-20 12:29:33] INFO 📦 Processing batch 1 in 'expand_columns_0' [2024-07-20 12:29:33] INFO 📦 Processing batch 10 in 'generate_sentence_pair' [2024-07-20 12:29:31] INFO 📨 Step 'multiply_queries' sending batch 5 to output queue [2024-07-20 12:29:33] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
[2024-07-20 12:29:33] INFO 📨 Step 'load_data' sending batch 10 to output queue
[2024-07-20 12:29:33] INFO 📨 Step 'load_data' sending batch 11 to output queue
[2024-07-20 12:29:32] INFO 📦 Processing batch 6 in 'multiply_queries'
[2024-07-20 12:29:33] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×5]
[2024-07-20 12:29:33] WARNING ⚠️ Processing batch 1 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:33] INFO 📦 Processing batch 1 in 'merge_columns'
[2024-07-20 12:29:33] WARNING ⚠️ Processing batch 1 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:33] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
[2024-07-20 12:29:33] INFO 📨 Step 'merge_columns' sending batch 1 to output queue
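The 'merge_columns' ValueError is most likely downstream damage rather than an independent bug: the batch it received was the empty, None-filled batch that 'expand_columns_0' had just emitted, so the keys it finds no longer line up with the configured output keys. The invariant the error enforces can be sketched as follows (illustrative only; the real combine_dicts lives in distilabel/pipeline/utils.py and the key names here are hypothetical):

    # Illustration of the length invariant behind the ValueError: one output
    # name is required per merge key.
    def combine_dicts(dicts, merge_keys, output_merge_keys):
        if len(output_merge_keys) != len(merge_keys):
            raise ValueError(
                "The length of output_merge_keys must be the same as the length of merge_keys"
            )
        combined = {}
        for out_key, key in zip(output_merge_keys, merge_keys):
            combined[out_key] = [d.get(key) for d in dicts]
        return combined

    merged = combine_dicts(
        [{"query": "a"}, {"query": "b"}],
        merge_keys=["query"],
        output_merge_keys=["queries"],
    )
    assert merged == {"queries": ["a", "b"]}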
[2024-07-20 12:29:33] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×8 between 12:29:32 and 12:29:33]
[2024-07-20 12:29:33] INFO 📨 Step 'expand_columns_0' sending batch 1 to output queue
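The 'expand_columns_0' TypeError has the same root cause: the column it tries to expand holds None (a failed generation), and iterating None inside zip_longest blows up. A None-safe expansion (a hypothetical helper mirroring the failing _expand_columns step, not distilabel's own code) would map such rows to an empty expansion instead:

    # Hypothetical None-safe column expansion: rows whose list-valued column
    # is None (a failed upstream generation) expand to nothing.
    def expand_column(row, column):
        values = row.get(column)
        if values is None:
            return []
        return [{**row, column: value} for value in values]

    assert expand_column({"queries": None}, "queries") == []
    assert expand_column({"queries": ["a", "b"]}, "queries") == [
        {"queries": "a"},
        {"queries": "b"},
    ]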
[2024-07-20 12:29:33] INFO 📨 Step 'generate_sentence_pair' sending batch 10 to output queue
[2024-07-20 12:29:33] INFO 📦 Processing batch 11 in 'generate_sentence_pair'
[2024-07-20 12:29:34] INFO 📨 Step 'load_data' sending batch 12 to output queue
[2024-07-20 12:29:34] INFO 📨 Step 'load_data' sending batch 13 to output queue
[2024-07-20 12:29:34] INFO 📨 Step 'generate_sentence_pair' sending batch 11 to output queue
[2024-07-20 12:29:34] INFO 📦 Processing batch 12 in 'generate_sentence_pair'
[2024-07-20 12:29:34] INFO 📨 Step 'generate_sentence_pair' sending batch 12 to output queue
[2024-07-20 12:29:34] INFO 📦 Processing batch 13 in 'generate_sentence_pair'
[2024-07-20 12:29:34] INFO 📨 Step 'load_data' sending batch 14 to output queue
[2024-07-20 12:29:34] INFO 📨 Step 'generate_sentence_pair' sending batch 13 to output queue
[2024-07-20 12:29:34] INFO 📦 Processing batch 14 in 'generate_sentence_pair'
[2024-07-20 12:29:35] INFO 📨 Step 'load_data' sending batch 15 to output queue
[2024-07-20 12:29:32] INFO 📨 Step 'multiply_queries' sending batch 6 to output queue
[2024-07-20 12:29:32] INFO 📦 Processing batch 7 in 'multiply_queries'
[2024-07-20 12:29:35] INFO 📨 Step 'generate_sentence_pair' sending batch 14 to output queue
[2024-07-20 12:29:35] INFO 📦 Processing batch 15 in 'generate_sentence_pair'
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~55 between 12:29:32 and 12:29:35]
[2024-07-20 12:29:32] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [message repeated ×~11]
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:32] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×3]
[2024-07-20 12:29:35] INFO 📦 Processing batch 2 in 'expand_columns_0'
[2024-07-20 12:29:35] WARNING ⚠️ Processing batch 2 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:35] WARNING Subprocess traceback: (same TypeError: 'NoneType' object is not iterable in 'expand.py' as for batch 1)
[2024-07-20 12:29:35] INFO 📨 Step 'expand_columns_0' sending batch 2 to output queue
[2024-07-20 12:29:35] INFO 📦 Processing batch 2 in 'merge_columns'
[2024-07-20 12:29:35] WARNING ⚠️ Processing batch 2 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
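From here the endpoint also starts answering 429: with five workers hitting the serverless Inference API at once, rate limiting kicks in on top of the 400s. A retry-with-exponential-backoff wrapper along these lines is the usual client-side remedy for the 429s, though it would not fix the 400s (a sketch, not a hook distilabel exposes here; the retry predicate is a placeholder):

    # Sketch: retry a callable with exponential backoff plus jitter, the usual
    # client-side response to HTTP 429 rate limiting.
    import random
    import time

    def call_with_backoff(fn, max_retries=5, base_delay=1.0):
        for attempt in range(max_retries):
            try:
                return fn()
            except Exception:  # placeholder: retry only on 429-like errors
                if attempt == max_retries - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.5))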
[2024-07-20 12:29:35] INFO 📨 Step 'load_data' sending batch 16 to output queue
[2024-07-20 12:29:35] WARNING Subprocess traceback: (same ValueError in combine_dicts as for batch 1)
[2024-07-20 12:29:35] INFO 📨 Step 'merge_columns' sending batch 2 to output queue
[2024-07-20 12:29:32] INFO 📨 Step 'multiply_queries' sending batch 7 to output queue
[2024-07-20 12:29:32] INFO 📦 Processing batch 8 in 'multiply_queries'
[2024-07-20 12:29:35] INFO 📨 Step 'generate_sentence_pair' sending batch 15 to output queue
[2024-07-20 12:29:35] INFO 📦 Processing batch 16 in 'generate_sentence_pair'
[2024-07-20 12:29:36] INFO 📨 Step 'load_data' sending batch 17 to output queue
[2024-07-20 12:29:36] INFO 📨 Step 'load_data' sending batch 18 to output queue
[2024-07-20 12:29:32] INFO 📨 Step 'multiply_queries' sending batch 8 to output queue
[2024-07-20 12:29:33] INFO 📦 Processing batch 9 in 'multiply_queries'
[2024-07-20 12:29:33] INFO 📨 Step 'multiply_queries' sending batch 9 to output queue
[2024-07-20 12:29:33] INFO 📦 Processing batch 10 in 'multiply_queries'
[2024-07-20 12:29:36] INFO 📨 Step 'load_data' sending batch 19 to output queue
[2024-07-20 12:29:36] INFO 📨 Step 'generate_sentence_pair' sending batch 16 to output queue
[2024-07-20 12:29:36] INFO 📦 Processing batch 17 in 'generate_sentence_pair'
[2024-07-20 12:29:32] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~30 between 12:29:32 and 12:29:33]
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~30 between 12:29:35 and 12:29:36]
[2024-07-20 12:29:32] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [message repeated ×~30 between 12:29:32 and 12:29:33]
[2024-07-20 12:29:37] INFO 📨 Step 'load_data' sending batch 20 to output queue
[2024-07-20 12:29:36] INFO 📨 Step 'generate_sentence_pair' sending batch 17 to output queue
[2024-07-20 12:29:36] INFO 📦 Processing batch 18 in 'generate_sentence_pair'
[2024-07-20 12:29:37] INFO 📦 Processing batch 3 in 'merge_columns'
[2024-07-20 12:29:37] WARNING ⚠️ Processing batch 3 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:37] WARNING Subprocess traceback: (same ValueError in combine_dicts as for batches 1 and 2)
[2024-07-20 12:29:37] INFO 📨 Step 'merge_columns' sending batch 3 to output queue
[2024-07-20 12:29:37] INFO 📦 Processing batch 3 in 'expand_columns_0'
[2024-07-20 12:29:37] WARNING ⚠️ Processing batch 3 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:37] WARNING Subprocess traceback: (same TypeError: 'NoneType' object is not iterable in 'expand.py' as for batches 1 and 2)
[2024-07-20 12:29:37] INFO 📨 Step 'expand_columns_0' sending batch 3 to output queue
[2024-07-20 12:29:33] INFO 📨 Step 'multiply_queries' sending batch 10 to output queue
[2024-07-20 12:29:34] INFO 📦 Processing batch 11 in 'multiply_queries'
[2024-07-20 12:29:37] INFO 📨 Step 'load_data' sending batch 21 to output queue
[2024-07-20 12:29:34] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~6]
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~14 between 12:29:36 and 12:29:37]
[2024-07-20 12:29:33] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [message repeated ×~11]
[2024-07-20 12:29:38] INFO 📨 Step 'load_data' sending batch 22 to output queue
[2024-07-20 12:29:36] INFO 📨 Step 'generate_sentence_pair' sending batch 18 to output queue
[2024-07-20 12:29:37] INFO 📦 Processing batch 19 in 'generate_sentence_pair'
[2024-07-20 12:29:34] INFO 📨 Step 'multiply_queries' sending batch 11 to output queue
[2024-07-20 12:29:34] INFO 📦 Processing batch 12 in 'multiply_queries'
[2024-07-20 12:29:37] INFO 📨 Step 'generate_sentence_pair' sending batch 19 to output queue
[2024-07-20 12:29:37] INFO 📦 Processing batch 20 in 'generate_sentence_pair'
[2024-07-20 12:29:34] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~16]
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [message repeated ×~17 between 12:29:36 and 12:29:37]
[2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [message repeated ×~11]
[2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct').
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] INFO 📨 Step 'load_data' sending batch 23 to output queue [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:39] INFO 📨 Step 'load_data' sending batch 24 to output queue [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] INFO 📨 Step 'generate_sentence_pair' sending batch 20 to output queue [2024-07-20 12:29:37] INFO 📦 Processing batch 21 in 'generate_sentence_pair' [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:34] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:39] INFO 📨 Step 'load_data' sending batch 25 to output queue [2024-07-20 12:29:34] INFO 📨 Step 'multiply_queries' sending batch 12 to output queue [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:34] INFO 📦 Processing batch 13 in 'multiply_queries' [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] INFO 📨 Step 'generate_sentence_pair' sending batch 21 to output queue [2024-07-20 12:29:38] INFO 📦 Processing batch 22 in 'generate_sentence_pair' [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] INFO 📨 Step 'generate_sentence_pair' sending batch 22 to output queue [2024-07-20 12:29:38] INFO 📦 Processing batch 23 in 'generate_sentence_pair' [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:39] INFO 📦 Processing batch 4 in 'merge_columns' [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:39] WARNING ⚠️ Processing batch 4 with step 'merge_columns' failed. Sending empty batch filled with `None`s... [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:39] INFO 📨 Step 'load_data' sending batch 26 to output queue [2024-07-20 12:29:39] INFO 📦 Processing batch 4 in 'expand_columns_0' [2024-07-20 12:29:39] WARNING ⚠️ Processing batch 4 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s... 
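Editor's note on the 'expand_columns_0' traceback above: when a step fails, the pipeline sends downstream an empty batch "filled with `None`s", so the column that the expand step expects to be a list arrives as None, and zip_longest(*[data, expanded_rows], fillvalue=input) raises TypeError: 'NoneType' object is not iterable. A minimal sketch of the failure and a possible guard, assuming a hypothetical row with a 'queries' column (not the pipeline's real data):

    from itertools import zip_longest

    # Hypothetical row: an upstream failure left the to-be-expanded column as None.
    row = {"anchor": "some sentence", "queries": None}
    data = row["queries"]
    expanded_rows = []

    try:
        # Mirrors the failing call in the traceback above.
        list(zip_longest(*[data, expanded_rows], fillvalue=row))
    except TypeError as exc:
        print(exc)  # 'NoneType' object is not iterable

    # A defensive variant: treat None as "nothing to expand" instead of crashing.
    safe_data = data if isinstance(data, list) else []
    for item, expanded in zip_longest(safe_data, expanded_rows, fillvalue=row):
        print(item, expanded)  # never runs here; the failed row is simply dropped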
[2024-07-20 12:29:39] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:39] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] INFO 📨 Step 'merge_columns' sending batch 4 to output queue
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] INFO 📨 Step 'expand_columns_0' sending batch 4 to output queue
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:38] INFO 📨 Step 'generate_sentence_pair' sending batch 23 to output queue
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:39] INFO 📦 Processing batch 24 in 'generate_sentence_pair'
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] INFO 📨 Step 'multiply_queries' sending batch 13 to output queue
[2024-07-20 12:29:35] INFO 📦 Processing batch 14 in 'multiply_queries'
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] INFO 📨 Step 'load_data' sending batch 27 to output queue
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] INFO 📨 Step 'load_data' sending batch 28 to output queue
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 400, message='Bad Request', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
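Editor's note on the repeated "failed to format output" warnings: they follow directly from the failed requests. When the Inference Client returns no response, the task's generation is None, and any formatter that calls .split on it raises AttributeError: 'NoneType' object has no attribute 'split'. A minimal None-safe formatter sketch; the 'queries' key and newline-splitting are assumptions, not the task's actual format:

    from typing import Dict, List, Optional

    def format_output(output: Optional[str]) -> Dict[str, Optional[List[str]]]:
        # Guard first: a failed request yields output=None, not a string.
        if output is None:
            return {"queries": None}
        # Assumed convention: one generated query per line.
        return {"queries": [line.strip() for line in output.split("\n") if line.strip()]}

    print(format_output(None))      # {'queries': None}
    print(format_output("q1\nq2"))  # {'queries': ['q1', 'q2']}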
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] INFO 📨 Step 'multiply_queries' sending batch 14 to output queue
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] INFO 📦 Processing batch 15 in 'multiply_queries'
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] INFO 📨 Step 'load_data' sending batch 29 to output queue
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] INFO 📨 Step 'generate_sentence_pair' sending batch 24 to output queue
[2024-07-20 12:29:39] INFO 📦 Processing batch 25 in 'generate_sentence_pair'
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] INFO 📨 Step 'generate_sentence_pair' sending batch 25 to output queue
[2024-07-20 12:29:39] INFO 📦 Processing batch 26 in 'generate_sentence_pair'
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] INFO 📨 Step 'load_data' sending batch 30 to output queue
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
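Editor's note on the 429 storms above: these are serverless Inference API rate limits rather than pipeline bugs, so the usual remedy is to slow the request rate. A jittered exponential-backoff wrapper is one option; this is a generic sketch (the send_request callable and RateLimitError type are stand-ins, not distilabel's retry machinery):

    import random
    import time
    from typing import Callable, TypeVar

    T = TypeVar("T")

    class RateLimitError(Exception):
        """Stand-in for an HTTP 429 error raised by the client."""

    def with_backoff(send_request: Callable[[], T], max_retries: int = 5, base: float = 1.0) -> T:
        for attempt in range(max_retries):
            try:
                return send_request()
            except RateLimitError:
                if attempt == max_retries - 1:
                    raise
                # Exponential backoff with jitter to avoid synchronized retries.
                time.sleep(base * 2 ** attempt + random.uniform(0.0, 1.0))
        raise RuntimeError("unreachable")

Lowering the pipeline's worker concurrency or batch size would reduce the request rate in the same way.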
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:35] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:35] INFO 📨 Step 'multiply_queries' sending batch 15 to output queue
[2024-07-20 12:29:40] INFO 📨 Step 'generate_sentence_pair' sending batch 26 to output queue
[2024-07-20 12:29:36] INFO 📦 Processing batch 16 in 'multiply_queries'
[2024-07-20 12:29:40] INFO 📦 Processing batch 27 in 'generate_sentence_pair'
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] INFO 📨 Step 'generate_sentence_pair' sending batch 27 to output queue
[2024-07-20 12:29:40] INFO 📦 Processing batch 28 in 'generate_sentence_pair'
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:42] INFO 📨 Step 'load_data' sending batch 31 to output queue
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:42] INFO 📦 Processing batch 5 in 'merge_columns'
[2024-07-20 12:29:42] INFO 📦 Processing batch 5 in 'expand_columns_0'
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:40] INFO 📨 Step 'generate_sentence_pair' sending batch 28 to output queue
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:41] INFO 📦 Processing batch 29 in 'generate_sentence_pair'
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:41] INFO 📨 Step 'generate_sentence_pair' sending batch 29 to output queue
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:42] WARNING ⚠️ Processing batch 5 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:42] INFO 📨 Step 'load_data' sending batch 32 to output queue
[2024-07-20 12:29:42] WARNING ⚠️ Processing batch 5 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
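Editor's note on the cascade: the batches that "failed. Sending empty batch filled with `None`s..." are the ones whose rows never got a generation, and those `None`s are what later break 'merge_columns' and 'expand_columns_0'. One possible mitigation (a sketch with illustrative column names, not the pipeline's real schema) is to filter incomplete rows before the combining steps:

    # Illustrative rows; in the real pipeline these come from the generation steps.
    rows = [
        {"anchor": "a", "queries": ["q1", "q2"]},
        {"anchor": "b", "queries": None},  # request failed upstream (400/429)
    ]

    # Keep only rows whose generation actually succeeded.
    complete = [r for r in rows if r.get("queries") is not None]
    print(complete)  # [{'anchor': 'a', 'queries': ['q1', 'q2']}]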
[2024-07-20 12:29:42] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process yield combine_dicts( File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts raise ValueError( ValueError: The length of output_merge_keys must be the same as the length of merge_keys [2024-07-20 12:29:42] INFO 📨 Step 'merge_columns' sending batch 5 to output queue [2024-07-20 12:29:42] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process yield [row for input in inputs for row in self._expand_columns(input)] File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in yield [row for input in inputs for row in self._expand_columns(input)] File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input): TypeError: 'NoneType' object is not iterable [2024-07-20 12:29:42] INFO 📨 Step 'expand_columns_0' sending batch 5 to output queue [2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:36] INFO 📨 Step 'multiply_queries' sending batch 16 to output queue [2024-07-20 12:29:36] INFO 📦 Processing batch 17 in 'multiply_queries' [2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] INFO 📦 Processing batch 30 in 'generate_sentence_pair'
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] INFO 📨 Step 'load_data' sending batch 33 to output queue
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] INFO 📨 Step 'generate_sentence_pair' sending batch 30 to output queue
[2024-07-20 12:29:42] INFO 📦 Processing batch 31 in 'generate_sentence_pair'
[2024-07-20 12:29:42] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:43] INFO 📨 Step 'load_data' sending batch 34 to output queue
[2024-07-20 12:29:42] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:42] INFO 📨 Step 'generate_sentence_pair' sending batch 31 to output queue
[2024-07-20 12:29:42] INFO 📦 Processing batch 32 in 'generate_sentence_pair'
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] INFO 📨 Step 'generate_sentence_pair' sending batch 32 to output queue
[2024-07-20 12:29:43] INFO 📦 Processing batch 33 in 'generate_sentence_pair'
[2024-07-20 12:29:43] INFO 📨 Step 'load_data' sending batch 35 to output queue
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] INFO 📨 Step 'generate_sentence_pair' sending batch 33 to output queue
[2024-07-20 12:29:43] INFO 📦 Processing batch 34 in 'generate_sentence_pair'
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] INFO 📦 Processing batch 6 in 'merge_columns'
[2024-07-20 12:29:44] WARNING ⚠️ Processing batch 6 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:44] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] INFO 📦 Processing batch 6 in 'expand_columns_0'
[2024-07-20 12:29:44] WARNING ⚠️ Processing batch 6 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:44] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:36] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] INFO 📨 Step 'expand_columns_0' sending batch 6 to output queue
[2024-07-20 12:29:44] INFO 📨 Step 'merge_columns' sending batch 6 to output queue
[2024-07-20 12:29:36] INFO 📨 Step 'multiply_queries' sending batch 17 to output queue
[2024-07-20 12:29:44] INFO 📨 Step 'load_data' sending batch 36 to output queue
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] INFO 📨 Step 'generate_sentence_pair' sending batch 34 to output queue
[2024-07-20 12:29:44] INFO 📦 Processing batch 35 in 'generate_sentence_pair'
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
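Note: the recurring TypeError in 'expand_columns_0' is a knock-on effect. Once 'merge_columns' ships a batch filled with `None`s, the column that _expand_columns expects to be a list is `None`, and iterating it (via the zip_longest call in the traceback) raises. A defensive sketch of the expansion step, assuming dict-shaped rows; the pass-through guard is an illustration, not distilabel's actual fix:

def expand_column(row: dict, column: str) -> list[dict]:
    data = row.get(column)
    if data is None:
        # Upstream failure produced None instead of a list: pass the row
        # through unexpanded rather than raising TypeError as in the log.
        return [row]
    # With a real list, mirror the expansion: one output row per item.
    return [dict(row, **{column: item}) for item in data]

# expand_column({"id": 1, "queries": ["q1", "q2"]}, "queries")
# -> [{"id": 1, "queries": "q1"}, {"id": 1, "queries": "q2"}]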
[2024-07-20 12:29:36] INFO 📦 Processing batch 18 in 'multiply_queries'
[2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] INFO 📨 Step 'generate_sentence_pair' sending batch 35 to output queue
[2024-07-20 12:29:44] INFO 📨 Step 'load_data' sending batch 37 to output queue
[2024-07-20 12:29:44] INFO 📦 Processing batch 36 in 'generate_sentence_pair'
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] INFO 📨 Step 'generate_sentence_pair' sending batch 36 to output queue
[2024-07-20 12:29:45] INFO 📦 Processing batch 37 in 'generate_sentence_pair'
[2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] INFO 📨 Step 'load_data' sending batch 38 to output queue
[2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] INFO 📨 Step 'load_data' sending batch 39 to output queue
[2024-07-20 12:29:45] INFO 📨 Step 'generate_sentence_pair' sending batch 37 to output queue
[2024-07-20 12:29:45] INFO 📦 Processing batch 38 in 'generate_sentence_pair'
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:37] INFO 📨 Step 'multiply_queries' sending batch 18 to output queue
[2024-07-20 12:29:37] INFO 📦 Processing batch 19 in 'multiply_queries'
[2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] INFO 📨 Step 'load_data' sending batch 40 to output queue
[2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
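Note: the steady stream of 'multiply_queries' format warnings is the same failure surfacing one layer up. When a 429 leaves the generation as None, the task's output parser calls .split on it. A tolerant parser sketch; the function name, field names, and newline-splitting convention are assumptions, not distilabel's actual format_output:

def format_output(generation: str | None) -> dict:
    if generation is None:
        # No model response (e.g. after a 429): keep the row but mark it
        # unparsed, which is what "Saving raw response" amounts to above.
        return {"queries": None, "raw_response": None}
    queries = [line.strip() for line in generation.split("\n") if line.strip()]
    return {"queries": queries, "raw_response": generation}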
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] INFO 📨 Step 'generate_sentence_pair' sending batch 38 to output queue [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] INFO 📦 Processing batch 39 in 'generate_sentence_pair' [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:46] INFO 📦 Processing batch 7 in 'merge_columns' [2024-07-20 12:29:46] WARNING ⚠️ Processing batch 7 with step 'merge_columns' failed. Sending empty batch filled with `None`s... [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:46] INFO 📨 Step 'load_data' sending batch 41 to output queue [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:46] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process yield combine_dicts( File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts raise ValueError( ValueError: The length of output_merge_keys must be the same as the length of merge_keys [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] INFO 📨 Step 'multiply_queries' sending batch 19 to output queue [2024-07-20 12:29:37] INFO 📦 Processing batch 20 in 'multiply_queries' [2024-07-20 12:29:46] INFO 📦 Processing batch 7 in 'expand_columns_0' [2024-07-20 12:29:46] INFO 📨 Step 'merge_columns' sending batch 7 to output queue [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Processing batch 7 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s... [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process yield [row for input in inputs for row in self._expand_columns(input)] File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in yield [row for input in inputs for row in self._expand_columns(input)] File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input): TypeError: 'NoneType' object is not iterable [2024-07-20 12:29:46] INFO 📨 Step 'expand_columns_0' sending batch 7 to output queue [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] INFO 📨 Step 'generate_sentence_pair' sending batch 39 to output queue [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] INFO 📦 Processing batch 40 in 'generate_sentence_pair' [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] INFO 📨 Step 'load_data' sending batch 42 to output queue [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. 
[2024-07-20 12:29:37] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:37] INFO 📨 Step 'multiply_queries' sending batch 20 to output queue
[2024-07-20 12:29:38] INFO 📦 Processing batch 21 in 'multiply_queries'
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:38] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:47] INFO 📨 Step 'load_data' sending batch 43 to output queue
[2024-07-20 12:29:47] INFO 📨 Step 'load_data' sending batch 44 to output queue
[2024-07-20 12:29:46] INFO 📨 Step 'generate_sentence_pair' sending batch 40 to output queue
[2024-07-20 12:29:46] INFO 📦 Processing batch 41 in 'generate_sentence_pair'
[2024-07-20 12:29:46] INFO 📨 Step 'generate_sentence_pair' sending batch 41 to output queue
[2024-07-20 12:29:46] INFO 📦 Processing batch 42 in 'generate_sentence_pair'
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:47] INFO 📨 Step 'load_data' sending batch 45 to output queue
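Every one of the 429 warnings above is a request the serverless endpoint rejected for rate limiting: the Inference Client gets no response back, and distilabel records `None` as the generation and keeps going. When calling the same endpoint directly, a small exponential-backoff wrapper is usually enough to ride out the limit. The sketch below is illustrative only: `generate_with_backoff`, `MODEL_ID`, and the generation arguments are assumptions, not part of this pipeline; only the `huggingface_hub.InferenceClient` API is taken as given.

import time

from huggingface_hub import InferenceClient
from huggingface_hub.utils import HfHubHTTPError

MODEL_ID = "meta-llama/Meta-Llama-3-70B-Instruct"  # model id from the log above


def generate_with_backoff(prompt: str, max_retries: int = 5) -> str:
    """Retry a serverless text-generation call, backing off exponentially on 429."""
    client = InferenceClient(model=MODEL_ID)
    for attempt in range(max_retries):
        try:
            return client.text_generation(prompt, max_new_tokens=512)
        except HfHubHTTPError as err:
            # Retry only on rate limiting; any other HTTP error should surface.
            if err.response is not None and err.response.status_code == 429:
                time.sleep(2**attempt)  # 1s, 2s, 4s, 8s, 16s
                continue
            raise
    raise RuntimeError(f"still rate-limited after {max_retries} attempts")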
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:38] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:38] INFO 📨 Step 'multiply_queries' sending batch 21 to output queue
[2024-07-20 12:29:38] INFO 📦 Processing batch 22 in 'multiply_queries'
[2024-07-20 12:29:48] INFO 📦 Processing batch 8 in 'merge_columns'
[2024-07-20 12:29:48] WARNING ⚠️ Processing batch 8 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:48] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys

[2024-07-20 12:29:48] INFO 📨 Step 'merge_columns' sending batch 8 to output queue
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
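The 'merge_columns' failure is unrelated to the rate limiting: `combine_dicts` raises because the step was configured with a different number of output column names than input column names. Assuming the step is distilabel's `CombineColumns` (the log only shows the step name), a minimal sketch of the mismatch and the fix, with placeholder pipeline and column names:

from distilabel.pipeline import Pipeline
from distilabel.steps import CombineColumns

with Pipeline(name="repro") as pipeline:  # placeholder pipeline, for context only
    # Misconfigured: two input columns, one output name. This passes __init__
    # and only fails at runtime inside combine_dicts(), as in the traceback above.
    bad = CombineColumns(
        name="merge_columns",
        columns=["query", "multiplied_queries"],
        output_columns=["queries"],
    )

    # Fixed: one output name per input column, so the two lists have equal length.
    good = CombineColumns(
        name="merge_columns_fixed",
        columns=["query", "multiplied_queries"],
        output_columns=["query", "multiplied_queries"],
    )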
[2024-07-20 12:29:38] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:47] INFO 📨 Step 'generate_sentence_pair' sending batch 42 to output queue
[2024-07-20 12:29:48] INFO 📨 Step 'load_data' sending batch 46 to output queue
[2024-07-20 12:29:47] INFO 📦 Processing batch 43 in 'generate_sentence_pair'
[2024-07-20 12:29:48] INFO 📦 Processing batch 8 in 'expand_columns_0'
[2024-07-20 12:29:48] WARNING ⚠️ Processing batch 8 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:48] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable

[2024-07-20 12:29:48] INFO 📨 Step 'expand_columns_0' sending batch 8 to output queue
[2024-07-20 12:29:48] INFO 📨 Step 'load_data' sending batch 47 to output queue
[2024-07-20 12:29:38] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
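'expand_columns_0' then fails on the batch of `None`s that the failed 'merge_columns' batch produced: the list comprehension in `expand.py` ends up iterating a `None` column value. One defensive option is a small filter step between the two. The sketch below uses distilabel's documented custom-step API; the class name and the 'queries' column are hypothetical.

from typing import TYPE_CHECKING, List

from distilabel.steps import Step, StepInput

if TYPE_CHECKING:
    from distilabel.steps.typing import StepOutput


class DropNoneRows(Step):
    # Hypothetical filter step: drops the placeholder rows that a failed
    # upstream batch injects. The 'queries' column name is a guess.

    @property
    def inputs(self) -> List[str]:
        return ["queries"]

    @property
    def outputs(self) -> List[str]:
        return ["queries"]

    def process(self, inputs: StepInput) -> "StepOutput":
        # Keep only rows whose expandable column holds an actual value, so the
        # zip_longest inside _expand_columns never receives None.
        yield [row for row in inputs if row is not None and row.get("queries") is not None]

Wired between 'merge_columns' and 'expand_columns_0', a step like this would keep the placeholder rows from ever reaching `_expand_columns`.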
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:38] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:47] INFO 📨 Step 'generate_sentence_pair' sending batch 43 to output queue
[2024-07-20 12:29:38] INFO 📨 Step 'multiply_queries' sending batch 22 to output queue
[2024-07-20 12:29:39] INFO 📦 Processing batch 23 in 'multiply_queries'
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:47] INFO 📦 Processing batch 44 in 'generate_sentence_pair'
[2024-07-20 12:29:48] INFO 📨 Step 'load_data' sending batch 48 to output queue
[2024-07-20 12:29:39] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:47] INFO 📨 Step 'generate_sentence_pair' sending batch 44 to output queue
[2024-07-20 12:29:47] INFO 📦 Processing batch 45 in 'generate_sentence_pair'
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] INFO 📨 Step 'load_data' sending batch 49 to output queue
[2024-07-20 12:29:39] INFO 📨 Step 'multiply_queries' sending batch 23 to output queue
[2024-07-20 12:29:39] INFO 📦 Processing batch 24 in 'multiply_queries'
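The recurring "failed to format output: 'NoneType' object has no attribute 'split'" warnings are the same root cause surfacing inside the task itself: `format_output` receives `None` whenever the client got no response, and the parsing code calls `.split()` without a guard. A `None`-safe version might look like the sketch below; the task body is a hypothetical reconstruction of 'multiply_queries', and only the `format_output` signature comes from distilabel's `Task` API.

from typing import Any, Dict, Union

from distilabel.steps.tasks import TextGeneration


class MultiplyQueries(TextGeneration):
    # Hypothetical reconstruction: the prompt asks for one rewritten query per
    # line, and format_output splits the completion on newlines.

    def format_output(
        self, output: Union[str, None], input: Dict[str, Any]
    ) -> Dict[str, Any]:
        # `output` is None whenever the Inference Client received no response
        # (the 429s above), so guard before calling .split().
        if output is None:
            return {"queries": None}
        return {"queries": [line.strip() for line in output.split("\n") if line.strip()]}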
[2024-07-20 12:29:49] INFO 📨 Step 'load_data' sending batch 50 to output queue
[2024-07-20 12:29:39] INFO 📨 Step 'multiply_queries' sending batch 24 to output queue
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] INFO 📦 Processing batch 9 in 'merge_columns'
[2024-07-20 12:29:39] INFO 📦 Processing batch 25 in 'multiply_queries'
[2024-07-20 12:29:39] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] WARNING ⚠️ Processing batch 9 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:49] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys

[2024-07-20 12:29:49] INFO 📨 Step 'merge_columns' sending batch 9 to output queue
[2024-07-20 12:29:39] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:50] INFO 📦 Processing batch 9 in 'expand_columns_0'
[2024-07-20 12:29:50] INFO 📨 Step 'load_data' sending batch 51 to output queue
[2024-07-20 12:29:50] WARNING ⚠️ Processing batch 9 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:50] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable

[2024-07-20 12:29:50] INFO 📨 Step 'expand_columns_0' sending batch 9 to output queue
[2024-07-20 12:29:48] INFO 📨 Step 'generate_sentence_pair' sending batch 45 to output queue
[2024-07-20 12:29:50] INFO 📨 Step 'load_data' sending batch 52 to output queue
[2024-07-20 12:29:48] INFO 📦 Processing batch 46 in 'generate_sentence_pair'
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:39] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:39] INFO 📨 Step 'multiply_queries' sending batch 25 to output queue
[2024-07-20 12:29:40] INFO 📦 Processing batch 26 in 'multiply_queries'
[2024-07-20 12:29:40] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:50] INFO 📨 Step 'load_data' sending batch 53 to output queue
[2024-07-20 12:29:48] INFO 📨 Step 'generate_sentence_pair' sending batch 46 to output queue
[2024-07-20 12:29:48] INFO 📦 Processing batch 47 in 'generate_sentence_pair'
[2024-07-20 12:29:40] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:51] INFO 📨 Step 'load_data' sending batch 54 to output queue
[2024-07-20 12:29:40] INFO 📨 Step 'multiply_queries' sending batch 26 to output queue
[2024-07-20 12:29:40] INFO 📦 Processing batch 27 in 'multiply_queries'
[2024-07-20 12:29:48] INFO 📨 Step 'generate_sentence_pair' sending batch 47 to output queue
[2024-07-20 12:29:49] INFO 📦 Processing batch 48 in 'generate_sentence_pair'
[2024-07-20 12:29:51] INFO 📨 Step 'load_data' sending batch 55 to output queue
[2024-07-20 12:29:40] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:40] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:40] INFO 📨 Step 'multiply_queries' sending batch 27 to output queue [2024-07-20 12:29:40] INFO 📦 Processing batch 28 in 'multiply_queries' [2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:51] INFO 📦 Processing batch 10 in 'merge_columns' [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:51] WARNING ⚠️ Processing batch 10 with step 'merge_columns' failed. Sending empty batch filled with `None`s... [2024-07-20 12:29:51] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process yield combine_dicts( File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts raise ValueError( ValueError: The length of output_merge_keys must be the same as the length of merge_keys [2024-07-20 12:29:51] INFO 📨 Step 'merge_columns' sending batch 10 to output queue [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:49] INFO 📨 Step 'generate_sentence_pair' sending batch 48 to output queue [2024-07-20 12:29:49] INFO 📦 Processing batch 49 in 'generate_sentence_pair' [2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:52] INFO 📦 Processing batch 10 in 'expand_columns_0' [2024-07-20 12:29:52] WARNING ⚠️ Processing batch 10 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s... 
[2024-07-20 12:29:52] INFO 📦 Processing batch 10 in 'expand_columns_0'
[2024-07-20 12:29:52] WARNING ⚠️ Processing batch 10 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:52] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:52] INFO 📨 Step 'expand_columns_0' sending batch 10 to output queue
[2024-07-20 12:29:51] INFO 📨 Step 'load_data' sending batch 56 to output queue
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
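The TypeError in 'expand_columns_0' is a knock-on effect rather than an independent bug: 'merge_columns' has just sent an empty batch filled with `None`s, and the expansion step then hands a None to zip_longest, which expects an iterable. One way to stop the propagation is a small guard step in front of the expansion that drops rows whose to-be-expanded column is missing. A sketch assuming the distilabel 1.x custom-step API, with a hypothetical column name:

    from distilabel.steps import Step, StepInput
    from distilabel.steps.typing import StepOutput

    class DropNoneRows(Step):
        """Filter out rows where `column` is None before they reach ExpandColumns."""

        column: str = "queries"  # hypothetical name of the list-valued column

        @property
        def inputs(self) -> list[str]:
            return [self.column]

        @property
        def outputs(self) -> list[str]:
            return [self.column]

        def process(self, *inputs: StepInput) -> "StepOutput":
            for batch in inputs:
                # Keep only rows whose column survived the upstream failures.
                yield [row for row in batch if row.get(self.column) is not None]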
[2024-07-20 12:29:41] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:41] INFO 📨 Step 'multiply_queries' sending batch 28 to output queue
[2024-07-20 12:29:41] INFO 📦 Processing batch 29 in 'multiply_queries'
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:41] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:52] INFO 📨 Step 'load_data' sending batch 57 to output queue
[2024-07-20 12:29:52] INFO 📨 Step 'load_data' sending batch 58 to output queue
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
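The recurring "failed to format output: 'NoneType' object has no attribute 'split'" warnings point at the custom 'multiply_queries' task: its format_output evidently splits the raw generation into individual queries, and whenever a rate-limited call leaves the generation as None, that .split() raises, so distilabel falls back to saving the raw response. A defensive variant, assuming a hypothetical one-query-per-line output format (the real task's parsing may differ):

    def format_output(output: str | None) -> dict:
        """Tolerant sketch: never call .split() on a generation that never arrived."""
        if output is None:  # the LLM call failed upstream (e.g. the 429s above)
            return {"queries": None}
        # Hypothetical format: one generated query per line, optionally bulleted.
        queries = [line.lstrip("- ").strip() for line in output.split("\n") if line.strip()]
        return {"queries": queries}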
[2024-07-20 12:29:41] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:41] INFO 📨 Step 'multiply_queries' sending batch 29 to output queue
[2024-07-20 12:29:42] INFO 📦 Processing batch 30 in 'multiply_queries'
[2024-07-20 12:29:42] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:42] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:49] INFO 📨 Step 'generate_sentence_pair' sending batch 49 to output queue
[2024-07-20 12:29:49] INFO 📦 Processing batch 50 in 'generate_sentence_pair'
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] INFO 📨 Step 'load_data' sending batch 59 to output queue
[2024-07-20 12:29:42] INFO 📨 Step 'multiply_queries' sending batch 30 to output queue
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:42] INFO 📦 Processing batch 31 in 'multiply_queries'
[2024-07-20 12:29:42] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] INFO 📨 Step 'generate_sentence_pair' sending batch 50 to output queue
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:53] INFO 📨 Step 'load_data' sending batch 60 to output queue
[2024-07-20 12:29:50] INFO 📦 Processing batch 51 in 'generate_sentence_pair'
[2024-07-20 12:29:50] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] INFO 📦 Processing batch 11 in 'expand_columns_0'
[2024-07-20 12:29:54] INFO 📦 Processing batch 11 in 'merge_columns'
[2024-07-20 12:29:54] WARNING ⚠️ Processing batch 11 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:54] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
[2024-07-20 12:29:54] INFO 📨 Step 'merge_columns' sending batch 11 to output queue
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] WARNING ⚠️ Processing batch 11 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:54] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:54] INFO 📨 Step 'expand_columns_0' sending batch 11 to output queue
[2024-07-20 12:29:43] INFO 📨 Step 'multiply_queries' sending batch 31 to output queue
[2024-07-20 12:29:43] INFO 📦 Processing batch 32 in 'multiply_queries'
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:50] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] INFO 📨 Step 'load_data' sending batch 61 to output queue
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:50] INFO 📨 Step 'generate_sentence_pair' sending batch 51 to output queue
[2024-07-20 12:29:50] INFO 📦 Processing batch 52 in 'generate_sentence_pair'
[2024-07-20 12:29:50] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] INFO 📨 Step 'load_data' sending batch 62 to output queue
[2024-07-20 12:29:50] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:50] INFO 📨 Step 'generate_sentence_pair' sending batch 52 to output queue
[2024-07-20 12:29:50] INFO 📦 Processing batch 53 in 'generate_sentence_pair'
[2024-07-20 12:29:51] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] INFO 📨 Step 'multiply_queries' sending batch 32 to output queue
[2024-07-20 12:29:43] INFO 📦 Processing batch 33 in 'multiply_queries'
[2024-07-20 12:29:54] INFO 📨 Step 'load_data' sending batch 63 to output queue
[2024-07-20 12:29:51] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:51] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:51] INFO 📨 Step 'generate_sentence_pair' sending batch 53 to output queue
[2024-07-20 12:29:51] INFO 📦 Processing batch 54 in 'generate_sentence_pair'
[2024-07-20 12:29:55] INFO 📨 Step 'load_data' sending batch 64 to output queue
[2024-07-20 12:29:51] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:51] INFO 📨 Step 'generate_sentence_pair' sending batch 54 to output queue
[2024-07-20 12:29:51] INFO 📦 Processing batch 55 in 'generate_sentence_pair'
[2024-07-20 12:29:55] INFO 📨 Step 'load_data' sending batch 65 to output queue
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:51] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:51] INFO 📨 Step 'generate_sentence_pair' sending batch 55 to output queue
[2024-07-20 12:29:52] INFO 📦 Processing batch 56 in 'generate_sentence_pair'
[2024-07-20 12:29:52] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] INFO 📦 Processing batch 12 in 'merge_columns'
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:52] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:52] INFO 📨 Step 'generate_sentence_pair' sending batch 56 to output queue
[2024-07-20 12:29:55] INFO 📦 Processing batch 12 in 'expand_columns_0'
[2024-07-20 12:29:52] INFO 📦 Processing batch 57 in 'generate_sentence_pair'
[2024-07-20 12:29:55] WARNING ⚠️ Processing batch 12 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:55] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:55] INFO 📨 Step 'expand_columns_0' sending batch 12 to output queue
[2024-07-20 12:29:55] WARNING ⚠️ Processing batch 12 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:55] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
[2024-07-20 12:29:55] INFO 📨 Step 'merge_columns' sending batch 12 to output queue
[2024-07-20 12:29:52] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:43] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:43] INFO 📨 Step 'multiply_queries' sending batch 33 to output queue
[2024-07-20 12:29:43] INFO 📦 Processing batch 34 in 'multiply_queries'
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:52] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:52] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:52] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:52] INFO 📨 Step 'generate_sentence_pair' sending batch 57 to output queue [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:52] INFO 📦 Processing batch 58 in 'generate_sentence_pair' [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. 
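
Each "failed to format output" warning above follows a 429: the Inference Client returned no text, the task's output parser then calls `.split()` on `None`, and distilabel falls back to saving the raw response. A None-tolerant parser sketch (the column name `queries` and this `format_output` body are assumptions, not the pipeline's actual code):

    def format_output(generation: str | None) -> dict:
        # Rate-limited requests produce generation=None; calling
        # generation.split() on it is what raises
        # "'NoneType' object has no attribute 'split'".
        if generation is None:
            return {"queries": None}
        return {"queries": [q.strip() for q in generation.split("\n") if q.strip()]}

    print(format_output(None))        # -> {'queries': None}
    print(format_output("q1\nq2"))    # -> {'queries': ['q1', 'q2']}

Note that the guarded `None` still propagates: it is exactly these rows that later crash 'expand_columns_0' and 'merge_columns'.
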
[2024-07-20 12:29:56] INFO 📨 Step 'load_data' sending batch 66 to output queue
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:56] INFO 📨 Step 'load_data' sending batch 67 to output queue
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:56] INFO 📨 Step 'load_data' sending batch 68 to output queue
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] INFO 📨 Step 'multiply_queries' sending batch 34 to output queue
[2024-07-20 12:29:44] INFO 📦 Processing batch 35 in 'multiply_queries'
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:57] INFO 📨 Step 'load_data' sending batch 69 to output queue
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] INFO 📨 Step 'generate_sentence_pair' sending batch 58 to output queue
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] INFO 📦 Processing batch 59 in 'generate_sentence_pair'
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:57] INFO 📨 Step 'load_data' sending batch 70 to output queue
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:53] INFO 📨 Step 'generate_sentence_pair' sending batch 59 to output queue
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
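
The 429 responses above arrive faster than the serverless endpoint's rate limit resets, so every retry-free call fails the same way. A generic client-side sketch using only the standard library (`generate` stands in for whatever callable wraps the Inference Client; it is an assumption, not distilabel's API):

    import random
    import time

    def generate_with_backoff(generate, prompt, max_retries=6):
        """Retry `generate(prompt)` with exponential backoff plus jitter."""
        for attempt in range(max_retries):
            try:
                return generate(prompt)
            except Exception:  # ideally catch only the HTTP 429 error type
                time.sleep(min(60.0, 2.0 ** attempt) + random.random())
        return None  # caller must tolerate a missing generation
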
[2024-07-20 12:29:53] INFO 📦 Processing batch 60 in 'generate_sentence_pair'
[2024-07-20 12:29:53] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:57] INFO 📦 Processing batch 13 in 'merge_columns'
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:57] WARNING ⚠️ Processing batch 13 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:57] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
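
This ValueError is the second knock-on failure: 'merge_columns' receives a batch that an upstream step already filled with `None`s, presumably leaving the keys derived from those rows out of step with the configured output names. A reconstruction of the length invariant that `combine_dicts` enforces, sketched from the error message above rather than copied from distilabel:

    def combine_dicts_sketch(dicts, merge_keys, output_merge_keys):
        # The check that fails in the traceback above.
        if len(output_merge_keys) != len(merge_keys):
            raise ValueError(
                "The length of output_merge_keys must be the same as the length of merge_keys"
            )
        # One output column per merged input key, values gathered across dicts.
        return {
            out_key: [d.get(key) for d in dicts]
            for key, out_key in zip(merge_keys, output_merge_keys)
        }

    rows = [{"query": "a"}, {"query": "b"}]
    print(combine_dicts_sketch(rows, ["query"], ["queries"]))  # {'queries': ['a', 'b']}
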
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:57] INFO 📦 Processing batch 13 in 'expand_columns_0'
[2024-07-20 12:29:57] WARNING ⚠️ Processing batch 13 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:57] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:29:44] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:57] INFO 📨 Step 'expand_columns_0' sending batch 13 to output queue
[2024-07-20 12:29:57] INFO 📨 Step 'merge_columns' sending batch 13 to output queue
[2024-07-20 12:29:44] INFO 📨 Step 'multiply_queries' sending batch 35 to output queue
[2024-07-20 12:29:44] INFO 📦 Processing batch 36 in 'multiply_queries'
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] INFO 📨 Step 'generate_sentence_pair' sending batch 60 to output queue
[2024-07-20 12:29:54] INFO 📦 Processing batch 61 in 'generate_sentence_pair'
[2024-07-20 12:29:57] INFO 📨 Step 'load_data' sending batch 71 to output queue
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:58] INFO 📨 Step 'load_data' sending batch 72 to output queue
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] INFO 📨 Step 'generate_sentence_pair' sending batch 61 to output queue
[2024-07-20 12:29:54] INFO 📦 Processing batch 62 in 'generate_sentence_pair'
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:58] INFO 📨 Step 'load_data' sending batch 73 to output queue
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] INFO 📨 Step 'multiply_queries' sending batch 36 to output queue
[2024-07-20 12:29:45] INFO 📦 Processing batch 37 in 'multiply_queries'
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:58] INFO 📨 Step 'load_data' sending batch 74 to output queue
[2024-07-20 12:29:54] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:54] INFO 📨 Step 'generate_sentence_pair' sending batch 62 to output queue
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] INFO 📦 Processing batch 63 in 'generate_sentence_pair'
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:59] INFO 📨 Step 'load_data' sending batch 75 to output queue
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:55] INFO 📨 Step 'generate_sentence_pair' sending batch 63 to output queue
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:55] INFO 📦 Processing batch 64 in 'generate_sentence_pair' [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:59] INFO 📦 Processing batch 14 in 'merge_columns' [2024-07-20 12:29:59] INFO 📦 Processing batch 14 in 'expand_columns_0' [2024-07-20 12:29:59] WARNING ⚠️ Processing batch 14 with step 'merge_columns' failed. Sending empty batch filled with `None`s... [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:59] WARNING ⚠️ Processing batch 14 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s... [2024-07-20 12:29:59] INFO 📨 Step 'load_data' sending batch 76 to output queue [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:59] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process yield combine_dicts( File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts raise ValueError( ValueError: The length of output_merge_keys must be the same as the length of merge_keys [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:59] INFO 📨 Step 'merge_columns' sending batch 14 to output queue [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:59] WARNING Subprocess traceback: Traceback (most recent call last): File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop result = next(self.step.process_applying_mappings(*batch.data)) File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings for output_rows in generator: File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process yield [row for input in inputs for row in self._expand_columns(input)] File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in yield [row for input in inputs for row in self._expand_columns(input)] File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input): TypeError: 'NoneType' object is not iterable [2024-07-20 12:29:59] INFO 📨 Step 'expand_columns_0' sending batch 14 to output queue [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] INFO 📨 Step 'generate_sentence_pair' sending batch 64 to output queue [2024-07-20 12:29:55] INFO 📦 Processing batch 65 in 'generate_sentence_pair' [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] INFO 📨 Step 'multiply_queries' sending batch 37 to output queue [2024-07-20 12:29:45] INFO 📦 Processing batch 38 in 'multiply_queries' [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:55] INFO 📨 Step 'generate_sentence_pair' sending batch 65 to output queue [2024-07-20 12:29:56] INFO 📦 Processing batch 66 in 'generate_sentence_pair' [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. 
Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:45] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:00] INFO 📨 Step 'load_data' sending batch 77 to output queue [2024-07-20 12:30:00] INFO 📨 Step 'load_data' sending batch 78 to output queue [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:45] INFO 📨 Step 'multiply_queries' sending batch 38 to output queue [2024-07-20 12:29:45] INFO 📦 Processing batch 39 in 'multiply_queries' [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] INFO 📨 Step 'generate_sentence_pair' sending batch 66 to output queue [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] INFO 📦 Processing batch 67 in 'generate_sentence_pair' [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:56] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:46] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
[2024-07-20 12:29:46] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:30:01] INFO 📨 Step 'load_data' sending batch 79 to output queue
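The repeated 'multiply_queries' warnings are a knock-on effect of the 429s: with no generation returned, the task's format_output receives None and calls .split on it, raising the AttributeError. A None guard in the custom task avoids this; a minimal sketch, assuming distilabel 1.x's Task.format_output interface and a hypothetical 'queries' output column:

    # Hypothetical guard for a custom distilabel Task's format_output
    # ('multiply_queries' is user code here; this is only a sketch).
    from typing import Any, Dict, Union

    def format_output(
        self, output: Union[str, None], input: Dict[str, Any]
    ) -> Dict[str, Any]:
        # When every request was rate limited, the generation is None;
        # return a None field instead of splitting and crashing.
        if output is None:
            return {"queries": None}
        return {"queries": [q.strip() for q in output.split("\n") if q.strip()]}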
[2024-07-20 12:29:46] INFO 📨 Step 'multiply_queries' sending batch 39 to output queue
[2024-07-20 12:29:46] INFO 📦 Processing batch 40 in 'multiply_queries'
[2024-07-20 12:29:56] INFO 📨 Step 'generate_sentence_pair' sending batch 67 to output queue
[2024-07-20 12:29:56] INFO 📦 Processing batch 68 in 'generate_sentence_pair'
[2024-07-20 12:29:57] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:57] INFO 📨 Step 'generate_sentence_pair' sending batch 68 to output queue
[2024-07-20 12:29:57] INFO 📦 Processing batch 69 in 'generate_sentence_pair'
[2024-07-20 12:29:57] INFO 📨 Step 'generate_sentence_pair' sending batch 69 to output queue
[2024-07-20 12:29:57] INFO 📦 Processing batch 70 in 'generate_sentence_pair'
[2024-07-20 12:29:57] INFO 📨 Step 'generate_sentence_pair' sending batch 70 to output queue
[2024-07-20 12:29:58] INFO 📦 Processing batch 71 in 'generate_sentence_pair'
[2024-07-20 12:29:58] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:01] INFO 📨 Step 'load_data' sending batch 80 to output queue
[2024-07-20 12:29:58] INFO 📨 Step 'generate_sentence_pair' sending batch 71 to output queue
[2024-07-20 12:29:58] INFO 📦 Processing batch 72 in 'generate_sentence_pair'
[2024-07-20 12:30:02] INFO 📦 Processing batch 15 in 'merge_columns'
[2024-07-20 12:30:02] WARNING ⚠️ Processing batch 15 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:02] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys

[2024-07-20 12:30:02] INFO 📨 Step 'merge_columns' sending batch 15 to output queue
[2024-07-20 12:30:02] INFO 📦 Processing batch 15 in 'expand_columns_0'
[2024-07-20 12:30:02] WARNING ⚠️ Processing batch 15 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:02] WARNING Subprocess traceback:

Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable

[2024-07-20 12:30:02] INFO 📨 Step 'expand_columns_0' sending batch 15 to output queue
[2024-07-20 12:30:02] INFO 📨 Step 'load_data' sending batch 81 to output queue
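The ValueError is a configuration bug rather than a transient failure: the 'merge_columns' step was given a list of output column names whose length differs from the list of input columns. A sketch of the matching-length configuration, assuming the step is distilabel's CombineColumns (the column names below are guesses for illustration, not taken from this pipeline's source):

    # Likely shape of the fix: `columns` and `output_columns` must have
    # the same length in distilabel's CombineColumns step.
    from distilabel.steps import CombineColumns

    merge_columns = CombineColumns(
        name="merge_columns",
        columns=["query", "queries"],         # two inputs to merge...
        output_columns=["query", "queries"],  # ...need exactly two output names
    )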
[2024-07-20 12:29:46] INFO 📨 Step 'multiply_queries' sending batch 40 to output queue
[2024-07-20 12:29:46] INFO 📦 Processing batch 41 in 'multiply_queries'
[2024-07-20 12:29:46] INFO 📨 Step 'multiply_queries' sending batch 41 to output queue
[2024-07-20 12:29:47] INFO 📦 Processing batch 42 in 'multiply_queries'
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:58] INFO 📨 Step 'generate_sentence_pair' sending batch 72 to output queue
[2024-07-20 12:29:58] INFO 📦 Processing batch 73 in 'generate_sentence_pair'
[2024-07-20 12:29:58] INFO 📨 Step 'generate_sentence_pair' sending batch 73 to output queue
[2024-07-20 12:29:59] INFO 📦 Processing batch 74 in 'generate_sentence_pair'
[2024-07-20 12:29:59] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:59] INFO 📨 Step 'generate_sentence_pair' sending batch 74 to output queue
[2024-07-20 12:29:59] INFO 📦 Processing batch 75 in 'generate_sentence_pair'
[2024-07-20 12:29:59] INFO 📨 Step 'generate_sentence_pair' sending batch 75 to output queue
[2024-07-20 12:29:59] INFO 📦 Processing batch 76 in 'generate_sentence_pair'
[2024-07-20 12:29:59] INFO 📨 Step 'generate_sentence_pair' sending batch 76 to output queue
[2024-07-20 12:30:00] INFO 📦 Processing batch 77 in 'generate_sentence_pair'
[2024-07-20 12:30:00] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:02] INFO 📨 Step 'load_data' sending batch 82 to output queue
[2024-07-20 12:30:03] INFO 📨 Step 'load_data' sending batch 83 to output queue
[2024-07-20 12:30:03] INFO 📨 Step 'load_data' sending batch 84 to output queue
[2024-07-20 12:30:04] INFO 📨 Step 'load_data' sending batch 85 to output queue
[2024-07-20 12:30:04] INFO 📦 Processing batch 16 in 'merge_columns'
[2024-07-20 12:30:04] WARNING ⚠️ Processing batch 16 with step 'merge_columns' failed. Sending empty batch filled with `None`s... (same ValueError traceback as batch 15)
[2024-07-20 12:30:04] INFO 📨 Step 'merge_columns' sending batch 16 to output queue
[2024-07-20 12:30:04] INFO 📦 Processing batch 16 in 'expand_columns_0'
[2024-07-20 12:30:04] WARNING ⚠️ Processing batch 16 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s... (same TypeError traceback as batch 15)
[2024-07-20 12:30:04] INFO 📨 Step 'expand_columns_0' sending batch 16 to output queue
[2024-07-20 12:30:04] INFO 📨 Step 'load_data' sending batch 86 to output queue
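The 'expand_columns_0' TypeError is a cascade of the merge failure: the empty batch filled with `None`s reaches ExpandColumns, which then tries to iterate a None value. Fixing 'merge_columns' removes the root cause; a defensive alternative is to filter the None rows before expanding. A hypothetical guard step, assuming distilabel's @step decorator for custom steps and a guessed 'queries' column name:

    # Hypothetical filter (not from this pipeline): drop None-filled rows
    # emitted by a failed upstream step before they reach ExpandColumns.
    from distilabel.steps import StepInput, step
    from distilabel.steps.typing import StepOutput

    @step(inputs=["queries"], outputs=["queries"])
    def DropFailedRows(inputs: StepInput) -> StepOutput:
        # ExpandColumns iterates the column's value; None is not iterable,
        # hence the TypeError above. Forward only rows with a real value.
        yield [row for row in inputs if row.get("queries") is not None]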
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:29:47] INFO 📨 Step 'multiply_queries' sending batch 42 to output queue [2024-07-20 12:29:47] INFO 📦 Processing batch 43 in 'multiply_queries' [2024-07-20 12:30:05] INFO 📨 Step 'load_data' sending batch 87 to output queue [2024-07-20 12:30:00] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:00] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:00] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:00] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:00] INFO 📨 Step 'generate_sentence_pair' sending batch 77 to output queue [2024-07-20 12:30:00] INFO 📦 Processing batch 78 in 'generate_sentence_pair' [2024-07-20 12:30:00] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:05] INFO 📨 Step 'load_data' sending batch 88 to output queue [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct') [2024-07-20 12:30:01] INFO 📨 Step 'generate_sentence_pair' sending batch 78 to output queue [2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response. [2024-07-20 12:30:01] INFO 📦 Processing batch 79 in 'generate_sentence_pair' [2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). 
[2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
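
Every "failed to format output" warning is a downstream symptom of those 429s: the generation arrives as None and the task's parser calls .split() on it. A minimal sketch of a defensive parser, assuming 'multiply_queries' is a custom distilabel Task whose completion is one query per line (the class body and the 'queries' column name are illustrative assumptions):

    from typing import Any, Dict, Union

    from distilabel.steps.tasks import Task

    class MultiplyQueries(Task):
        # ... inputs/outputs and prompt formatting omitted; only the parser
        # relevant to the warning above is sketched.

        def format_output(
            self, output: Union[str, None], input: Union[Dict[str, Any], None] = None
        ) -> Dict[str, Any]:
            # `output` is None whenever the Inference Client returned no
            # response (the 429s above), so guard before calling .split().
            if output is None:
                return {"queries": None}
            return {"queries": [q.strip() for q in output.split("\n") if q.strip()]}
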
[2024-07-20 12:30:01] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:01] INFO 📨 Step 'generate_sentence_pair' sending batch 79 to output queue
[2024-07-20 12:30:05] INFO 📨 Step 'load_data' sending batch 89 to output queue
[2024-07-20 12:30:02] INFO 📦 Processing batch 80 in 'generate_sentence_pair'
[2024-07-20 12:30:02] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:47] INFO 📨 Step 'multiply_queries' sending batch 43 to output queue
[2024-07-20 12:29:47] INFO 📦 Processing batch 44 in 'multiply_queries'
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:06] INFO 📨 Step 'load_data' sending batch 90 to output queue
[2024-07-20 12:29:47] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:02] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:06] INFO 📦 Processing batch 17 in 'merge_columns'
[2024-07-20 12:30:06] WARNING ⚠️ Processing batch 17 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:02] INFO 📨 Step 'generate_sentence_pair' sending batch 80 to output queue
[2024-07-20 12:30:06] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/combine.py", line 119, in process
    yield combine_dicts(
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/utils.py", line 39, in combine_dicts
    raise ValueError(
ValueError: The length of output_merge_keys must be the same as the length of merge_keys
[2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:30:06] INFO 📦 Processing batch 17 in 'expand_columns_0'
[2024-07-20 12:30:06] WARNING ⚠️ Processing batch 17 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:02] INFO 📦 Processing batch 81 in 'generate_sentence_pair'
[2024-07-20 12:30:02] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:06] INFO 📨 Step 'merge_columns' sending batch 17 to output queue
[2024-07-20 12:30:06] INFO 📨 Step 'load_data' sending batch 91 to output queue
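
Unlike the rate limiting, the combine_dicts ValueError above is deterministic: the step was configured with mismatched column lists, and combine_dicts requires one output name per merged column. Assuming 'merge_columns' is distilabel's CombineColumns step (the column names below are placeholders), the shape of the bug and the fix might look like:

    from distilabel.steps import CombineColumns

    # Fails as in the traceback: two input columns but one output column,
    # so len(output_merge_keys) != len(merge_keys) inside combine_dicts.
    # merge_columns = CombineColumns(
    #     name="merge_columns",
    #     columns=["positive", "queries"],
    #     output_columns=["positive"],
    # )

    # One output name per input column satisfies combine_dicts.
    merge_columns = CombineColumns(
        name="merge_columns",
        columns=["positive", "queries"],
        output_columns=["positive", "queries"],
    )
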
[2024-07-20 12:30:06] WARNING Subprocess traceback:
Traceback (most recent call last):
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/pipeline/local.py", line 512, in _non_generator_process_loop
    result = next(self.step.process_applying_mappings(*batch.data))
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/base.py", line 512, in process_applying_mappings
    for output_rows in generator:
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in process
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 111, in <listcomp>
    yield [row for input in inputs for row in self._expand_columns(input)]
  File "/home/runner/UnlawfulBothVoxel/.pythonlibs/lib/python3.10/site-packages/distilabel/steps/expand.py", line 126, in _expand_columns
    for item, expanded in zip_longest(*[data, expanded_rows], fillvalue=input):
TypeError: 'NoneType' object is not iterable
[2024-07-20 12:30:02] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:06] INFO 📨 Step 'expand_columns_0' sending batch 17 to output queue
[2024-07-20 12:29:47] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:47] INFO 📨 Step 'multiply_queries' sending batch 44 to output queue
[2024-07-20 12:29:48] INFO 📦 Processing batch 45 in 'multiply_queries'
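
The TypeError above is cascade damage rather than a new bug: 'merge_columns' failed and, as the earlier warning says, sent a batch filled with `None`s, which _expand_columns then tries to iterate. One way to stop the cascade is to filter out None rows before the expansion step; a sketch assuming distilabel's custom-step interface, with DropNoneRows and the 'queries' column as made-up names:

    from distilabel.steps import Step, StepInput

    class DropNoneRows(Step):
        """Drop rows whose 'queries' value is missing, e.g. from a failed upstream batch."""

        @property
        def inputs(self) -> list[str]:
            return ["queries"]

        @property
        def outputs(self) -> list[str]:
            return ["queries"]

        def process(self, *inputs: StepInput):
            for batch in inputs:
                # Keep only rows that actually carry something to expand.
                yield [
                    row for row in batch
                    if row is not None and row.get("queries") is not None
                ]

Wired as merge_columns >> DropNoneRows() >> expand_columns_0, a failed upstream batch would then shrink the dataset instead of crashing the expansion.
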
[2024-07-20 12:30:02] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:02] INFO 📨 Step 'generate_sentence_pair' sending batch 81 to output queue
[2024-07-20 12:30:03] INFO 📦 Processing batch 82 in 'generate_sentence_pair'
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:06] INFO 📨 Step 'load_data' sending batch 92 to output queue
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:03] INFO 📨 Step 'generate_sentence_pair' sending batch 82 to output queue
[2024-07-20 12:30:03] INFO 📦 Processing batch 83 in 'generate_sentence_pair'
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:48] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:30:07] INFO 📨 Step 'load_data' sending batch 93 to output queue
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:48] INFO 📨 Step 'multiply_queries' sending batch 45 to output queue
[2024-07-20 12:29:48] INFO 📦 Processing batch 46 in 'multiply_queries'
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:07] INFO 📨 Step 'load_data' sending batch 94 to output queue
[2024-07-20 12:30:03] INFO 📨 Step 'generate_sentence_pair' sending batch 83 to output queue
[2024-07-20 12:29:48] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:48] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:29:48] INFO 📨 Step 'multiply_queries' sending batch 46 to output queue
[2024-07-20 12:29:48] INFO 📦 Processing batch 47 in 'multiply_queries'
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:03] INFO 📦 Processing batch 84 in 'generate_sentence_pair'
[2024-07-20 12:30:08] INFO 📨 Step 'load_data' sending batch 95 to output queue
[2024-07-20 12:30:08] INFO 🏁 Finished running step 'load_data'
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:08] INFO 📦 Processing batch 18 in 'merge_columns'
[2024-07-20 12:30:08] WARNING ⚠️ Processing batch 18 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:08] WARNING Subprocess traceback: same ValueError in combine_dicts as batch 17 above
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:08] INFO 📨 Step 'merge_columns' sending batch 18 to output queue
[2024-07-20 12:29:49] WARNING Task 'multiply_queries' failed to format output: 'NoneType' object has no attribute 'split'. Saving raw response.
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:08] INFO 📦 Processing batch 18 in 'expand_columns_0'
[2024-07-20 12:30:08] INFO 📦 Processing batch 19 in 'merge_columns'
[2024-07-20 12:30:08] WARNING ⚠️ Processing batch 18 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:08] WARNING ⚠️ Processing batch 19 with step 'merge_columns' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:08] WARNING Subprocess traceback: same TypeError in _expand_columns as batch 17 above
[2024-07-20 12:29:49] INFO 📨 Step 'multiply_queries' sending batch 47 to output queue
[2024-07-20 12:30:08] INFO 📨 Step 'expand_columns_0' sending batch 18 to output queue
[2024-07-20 12:30:08] INFO 📦 Processing batch 19 in 'expand_columns_0'
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:29:49] INFO 📦 Processing batch 48 in 'multiply_queries'
[2024-07-20 12:30:08] WARNING ⚠️ Processing batch 19 with step 'expand_columns_0' failed. Sending empty batch filled with `None`s...
[2024-07-20 12:30:08] INFO 📨 Step 'expand_columns_0' sending batch 19 to output queue
[2024-07-20 12:29:49] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:30:08] INFO 🏁 Finished running step 'expand_columns_0'
[2024-07-20 12:30:03] WARNING ⚠️ Received no response using Inference Client (model: 'meta-llama/Meta-Llama-3-70B-Instruct'). Finish reason was: 429, message='Too Many Requests', url=URL('https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-70B-Instruct')
[2024-07-20 12:36:15] INFO 💾 Loading `_BatchManager` from cache: '/home/runner/.cache/distilabel/pipelines/embedding-queries/0beaa44a146caf40e3953a9ed6e6263c267f3c58/batch_manager.json'
[2024-07-20 12:36:15] INFO 💾 Loaded batch manager from cache doesn't contain any remaining data. Returning `Distiset` from cache data...
[2024-07-20 12:51:53] INFO 💾 Loading `_BatchManager` from cache: '/home/runner/.cache/distilabel/pipelines/embedding-queries/0beaa44a146caf40e3953a9ed6e6263c267f3c58/batch_manager.json'
[2024-07-20 12:51:53] INFO 💾 Loaded batch manager from cache doesn't contain any remaining data. Returning `Distiset` from cache data...
[2024-07-20 13:02:13] INFO 💾 Loading `_BatchManager` from cache: '/home/runner/.cache/distilabel/pipelines/embedding-queries/0beaa44a146caf40e3953a9ed6e6263c267f3c58/batch_manager.json'
[2024-07-20 13:02:13] INFO 💾 Loaded batch manager from cache doesn't contain any remaining data. Returning `Distiset` from cache data...
[2024-07-20 13:04:12] INFO 💾 Loading `_BatchManager` from cache: '/home/runner/.cache/distilabel/pipelines/embedding-queries/0beaa44a146caf40e3953a9ed6e6263c267f3c58/batch_manager.json'
[2024-07-20 13:04:12] INFO 💾 Loaded batch manager from cache doesn't contain any remaining data. Returning `Distiset` from cache data...
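
The four cache hits at the end explain the quiet finish: each re-run reloads the `_BatchManager` state from the pipeline's cache directory, finds no pending batches, and returns the previously built Distiset without touching the API. A sketch of that behaviour, assuming the standard Pipeline.run entry point:

    from distilabel.pipeline import Pipeline

    with Pipeline(name="embedding-queries") as pipeline:
        ...  # same steps as the original run

    # use_cache=True (the default) loads batch_manager.json; with no batches
    # remaining, run() short-circuits and returns the Distiset from cache.
    distiset = pipeline.run(use_cache=True)

    # A fresh run that re-issues every request would instead use:
    # distiset = pipeline.run(use_cache=False)
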