SwapAnything is a new method that allows swapping any object in an image with personalized concepts given by a reference image.
Key points: 1️⃣ It uses pre-trained diffusion models to enable precise and high-fidelity object swapping in images. 2️⃣ Targeted variable swapping ensures perfect background preservation while swapping specific areas. 3️⃣ SwapAnything achieves good results in single-object, multi-object, partial-object, and cross-domain swapping tasks.
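To make the background-preservation idea concrete, here is a minimal numpy sketch of masked latent blending: only the masked object region takes values from the edited latent, while everything else keeps the source latent. The `masked_swap_step` helper and the toy latents are illustrative placeholders, not the paper's exact set of swapped variables.

```python
import numpy as np

def masked_swap_step(source_latent, edited_latent, mask):
    """Blend latents so only the masked (object) region is swapped.
    mask is 1 inside the region to swap and 0 elsewhere, so the background
    always keeps the source latent values."""
    return mask * edited_latent + (1 - mask) * source_latent

# Toy latents standing in for diffusion states at one denoising step.
rng = np.random.default_rng(0)
source = rng.normal(size=(4, 64, 64))   # latent of the original image
edited = rng.normal(size=(4, 64, 64))   # latent being steered toward the reference concept
mask = np.zeros((1, 64, 64))
mask[:, 16:48, 16:48] = 1.0             # hypothetical object region

blended = masked_swap_step(source, edited, mask)
assert np.allclose(blended[:, 0, 0], source[:, 0, 0])   # background stays untouched
```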
Anthropic introduces "Many-shot Jailbreaking" (MSJ), a new attack on large language models! MSJ exploits long context windows to override safety constraints.
Key Points: * Prompts LLMs with hundreds of examples of harmful behavior formatted as a dialogue * Generates malicious examples using an uninhibited "helpful-only" model * Effective at jailbreaking models like Claude 2.0, GPT-3.5, GPT-4 * Standard alignment techniques provide limited protection against long context attacks
Google DeepMind introduces Gecko, a new text embedding model! Gecko uses a two-step process that leverages synthetic data generation and reranking.
Keypoints: * Uses an LLM to generate diverse synthetic queries and tasks from web passages * Refines the data by retrieving candidate passages and relabeling positives/negatives using the same LLM * Achieves very good results on the Massive Text Embedding Benchmark, where the compact 256D Gecko outperforms 768D models. * The 768D Gecko achieves state-of-the-art performance, competing with much larger models.
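As a rough illustration of the two-step recipe, here is a plain-Python sketch; the prompts and the `llm` callable are placeholders rather than Gecko's actual prompts or models.

```python
from typing import Callable, List, Tuple

def make_synthetic_pair(passage: str, llm: Callable[[str], str]) -> Tuple[str, str]:
    # Step 1: the LLM invents a retrieval task and a matching query for a seed passage.
    task = llm(f"Describe a retrieval task that this passage could answer:\n{passage}")
    query = llm(f"Write a search query for the task '{task}' that is answered by:\n{passage}")
    return task, query

def relabel(query: str, candidates: List[str], llm: Callable[[str], str]) -> Tuple[str, str]:
    # Step 2: retrieved candidates are rescored by the same LLM; the best-scoring passage
    # becomes the positive (it may differ from the seed) and the worst a hard negative.
    def score(candidate: str) -> int:
        reply = llm(f"Rate from 0 to 10 how well this passage answers '{query}':\n{candidate}")
        digits = "".join(ch for ch in reply if ch.isdigit())
        return int(digits) if digits else 0
    ranked = sorted(candidates, key=score)
    return ranked[-1], ranked[0]
```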
A new paper titled "Long-Form Factuality in Large Language Models" proposes a new approach to evaluate the long-form factuality of large language models using an AI agent! They introduce SAFE (Search-Augmented Factuality Evaluator) which leverages an LLM to break down responses into individual facts, query Google to verify each fact, and perform multi-step reasoning.
Keypoints: * SAFE (Search-Augmented Factuality Evaluator) is an automated method using an LLM agent to evaluate factuality * It also introduces LongFact, a 2,280-prompt set spanning 38 topics to test open-domain factual knowledge * SAFE agrees with human annotators 72% of the time while being 20x cheaper, and it wins 76% of the disagreement cases re-examined in a small-scale experiment using a more thorough human procedure (researchers + full internet search). * Larger models like GPT-4, Claude Opus and Gemini Ultra tend to exhibit better long-form factuality.
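A minimal sketch of what a SAFE-style loop looks like in code; the `llm` and `search` callables and the prompts are placeholders, not the released implementation.

```python
from typing import Callable, Dict, List

def safe_style_eval(response: str,
                    llm: Callable[[str], str],
                    search: Callable[[str], str]) -> Dict[str, str]:
    # Split the long-form answer into atomic facts, look each one up, and ask the
    # model whether the retrieved evidence supports it.
    facts: List[str] = [line.strip() for line in
                        llm(f"List each atomic fact in the text, one per line:\n{response}").splitlines()
                        if line.strip()]
    verdicts: Dict[str, str] = {}
    for fact in facts:
        query = llm(f"Write a Google search query to verify this fact: {fact}")
        evidence = search(query)
        verdicts[fact] = llm(
            f"Evidence:\n{evidence}\n\nIs the fact '{fact}' supported? "
            "Answer 'supported' or 'not supported'.")
    return verdicts
```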
A new paper introduces Visual CoT, a new approach that enhances multi-modal large language models with visual chain-of-thought reasoning capabilities. This allows language models to dynamically identify and focus on specific regions within images that are most relevant for answering questions, mimicking human-like efficient visual reasoning.
Keypoints: * Introduces the 373k Visual CoT dataset with bounding box annotations highlighting essential image regions * Proposes a multi-turn pipeline for focusing on relevant visual inputs * Achieves strong results on multi-modal benchmarks
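A tiny illustration of the multi-turn idea, assuming a hypothetical `mllm` callable that first returns a bounding box and then answers from the cropped region.

```python
from typing import Callable
from PIL import Image

def visual_cot_answer(image: Image.Image, question: str,
                      mllm: Callable[[Image.Image, str], str]) -> str:
    # Turn 1: ask which region of the image is needed (pixel coordinates x1,y1,x2,y2).
    box_reply = mllm(image, f"{question}\nReply only with the bounding box x1,y1,x2,y2 "
                            "of the region needed to answer.")
    x1, y1, x2, y2 = (int(v) for v in box_reply.replace(" ", "").split(","))
    # Turn 2: answer again from the zoomed-in crop of that region.
    crop = image.crop((x1, y1, x2, y2))
    return mllm(crop, f"Using this zoomed-in region, answer: {question}")
```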
"Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts" is a new framework designed to animate specific regions within an image through user inputs.
Key points: * Enables precise animation of selected image regions with just a user click and a concise motion description. * Achieves promising results for generating localized animations.
Synth^2 is a new approach that leverages large language models and text-to-image generators to create synthetic image-caption data for boosting visual-language model performance.
Key Points: * Overcomes data limitations by generating high-quality synthetic image-caption pairs, reducing reliance on costly human annotations. * Achieves competitive results on image captioning tasks using 40x less paired data than state-of-the-art methods.
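A short sketch of how such a synthetic data loop could be wired up; `llm` and `text_to_image` are placeholder callables, not the models used in the paper.

```python
from typing import Any, Callable, List, Tuple

def synth_pairs(seed_topics: List[str],
                llm: Callable[[str], str],
                text_to_image: Callable[[str], Any]) -> List[Tuple[Any, str]]:
    # An LLM writes captions, a text-to-image generator renders them, and the
    # resulting synthetic pairs supplement (not replace) human-annotated data.
    pairs = []
    for topic in seed_topics:
        caption = llm(f"Write one short, concrete image caption about: {topic}")
        image = text_to_image(caption)
        pairs.append((image, caption))
    return pairs
```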
A recent paper titled "ShortGPT: Layers in Large Language Models are More Redundant Than You Expect" proposes a simple and effective approach to pruning Large Language Models (LLMs) by removing redundant layers.
Key points: * Discovers significant redundancy across layers in LLMs, with some layers playing a negligible role for the final performance. * Defines a new metric called Block Influence (BI) to quantify the importance of each layer in an LLM. * Removes layers with low BI scores, achieving up to 25% reduction in parameters and computation while maintaining 92% of the LLM's performance.
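A small numpy sketch of Block Influence as described above, i.e. one minus the mean cosine similarity between a block's input and output hidden states, plus a simple selection of the lowest-scoring layers; shapes and the pruning policy are simplified for illustration.

```python
import numpy as np
from typing import List

def block_influence(layer_in: np.ndarray, layer_out: np.ndarray) -> float:
    """1 - mean cosine similarity between a block's input and output hidden states,
    computed over tokens. Shapes: (num_tokens, hidden_dim). Low BI means the layer
    barely transforms its input."""
    num = (layer_in * layer_out).sum(axis=-1)
    den = np.linalg.norm(layer_in, axis=-1) * np.linalg.norm(layer_out, axis=-1) + 1e-8
    return float(1.0 - (num / den).mean())

def layers_to_prune(hidden_states: List[np.ndarray], num_remove: int) -> List[int]:
    # hidden_states[i] is the input to layer i, hidden_states[i + 1] its output;
    # the layers with the lowest Block Influence are the pruning candidates.
    scores = [block_influence(hidden_states[i], hidden_states[i + 1])
              for i in range(len(hidden_states) - 1)]
    return sorted(range(len(scores)), key=lambda i: scores[i])[:num_remove]
```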
A recent paper titled "Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters" proposes using fine-tuned Multimodal Language Models (MLMs) as high-quality filters for image-text data.
Key points: * Defines multiple metrics to assess image-text quality from different perspectives like object details, text quality, and semantic understanding. * Leverages GPT-4 and GPT-4V to construct high-quality instruction data for fine-tuning open-source MLMs as effective data filters. * Fine-tuned MLM filters generate more precise scores, leading to better filtered data and improved performance of pre-trained models on various downstream tasks.
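A minimal sketch of the filtering step, where the hypothetical `score_fn` stands in for the fine-tuned MLM filter and the threshold is an arbitrary placeholder.

```python
from typing import Callable, Dict, List, Tuple

def filter_pairs(pairs: List[Tuple[str, str]],
                 score_fn: Callable[[str, str], Dict[str, float]],
                 threshold: float = 0.5) -> List[Tuple[str, str]]:
    # score_fn returns per-axis quality scores in [0, 1],
    # e.g. {"object_detail": ..., "text_quality": ..., "semantics": ...}.
    kept = []
    for image_path, caption in pairs:
        scores = score_fn(image_path, caption)
        if sum(scores.values()) / len(scores) >= threshold:
            kept.append((image_path, caption))
    return kept
```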
"Multi-LoRA Composition for Image Generation" introduces two new approaches for combining multiple visual elements in text-to-image generation using Low-Rank Adaptations (LoRAs)! 🎨
Key Points: * Proposes two methods - LoRA Switch and LoRA Composite - that activate/combine LoRAs during the denoising process rather than merging weights * LoRA Switch cycles through different LoRAs at each step, while LoRA Composite averages guidance from all LoRAs simultaneously
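A toy denoising loop contrasting the two schemes; the `NoiseFn` stand-ins and the fixed step size replace a real diffusion scheduler and classifier-free guidance.

```python
import numpy as np
from typing import Callable, List

# Each entry stands in for "the diffusion model with exactly one LoRA enabled":
# it maps (latent, step) -> predicted noise.
NoiseFn = Callable[[np.ndarray, int], np.ndarray]

def lora_switch(latent: np.ndarray, loras: List[NoiseFn], steps: int) -> np.ndarray:
    # LoRA Switch: only one LoRA is active per denoising step, cycling round-robin.
    for t in range(steps):
        eps = loras[t % len(loras)](latent, t)
        latent = latent - 0.1 * eps           # toy update in place of a real scheduler step
    return latent

def lora_composite(latent: np.ndarray, loras: List[NoiseFn], steps: int) -> np.ndarray:
    # LoRA Composite: every step averages the guidance from all LoRAs.
    for t in range(steps):
        eps = np.mean([f(latent, t) for f in loras], axis=0)
        latent = latent - 0.1 * eps
    return latent
```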
The "Design2Code: How Far Are We From Automating Front-End Engineering" paper presents a benchmark for multimodal large language models (LLMs) aimed at automating front-end web development by translating webpage designs (screenshots) into code. This task evaluates the models' ability to recreate webpages that are visually and structurally similar to the original designs.
Key Points: * Introduces the Design2Code task and benchmark for converting webpage screenshots into code, aiming to automate front-end web development. * Evaluates multimodal LLMs using comprehensive metrics for visual similarity and element matching. * GPT-4V outperforms other models in terms of visual resemblance and content accuracy, with generated webpages often preferred over the original references.
VisionLLaMA is a new vision transformer architecture that adapts the successful LLaMA language model design for vision tasks. By integrating components like rotary positional embeddings, SwiGLU activation, and LayerNorm from LLaMA, VisionLLaMA achieves very promising performance across various vision tasks, including image generation, classification, semantic segmentation, and object detection.
Keypoints: * Outperforms state-of-the-art vision transformers like DiT, SiT, DeiT3, and Swin on multiple benchmarks and tasks. * Leverages Auto-Scaled 2D Rotary Positional Embeddings (AS2DRoPE) to handle variable input resolutions efficiently. * Serves as a powerful, unified modeling framework for vision generation and understanding tasks.
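A numpy sketch of 2D rotary embeddings with a resolution-dependent rescaling of coordinates, in the spirit of AS2DRoPE; the exact scaling rule used here is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def rope_1d(x: np.ndarray, pos, base: float = 10000.0) -> np.ndarray:
    """Standard rotary embedding over the last (even-sized) dimension of x."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    angles = np.asarray(pos, dtype=float)[..., None] * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def rope_2d(x: np.ndarray, row, col, base_res: int = 16, cur_res: int = 16) -> np.ndarray:
    """Half the channels encode the row position, half the column; coordinates are
    rescaled by base_res / cur_res so larger inputs interpolate positions rather
    than extrapolate (the rescaling rule is an assumption for illustration)."""
    scale = base_res / cur_res
    half = x.shape[-1] // 2
    return np.concatenate([rope_1d(x[..., :half], np.asarray(row) * scale),
                           rope_1d(x[..., half:], np.asarray(col) * scale)], axis=-1)

tok = np.random.default_rng(0).normal(size=(1, 64))   # one token embedding
out = rope_2d(tok, row=3, col=5)                       # patch at grid position (3, 5)
```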
Panda-70M is a new large-scale video dataset comprising 70 million high-quality video clips, each paired with textual captions, designed for pre-training models on video understanding tasks.
Key Points: * Automatic Caption Generation: Utilizes an automatic pipeline with multiple cross-modality teacher models to generate captions for video clips. * Fine-tuned Caption Selection: Employs a fine-tuned retrieval model to select the most appropriate caption from multiple candidates for each video clip. * Improved Performance: Pre-training on Panda-70M shows significant performance gains in video captioning, text-video retrieval, and text-driven video generation.
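A minimal sketch of the caption-selection step, with `video_emb` and `embed_text` standing in for the fine-tuned retrieval model.

```python
import numpy as np
from typing import Callable, List

def pick_caption(video_emb: np.ndarray,
                 candidates: List[str],
                 embed_text: Callable[[str], np.ndarray]) -> str:
    # The retrieval model scores each teacher-generated caption against the clip
    # embedding; the best-matching caption is kept as the clip's annotation.
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = [cos(video_emb, embed_text(c)) for c in candidates]
    return candidates[int(np.argmax(scores))]
```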
"What Evidence Do Language Models Find Convincing?" is a new paper that explores what types of evidence and argumentation techniques language models find convincing when presented with ambiguous, open-domain questions that have conflicting answers online.
Keypoints: * Dataset: It introduces "ConflictingQA," a dataset of controversial questions and real-world evidence paragraphs supporting both "yes" and "no" answers. * Convincingness Metric: It uses the "paragraph win rate" - when shown two conflicting paragraphs, this measures how often a model predicts the answer that aligns with a given paragraph's stance. * Current models rely on the relevance of the content to the query, while largely ignoring stylistic features such as whether a text contains scientific references or if it is written with a neutral tone.
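A small sketch of how such a win rate can be computed; the yes/no stance convention and the `answer` callable are assumptions for illustration.

```python
from typing import Callable, List, Tuple

def paragraph_win_rate(pairs: List[Tuple[str, str, str]],
                       answer: Callable[[str, str, str], str]) -> float:
    # Each item is (question, paragraph_yes, paragraph_no), where the first paragraph
    # argues "yes" and the second "no". The win rate of the "yes" paragraphs is how
    # often the model sides with them when shown both.
    wins = 0
    for question, para_yes, para_no in pairs:
        reply = answer(question, para_yes, para_no).strip().lower()
        wins += reply.startswith("yes")
    return wins / max(len(pairs), 1)
```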
Genie is a new method from Google DeepMind that generates interactive, action-controllable virtual worlds from unlabelled internet videos.
Keypoints: * Genie leverages a spatiotemporal video tokenizer, an autoregressive dynamics model, and a latent action model to generate controllable video environments. * The model is trained on video data alone, without requiring action labels, using unsupervised learning to infer latent actions between frames. * The action vocabulary is restricted to 8 latent actions to keep the set of possible controls small. * The training dataset is built by filtering publicly available internet videos with criteria related to 2D platformer games, yielding 6.8M videos.
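A toy numpy sketch of the small discrete action bottleneck: the continuous action latent is snapped to one of 8 codebook entries. The codebook and encoder output here are random placeholders, not Genie's trained VQ model.

```python
import numpy as np

def quantize_action(action_latent: np.ndarray, codebook: np.ndarray) -> int:
    # The continuous action inferred between two frames is snapped to its nearest
    # codebook entry; with only 8 entries the learned control space stays tiny.
    dists = np.linalg.norm(codebook - action_latent, axis=-1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))        # 8 discrete latent actions
action = rng.normal(size=16)               # hypothetical encoder output for a frame pair
print(quantize_action(action, codebook))   # index in [0, 8)
```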
"A Closer Look at the Limitations of Instruction Tuning" is a new paper that explores the efficacy and limitations of Instruction Tuning (IT) in Large Language Models (LLMs) for conversational agents. The authors conduct a series of experiments using both LoRA fine-tuning (LFT) and standard full-parameter fine-tuning (SFT) across various LLMs and IT datasets.
The key findings are: * LoRA fine-tuning (LFT) preserves the pre-training token distribution while SFT doesn't, indicating that after LoRA fine-tuning the model still relies heavily on its pre-training and acquires little new information. * Dataset scaling is ineffective for LFT: experiments show that scaling the dataset size 52x or even 326x doesn't improve performance. * LoRA fine-tuning mainly enhances response initiation and style without substantial knowledge enhancement. * Full-parameter fine-tuning tends to degrade the LLM's knowledge base and increase hallucination occurrences. * Other popular methods and adjustments fail to significantly outperform simple LoRA fine-tuned models in terms of conversational quality and accuracy.
Congrats to the authors @Sreyan88 and others for their work!
"LLM Agents can Autonomously Hack Websites" is a new paper that investigates the capacity of LLMs to autonomously execute cybersecurity attacks on websites, such as SQL injections without human guidance.
Key points: * It uses a LLM integrated with Playwright, a headless web browser, enabling automated web interactions through function calling. * It gives access to the LLM to 7 web hacking documents and planning capabilities through specific prompting, without disclosing the exact methods to prevent misuse.
GPT-4 achieves a 73.3% success rate on the tested vulnerabilities, emphasizing the potential cybersecurity risks posed by advanced LLMs. Other open models cannot yet perform these types of attacks (results in screenshot).
VideoPrism is a new video encoder that improves video understanding through a unique training strategy, using a vast dataset (36 million high-quality video-caption pairs and 582 million video clips) for comprehensive learning.
Key points: * It employs a two-stage training approach, initially aligning video and text encoders, followed by an enhanced video-only masked autoencoding process to learn appearance and motion. * It achieves superior performance in a wide array of tasks, such as general video understanding, zero-shot video-text retrieval, video captioning, QA, and computer vision for science, having top performance on 30 out of 33 benchmarks.
Web Rephrase Augmented Pre-training (WRAP) enhances language model pre-training efficiency by rephrasing noisy web documents into cleaner, more structured styles.
Key aspects: * Utilizes an instruction-tuned model to rephrase web content into styles such as Wikipedia or Q/A, creating a blend of synthetic and real data for training. * Demonstrates over 10% improvement in perplexity and a more than 2% increase in zero-shot question-answering accuracy.
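A short sketch of how a WRAP-style rephrase-and-mix corpus could be assembled; the style prompts, the 50/50 blend, and the `rephrase` callable are assumptions standing in for the instruction-tuned rephraser.

```python
import random
from typing import Callable, List

STYLES = [
    "Rewrite the following text in a clear, encyclopedic Wikipedia-like style:\n",
    "Convert the following text into a question-and-answer format:\n",
]

def wrap_corpus(docs: List[str],
                rephrase: Callable[[str], str],
                real_fraction: float = 0.5,
                seed: int = 0) -> List[str]:
    # Each training sample is drawn either from the raw web text or from a rephrased
    # version of it, so the model sees a blend of real and synthetic data.
    rng = random.Random(seed)
    corpus = []
    for doc in docs:
        if rng.random() < real_fraction:
            corpus.append(doc)
        else:
            corpus.append(rephrase(rng.choice(STYLES) + doc))
    return corpus
```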