diff --git "a/collection.json" "b/collection.json" new file mode 100644--- /dev/null +++ "b/collection.json" @@ -0,0 +1,2718 @@ +[ + "The reflective nature of the human eye is an underappreciated source of\ninformation about what the world around us looks like. By imaging the eyes of a\nmoving person, we can collect multiple views of a scene outside the camera's\ndirect line of sight through the reflections in the eyes. In this paper, we\nreconstruct a 3D scene beyond the camera's line of sight using portrait images\ncontaining eye reflections. This task is challenging due to 1) the difficulty\nof accurately estimating eye poses and 2) the entangled appearance of the eye\niris and the scene reflections. Our method jointly refines the cornea poses,\nthe radiance field depicting the scene, and the observer's eye iris texture. We\nfurther propose a simple regularization prior on the iris texture pattern to\nimprove reconstruction quality. Through various experiments on synthetic and\nreal-world captures featuring people with varied eye colors, we demonstrate the\nfeasibility of our approach to recover 3D scenes using eye reflections.", + "The recovery of occluded human meshes presents challenges for current methods\ndue to the difficulty in extracting effective image features under severe\nocclusion. In this paper, we introduce DPMesh, an innovative framework for\noccluded human mesh recovery that capitalizes on the profound diffusion prior\nabout object structure and spatial relationships embedded in a pre-trained\ntext-to-image diffusion model. Unlike previous methods reliant on conventional\nbackbones for vanilla feature extraction, DPMesh seamlessly integrates the\npre-trained denoising U-Net with potent knowledge as its image backbone and\nperforms a single-step inference to provide occlusion-aware information. To\nenhance the perception capability for occluded poses, DPMesh incorporates\nwell-designed guidance via condition injection, which produces effective\ncontrols from 2D observations for the denoising U-Net. Furthermore, we explore\na dedicated noisy key-point reasoning approach to mitigate disturbances arising\nfrom occlusion and crowded scenarios. This strategy fully unleashes the\nperceptual capability of the diffusion prior, thereby enhancing accuracy.\nExtensive experiments affirm the efficacy of our framework, as we outperform\nstate-of-the-art methods on both occlusion-specific and standard datasets.", + "This strategy fully unleashes the\nperceptual capability of the diffusion prior, thereby enhancing accuracy.\nExtensive experiments affirm the efficacy of our framework, as we outperform\nstate-of-the-art methods on both occlusion-specific and standard datasets. The\npersuasive results underscore its ability to achieve precise and robust 3D\nhuman mesh recovery, particularly in challenging scenarios involving occlusion\nand crowded scenes.", + "The training of contemporary deep learning models heavily relies on publicly\navailable data, posing a risk of unauthorized access to online data and raising\nconcerns about data privacy. Current approaches to creating unlearnable data\ninvolve incorporating small, specially designed noises, but these methods\nstrictly limit data usability, overlooking its potential usage in authorized\nscenarios. In this paper, we extend the concept of unlearnable data to\nconditional data learnability and introduce \\textbf{U}n\\textbf{G}eneralizable\n\\textbf{E}xamples (UGEs). 
UGEs exhibit learnability for authorized users while\nmaintaining unlearnability for potential hackers. The protector defines the\nauthorized network and optimizes UGEs to match the gradients of the original\ndata and its ungeneralizable version, ensuring learnability. To prevent\nunauthorized learning, UGEs are trained by maximizing a designated distance\nloss in a common feature space. Additionally, to further safeguard the\nauthorized side from potential attacks, we introduce additional undistillation\noptimization.", + "To prevent\nunauthorized learning, UGEs are trained by maximizing a designated distance\nloss in a common feature space. Additionally, to further safeguard the\nauthorized side from potential attacks, we introduce additional undistillation\noptimization. Experimental results on multiple datasets and various networks\ndemonstrate that the proposed UGEs framework preserves data usability while\nreducing training performance on hacker networks, even under different types of\nattacks.", + "3D city generation is a desirable yet challenging task, since humans are more\nsensitive to structural distortions in urban environments. Additionally,\ngenerating 3D cities is more complex than 3D natural scenes since buildings, as\nobjects of the same class, exhibit a wider range of appearances compared to the\nrelatively consistent appearance of objects like trees in natural scenes. To\naddress these challenges, we propose \\textbf{CityDreamer}, a compositional\ngenerative model designed specifically for unbounded 3D cities. Our key insight\nis that 3D city generation should be a composition of different types of neural\nfields: 1) various building instances, and 2) background stuff, such as roads\nand green lands. Specifically, we adopt the bird's eye view scene\nrepresentation and employ a volumetric render for both instance-oriented and\nstuff-oriented neural fields. The generative hash grid and periodic positional\nembedding are tailored as scene parameterization to suit the distinct\ncharacteristics of building instances and background stuff.", + "Specifically, we adopt the bird's eye view scene\nrepresentation and employ a volumetric render for both instance-oriented and\nstuff-oriented neural fields. The generative hash grid and periodic positional\nembedding are tailored as scene parameterization to suit the distinct\ncharacteristics of building instances and background stuff. Furthermore, we\ncontribute a suite of CityGen Datasets, including OSM and GoogleEarth, which\ncomprises a vast amount of real-world city imagery to enhance the realism of\nthe generated 3D cities both in their layouts and appearances. CityDreamer\nachieves state-of-the-art performance not only in generating realistic 3D\ncities but also in localized editing within the generated cities.", + "In this work we develop 3D Paintbrush, a technique for automatically\ntexturing local semantic regions on meshes via text descriptions. Our method is\ndesigned to operate directly on meshes, producing texture maps which seamlessly\nintegrate into standard graphics pipelines. We opt to simultaneously produce a\nlocalization map (to specify the edit region) and a texture map which conforms\nto it. This synergistic approach improves the quality of both the localization\nand the stylization. To enhance the details and resolution of the textured\narea, we leverage multiple stages of a cascaded diffusion model to supervise\nour local editing technique with generative priors learned from images at\ndifferent resolutions. 
Our technique, referred to as Cascaded Score\nDistillation (CSD), simultaneously distills scores at multiple resolutions in a\ncascaded fashion, enabling control over both the granularity and global\nunderstanding of the supervision. We demonstrate the effectiveness of 3D\nPaintbrush to locally texture a variety of shapes within different semantic\nregions. Project page: https://threedle.github.io/3d-paintbrush", + "Unsupervised video object segmentation aims to segment the most prominent\nobject in a video sequence. However, the existence of complex backgrounds and\nmultiple foreground objects make this task challenging. To address this issue,\nwe propose a guided slot attention network to reinforce spatial structural\ninformation and obtain better foreground--background separation. The foreground\nand background slots, which are initialized with query guidance, are\niteratively refined based on interactions with template information.\nFurthermore, to improve slot--template interaction and effectively fuse global\nand local features in the target and reference frames, K-nearest neighbors\nfiltering and a feature aggregation transformer are introduced. The proposed\nmodel achieves state-of-the-art performance on two popular datasets.\nAdditionally, we demonstrate the robustness of the proposed model in\nchallenging scenes through various comparative experiments.", + "Action detection aims to localize the starting and ending points of action\ninstances in untrimmed videos, and predict the classes of those instances. In\nthis paper, we make the observation that the outputs of the action detection\ntask can be formulated as images. Thus, from a novel perspective, we tackle\naction detection via a three-image generation process to generate starting\npoint, ending point and action-class predictions as images via our proposed\nAction Detection Image Diffusion (ADI-Diff) framework. Furthermore, since our\nimages differ from natural images and exhibit special properties, we further\nexplore a Discrete Action-Detection Diffusion Process and a Row-Column\nTransformer design to better handle their processing. Our ADI-Diff framework\nachieves state-of-the-art results on two widely-used datasets.", + "Character animation in real-world scenarios necessitates a variety of\nconstraints, such as trajectories, key-frames, interactions, etc. Existing\nmethodologies typically treat single or a finite set of these constraint(s) as\nseparate control tasks. They are often specialized, and the tasks they address\nare rarely extendable or customizable. We categorize these as solutions to the\nclose-set motion control problem. In response to the complexity of practical\nmotion control, we propose and attempt to solve the open-set motion control\nproblem. This problem is characterized by an open and fully customizable set of\nmotion control tasks. To address this, we introduce a new paradigm,\nprogrammable motion generation. In this paradigm, any given motion control task\nis broken down into a combination of atomic constraints. These constraints are\nthen programmed into an error function that quantifies the degree to which a\nmotion sequence adheres to them. We utilize a pre-trained motion generation\nmodel and optimize its latent code to minimize the error function of the\ngenerated motion. 
Consequently, the generated motion not only inherits the\nprior of the generative model but also satisfies the required constraints.", + "We utilize a pre-trained motion generation\nmodel and optimize its latent code to minimize the error function of the\ngenerated motion. Consequently, the generated motion not only inherits the\nprior of the generative model but also satisfies the required constraints.\nExperiments show that we can generate high-quality motions when addressing a\nwide range of unseen tasks. These tasks encompass motion control by motion\ndynamics, geometric constraints, physical laws, interactions with scenes,\nobjects or the character own body parts, etc. All of these are achieved in a\nunified approach, without the need for ad-hoc paired training data collection\nor specialized network designs. During the programming of novel tasks, we\nobserved the emergence of new skills beyond those of the prior model. With the\nassistance of large language models, we also achieved automatic programming. We\nhope that this work will pave the way for the motion control of general AI\nagents.", + "Existing text-based person retrieval datasets often have relatively\ncoarse-grained text annotations. This hinders the model to comprehend the\nfine-grained semantics of query texts in real scenarios. To address this\nproblem, we contribute a new benchmark named \\textbf{UFineBench} for text-based\nperson retrieval with ultra-fine granularity.\n Firstly, we construct a new \\textbf{dataset} named UFine6926. We collect a\nlarge number of person images and manually annotate each image with two\ndetailed textual descriptions, averaging 80.8 words each. The average word\ncount is three to four times that of the previous datasets. In addition of\nstandard in-domain evaluation, we also propose a special \\textbf{evaluation\nparadigm} more representative of real scenarios. It contains a new evaluation\nset with cross domains, cross textual granularity and cross textual styles,\nnamed UFine3C, and a new evaluation metric for accurately measuring retrieval\nability, named mean Similarity Distribution (mSD). Moreover, we propose CFAM, a\nmore efficient \\textbf{algorithm} especially designed for text-based person\nretrieval with ultra fine-grained texts.", + "Moreover, we propose CFAM, a\nmore efficient \\textbf{algorithm} especially designed for text-based person\nretrieval with ultra fine-grained texts. It achieves fine granularity mining by\nadopting a shared cross-modal granularity decoder and hard negative match\nmechanism.\n With standard in-domain evaluation, CFAM establishes competitive performance\nacross various datasets, especially on our ultra fine-grained UFine6926.\nFurthermore, by evaluating on UFine3C, we demonstrate that training on our\nUFine6926 significantly improves generalization to real scenarios compared with\nother coarse-grained datasets. The dataset and code will be made publicly\navailable at \\url{https://github.com/Zplusdragon/UFineBench}.", + "Real-time rendering of photorealistic and controllable human avatars stands\nas a cornerstone in Computer Vision and Graphics. While recent advances in\nneural implicit rendering have unlocked unprecedented photorealism for digital\navatars, real-time performance has mostly been demonstrated for static scenes\nonly. To address this, we propose ASH, an animatable Gaussian splatting\napproach for photorealistic rendering of dynamic humans in real-time. 
We\nparameterize the clothed human as animatable 3D Gaussians, which can be\nefficiently splatted into image space to generate the final rendering. However,\nnaively learning the Gaussian parameters in 3D space poses a severe challenge\nin terms of compute. Instead, we attach the Gaussians onto a deformable\ncharacter model, and learn their parameters in 2D texture space, which allows\nleveraging efficient 2D convolutional architectures that easily scale with the\nrequired number of Gaussians. We benchmark ASH with competing methods on\npose-controllable avatars, demonstrating that our method outperforms existing\nreal-time methods by a large margin and shows comparable or even better results\nthan offline methods.", + "Adversarial training is often formulated as a min-max problem, however,\nconcentrating only on the worst adversarial examples causes alternating\nrepetitive confusion of the model, i.e., previously defended or correctly\nclassified samples are not defensible or accurately classifiable in subsequent\nadversarial training. We characterize such non-ignorable samples as \"hiders\",\nwhich reveal the hidden high-risk regions within the secure area obtained\nthrough adversarial training and prevent the model from finding the real worst\ncases. We demand the model to prevent hiders when defending against adversarial\nexamples for improving accuracy and robustness simultaneously. By rethinking\nand redefining the min-max optimization problem for adversarial training, we\npropose a generalized adversarial training algorithm called Hider-Focused\nAdversarial Training (HFAT). HFAT introduces the iterative evolution\noptimization strategy to simplify the optimization problem and employs an\nauxiliary model to reveal hiders, effectively combining the optimization\ndirections of standard adversarial training and prevention hiders. Furthermore,\nwe introduce an adaptive weighting mechanism that facilitates the model in\nadaptively adjusting its focus between adversarial examples and hiders during\ndifferent training periods.", + "Furthermore,\nwe introduce an adaptive weighting mechanism that facilitates the model in\nadaptively adjusting its focus between adversarial examples and hiders during\ndifferent training periods. We demonstrate the effectiveness of our method\nbased on extensive experiments, and ensure that HFAT can provide higher\nrobustness and accuracy.", + "This work introduces ArtAdapter, a transformative text-to-image (T2I) style\ntransfer framework that transcends traditional limitations of color,\nbrushstrokes, and object shape, capturing high-level style elements such as\ncomposition and distinctive artistic expression. The integration of a\nmulti-level style encoder with our proposed explicit adaptation mechanism\nenables ArtAdapter to achieve unprecedented fidelity in style transfer,\nensuring close alignment with textual descriptions. Additionally, the\nincorporation of an Auxiliary Content Adapter (ACA) effectively separates\ncontent from style, alleviating the borrowing of content from style references.\nMoreover, our novel fast finetuning approach could further enhance zero-shot\nstyle representation while mitigating the risk of overfitting. 
Comprehensive\nevaluations confirm that ArtAdapter surpasses current state-of-the-art methods.", + "This paper tackles a novel yet challenging problem: how to transfer knowledge\nfrom the emerging Segment Anything Model (SAM) -- which reveals impressive\nzero-shot instance segmentation capacity -- to learn a compact panoramic\nsemantic segmentation model, i.e., student, without requiring any labeled data.\nThis poses considerable challenges due to SAM's inability to provide semantic\nlabels and the large capacity gap between SAM and the student. To this end, we\npropose a novel framework, called GoodSAM, that introduces a teacher assistant\n(TA) to provide semantic information, integrated with SAM to generate ensemble\nlogits to achieve knowledge transfer. Specifically, we propose a\nDistortion-Aware Rectification (DAR) module that first addresses the distortion\nproblem of panoramic images by imposing prediction-level consistency and\nboundary enhancement. This subtly enhances TA's prediction capacity on\npanoramic images. DAR then incorporates a cross-task complementary fusion block\nto adaptively merge the predictions of SAM and TA to obtain more reliable\nensemble logits. Moreover, we introduce a Multi-level Knowledge Adaptation\n(MKA) module to efficiently transfer the multi-level feature knowledge from TA\nand ensemble logits to learn a compact student model.", + "Moreover, we introduce a Multi-level Knowledge Adaptation\n(MKA) module to efficiently transfer the multi-level feature knowledge from TA\nand ensemble logits to learn a compact student model. Extensive experiments on\ntwo benchmarks show that our GoodSAM achieves a remarkable +3.75\\% mIoU\nimprovement over the state-of-the-art (SOTA) domain adaptation methods. Also,\nour most lightweight model achieves comparable performance to the SOTA methods\nwith only 3.7M parameters.", + "An ideal model for dense video captioning -- predicting captions localized\ntemporally in a video -- should be able to handle long input videos, predict\nrich, detailed textual descriptions, and be able to produce outputs before\nprocessing the entire video. Current state-of-the-art models, however, process\na fixed number of downsampled frames, and make a single full prediction after\nseeing the whole video. We propose a streaming dense video captioning model\nthat consists of two novel components: First, we propose a new memory module,\nbased on clustering incoming tokens, which can handle arbitrarily long videos\nas the memory is of a fixed size. Second, we develop a streaming decoding\nalgorithm that enables our model to make predictions before the entire video\nhas been processed. Our model achieves this streaming ability, and\nsignificantly improves the state-of-the-art on three dense video captioning\nbenchmarks: ActivityNet, YouCook2 and ViTT. Our code is released at\nhttps://github.com/google-research/scenic.", + "Despite the growing demand for accurate surface normal estimation models,\nexisting methods use general-purpose dense prediction models, adopting the same\ninductive biases as other tasks. In this paper, we discuss the inductive biases\nneeded for surface normal estimation and propose to (1) utilize the per-pixel\nray direction and (2) encode the relationship between neighboring surface\nnormals by learning their relative rotation. The proposed method can generate\ncrisp - yet, piecewise smooth - predictions for challenging in-the-wild images\nof arbitrary resolution and aspect ratio. 
Compared to a recent ViT-based\nstate-of-the-art model, our method shows a stronger generalization ability,\ndespite being trained on an orders of magnitude smaller dataset. The code is\navailable at https://github.com/baegwangbin/DSINE.", + "Event sensors offer high temporal resolution visual sensing, which makes them\nideal for perceiving fast visual phenomena without suffering from motion blur.\nCertain applications in robotics and vision-based navigation require 3D\nperception of an object undergoing circular or spinning motion in front of a\nstatic camera, such as recovering the angular velocity and shape of the object.\nThe setting is equivalent to observing a static object with an orbiting camera.\nIn this paper, we propose event-based structure-from-orbit (eSfO), where the\naim is to simultaneously reconstruct the 3D structure of a fast spinning object\nobserved from a static event camera, and recover the equivalent orbital motion\nof the camera. Our contributions are threefold: since state-of-the-art event\nfeature trackers cannot handle periodic self-occlusion due to the spinning\nmotion, we develop a novel event feature tracker based on spatio-temporal\nclustering and data association that can better track the helical trajectories\nof valid features in the event data. The feature tracks are then fed to our\nnovel factor graph-based structure-from-orbit back-end that calculates the\norbital motion parameters (e.g., spin rate, relative rotational axis) that\nminimize the reprojection error.", + "The feature tracks are then fed to our\nnovel factor graph-based structure-from-orbit back-end that calculates the\norbital motion parameters (e.g., spin rate, relative rotational axis) that\nminimize the reprojection error. For evaluation, we produce a new event dataset\nof objects under spinning motion. Comparisons against ground truth indicate the\nefficacy of eSfO.", + "Event camera has significant advantages in capturing dynamic scene\ninformation while being prone to noise interference, particularly in\nchallenging conditions like low threshold and low illumination. However, most\nexisting research focuses on gentle situations, hindering event camera\napplications in realistic complex scenarios. To tackle this limitation and\nadvance the field, we construct a new paired real-world event denoising dataset\n(LED), including 3K sequences with 18K seconds of high-resolution (1200*680)\nevent streams and showing three notable distinctions compared to others:\ndiverse noise levels and scenes, larger-scale with high-resolution, and\nhigh-quality GT. Specifically, it contains stepped parameters and varying\nillumination with diverse scenarios. Moreover, based on the property of noise\nevents inconsistency and signal events consistency, we propose a novel\neffective denoising framework(DED) using homogeneous dual events to generate\nthe GT with better separating noise from the raw. Furthermore, we design a\nbio-inspired baseline leveraging Leaky-Integrate-and-Fire (LIF) neurons with\ndynamic thresholds to realize accurate denoising.", + "Furthermore, we design a\nbio-inspired baseline leveraging Leaky-Integrate-and-Fire (LIF) neurons with\ndynamic thresholds to realize accurate denoising. The experimental results\ndemonstrate that the remarkable performance of the proposed approach on\ndifferent datasets.The dataset and code are at https://github.com/Yee-Sing/led.", + "Federated learning (FL) has emerged as a new paradigm for privacy-preserving\ncollaborative training. 
Under domain skew, the current FL approaches are biased\nand face two fairness problems. 1) Parameter Update Conflict: data disparity\namong clients leads to varying parameter importance and inconsistent update\ndirections. These two disparities cause important parameters to potentially be\noverwhelmed by unimportant ones of dominant updates. It consequently results in\nsignificant performance decreases for lower-performing clients. 2) Model\nAggregation Bias: existing FL approaches introduce unfair weight allocation and\nneglect domain diversity. It leads to biased model convergence objective and\ndistinct performance among domains. We discover a pronounced directional update\nconsistency in Federated Learning and propose a novel framework to tackle above\nissues. First, leveraging the discovered characteristic, we selectively discard\nunimportant parameter updates to prevent updates from clients with lower\nperformance overwhelmed by unimportant parameters, resulting in fairer\ngeneralization performance. Second, we propose a fair aggregation objective to\nprevent global model bias towards some domains, ensuring that the global model\ncontinuously aligns with an unbiased model.", + "Second, we propose a fair aggregation objective to\nprevent global model bias towards some domains, ensuring that the global model\ncontinuously aligns with an unbiased model. The proposed method is generic and\ncan be combined with other existing FL methods to enhance fairness.\nComprehensive experiments on Digits and Office-Caltech demonstrate the high\nfairness and performance of our method.", + "Visual interactivity understanding within visual scenes presents a\nsignificant challenge in computer vision. Existing methods focus on complex\ninteractivities while leveraging a simple relationship model. These methods,\nhowever, struggle with a diversity of appearance, situation, position,\ninteraction, and relation in videos. This limitation hinders the ability to\nfully comprehend the interplay within the complex visual dynamics of subjects.\nIn this paper, we delve into interactivities understanding within visual\ncontent by deriving scene graph representations from dense interactivities\namong humans and objects. To achieve this goal, we first present a new dataset\ncontaining Appearance-Situation-Position-Interaction-Relation predicates, named\nASPIRe, offering an extensive collection of videos marked by a wide range of\ninteractivities. Then, we propose a new approach named Hierarchical\nInterlacement Graph (HIG), which leverages a unified layer and graph within a\nhierarchical structure to provide deep insights into scene changes across five\ndistinct tasks. Our approach demonstrates superior performance to other methods\nthrough extensive experiments conducted in various scenarios.", + "Trajectory prediction is fundamental in computer vision and autonomous\ndriving, particularly for understanding pedestrian behavior and enabling\nproactive decision-making. Existing approaches in this field often assume\nprecise and complete observational data, neglecting the challenges associated\nwith out-of-view objects and the noise inherent in sensor data due to limited\ncamera range, physical obstructions, and the absence of ground truth for\ndenoised sensor data. Such oversights are critical safety concerns, as they can\nresult in missing essential, non-visible objects. To bridge this gap, we\npresent a novel method for out-of-sight trajectory prediction that leverages a\nvision-positioning technique. 
Our approach denoises noisy sensor observations\nin an unsupervised manner and precisely maps sensor-based trajectories of\nout-of-sight objects into visual trajectories. This method has demonstrated\nstate-of-the-art performance in out-of-sight noisy sensor trajectory denoising\nand prediction on the Vi-Fi and JRDB datasets. By enhancing trajectory\nprediction accuracy and addressing the challenges of out-of-sight objects, our\nwork significantly contributes to improving the safety and reliability of\nautonomous driving in complex environments.", + "By enhancing trajectory\nprediction accuracy and addressing the challenges of out-of-sight objects, our\nwork significantly contributes to improving the safety and reliability of\nautonomous driving in complex environments. Our work represents the first\ninitiative towards Out-Of-Sight Trajectory prediction (OOSTraj), setting a new\nbenchmark for future research. The code is available at\n\\url{https://github.com/Hai-chao-Zhang/OOSTraj}.", + "Current controls over diffusion models (e.g., through text or ControlNet) for\nimage generation fall short in recognizing abstract, continuous attributes like\nillumination direction or non-rigid shape change. In this paper, we present an\napproach for allowing users of text-to-image models to have fine-grained\ncontrol of several attributes in an image. We do this by engineering special\nsets of input tokens that can be transformed in a continuous manner -- we call\nthem Continuous 3D Words. These attributes can, for example, be represented as\nsliders and applied jointly with text prompts for fine-grained control over\nimage generation. Given only a single mesh and a rendering engine, we show that\nour approach can be adopted to provide continuous user control over several\n3D-aware attributes, including time-of-day illumination, bird wing orientation,\ndollyzoom effect, and object poses. Our method is capable of conditioning image\ncreation with multiple Continuous 3D Words and text descriptions simultaneously\nwhile adding no overhead to the generative process. Project Page:\nhttps://ttchengab.github.io/continuous_3d_words", + "Modern text-to-image generation models produce high-quality images that are\nboth photorealistic and faithful to the text prompts. However, this quality\ncomes at significant computational cost: nearly all of these models are\niterative and require running sampling multiple times with large models. This\niterative process is needed to ensure that different regions of the image are\nnot only aligned with the text prompt, but also compatible with each other. In\nthis work, we propose a light-weight approach to achieving this compatibility\nbetween different regions of an image, using a Markov Random Field (MRF) model.\nWe demonstrate the effectiveness of this method on top of the latent\ntoken-based Muse text-to-image model. The MRF richly encodes the compatibility\namong image tokens at different spatial locations to improve quality and\nsignificantly reduce the required number of Muse sampling steps. Inference with\nthe MRF is significantly cheaper, and its parameters can be quickly learned\nthrough back-propagation by modeling MRF inference as a differentiable\nneural-network layer. 
Our full model, MarkovGen, uses this proposed MRF model\nto both speed up Muse by 1.5X and produce higher quality images by decreasing\nundesirable image artifacts.", + "The perception of motion behavior in a dynamic environment holds significant\nimportance for autonomous driving systems, wherein class-agnostic motion\nprediction methods directly predict the motion of the entire point cloud. While\nmost existing methods rely on fully-supervised learning, the manual labeling of\npoint cloud data is laborious and time-consuming. Therefore, several\nannotation-efficient methods have been proposed to address this challenge.\nAlthough effective, these methods rely on weak annotations or additional\nmulti-modal data like images, and the potential benefits inherent in the point\ncloud sequence are still underexplored. To this end, we explore the feasibility\nof self-supervised motion prediction with only unlabeled LiDAR point clouds.\nInitially, we employ an optimal transport solver to establish coarse\ncorrespondences between current and future point clouds as the coarse pseudo\nmotion labels. Training models directly using such coarse labels leads to\nnoticeable spatial and temporal prediction inconsistencies. To mitigate these\nissues, we introduce three simple spatial and temporal regularization losses,\nwhich facilitate the self-supervised training process effectively. Experimental\nresults demonstrate the significant superiority of our approach over the\nstate-of-the-art self-supervised methods.", + "In recent interactive segmentation algorithms, previous probability maps are\nused as network input to help predictions in the current segmentation round.\nHowever, despite the utilization of previous masks, useful information\ncontained in the probability maps is not well propagated to the current\npredictions. In this paper, to overcome this limitation, we propose a novel and\neffective algorithm for click-based interactive image segmentation, called MFP,\nwhich attempts to make full use of probability maps. We first modulate previous\nprobability maps to enhance their representations of user-specified objects.\nThen, we feed the modulated probability maps as additional input to the\nsegmentation network. We implement the proposed MFP algorithm based on the\nResNet-34, HRNet-18, and ViT-B backbones and assess the performance extensively\non various datasets. It is demonstrated that MFP meaningfully outperforms the\nexisting algorithms using identical backbones. The source codes are available\nat https://github.com/cwlee00/MFP.", + "Domain adaptive object detection aims to adapt detection models to domains\nwhere annotated data is unavailable. Existing methods have been proposed to\naddress the domain gap using the semi-supervised student-teacher framework.\nHowever, a fundamental issue arises from the class imbalance in the labelled\ntraining set, which can result in inaccurate pseudo-labels. The relationship\nbetween classes, especially where one class is a majority and the other\nminority, has a large impact on class bias. We propose Class-Aware Teacher\n(CAT) to address the class bias issue in the domain adaptation setting. In our\nwork, we approximate the class relationships with our Inter-Class Relation\nmodule (ICRm) and exploit it to reduce the bias within the model. In this way,\nwe are able to apply augmentations to highly related classes, both inter- and\nintra-domain, to boost the performance of minority classes while having minimal\nimpact on majority classes. 
We further reduce the bias by implementing a\nclass-relation weight to our classification loss. Experiments conducted on\nvarious datasets and ablation studies show that our method is able to address\nthe class bias in the domain adaptation setting.", + "We further reduce the bias by implementing a\nclass-relation weight to our classification loss. Experiments conducted on\nvarious datasets and ablation studies show that our method is able to address\nthe class bias in the domain adaptation setting. On the Cityscapes to Foggy\nCityscapes dataset, we attained a 52.5 mAP, a substantial improvement over the\n51.2 mAP achieved by the state-of-the-art method.", + "We tackle the problem of 3D point cloud localization based on a few natural\nlinguistic descriptions and introduce a novel neural network, Text2Loc, that\nfully interprets the semantic relationship between points and text. Text2Loc\nfollows a coarse-to-fine localization pipeline: text-submap global place\nrecognition, followed by fine localization. In global place recognition,\nrelational dynamics among each textual hint are captured in a hierarchical\ntransformer with max-pooling (HTM), whereas a balance between positive and\nnegative pairs is maintained using text-submap contrastive learning. Moreover,\nwe propose a novel matching-free fine localization method to further refine the\nlocation predictions, which completely removes the need for complicated\ntext-instance matching and is lighter, faster, and more accurate than previous\nmethods. Extensive experiments show that Text2Loc improves the localization\naccuracy by up to $2\\times$ over the state-of-the-art on the KITTI360Pose\ndataset. Our project page is publicly available at\n\\url{https://yan-xia.github.io/projects/text2loc/}.", + "Tensor network (TN) representation is a powerful technique for computer\nvision and machine learning. TN structure search (TN-SS) aims to search for a\ncustomized structure to achieve a compact representation, which is a\nchallenging NP-hard problem. Recent \"sampling-evaluation\"-based methods require\nsampling an extensive collection of structures and evaluating them one by one,\nresulting in prohibitively high computational costs. To address this issue, we\npropose a novel TN paradigm, named SVD-inspired TN decomposition (SVDinsTN),\nwhich allows us to efficiently solve the TN-SS problem from a regularized\nmodeling perspective, eliminating the repeated structure evaluations. To be\nspecific, by inserting a diagonal factor for each edge of the fully-connected\nTN, SVDinsTN allows us to calculate TN cores and diagonal factors\nsimultaneously, with the factor sparsity revealing a compact TN structure. In\ntheory, we prove a convergence guarantee for the proposed method. Experimental\nresults demonstrate that the proposed method achieves approximately 100 to 1000\ntimes acceleration compared to the state-of-the-art TN-SS methods while\nmaintaining a comparable level of representation ability.", + "Medical vision language pre-training (VLP) has emerged as a frontier of\nresearch, enabling zero-shot pathological recognition by comparing the query\nimage with the textual descriptions for each disease. Due to the complex\nsemantics of biomedical texts, current methods struggle to align medical images\nwith key pathological findings in unstructured reports. This leads to the\nmisalignment with the target disease's textual representation. 
In this paper,\nwe introduce a novel VLP framework designed to dissect disease descriptions\ninto their fundamental aspects, leveraging prior knowledge about the visual\nmanifestations of pathologies. This is achieved by consulting a large language\nmodel and medical experts. Integrating a Transformer module, our approach\naligns an input image with the diverse elements of a disease, generating\naspect-centric image representations. By consolidating the matches from each\naspect, we improve the compatibility between an image and its associated\ndisease. Additionally, capitalizing on the aspect-oriented representations, we\npresent a dual-head Transformer tailored to process known and unknown diseases,\noptimizing the comprehensive detection efficacy.", + "By consolidating the matches from each\naspect, we improve the compatibility between an image and its associated\ndisease. Additionally, capitalizing on the aspect-oriented representations, we\npresent a dual-head Transformer tailored to process known and unknown diseases,\noptimizing the comprehensive detection efficacy. Conducting experiments on\nseven downstream datasets, ours improves the accuracy of recent methods by up\nto 8.56% and 17.26% for seen and unseen categories, respectively. Our code is\nreleased at https://github.com/HieuPhan33/MAVL.", + "We introduce MoMask, a novel masked modeling framework for text-driven 3D\nhuman motion generation. In MoMask, a hierarchical quantization scheme is\nemployed to represent human motion as multi-layer discrete motion tokens with\nhigh-fidelity details. Starting at the base layer, with a sequence of motion\ntokens obtained by vector quantization, the residual tokens of increasing\norders are derived and stored at the subsequent layers of the hierarchy. This\nis consequently followed by two distinct bidirectional transformers. For the\nbase-layer motion tokens, a Masked Transformer is designated to predict\nrandomly masked motion tokens conditioned on text input at training stage.\nDuring generation (i.e. inference) stage, starting from an empty sequence, our\nMasked Transformer iteratively fills up the missing tokens; Subsequently, a\nResidual Transformer learns to progressively predict the next-layer tokens\nbased on the results from current layer. Extensive experiments demonstrate that\nMoMask outperforms the state-of-art methods on the text-to-motion generation\ntask, with an FID of 0.045 (vs e.g.", + "Extensive experiments demonstrate that\nMoMask outperforms the state-of-art methods on the text-to-motion generation\ntask, with an FID of 0.045 (vs e.g. 0.141 of T2M-GPT) on the HumanML3D dataset,\nand 0.228 (vs 0.514) on KIT-ML, respectively. MoMask can also be seamlessly\napplied in related tasks without further model fine-tuning, such as text-guided\ntemporal inpainting.", + "Inverse rendering aims at recovering both geometry and materials of objects.\nIt provides a more compatible reconstruction for conventional rendering\nengines, compared with the neural radiance fields (NeRFs). On the other hand,\nexisting NeRF-based inverse rendering methods cannot handle glossy objects with\nlocal light interactions well, as they typically oversimplify the illumination\nas a 2D environmental map, which assumes infinite lights only. Observing the\nsuperiority of NeRFs in recovering radiance fields, we propose a novel 5D\nNeural Plenoptic Function (NeP) based on NeRFs and ray tracing, such that more\naccurate lighting-object interactions can be formulated via the rendering\nequation. 
We also design a material-aware cone sampling strategy to efficiently\nintegrate lights inside the BRDF lobes with the help of pre-filtered radiance\nfields. Our method has two stages: the geometry of the target object and the\npre-filtered environmental radiance fields are reconstructed in the first\nstage, and materials of the target object are estimated in the second stage\nwith the proposed NeP and material-aware cone sampling strategy.", + "Our method has two stages: the geometry of the target object and the\npre-filtered environmental radiance fields are reconstructed in the first\nstage, and materials of the target object are estimated in the second stage\nwith the proposed NeP and material-aware cone sampling strategy. Extensive\nexperiments on the proposed real-world and synthetic datasets demonstrate that\nour method can reconstruct high-fidelity geometry/materials of challenging\nglossy objects with complex lighting interactions from nearby objects. Project\nwebpage: https://whyy.site/paper/nep", + "Large vision-language models (VLMs) like CLIP have demonstrated good\nzero-shot learning performance in the unsupervised domain adaptation task. Yet,\nmost transfer approaches for VLMs focus on either the language or visual\nbranches, overlooking the nuanced interplay between both modalities. In this\nwork, we introduce a Unified Modality Separation (UniMoS) framework for\nunsupervised domain adaptation. Leveraging insights from modality gap studies,\nwe craft a nimble modality separation network that distinctly disentangles\nCLIP's features into language-associated and vision-associated components. Our\nproposed Modality-Ensemble Training (MET) method fosters the exchange of\nmodality-agnostic information while maintaining modality-specific nuances. We\nalign features across domains using a modality discriminator. Comprehensive\nevaluations on three benchmarks reveal our approach sets a new state-of-the-art\nwith minimal computational costs. Code: https://github.com/TL-UESTC/UniMoS", + "Foundation models encompass an extensive knowledge base and offer remarkable\ntransferability. However, this knowledge becomes outdated or insufficient over\ntime. The challenge lies in continuously updating foundation models to\naccommodate novel information while retaining their original capabilities.\nLeveraging the fact that foundation models have initial knowledge on various\ntasks and domains, we propose a novel approach that, instead of updating all\nparameters equally, localizes the updates to a sparse set of parameters\nrelevant to the task being learned. We strike a balance between efficiency and\nnew task performance, while maintaining the transferability and\ngeneralizability of foundation models. We extensively evaluate our method on\nfoundational vision-language models with a diverse spectrum of continual\nlearning tasks. Our method achieves improvements on the accuracy of the newly\nlearned tasks up to 7% while preserving the pretraining knowledge with a\nnegligible decrease of 0.9% on a representative control set accuracy.", + "Templates serve as a good starting point to implement a design (e.g., banner,\nslide) but it takes great effort from designers to manually create. In this\npaper, we present Desigen, an automatic template creation pipeline which\ngenerates background images as well as harmonious layout elements over the\nbackground. Different from natural images, a background image should preserve\nenough non-salient space for the overlaying layout elements. 
To equip existing\nadvanced diffusion-based models with stronger spatial control, we propose two\nsimple but effective techniques to constrain the saliency distribution and\nreduce the attention weight in desired regions during the background generation\nprocess. Then conditioned on the background, we synthesize the layout with a\nTransformer-based autoregressive generator. To achieve a more harmonious\ncomposition, we propose an iterative inference strategy to adjust the\nsynthesized background and layout in multiple rounds. We constructed a design\ndataset with more than 40k advertisement banners to verify our approach.\nExtensive experiments demonstrate that the proposed pipeline generates\nhigh-quality templates comparable to human designers. More than a single-page\ndesign, we further show an application of presentation generation that outputs\na set of theme-consistent slides.", + "Extensive experiments demonstrate that the proposed pipeline generates\nhigh-quality templates comparable to human designers. More than a single-page\ndesign, we further show an application of presentation generation that outputs\na set of theme-consistent slides. The data and code are available at\nhttps://whaohan.github.io/desigen.", + "Audiovisual representation learning typically relies on the correspondence\nbetween sight and sound. However, there are often multiple audio tracks that\ncan correspond with a visual scene. Consider, for example, different\nconversations on the same crowded street. The effect of such counterfactual\npairs on audiovisual representation learning has not been previously explored.\nTo investigate this, we use dubbed versions of movies and television shows to\naugment cross-modal contrastive learning. Our approach learns to represent\nalternate audio tracks, differing only in speech, similarly to the same video.\nOur results, from a comprehensive set of experiments investigating different\ntraining strategies, show this general approach improves performance on a range\nof downstream auditory and audiovisual tasks, without majorly affecting\nlinguistic task performance overall. These findings highlight the importance of\nconsidering speech variation when learning scene-level audiovisual\ncorrespondences and suggest that dubbed audio can be a useful augmentation\ntechnique for training audiovisual models toward more robust performance on\ndiverse downstream tasks.", + "Vision Transformer (ViT) has emerged as a prominent backbone for computer\nvision. For more efficient ViTs, recent works lessen the quadratic cost of the\nself-attention layer by pruning or fusing the redundant tokens. However, these\nworks faced the speed-accuracy trade-off caused by the loss of information.\nHere, we argue that token fusion needs to consider diverse relations between\ntokens to minimize information loss. In this paper, we propose a Multi-criteria\nToken Fusion (MCTF), that gradually fuses the tokens based on multi-criteria\n(e.g., similarity, informativeness, and size of fused tokens). Further, we\nutilize the one-step-ahead attention, which is the improved approach to capture\nthe informativeness of the tokens. By training the model equipped with MCTF\nusing a token reduction consistency, we achieve the best speed-accuracy\ntrade-off in the image classification (ImageNet1K). 
Experimental results prove\nthat MCTF consistently surpasses the previous reduction methods with and\nwithout training.", + "By training the model equipped with MCTF\nusing a token reduction consistency, we achieve the best speed-accuracy\ntrade-off in the image classification (ImageNet1K). Experimental results prove\nthat MCTF consistently surpasses the previous reduction methods with and\nwithout training. Specifically, DeiT-T and DeiT-S with MCTF reduce FLOPs by\nabout 44% while improving the performance (+0.5%, and +0.3%) over the base\nmodel, respectively. We also demonstrate the applicability of MCTF in various\nVision Transformers (e.g., T2T-ViT, LV-ViT), achieving at least 31% speedup\nwithout performance degradation. Code is available at\nhttps://github.com/mlvlab/MCTF.", + "Long-form video content constitutes a significant portion of internet\ntraffic, making automated video summarization an essential research problem.\nHowever, existing video summarization datasets are notably limited in their\nsize, constraining the effectiveness of state-of-the-art methods for\ngeneralization. Our work aims to overcome this limitation by capitalizing on\nthe abundance of long-form videos with dense speech-to-video alignment and the\nremarkable capabilities of recent large language models (LLMs) in summarizing\nlong text. We introduce an automated and scalable pipeline for generating a\nlarge-scale video summarization dataset using LLMs as Oracle summarizers. By\nleveraging the generated dataset, we analyze the limitations of existing\napproaches and propose a new video summarization model that effectively\naddresses them. To facilitate further research in the field, our work also\npresents a new benchmark dataset that contains 1200 long videos each with\nhigh-quality summaries annotated by professionals. Extensive experiments\nclearly indicate that our proposed approach sets a new state-of-the-art in\nvideo summarization across several benchmarks.", + "Novel-view synthesis through diffusion models has demonstrated remarkable\npotential for generating diverse and high-quality images. Yet, the independent\nprocess of image generation in these prevailing methods leads to challenges in\nmaintaining multiple-view consistency. To address this, we introduce\nViewFusion, a novel, training-free algorithm that can be seamlessly integrated\ninto existing pre-trained diffusion models. Our approach adopts an\nauto-regressive method that implicitly leverages previously generated views as\ncontext for the next view generation, ensuring robust multi-view consistency\nduring the novel-view generation process. Through a diffusion process that\nfuses known-view information via interpolated denoising, our framework\nsuccessfully extends single-view conditioned models to work in multiple-view\nconditional settings without any additional fine-tuning. Extensive experimental\nresults demonstrate the effectiveness of ViewFusion in generating consistent\nand detailed novel views.", + "We propose SketchINR, to advance the representation of vector sketches with\nimplicit neural models. A variable length vector sketch is compressed into a\nlatent space of fixed dimension that implicitly encodes the underlying shape as\na function of time and strokes. The learned function predicts the $xy$ point\ncoordinates in a sketch at each time and stroke. 
Despite its simplicity,\nSketchINR outperforms existing representations at multiple tasks: (i) Encoding\nan entire sketch dataset into a fixed size latent vector, SketchINR gives\n$60\\times$ and $10\\times$ data compression over raster and vector sketches,\nrespectively. (ii) SketchINR's auto-decoder provides a much higher-fidelity\nrepresentation than other learned vector sketch representations, and is\nuniquely able to scale to complex vector sketches such as FS-COCO. (iii)\nSketchINR supports parallelisation that can decode/render $\\sim$$100\\times$\nfaster than other learned vector representations such as SketchRNN. (iv)\nSketchINR, for the first time, emulates the human ability to reproduce a sketch\nwith varying abstraction in terms of number and complexity of strokes.", + "(iv)\nSketchINR, for the first time, emulates the human ability to reproduce a sketch\nwith varying abstraction in terms of number and complexity of strokes. As a\nfirst look at implicit sketches, SketchINR's compact high-fidelity\nrepresentation will support future work in modelling long and complex sketches.", + "This paper studies open-vocabulary segmentation (OVS) through calibrating\nin-vocabulary and domain-biased embedding space with generalized contextual\nprior of CLIP. As the core of open-vocabulary understanding, alignment of\nvisual content with the semantics of unbounded text has become the bottleneck\nof this field. To address this challenge, recent works propose to utilize CLIP\nas an additional classifier and aggregate model predictions with CLIP\nclassification results. Despite their remarkable progress, performance of OVS\nmethods in relevant scenarios is still unsatisfactory compared with supervised\ncounterparts. We attribute this to the in-vocabulary embedding and\ndomain-biased CLIP prediction. To this end, we present a Semantic-assisted\nCAlibration Network (SCAN). In SCAN, we incorporate generalized semantic prior\nof CLIP into proposal embedding to avoid collapsing on known categories.\nBesides, a contextual shift strategy is applied to mitigate the lack of global\ncontext and unnatural background noise. With above designs, SCAN achieves\nstate-of-the-art performance on all popular open-vocabulary segmentation\nbenchmarks.", + "Besides, a contextual shift strategy is applied to mitigate the lack of global\ncontext and unnatural background noise. With above designs, SCAN achieves\nstate-of-the-art performance on all popular open-vocabulary segmentation\nbenchmarks. Furthermore, we also focus on the problem of existing evaluation\nsystem that ignores semantic duplication across categories, and propose a new\nmetric called Semantic-Guided IoU (SG-IoU).", + "Progress in lighting estimation is tracked by computing existing image\nquality assessment (IQA) metrics on images from standard datasets. While this\nmay appear to be a reasonable approach, we demonstrate that doing so does not\ncorrelate to human preference when the estimated lighting is used to relight a\nvirtual scene into a real photograph. To study this, we design a controlled\npsychophysical experiment where human observers must choose their preference\namongst rendered scenes lit using a set of lighting estimation algorithms\nselected from the recent literature, and use it to analyse how these algorithms\nperform according to human perception. Then, we demonstrate that none of the\nmost popular IQA metrics from the literature, taken individually, correctly\nrepresent human perception. 
Finally, we show that by learning a combination of\nexisting IQA metrics, we can more accurately represent human preference. This\nprovides a new perceptual framework to help evaluate future lighting estimation\nalgorithms.", + "The annotation of blind image quality assessment (BIQA) is labor-intensive\nand time-consuming, especially for authentic images. Training on synthetic data\nis expected to be beneficial, but synthetically trained models often suffer\nfrom poor generalization in real domains due to domain gaps. In this work, we\nmake a key observation that introducing more distortion types in the synthetic\ndataset may not improve or even be harmful to generalizing authentic image\nquality assessment. To solve this challenge, we propose distortion-guided\nunsupervised domain adaptation for BIQA (DGQA), a novel framework that\nleverages adaptive multi-domain selection via prior knowledge from distortion\nto match the data distribution between the source domains and the target\ndomain, thereby reducing negative transfer from the outlier source domains.\nExtensive experiments on two cross-domain settings (synthetic distortion to\nauthentic distortion and synthetic distortion to algorithmic distortion) have\ndemonstrated the effectiveness of our proposed DGQA. Besides, DGQA is\northogonal to existing model-based BIQA methods, and can be used in combination\nwith such models to improve performance with less training data.", + "Data replay is a successful incremental learning technique for images. It\nprevents catastrophic forgetting by keeping a reservoir of previous data,\noriginal or synthesized, to ensure the model retains past knowledge while\nadapting to novel concepts. However, its application in the video domain is\nrudimentary, as it simply stores frame exemplars for action recognition. This\npaper presents the first exploration of video data replay techniques for\nincremental action segmentation, focusing on action temporal modeling. We\npropose a Temporally Coherent Action (TCA) model, which represents actions\nusing a generative model instead of storing individual frames. The integration\nof a conditioning variable that captures temporal coherence allows our model to\nunderstand the evolution of action features over time. Therefore, action\nsegments generated by TCA for replay are diverse and temporally coherent. In a\n10-task incremental setup on the Breakfast dataset, our approach achieves\nsignificant increases in accuracy for up to 22% compared to the baselines.", + "We have recently seen tremendous progress in photo-real human modeling and\nrendering. Yet, efficiently rendering realistic human performance and\nintegrating it into the rasterization pipeline remains challenging. In this\npaper, we present HiFi4G, an explicit and compact Gaussian-based approach for\nhigh-fidelity human performance rendering from dense footage. Our core\nintuition is to marry the 3D Gaussian representation with non-rigid tracking,\nachieving a compact and compression-friendly representation. We first propose a\ndual-graph mechanism to obtain motion priors, with a coarse deformation graph\nfor effective initialization and a fine-grained Gaussian graph to enforce\nsubsequent constraints. Then, we utilize a 4D Gaussian optimization scheme with\nadaptive spatial-temporal regularizers to effectively balance the non-rigid\nprior and Gaussian updating. We also present a companion compression scheme\nwith residual compensation for immersive experiences on various platforms. 
It\nachieves a substantial compression rate of approximately 25 times, with less\nthan 2MB of storage per frame. Extensive experiments demonstrate the\neffectiveness of our approach, which significantly outperforms existing\napproaches in terms of optimization speed, rendering quality, and storage\noverhead.", + "Diffusion probabilistic models (DPMs) are a key component in modern\ngenerative models. DPM-solvers have significantly reduced latency and enhanced\nquality, but finding their exact inverse\n(i.e., the initial noise from a given image) remains challenging. Here we investigate the\nexact inversions for DPM-solvers and propose algorithms to perform them when\nsamples are generated by the first-order as well as higher-order DPM-solvers.\nFor each explicit denoising step in DPM-solvers, we formulate the inversions\nusing implicit methods such as gradient descent or the forward step method to\nensure robustness to large classifier-free guidance, unlike the prior\napproach using fixed-point iteration. Experimental results demonstrate that\nour proposed exact inversion methods significantly reduce the error of both\nimage and noise reconstructions, greatly enhance the ability to distinguish\ninvisible watermarks, and consistently prevent unintended background changes\nduring image editing. Project page:\n\\url{https://smhongok.github.io/inv-dpm.html}.", + "Segment Anything Model (SAM) has emerged as a powerful tool for numerous\nvision applications. A key component that drives the impressive performance for\nzero-shot transfer and high versatility is a super large Transformer model\ntrained on the extensive high-quality SA-1B dataset. While beneficial, the huge\ncomputation cost of the SAM model has limited its use in wider real-world\napplications. To address this limitation, we propose EfficientSAMs,\nlight-weight SAM models that exhibit decent performance with largely reduced\ncomplexity. Our idea is based on leveraging masked image pretraining, SAMI,\nwhich learns to reconstruct features from the SAM image encoder for effective\nvisual representation learning. Further, we take SAMI-pretrained light-weight\nimage encoders and mask decoder to build EfficientSAMs, and finetune the models\non SA-1B for the segment anything task. We perform evaluations on multiple vision\ntasks including image classification, object detection, instance segmentation,\nand semantic object detection, and find that our proposed pretraining method,\nSAMI, consistently outperforms other masked image pretraining methods.", + "We perform evaluations on multiple vision\ntasks including image classification, object detection, instance segmentation,\nand semantic object detection, and find that our proposed pretraining method,\nSAMI, consistently outperforms other masked image pretraining methods. On\nsegment anything tasks such as zero-shot instance segmentation, our\nEfficientSAMs with SAMI-pretrained lightweight image encoders perform favorably\nwith a significant gain (e.g., ~4 AP on COCO/LVIS) over other fast SAM models.", + "We present ChatScene, a Large Language Model (LLM)-based agent that leverages\nthe capabilities of LLMs to generate safety-critical scenarios for autonomous\nvehicles. Given unstructured language instructions, the agent first generates\ntextually described traffic scenarios using LLMs. These scenario descriptions\nare subsequently broken down into several sub-descriptions for specified\ndetails such as behaviors and locations of vehicles.
The agent then\ndistinctively transforms the textually described sub-scenarios into\ndomain-specific languages, which then generate actual code for prediction and\ncontrol in simulators, facilitating the creation of diverse and complex\nscenarios within the CARLA simulation environment. A key part of our agent is a\ncomprehensive knowledge retrieval component, which efficiently translates\nspecific textual descriptions into corresponding domain-specific code snippets\nby training a knowledge database containing the scenario description and code\npairs. Extensive experimental results underscore the efficacy of ChatScene in\nimproving the safety of autonomous vehicles. For instance, the scenarios\ngenerated by ChatScene show a 15% increase in collision rates compared to\nstate-of-the-art baselines when tested against different reinforcement\nlearning-based ego vehicles.", + "Extensive experimental results underscore the efficacy of ChatScene in\nimproving the safety of autonomous vehicles. For instance, the scenarios\ngenerated by ChatScene show a 15% increase in collision rates compared to\nstate-of-the-art baselines when tested against different reinforcement\nlearning-based ego vehicles. Furthermore, we show that by using our generated\nsafety-critical scenarios to fine-tune different RL-based autonomous driving\nmodels, they can achieve a 9% reduction in collision rates, surpassing current\nSOTA methods. ChatScene effectively bridges the gap between textual\ndescriptions of traffic scenarios and practical CARLA simulations, providing a\nunified way to conveniently generate safety-critical scenarios for safety\ntesting and improvement for AVs.", + "The Segment Anything Model (SAM) marks a notable milestone in segmentation\nmodels, highlighted by its robust zero-shot capabilities and ability to handle\ndiverse prompts. SAM follows a pipeline that separates interactive segmentation\ninto image preprocessing through a large encoder and interactive inference via\na lightweight decoder, ensuring efficient real-time performance. However, SAM\nfaces stability issues in challenging samples upon this pipeline. These issues\narise from two main factors. Firstly, the image preprocessing disables SAM from\ndynamically using image-level zoom-in strategies to refocus on the target\nobject during interaction. Secondly, the lightweight decoder struggles to\nsufficiently integrate interactive information with image embeddings. To\naddress these two limitations, we propose FocSAM with a pipeline redesigned on\ntwo pivotal aspects. First, we propose Dynamic Window Multi-head Self-Attention\n(Dwin-MSA) to dynamically refocus SAM's image embeddings on the target object.\nDwin-MSA localizes attention computations around the target object, enhancing\nobject-related embeddings with minimal computational overhead.", + "First, we propose Dynamic Window Multi-head Self-Attention\n(Dwin-MSA) to dynamically refocus SAM's image embeddings on the target object.\nDwin-MSA localizes attention computations around the target object, enhancing\nobject-related embeddings with minimal computational overhead. Second, we\npropose Pixel-wise Dynamic ReLU (P-DyReLU) to enable sufficient integration of\ninteractive information from a few initial clicks that have significant impacts\non the overall segmentation results. 
Experimentally, FocSAM augments SAM's\ninteractive segmentation performance to match the existing state-of-the-art\nmethod in segmentation quality, requiring only about 5.6% of this method's\ninference time on CPUs.", + "Recently, a number of image-mixing-based augmentation techniques have been\nintroduced to improve the generalization of deep neural networks. In these\ntechniques, two or more randomly selected natural images are mixed together to\ngenerate an augmented image. Such methods may not only omit important portions\nof the input images but also introduce label ambiguities by mixing images\nacross labels, resulting in misleading supervisory signals. To address these\nlimitations, we propose DiffuseMix, a novel data augmentation technique that\nleverages a diffusion model to reshape training images, supervised by our\nbespoke conditional prompts. First, a concatenation of a partial natural image\nand its generated counterpart is obtained, which helps avoid the\ngeneration of unrealistic images or label ambiguities. Then, to enhance\nresilience against adversarial attacks and improve safety measures, a randomly\nselected structural pattern from a set of fractal images is blended into the\nconcatenated image to form the final augmented image for training.", + "Then, to enhance\nresilience against adversarial attacks and improve safety measures, a randomly\nselected structural pattern from a set of fractal images is blended into the\nconcatenated image to form the final augmented image for training. Our\nempirical results on seven different datasets reveal that DiffuseMix achieves\nsuperior performance compared to existing state-of-the-art methods on tasks\nincluding general classification, fine-grained classification, fine-tuning, data\nscarcity, and adversarial robustness. Augmented datasets and codes are\navailable here: https://diffusemix.github.io/", + "Reward finetuning has emerged as a promising approach to aligning foundation\nmodels with downstream objectives. Remarkable success has been achieved in the\nlanguage domain by using reinforcement learning (RL) to maximize rewards that\nreflect human preference. However, in the vision domain, existing RL-based\nreward finetuning methods are limited by their instability in large-scale\ntraining, rendering them incapable of generalizing to complex, unseen prompts.\nIn this paper, we propose Proximal Reward Difference Prediction (PRDP),\nenabling stable black-box reward finetuning for diffusion models for the first\ntime on large-scale prompt datasets with over 100K prompts. Our key innovation\nis the Reward Difference Prediction (RDP) objective that has the same optimal\nsolution as the RL objective while enjoying better training stability.\nSpecifically, the RDP objective is a supervised regression objective that tasks\nthe diffusion model with predicting the reward difference of generated image\npairs from their denoising trajectories. We theoretically prove that the\ndiffusion model that obtains perfect reward difference prediction is exactly\nthe maximizer of the RL objective. We further develop an online algorithm with\nproximal updates to stably optimize the RDP objective.
In experiments, we\ndemonstrate that PRDP can match the reward maximization ability of\nwell-established RL-based methods in small-scale training. Furthermore, through\nlarge-scale training on text prompts from the Human Preference Dataset v2 and\nthe Pick-a-Pic v1 dataset, PRDP achieves superior generation quality on a\ndiverse set of complex, unseen prompts whereas RL-based methods completely\nfail.", + "Data-Free Meta-Learning (DFML) aims to extract knowledge from a collection of\npre-trained models without requiring the original data, presenting practical\nbenefits in contexts constrained by data privacy concerns. Current DFML methods\nprimarily focus on the data recovery from these pre-trained models. However,\nthey suffer from slow recovery speed and overlook gaps inherent in\nheterogeneous pre-trained models. In response to these challenges, we introduce\nthe Faster and Better Data-Free Meta-Learning (FREE) framework, which contains:\n(i) a meta-generator for rapidly recovering training tasks from pre-trained\nmodels; and (ii) a meta-learner for generalizing to new unseen tasks.\nSpecifically, within the module Faster Inversion via Meta-Generator, each\npre-trained model is perceived as a distinct task. The meta-generator can\nrapidly adapt to a specific task in just five steps, significantly accelerating\nthe data recovery. Furthermore, we propose Better Generalization via\nMeta-Learner and introduce an implicit gradient alignment algorithm to optimize\nthe meta-learner. This is achieved as aligned gradient directions alleviate\npotential conflicts among tasks from heterogeneous pre-trained models.", + "Furthermore, we propose Better Generalization via\nMeta-Learner and introduce an implicit gradient alignment algorithm to optimize\nthe meta-learner. This is achieved as aligned gradient directions alleviate\npotential conflicts among tasks from heterogeneous pre-trained models.\nEmpirical experiments on multiple benchmarks affirm the superiority of our\napproach, marking a notable speed-up (20$\\times$) and performance enhancement\n(1.42\\% $\\sim$ 4.78\\%) in comparison to the state-of-the-art.", + "We present Bayesian Diffusion Models (BDM), a prediction algorithm that\nperforms effective Bayesian inference by tightly coupling the top-down (prior)\ninformation with the bottom-up (data-driven) procedure via joint diffusion\nprocesses. We show the effectiveness of BDM on the 3D shape reconstruction\ntask. Compared to prototypical deep learning data-driven approaches trained on\npaired (supervised) data-labels (e.g. image-point clouds) datasets, our BDM\nbrings in rich prior information from standalone labels (e.g. point clouds) to\nimprove the bottom-up 3D reconstruction. As opposed to the standard Bayesian\nframeworks where explicit prior and likelihood are required for the inference,\nBDM performs seamless information fusion via coupled diffusion processes with\nlearned gradient computation networks. The specialty of our BDM lies in its\ncapability to engage the active and effective information exchange and fusion\nof the top-down and bottom-up processes where each itself is a diffusion\nprocess. We demonstrate state-of-the-art results on both synthetic and\nreal-world benchmarks for 3D shape reconstruction.", + "General image fusion aims at integrating important information from\nmulti-source images. However, due to the significant cross-task gap, the\nrespective fusion mechanism varies considerably in practice, resulting in\nlimited performance across subtasks. 
To handle this problem, we propose a novel\ntask-customized mixture of adapters (TC-MoA) for general image fusion,\nadaptively prompting various fusion tasks in a unified model. We borrow the\ninsight from the mixture of experts (MoE), taking the experts as efficient\ntuning adapters to prompt a pre-trained foundation model. These adapters are\nshared across different tasks and constrained by mutual information\nregularization, ensuring compatibility with different tasks while\nproviding complementarity for multi-source images. The task-specific routing networks\ncustomize these adapters to extract task-specific information from different\nsources with dynamic dominant intensity, performing adaptive visual feature\nprompt fusion. Notably, our TC-MoA controls the dominant intensity bias for\ndifferent fusion tasks, successfully unifying multiple fusion tasks in a single\nmodel.", + "The task-specific routing networks\ncustomize these adapters to extract task-specific information from different\nsources with dynamic dominant intensity, performing adaptive visual feature\nprompt fusion. Notably, our TC-MoA controls the dominant intensity bias for\ndifferent fusion tasks, successfully unifying multiple fusion tasks in a single\nmodel. Extensive experiments show that TC-MoA outperforms the competing\napproaches in learning commonalities while retaining compatibility for general\nimage fusion (multi-modal, multi-exposure, and multi-focus), and also\ndemonstrates striking controllability in further generalization experiments. The\ncode is available at https://github.com/YangSun22/TC-MoA .", + "Knowledge Distillation (KD) has been validated as an effective model\ncompression technique for learning compact object detectors. Existing\nstate-of-the-art KD methods for object detection are mostly based on feature\nimitation. In this paper, we present a general and effective prediction\nmimicking distillation scheme, called CrossKD, which delivers the intermediate\nfeatures of the student's detection head to the teacher's detection head. The\nresulting cross-head predictions are then forced to mimic the teacher's\npredictions. This manner relieves the student's head from receiving\ncontradictory supervision signals from the annotations and the teacher's\npredictions, greatly improving the student's detection performance. Moreover,\nas mimicking the teacher's predictions is the target of KD, CrossKD offers more\ntask-oriented information in contrast with feature imitation. On MS COCO, with\nonly prediction mimicking losses applied, our CrossKD boosts the average\nprecision of GFL ResNet-50 with 1x training schedule from 40.2 to 43.7,\noutperforming all existing KD methods. In addition, our method also works well\nwhen distilling detectors with heterogeneous backbones. Code is available at\nhttps://github.com/jbwang1997/CrossKD.", + "Deep neural networks have demonstrated susceptibility to adversarial attacks.\nAdversarial defense techniques often focus on the one-shot setting to maintain\nrobustness against attacks. However, new attacks can emerge in sequences in\nreal-world deployment scenarios. As a result, it is crucial for a defense model\nto constantly adapt to new attacks, but the adaptation process can lead to\ncatastrophic forgetting of previously defended against attacks.
In this paper,\nwe discuss for the first time the concept of continual adversarial defense\nunder a sequence of attacks, and propose a lifelong defense baseline called\nAnisotropic \\& Isotropic Replay (AIR), which offers three advantages: (1)\nIsotropic replay ensures model consistency in the neighborhood distribution of\nnew data, indirectly aligning the output preference between old and new tasks.\n(2) Anisotropic replay enables the model to learn a compromise data manifold\nwith fresh mixed semantics for further replay constraints and potential future\nattacks. (3) A straightforward regularizer mitigates the 'plasticity-stability'\ntrade-off by aligning model output between new and old tasks. Experiment\nresults demonstrate that AIR can approximate or even exceed the empirical\nperformance upper bounds achieved by Joint Training.", + "We introduce EscherNet, a multi-view conditioned diffusion model for view\nsynthesis. EscherNet learns implicit and generative 3D representations coupled\nwith a specialised camera positional encoding, allowing precise and continuous\nrelative control of the camera transformation between an arbitrary number of\nreference and target views. EscherNet offers exceptional generality,\nflexibility, and scalability in view synthesis -- it can generate more than 100\nconsistent target views simultaneously on a single consumer-grade GPU, despite\nbeing trained with a fixed number of 3 reference views to 3 target views. As a\nresult, EscherNet not only addresses zero-shot novel view synthesis, but also\nnaturally unifies single- and multi-image 3D reconstruction, combining these\ndiverse tasks into a single, cohesive framework. Our extensive experiments\ndemonstrate that EscherNet achieves state-of-the-art performance in multiple\nbenchmarks, even when compared to methods specifically tailored for each\nindividual problem. This remarkable versatility opens up new directions for\ndesigning scalable neural architectures for 3D vision. Project page:\nhttps://kxhit.github.io/EscherNet.", + "Zero-shot image captioning (IC) without well-paired image-text data can be\ndivided into two categories, training-free and text-only-training. Generally,\nthese two types of methods realize zero-shot IC by integrating pretrained\nvision-language models like CLIP for image-text similarity evaluation and a\npre-trained language model (LM) for caption generation. The main difference\nbetween them is whether using a textual corpus to train the LM. Though\nachieving attractive performance w.r.t. some metrics, existing methods often\nexhibit some common drawbacks. Training-free methods tend to produce\nhallucinations, while text-only-training often lose generalization capability.\nTo move forward, in this paper, we propose a novel Memory-Augmented zero-shot\nimage Captioning framework (MeaCap). Specifically, equipped with a textual\nmemory, we introduce a retrieve-then-filter module to get key concepts that are\nhighly related to the image. By deploying our proposed memory-augmented\nvisual-related fusion score in a keywords-to-sentence LM, MeaCap can generate\nconcept-centered captions that keep high consistency with the image with fewer\nhallucinations and more world-knowledge.", + "By deploying our proposed memory-augmented\nvisual-related fusion score in a keywords-to-sentence LM, MeaCap can generate\nconcept-centered captions that keep high consistency with the image with fewer\nhallucinations and more world-knowledge. 
The framework of MeaCap achieves\nstate-of-the-art performance on a series of zero-shot IC settings. Our code is\navailable at https://github.com/joeyz0z/MeaCap.", + "An increasingly common approach for creating photo-realistic digital avatars\nis through the use of volumetric neural fields. The original neural radiance\nfield (NeRF) allowed for impressive novel view synthesis of static heads when\ntrained on a set of multi-view images, and follow-up methods showed that these\nneural representations can be extended to dynamic avatars. Recently, new\nvariants also surpassed the usual drawback of baked-in illumination in neural\nrepresentations, showing that static neural avatars can be relit in any\nenvironment. In this work we simultaneously tackle both the motion and\nillumination problem, proposing a new method for relightable and animatable\nneural heads. Our method builds on a proven dynamic avatar approach based on a\nmixture of volumetric primitives, combined with a recently-proposed lightweight\nhardware setup for relightable neural fields, and includes a novel architecture\nthat allows relighting dynamic neural avatars performing unseen expressions in\nany environment, even with nearfield illumination and viewpoints.", + "Referring image segmentation (RIS) aims to precisely segment referents in\nimages through corresponding natural language expressions, yet relying on\ncost-intensive mask annotations. Weakly supervised RIS thus learns pixel-level\nsemantics from image-text pairs, which is challenging for segmenting\nfine-grained masks. A natural approach to enhancing segmentation precision is\nto empower weakly supervised RIS with the image segmentation foundation model\nSAM. Nevertheless, we observe that simply integrating SAM yields limited\nbenefits and can even lead to performance regression due to the inevitable\nnoise issues and challenges in excessive focus on object parts. In this paper,\nwe present an innovative framework, Point PrompTing (PPT), incorporated with\nthe proposed multi-source curriculum learning strategy to address these\nchallenges. Specifically, the core of PPT is a point generator that not only\nharnesses CLIP's text-image alignment capability and SAM's powerful mask\ngeneration ability but also generates negative point prompts to address the\nnoisy and excessive focus issues inherently and effectively. In addition, we\nintroduce a curriculum learning strategy with object-centric images to help PPT\ngradually learn from simpler yet precise semantic alignment to more complex\nRIS.", + "In addition, we\nintroduce a curriculum learning strategy with object-centric images to help PPT\ngradually learn from simpler yet precise semantic alignment to more complex\nRIS. Experiments demonstrate that our PPT significantly and consistently\noutperforms prior weakly supervised techniques on mIoU by 11.34%, 14.14%, and\n6.97% across RefCOCO, RefCOCO+, and G-Ref, respectively.", + "In this paper, we make the first attempt at achieving cross-modal (i.e.,\nimage-to-events) adaptation for event-based object recognition without\naccessing any labeled source image data owing to privacy and commercial\nissues. Tackling this novel problem is non-trivial due to the novelty of event\ncameras and the distinct modality gap between images and events. In particular,\nas only the source model is available, a hurdle is how to extract the knowledge\nfrom the source model by only using the unlabeled target event data while\nachieving knowledge transfer.
To this end, we propose a novel framework, dubbed\nEventDance for this unsupervised source-free cross-modal adaptation problem.\nImportantly, inspired by event-to-video reconstruction methods, we propose a\nreconstruction-based modality bridging (RMB) module, which reconstructs\nintensity frames from events in a self-supervised manner. This makes it\npossible to build up the surrogate images to extract the knowledge (i.e.,\nlabels) from the source model.", + "This makes it\npossible to build up the surrogate images to extract the knowledge (i.e.,\nlabels) from the source model. We then propose a multi-representation knowledge\nadaptation (MKA) module that transfers the knowledge to target models learning\nevents with multiple representation types for fully exploring the\nspatiotemporal information of events. The two modules connecting the source and\ntarget models are mutually updated so as to achieve the best performance.\nExperiments on three benchmark datasets with two adaption settings show that\nEventDance is on par with prior methods utilizing the source data.", + "In the realm of medical 3D data, such as CT and MRI images, prevalent\nanisotropic resolution is characterized by high intra-slice but diminished\ninter-slice resolution. The lowered resolution between adjacent slices poses\nchallenges, hindering optimal viewing experiences and impeding the development\nof robust downstream analysis algorithms. Various volumetric super-resolution\nalgorithms aim to surmount these challenges, enhancing inter-slice resolution\nand overall 3D medical imaging quality. However, existing approaches confront\ninherent challenges: 1) often tailored to specific upsampling factors, lacking\nflexibility for diverse clinical scenarios; 2) newly generated slices\nfrequently suffer from over-smoothing, degrading fine details, and leading to\ninter-slice inconsistency. In response, this study presents CycleINR, a novel\nenhanced Implicit Neural Representation model for 3D medical data volumetric\nsuper-resolution. Leveraging the continuity of the learned implicit function,\nthe CycleINR model can achieve results with arbitrary up-sampling rates,\neliminating the need for separate training.", + "Leveraging the continuity of the learned implicit function,\nthe CycleINR model can achieve results with arbitrary up-sampling rates,\neliminating the need for separate training. Additionally, we enhance the grid\nsampling in CycleINR with a local attention mechanism and mitigate\nover-smoothing by integrating cycle-consistent loss. We introduce a new metric,\nSlice-wise Noise Level Inconsistency (SNLI), to quantitatively assess\ninter-slice noise level inconsistency. The effectiveness of our approach is\ndemonstrated through image quality evaluations on an in-house dataset and a\ndownstream task analysis on the Medical Segmentation Decathlon liver tumor\ndataset.", + "Pre-trained models with large-scale training data, such as CLIP and Stable\nDiffusion, have demonstrated remarkable performance in various high-level\ncomputer vision tasks such as image understanding and generation from language\ndescriptions. Yet, their potential for low-level tasks such as image\nrestoration remains relatively unexplored. In this paper, we explore such\nmodels to enhance image restoration. 
As off-the-shelf features (OSF) from\npre-trained models do not directly serve image restoration, we propose to learn\nan additional lightweight module called Pre-Train-Guided Refinement Module\n(PTG-RM) to refine restoration results of a target restoration network with\nOSF. PTG-RM consists of two components, Pre-Train-Guided Spatial-Varying\nEnhancement (PTG-SVE), and Pre-Train-Guided Channel-Spatial Attention\n(PTG-CSA). PTG-SVE enables optimal short- and long-range neural operations,\nwhile PTG-CSA enhances spatial-channel attention for restoration-related\nlearning.", + "PTG-SVE enables optimal short- and long-range neural operations,\nwhile PTG-CSA enhances spatial-channel attention for restoration-related\nlearning. Extensive experiments demonstrate that PTG-RM, with its compact size\n($<$1M parameters), effectively enhances restoration performance of various\nmodels across different tasks, including low-light enhancement, deraining,\ndeblurring, and denoising.", + "Super-resolution (SR) and image generation are important tasks in computer\nvision and are widely adopted in real-world applications. Most existing\nmethods, however, generate images only at fixed-scale magnification and suffer\nfrom over-smoothing and artifacts. Additionally, they do not offer enough\ndiversity of output images or image consistency at different scales. The most\nrelevant work applied Implicit Neural Representation (INR) to the denoising\ndiffusion model to obtain continuous-resolution yet diverse and high-quality SR\nresults. Since this model operates in the image space, producing\nhigher-resolution images requires more memory and inference time, and it also\ndoes not maintain scale-specific consistency. We propose\na novel pipeline that can super-resolve an input image or generate a novel\nimage from random noise at arbitrary scales. The method consists of a\npretrained auto-encoder, a latent diffusion model, and an implicit neural\ndecoder, together with their learning strategies. The proposed method adopts diffusion\nprocesses in a latent space, making it efficient while remaining aligned with the\noutput image space decoded by MLPs at arbitrary scales.", + "The method consists of a\npretrained auto-encoder, a latent diffusion model, and an implicit neural\ndecoder, together with their learning strategies. The proposed method adopts diffusion\nprocesses in a latent space, making it efficient while remaining aligned with the\noutput image space decoded by MLPs at arbitrary scales. More specifically, our\narbitrary-scale decoder is composed of the symmetric decoder (without up-scaling)\nfrom the pretrained auto-encoder and a Local Implicit Image Function (LIIF) in\nseries. The latent diffusion process is learnt using the denoising and the\nalignment losses jointly. Errors in output images are backpropagated via the\nfixed decoder, improving the quality of output images. In extensive\nexperiments on multiple public benchmarks for the two tasks, i.e., image\nsuper-resolution and novel image generation at arbitrary scales, the proposed\nmethod outperforms relevant methods in metrics of image quality, diversity and\nscale consistency. It is also significantly better than the relevant prior art in\ninference speed and memory usage.", + "Implicit Neural Representations have gained prominence as a powerful\nframework for capturing complex data modalities, encompassing a wide range from\n3D shapes to images and audio.
Within the realm of 3D shape representation,\nNeural Signed Distance Functions (SDF) have demonstrated remarkable potential\nin faithfully encoding intricate shape geometry. However, learning SDFs from 3D\npoint clouds in the absence of ground truth supervision remains a very\nchallenging task. In this paper, we propose a method to infer occupancy fields\ninstead of SDFs as they are easier to learn from sparse inputs. We leverage a\nmargin-based uncertainty measure to differentially sample from the decision\nboundary of the occupancy function and supervise the sampled boundary points\nusing the input point cloud. We further stabilize the optimization process at\nthe early stages of the training by biasing the occupancy function towards\nminimal entropy fields while maximizing its entropy at the input point cloud.\nThrough extensive experiments and evaluations, we illustrate the efficacy of\nour proposed method, highlighting its capacity to improve implicit shape\ninference with respect to baselines and the state-of-the-art using synthetic\nand real data.", + "We propose a novel method for 3D point cloud action recognition.\nUnderstanding human actions in RGB videos has been widely studied in recent\nyears, however, its 3D point cloud counterpart remains under-explored. This is\nmostly due to the inherent limitation of the point cloud data modality -- lack\nof structure, permutation invariance, and varying number of points -- which\nmakes it difficult to learn a spatio-temporal representation. To address this\nlimitation, we propose the 3DinAction pipeline that first estimates patches\nmoving in time (t-patches) as a key building block, alongside a hierarchical\narchitecture that learns an informative spatio-temporal representation. We show\nthat our method achieves improved performance on existing datasets, including\nDFAUST and IKEA ASM. Code is publicly available at\nhttps://github.com/sitzikbs/3dincaction.", + "Diffusion models have recently revolutionized the field of image synthesis\ndue to their ability to generate photorealistic images. However, one of the\nmajor drawbacks of diffusion models is that the image generation process is\ncostly. A large image-to-image network has to be applied many times to\niteratively refine an image from random noise. While many recent works propose\ntechniques to reduce the number of required steps, they generally treat the\nunderlying denoising network as a black box. In this work, we investigate the\nbehavior of the layers within the network and find that 1) the layers' output\nchanges smoothly over time, 2) the layers show distinct patterns of change, and\n3) the change from step to step is often very small. We hypothesize that many\nlayer computations in the denoising network are redundant. Leveraging this, we\nintroduce block caching, in which we reuse outputs from layer blocks of\nprevious steps to speed up inference. Furthermore, we propose a technique to\nautomatically determine caching schedules based on each block's changes over\ntimesteps.", + "Leveraging this, we\nintroduce block caching, in which we reuse outputs from layer blocks of\nprevious steps to speed up inference. Furthermore, we propose a technique to\nautomatically determine caching schedules based on each block's changes over\ntimesteps. In our experiments, we show through FID, human evaluation and\nqualitative analysis that Block Caching allows to generate images with higher\nvisual quality at the same computational cost. 
We demonstrate this for\ndifferent state-of-the-art models (LDM and EMU) and solvers (DDIM and DPM).", + "Medical generative models, acknowledged for their high-quality sample\ngeneration ability, have accelerated the growth of medical applications.\nHowever, recent works concentrate on separate medical generation models for\ndistinct medical tasks and are restricted to inadequate medical multi-modal\nknowledge, constraining medical comprehensive diagnosis. In this paper, we\npropose MedM2G, a Medical Multi-Modal Generative framework, with the key\ninnovation of aligning, extracting, and generating medical multi-modal data within a unified\nmodel. Extending beyond one or two medical modalities, we efficiently align\nmedical multi-modal data through the central alignment approach in the unified\nspace. Significantly, our framework extracts valuable clinical knowledge by\npreserving the medical visual invariant of each imaging modality, thereby\nenhancing specific medical information for multi-modal generation. By\nconditioning the adaptive cross-guided parameters into the multi-flow diffusion\nframework, our model promotes flexible interactions among medical modalities\nfor generation. MedM2G is the first medical generative model that unifies\nmedical generation tasks of text-to-image, image-to-text, and unified\ngeneration of medical modalities (CT, MRI, X-ray).", + "MedM2G is the first medical generative model that unifies\nmedical generation tasks of text-to-image, image-to-text, and unified\ngeneration of medical modalities (CT, MRI, X-ray). It performs 5 medical\ngeneration tasks across 10 datasets, consistently outperforming various\nstate-of-the-art works.", + "In the field of class incremental learning (CIL), generative replay has\nbecome increasingly prominent as a method to mitigate catastrophic\nforgetting, alongside the continuous improvements in generative models.\nHowever, its application in class incremental object detection (CIOD) has been\nsignificantly limited, primarily due to the complexities of scenes involving\nmultiple labels. In this paper, we propose a novel approach called stable\ndiffusion deep generative replay (SDDGR) for CIOD. Our method utilizes a\ndiffusion-based generative model with pre-trained text-to-image diffusion networks to\ngenerate realistic and diverse synthetic images. SDDGR incorporates an\niterative refinement strategy to produce high-quality images encompassing old\nclasses. Additionally, we adopt an L2 knowledge distillation technique to\nimprove the retention of prior knowledge in synthetic images. Furthermore, our\napproach includes pseudo-labeling for old objects within new task images,\npreventing misclassification as background elements. Extensive experiments on\nthe COCO 2017 dataset demonstrate that SDDGR significantly outperforms existing\nalgorithms, achieving a new state-of-the-art in various CIOD scenarios. The\nsource code will be made available to the public.", + "Reconstructing dynamic objects from monocular videos is a severely\nunderconstrained and challenging problem, and recent work has approached it in\nvarious directions. However, owing to the ill-posed nature of this problem,\nthere has been no solution that can provide consistent, high-quality novel\nviews from camera positions that are significantly different from the training\nviews.
In this work, we introduce Neural Parametric Gaussians (NPGs) to take on\nthis challenge by imposing a two-stage approach: first, we fit a low-rank\nneural deformation model, which is then used as regularization for non-rigid\nreconstruction in the second stage. The first stage learns the object's\ndeformations such that it preserves consistency in novel views. The second\nstage obtains high reconstruction quality by optimizing 3D Gaussians that are\ndriven by the coarse model. To this end, we introduce a local 3D Gaussian\nrepresentation, where temporally shared Gaussians are anchored in and deformed\nby local oriented volumes. The resulting combined model can be rendered as\nradiance fields, resulting in high-quality photo-realistic reconstructions of\nthe non-rigidly deforming objects.", + "The resulting combined model can be rendered as\nradiance fields, resulting in high-quality photo-realistic reconstructions of\nthe non-rigidly deforming objects. We demonstrate that NPGs achieve superior\nresults compared to previous works, especially in challenging scenarios with\nfew multi-view cues.", + "Deep learning-based monocular depth estimation (MDE), extensively applied in\nautonomous driving, is known to be vulnerable to adversarial attacks. Previous\nphysical attacks against MDE models rely on 2D adversarial patches, so they\nonly affect a small, localized region in the MDE map but fail under various\nviewpoints. To address these limitations, we propose 3D Depth Fool\n(3D$^2$Fool), the first 3D texture-based adversarial attack against MDE models.\n3D$^2$Fool is specifically optimized to generate 3D adversarial textures\nagnostic to model types of vehicles and to have improved robustness in bad\nweather conditions, such as rain and fog. Experimental results validate the\nsuperior performance of our 3D$^2$Fool across various scenarios, including\nvehicles, MDE models, weather conditions, and viewpoints. Real-world\nexperiments with printed 3D textures on physical vehicle models further\ndemonstrate that our 3D$^2$Fool can cause an MDE error of over 10 meters.", + "While fine-tuning is a de facto standard method for training deep neural\nnetworks, it still suffers from overfitting when using small target datasets.\nPrevious methods improve fine-tuning performance by maintaining knowledge of\nthe source datasets or introducing regularization terms such as contrastive\nloss. However, these methods require auxiliary source information (e.g., source\nlabels or datasets) or heavy additional computations. In this paper, we propose\na simple method called adaptive random feature regularization (AdaRand).\nAdaRand helps the feature extractors of training models to adaptively change\nthe distribution of feature vectors for downstream classification tasks without\nauxiliary source information and with reasonable computation costs. To this\nend, AdaRand minimizes the gap between feature vectors and random reference\nvectors that are sampled from class conditional Gaussian distributions.\nFurthermore, AdaRand dynamically updates the conditional distribution to follow\nthe currently updated feature extractors and balance the distance between\nclasses in feature spaces. Our experiments show that AdaRand outperforms other\nfine-tuning regularization methods that require auxiliary source information\nand heavy computation costs.
This paper\nintroduces MPerceiver: a novel multimodal prompt learning approach that\nharnesses Stable Diffusion (SD) priors to enhance adaptiveness,\ngeneralizability and fidelity for all-in-one image restoration. Specifically,\nwe develop a dual-branch module to master two types of SD prompts: textual for\nholistic representation and visual for multiscale detail representation. Both\nprompts are dynamically adjusted by degradation predictions from the CLIP image\nencoder, enabling adaptive responses to diverse unknown degradations. Moreover,\na plug-in detail refinement module improves restoration fidelity via direct\nencoder-to-decoder information transformation. To assess our method, MPerceiver\nis trained on 9 tasks for all-in-one IR and outperforms state-of-the-art\ntask-specific methods across most tasks. Post multitask pre-training,\nMPerceiver attains a generalized representation in low-level vision, exhibiting\nremarkable zero-shot and few-shot capabilities in unseen tasks. Extensive\nexperiments on 16 IR tasks underscore the superiority of MPerceiver in terms of\nadaptiveness, generalizability and fidelity.", + "Event cameras have recently been shown beneficial for practical vision tasks,\nsuch as action recognition, thanks to their high temporal resolution, power\nefficiency, and reduced privacy concerns. However, current research is hindered\nby 1) the difficulty in processing events because of their prolonged duration\nand dynamic actions with complex and ambiguous semantics and 2) the redundant\naction depiction of the event frame representation with fixed stacks. We find\nlanguage naturally conveys abundant semantic information, rendering it\nstunningly superior in reducing semantic uncertainty. In light of this, we\npropose ExACT, a novel approach that, for the first time, tackles event-based\naction recognition from a cross-modal conceptualizing perspective. Our ExACT\nbrings two technical contributions. Firstly, we propose an adaptive\nfine-grained event (AFE) representation to adaptively filter out the repeated\nevents for the stationary objects while preserving dynamic ones. This subtly\nenhances the performance of ExACT without extra computational cost. Then, we\npropose a conceptual reasoning-based uncertainty estimation module, which\nsimulates the recognition process to enrich the semantic representation.", + "This subtly\nenhances the performance of ExACT without extra computational cost. Then, we\npropose a conceptual reasoning-based uncertainty estimation module, which\nsimulates the recognition process to enrich the semantic representation. In\nparticular, conceptual reasoning builds the temporal relation based on the\naction semantics, and uncertainty estimation tackles the semantic uncertainty\nof actions based on the distributional representation. Experiments show that\nour ExACT achieves superior recognition accuracy of 94.83%(+2.23%),\n90.10%(+37.47%) and 67.24% on PAF, HARDVS and our SeAct datasets respectively.", + "Images captured under sub-optimal illumination conditions may contain both\nover- and under-exposures. Current approaches mainly focus on adjusting image\nbrightness, which may exacerbate the color tone distortion in under-exposed\nareas and fail to restore accurate colors in over-exposed regions. 
We observe\nthat over- and under-exposed regions display opposite color tone distribution\nshifts with respect to each other, which may not be easily normalized in joint\nmodeling as they usually do not have ``normal-exposed'' regions/pixels as\nreference. In this paper, we propose a novel method to enhance images with both\nover- and under-exposures by learning to estimate and correct such color\nshifts. Specifically, we first derive the color feature maps of the brightened\nand darkened versions of the input image via a UNet-based network, followed by\na pseudo-normal feature generator to produce pseudo-normal color feature maps.\nWe then propose a novel COlor Shift Estimation (COSE) module to estimate the\ncolor shifts between the derived brightened (or darkened) color feature maps\nand the pseudo-normal color feature maps.", + "We then propose a novel COlor Shift Estimation (COSE) module to estimate the\ncolor shifts between the derived brightened (or darkened) color feature maps\nand the pseudo-normal color feature maps. The COSE module corrects the\nestimated color shifts of the over- and under-exposed regions separately. We\nfurther propose a novel COlor MOdulation (COMO) module to modulate the\nseparately corrected colors in the over- and under-exposed regions to produce\nthe enhanced image. Comprehensive experiments show that our method outperforms\nexisting approaches. Project webpage: https://github.com/yiyulics/CSEC.", + "Visual scenes are naturally organized in a hierarchy, where a coarse semantic\nis recursively comprised of several fine details. Exploring such a visual\nhierarchy is crucial to recognize the complex relations of visual elements,\nleading to a comprehensive scene understanding. In this paper, we propose a\nVisual Hierarchy Mapper (Hi-Mapper), a novel approach for enhancing the\nstructured understanding of the pre-trained Deep Neural Networks (DNNs).\nHi-Mapper investigates the hierarchical organization of the visual scene by 1)\npre-defining a hierarchy tree through the encapsulation of probability\ndensities; and 2) learning the hierarchical relations in hyperbolic space with\na novel hierarchical contrastive loss. The pre-defined hierarchy tree\nrecursively interacts with the visual features of the pre-trained DNNs through\nhierarchy decomposition and encoding procedures, thereby effectively\nidentifying the visual hierarchy and enhancing the recognition of an entire\nscene. Extensive experiments demonstrate that Hi-Mapper significantly enhances\nthe representation capability of DNNs, leading to an improved performance on\nvarious tasks, including image classification and dense prediction tasks.", + "Monocular depth estimation is a fundamental computer vision task. Recovering\n3D depth from a single image is geometrically ill-posed and requires scene\nunderstanding, so it is not surprising that the rise of deep learning has led\nto a breakthrough. The impressive progress of monocular depth estimators has\nmirrored the growth in model capacity, from relatively modest CNNs to large\nTransformer architectures. Still, monocular depth estimators tend to struggle\nwhen presented with images with unfamiliar content and layout, since their\nknowledge of the visual world is restricted by the data seen during training,\nand challenged by zero-shot generalization to new domains. This motivates us to\nexplore whether the extensive priors captured in recent generative diffusion\nmodels can enable better, more generalizable depth estimation. 
We introduce\nMarigold, a method for affine-invariant monocular depth estimation that is\nderived from Stable Diffusion and retains its rich prior knowledge. The\nestimator can be fine-tuned in a couple of days on a single GPU using only\nsynthetic training data. It delivers state-of-the-art performance across a wide\nrange of datasets, including over 20% performance gains in specific cases.", + "The\nestimator can be fine-tuned in a couple of days on a single GPU using only\nsynthetic training data. It delivers state-of-the-art performance across a wide\nrange of datasets, including over 20% performance gains in specific cases.\nProject page: https://marigoldmonodepth.github.io.", + "To better understand the behavior of image classifiers, it is useful to\nvisualize the contribution of individual pixels to the model prediction. In\nthis study, we propose a method, MoXI ($\\textbf{Mo}$del e$\\textbf{X}$planation\nby $\\textbf{I}$nteractions), that efficiently and accurately identifies a group\nof pixels with high prediction confidence. The proposed method employs\ngame-theoretic concepts, Shapley values and interactions, taking into account\nthe effects of individual pixels and the cooperative influence of pixels on\nmodel confidence. Theoretical analysis and experiments demonstrate that our\nmethod identifies pixels that are highly contributing to the model\noutputs better than the widely-used visualizations by Grad-CAM, Attention rollout, and\nShapley value. While prior studies have suffered from the exponential\ncomputational cost of computing Shapley values and interactions, we\nshow that this can be reduced to quadratic cost for our task. The code is\navailable at https://github.com/KosukeSumiyasu/MoXI.", + "Recently, 3D anomaly detection, a crucial problem involving fine-grained\ngeometry discrimination, has been getting more attention. However, the lack of\nabundant real 3D anomaly data limits the scalability of current models. To\nenable scalable anomaly data collection, we propose a 3D anomaly synthesis\npipeline to adapt existing large-scale 3D models for 3D anomaly detection.\nSpecifically, we construct a synthetic dataset, i.e., Anomaly-ShapeNet, based on\nShapeNet. Anomaly-ShapeNet consists of 1600 point cloud samples under 40\ncategories, which provides a rich and varied collection of data, enabling\nefficient training and enhancing adaptability to industrial scenarios.\nMeanwhile, to enable scalable representation learning for 3D anomaly\nlocalization, we propose a self-supervised method, i.e., Iterative Mask\nReconstruction Network (IMRNet). During training, we propose a geometry-aware\nsample module to preserve potentially anomalous local regions during point\ncloud down-sampling. Then, we randomly mask out point patches and send the\nvisible patches to a transformer for reconstruction-based self-supervision.", + "During training, we propose a geometry-aware\nsample module to preserve potentially anomalous local regions during point\ncloud down-sampling. Then, we randomly mask out point patches and send the\nvisible patches to a transformer for reconstruction-based self-supervision.\nDuring testing, the point cloud repeatedly goes through the Mask Reconstruction\nNetwork, with each iteration's output becoming the next input. By merging and\ncontrasting the final reconstructed point cloud with the initial input, our\nmethod successfully locates anomalies.
Experiments show that IMRNet outperforms\nprevious state-of-the-art methods, achieving 66.1% in I-AUC on Anomaly-ShapeNet\ndataset and 72.5% in I-AUC on Real3D-AD dataset. Our dataset will be released\nat https://github.com/Chopper-233/Anomaly-ShapeNet", + "Understanding how the surrounding environment changes is crucial for\nperforming downstream tasks safely and reliably in autonomous driving\napplications. Recent occupancy estimation techniques using only camera images\nas input can provide dense occupancy representations of large-scale scenes\nbased on the current observation. However, they are mostly limited to\nrepresenting the current 3D space and do not consider the future state of\nsurrounding objects along the time axis. To extend camera-only occupancy\nestimation into spatiotemporal prediction, we propose Cam4DOcc, a new benchmark\nfor camera-only 4D occupancy forecasting, evaluating the surrounding scene\nchanges in a near future. We build our benchmark based on multiple publicly\navailable datasets, including nuScenes, nuScenes-Occupancy, and Lyft-Level5,\nwhich provides sequential occupancy states of general movable and static\nobjects, as well as their 3D backward centripetal flow.", + "We build our benchmark based on multiple publicly\navailable datasets, including nuScenes, nuScenes-Occupancy, and Lyft-Level5,\nwhich provides sequential occupancy states of general movable and static\nobjects, as well as their 3D backward centripetal flow. To establish this\nbenchmark for future research with comprehensive comparisons, we introduce four\nbaseline types from diverse camera-based perception and prediction\nimplementations, including a static-world occupancy model, voxelization of\npoint cloud prediction, 2D-3D instance-based prediction, and our proposed novel\nend-to-end 4D occupancy forecasting network. Furthermore, the standardized\nevaluation protocol for preset multiple tasks is also provided to compare the\nperformance of all the proposed baselines on present and future occupancy\nestimation with respect to objects of interest in autonomous driving scenarios.\nThe dataset and our implementation of all four baselines in the proposed\nCam4DOcc benchmark will be released here: https://github.com/haomo-ai/Cam4DOcc.", + "We introduce GoMAvatar, a novel approach for real-time, memory-efficient,\nhigh-quality animatable human modeling. GoMAvatar takes as input a single\nmonocular video to create a digital avatar capable of re-articulation in new\nposes and real-time rendering from novel viewpoints, while seamlessly\nintegrating with rasterization-based graphics pipelines. Central to our method\nis the Gaussians-on-Mesh representation, a hybrid 3D model combining rendering\nquality and speed of Gaussian splatting with geometry modeling and\ncompatibility of deformable meshes. We assess GoMAvatar on ZJU-MoCap data and\nvarious YouTube videos. GoMAvatar matches or surpasses current monocular human\nmodeling algorithms in rendering quality and significantly outperforms them in\ncomputational efficiency (43 FPS) while being memory-efficient (3.63 MB per\nsubject).", + "Our understanding of the generalization capabilities of neural networks (NNs)\nis still incomplete. Prevailing explanations are based on implicit biases of\ngradient descent (GD) but they cannot account for the capabilities of models\nfrom gradient-free methods nor the simplicity bias recently observed in\nuntrained networks. This paper seeks other sources of generalization in NNs.\n Findings. 
To understand the inductive biases provided by architectures\nindependently from GD, we examine untrained, random-weight networks. Even\nsimple MLPs show strong inductive biases: uniform sampling in weight space\nyields a very biased distribution of functions in terms of complexity. But\nunlike common wisdom, NNs do not have an inherent \"simplicity bias\". This\nproperty depends on components such as ReLUs, residual connections, and layer\nnormalizations. Alternative architectures can be built with a bias for any\nlevel of complexity. Transformers also inherit all these properties from their\nbuilding blocks.\n Implications. We provide a fresh explanation for the success of deep learning\nindependent from gradient-based training. It points at promising avenues for\ncontrolling the solutions implemented by trained models.", + "Realistic 3D human generation from text prompts is a desirable yet\nchallenging task. Existing methods optimize 3D representations like mesh or\nneural fields via score distillation sampling (SDS), which suffers from\ninadequate fine details or excessive training time. In this paper, we propose\nan efficient yet effective framework, HumanGaussian, that generates\nhigh-quality 3D humans with fine-grained geometry and realistic appearance. Our\nkey insight is that 3D Gaussian Splatting is an efficient renderer with\nperiodic Gaussian shrinkage or growing, where such adaptive density control can\nbe naturally guided by intrinsic human structures. Specifically, 1) we first\npropose a Structure-Aware SDS that simultaneously optimizes human appearance\nand geometry. The multi-modal score function from both RGB and depth space is\nleveraged to distill the Gaussian densification and pruning process. 2)\nMoreover, we devise an Annealed Negative Prompt Guidance by decomposing SDS\ninto a noisier generative score and a cleaner classifier score, which well\naddresses the over-saturation issue.", + "2)\nMoreover, we devise an Annealed Negative Prompt Guidance by decomposing SDS\ninto a noisier generative score and a cleaner classifier score, which well\naddresses the over-saturation issue. The floating artifacts are further\neliminated based on Gaussian size in a prune-only phase to enhance generation\nsmoothness. Extensive experiments demonstrate the superior efficiency and\ncompetitive quality of our framework, rendering vivid 3D humans under diverse\nscenarios. Project Page: https://alvinliu0.github.io/projects/HumanGaussian", + "We present CosmicMan, a text-to-image foundation model specialized for\ngenerating high-fidelity human images. Unlike current general-purpose\nfoundation models that are stuck in the dilemma of inferior quality and\ntext-image misalignment for humans, CosmicMan enables generating\nphoto-realistic human images with meticulous appearance, reasonable structure,\nand precise text-image alignment with detailed dense descriptions. At the heart\nof CosmicMan's success are the new reflections and perspectives on data and\nmodels: (1) We found that data quality and a scalable data production flow are\nessential for the final results from trained models. Hence, we propose a new\ndata production paradigm, Annotate Anyone, which serves as a perpetual data\nflywheel to produce high-quality data with accurate yet cost-effective\nannotations over time. 
Based on this, we constructed a large-scale dataset,\nCosmicMan-HQ 1.0, with 6 Million high-quality real-world human images in a mean\nresolution of 1488x1255, and attached with precise text annotations deriving\nfrom 115 Million attributes in diverse granularities.", + "Based on this, we constructed a large-scale dataset,\nCosmicMan-HQ 1.0, with 6 Million high-quality real-world human images in a mean\nresolution of 1488x1255, and attached with precise text annotations deriving\nfrom 115 Million attributes in diverse granularities. (2) We argue that a\ntext-to-image foundation model specialized for humans must be pragmatic -- easy\nto integrate into down-streaming tasks while effective in producing\nhigh-quality human images. Hence, we propose to model the relationship between\ndense text descriptions and image pixels in a decomposed manner, and present\nDecomposed-Attention-Refocusing (Daring) training framework. It seamlessly\ndecomposes the cross-attention features in existing text-to-image diffusion\nmodel, and enforces attention refocusing without adding extra modules. Through\nDaring, we show that explicitly discretizing continuous text space into several\nbasic groups that align with human body structure is the key to tackling the\nmisalignment problem in a breeze.", + "Sign Language Translation (SLT) is a challenging task that aims to translate\nsign videos into spoken language. Inspired by the strong translation\ncapabilities of large language models (LLMs) that are trained on extensive\nmultilingual text corpora, we aim to harness off-the-shelf LLMs to handle SLT.\nIn this paper, we regularize the sign videos to embody linguistic\ncharacteristics of spoken language, and propose a novel SignLLM framework to\ntransform sign videos into a language-like representation for improved\nreadability by off-the-shelf LLMs. SignLLM comprises two key modules: (1) The\nVector-Quantized Visual Sign module converts sign videos into a sequence of\ndiscrete character-level sign tokens, and (2) the Codebook Reconstruction and\nAlignment module converts these character-level tokens into word-level sign\nrepresentations using an optimal transport formulation. A sign-text alignment\nloss further bridges the gap between sign and text tokens, enhancing semantic\ncompatibility. We achieve state-of-the-art gloss-free results on two\nwidely-used SLT benchmarks.", + "No-reference point cloud quality assessment (NR-PCQA) aims to automatically\nevaluate the perceptual quality of distorted point clouds without available\nreference, which have achieved tremendous improvements due to the utilization\nof deep neural networks. However, learning-based NR-PCQA methods suffer from\nthe scarcity of labeled data and usually perform suboptimally in terms of\ngeneralization. To solve the problem, we propose a novel contrastive\npre-training framework tailored for PCQA (CoPA), which enables the pre-trained\nmodel to learn quality-aware representations from unlabeled data. To obtain\nanchors in the representation space, we project point clouds with different\ndistortions into images and randomly mix their local patches to form mixed\nimages with multiple distortions. Utilizing the generated anchors, we constrain\nthe pre-training process via a quality-aware contrastive loss following the\nphilosophy that perceptual quality is closely related to both content and\ndistortion. 
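The CoPA entry above builds its pre-training around a quality-aware contrastive loss over mixed-distortion anchors. As background, a generic InfoNCE-style loss of the kind such methods typically start from is sketched below; the positive-pairing rule and temperature are assumptions, not the paper's exact objective.

```python
# Generic InfoNCE-style contrastive loss: row i of the two feature batches is
# treated as a positive pair, all other rows as negatives.
import torch
import torch.nn.functional as F

def info_nce(anchor_feats, positive_feats, temperature=0.1):
    """anchor_feats, positive_feats: (N, D) embeddings; row i of each is a positive pair."""
    a = F.normalize(anchor_feats, dim=-1)
    p = F.normalize(positive_feats, dim=-1)
    logits = a @ p.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)   # diagonal entries are the positives

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```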
Furthermore, in the model fine-tuning stage, we propose a\nsemantic-guided multi-view fusion module to effectively integrate the features\nof projected images from multiple perspectives. Extensive experiments show that\nour method outperforms the state-of-the-art PCQA methods on popular benchmarks.", + "Furthermore, in the model fine-tuning stage, we propose a\nsemantic-guided multi-view fusion module to effectively integrate the features\nof projected images from multiple perspectives. Extensive experiments show that\nour method outperforms the state-of-the-art PCQA methods on popular benchmarks.\nFurther investigations demonstrate that CoPA can also benefit existing\nlearning-based PCQA models.", + "We propose a practical approach to JPEG image decoding, utilizing a local\nimplicit neural representation with continuous cosine formulation. The JPEG\nalgorithm significantly quantizes discrete cosine transform (DCT) spectra to\nachieve a high compression rate, inevitably resulting in quality degradation\nwhile encoding an image. We have designed a continuous cosine spectrum\nestimator to address the quality degradation issue that restores the distorted\nspectrum. By leveraging local DCT formulations, our network has the privilege\nto exploit dequantization and upsampling simultaneously. Our proposed model\nenables decoding compressed images directly across different quality factors\nusing a single pre-trained model without relying on a conventional JPEG\ndecoder. As a result, our proposed network achieves state-of-the-art\nperformance in flexible color image JPEG artifact removal tasks. Our source\ncode is available at https://github.com/WooKyoungHan/JDEC.", + "Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a\nnew target domain by actively selecting a limited number of target data to\nannotate. This setting neglects the more practical scenario where training data\nare collected from multiple sources. This motivates us to target a new and\nchallenging setting of knowledge transfer that extends ADA from a single source\ndomain to multiple source domains, termed Multi-source Active Domain Adaptation\n(MADA). Not surprisingly, we find that most traditional ADA methods cannot work\ndirectly in such a setting, mainly due to the excessive domain gap introduced\nby all the source domains and thus their uncertainty-aware sample selection can\neasily become miscalibrated under the multi-domain shifts. Considering this, we\npropose a Dynamic integrated uncertainty valuation framework (Detective) that\ncomprehensively considers the domain shift between multi-source domains and\ntarget domain to detect the informative target samples. Specifically, the Detective\nleverages a dynamic Domain Adaptation (DA) model that learns how to adapt the\nmodel's parameters to fit the union of multi-source domains. This enables an\napproximate single-source domain modeling by the dynamic model.", + "Specifically, the Detective\nleverages a dynamic Domain Adaptation (DA) model that learns how to adapt the\nmodel's parameters to fit the union of multi-source domains. This enables an\napproximate single-source domain modeling by the dynamic model. We then\ncomprehensively measure both domain uncertainty and predictive uncertainty in\nthe target domain to detect informative target samples using evidential deep\nlearning, thereby mitigating uncertainty miscalibration. Furthermore, we\nintroduce a contextual diversity-aware calculator to enhance the diversity of\nthe selected samples.
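The Detective framework above scores target samples with evidential deep learning. A minimal sketch of the standard Dirichlet-based uncertainty decomposition it refers to follows; the softplus evidence head and the particular epistemic/aleatoric summaries are common choices and may differ from the paper's formulation.

```python
# Sketch of Dirichlet-based evidential uncertainty: interpret network outputs
# as non-negative evidence, form Dirichlet concentrations, and read off
# expected probabilities plus epistemic/aleatoric uncertainty summaries.
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """logits: (N, K) raw network outputs interpreted as evidence."""
    evidence = F.softplus(logits)                 # non-negative evidence per class
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                      # expected class probabilities
    epistemic = logits.size(-1) / strength.squeeze(-1)          # K / sum(alpha): high when evidence is scarce
    aleatoric = -(probs * probs.clamp_min(1e-8).log()).sum(-1)  # entropy of the expected probabilities
    return probs, epistemic, aleatoric

probs, epi, alea = evidential_uncertainty(torch.randn(4, 10))
```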
Experiments demonstrate that our solution outperforms\nexisting methods by a considerable margin on three domain adaptation\nbenchmarks.", + "3D instance segmentation (3DIS) is a crucial task, but point-level\nannotations are tedious in fully supervised settings. Thus, using bounding\nboxes (bboxes) as annotations has shown great potential. The current mainstream\napproach is a two-step process, involving the generation of pseudo-labels from\nbox annotations and the training of a 3DIS network with the pseudo-labels.\nHowever, due to the presence of intersections among bboxes, not every point has\na determined instance label, especially in overlapping areas. To generate\nhigher quality pseudo-labels and achieve more precise weakly supervised 3DIS\nresults, we propose the Box-Supervised Simulation-assisted Mean Teacher for 3D\nInstance Segmentation (BSNet), which devises a novel pseudo-labeler called\nSimulation-assisted Transformer. The labeler consists of two main components.\nThe first is Simulation-assisted Mean Teacher, which introduces Mean Teacher\nfor the first time in this task and constructs simulated samples to assist the\nlabeler in acquiring prior knowledge about overlapping areas. To better model\nlocal-global structure, we also propose Local-Global Aware Attention as the\ndecoder for teacher and student labelers.", + "To better model\nlocal-global structure, we also propose Local-Global Aware Attention as the\ndecoder for teacher and student labelers. Extensive experiments conducted on\nthe ScanNetV2 and S3DIS datasets verify the superiority of our designs. Code is\navailable at\n\\href{https://github.com/peoplelu/BSNet}{https://github.com/peoplelu/BSNet}.", + "3D object generation has undergone significant advancements, yielding\nhigh-quality results. However, current methods fall short of achieving precise user control,\noften yielding results that do not align with user expectations, thus limiting\ntheir applicability. User-envisioning 3D object generation faces significant\nchallenges in realizing its concepts using current generative models due to\nlimited interaction capabilities. Existing methods mainly offer two approaches:\n(i) interpreting textual instructions with constrained controllability, or (ii)\nreconstructing 3D objects from 2D images. Both of them limit customization to\nthe confines of the 2D reference and potentially introduce undesirable\nartifacts during the 3D lifting process, restricting the scope for direct and\nversatile 3D modifications. In this work, we introduce Interactive3D, an\ninnovative framework for interactive 3D generation that grants users precise\ncontrol over the generative process through extensive 3D interaction\ncapabilities. Interactive3D is constructed in two cascading stages, utilizing\ndistinct 3D representations.", + "In this work, we introduce Interactive3D, an\ninnovative framework for interactive 3D generation that grants users precise\ncontrol over the generative process through extensive 3D interaction\ncapabilities. Interactive3D is constructed in two cascading stages, utilizing\ndistinct 3D representations. The first stage employs Gaussian Splatting for\ndirect user interaction, allowing modifications and guidance of the generative\ndirection at any intermediate step through (i) Adding and Removing components,\n(ii) Deformable and Rigid Dragging, (iii) Geometric Transformations, and (iv)\nSemantic Editing. Subsequently, the Gaussian splats are transformed into\nInstantNGP.
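The BSNet entry above introduces a Mean Teacher pseudo-labeler. In such schemes the teacher is conventionally an exponential moving average (EMA) of the student; a minimal sketch of that update follows, with the momentum value chosen arbitrarily.

```python
# Mean Teacher in its generic form: the teacher's weights track an exponential
# moving average of the student's weights after each optimization step.
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

student = nn.Linear(16, 4)
teacher = copy.deepcopy(student)   # teacher starts as a copy, then tracks the EMA
# ... after each optimizer step on the student:
ema_update(teacher, student)
```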
We introduce a novel (v) Interactive Hash Refinement module to\nfurther add details and extract the geometry in the second stage. Our\nexperiments demonstrate that Interactive3D markedly improves the\ncontrollability and quality of 3D generation. Our project webpage is available\nat \\url{https://interactive-3d.github.io/}.", + "Vision Transformer (ViT) has emerged as a prominent architecture for various\ncomputer vision tasks. In ViT, we divide the input image into patch tokens and\nprocess them through a stack of self attention blocks. However, unlike\nConvolutional Neural Networks (CNN), ViTs simple architecture has no\ninformative inductive bias (e.g., locality,etc. ). Due to this, ViT requires a\nlarge amount of data for pre-training. Various data efficient approaches (DeiT)\nhave been proposed to train ViT on balanced datasets effectively. However,\nlimited literature discusses the use of ViT for datasets with long-tailed\nimbalances. In this work, we introduce DeiT-LT to tackle the problem of\ntraining ViTs from scratch on long-tailed datasets. In DeiT-LT, we introduce an\nefficient and effective way of distillation from CNN via distillation DIST\ntoken by using out-of-distribution images and re-weighting the distillation\nloss to enhance focus on tail classes. This leads to the learning of local\nCNN-like features in early ViT blocks, improving generalization for tail\nclasses.", + "This leads to the learning of local\nCNN-like features in early ViT blocks, improving generalization for tail\nclasses. Further, to mitigate overfitting, we propose distilling from a flat\nCNN teacher, which leads to learning low-rank generalizable features for DIST\ntokens across all ViT blocks. With the proposed DeiT-LT scheme, the\ndistillation DIST token becomes an expert on the tail classes, and the\nclassifier CLS token becomes an expert on the head classes. The experts help to\neffectively learn features corresponding to both the majority and minority\nclasses using a distinct set of tokens within the same ViT architecture. We\nshow the effectiveness of DeiT-LT for training ViT from scratch on datasets\nranging from small-scale CIFAR-10 LT to large-scale iNaturalist-2018.", + "Recent advancements in Spatial Transcriptomics (ST) technology have\nfacilitated detailed gene expression analysis within tissue contexts. However,\nthe high costs and methodological limitations of ST necessitate a more robust\npredictive model. In response, this paper introduces TRIPLEX, a novel deep\nlearning framework designed to predict spatial gene expression from Whole Slide\nImages (WSIs). TRIPLEX uniquely harnesses multi-resolution features, capturing\ncellular morphology at individual spots, the local context around these spots,\nand the global tissue organization. By integrating these features through an\neffective fusion strategy, TRIPLEX achieves accurate gene expression\nprediction. 
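DeiT-LT, described above, re-weights the distillation loss from the CNN teacher so that tail classes receive more attention. A rough sketch of one plausible form of such a loss follows; the inverse-frequency weighting and the hard-label (DeiT-style) distillation target are assumptions, not the paper's exact formulation.

```python
# Class-frequency re-weighted hard distillation: rare (tail) classes get larger
# weights so the student's DIST head focuses on them.
import torch
import torch.nn.functional as F

def reweighted_distill_loss(student_logits, teacher_logits, targets, class_counts):
    """Distill from the teacher's hard predictions, up-weighted for rare classes."""
    teacher_pred = teacher_logits.argmax(dim=-1)                 # hard teacher labels
    weights = (1.0 / class_counts.float()).to(student_logits.device)
    weights = weights / weights.sum() * len(class_counts)        # normalize around 1
    per_sample = F.cross_entropy(student_logits, teacher_pred, reduction="none")
    return (weights[targets] * per_sample).mean()

loss = reweighted_distill_loss(
    torch.randn(8, 10), torch.randn(8, 10),
    torch.randint(0, 10, (8,)), torch.randint(1, 100, (10,)))
```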
Our comprehensive benchmark study, conducted on three public ST\ndatasets and supplemented with Visium data from 10X Genomics, demonstrates that\nTRIPLEX outperforms current state-of-the-art models in Mean Squared Error\n(MSE), Mean Absolute Error (MAE), and Pearson Correlation Coefficient (PCC).\nThe model's predictions align closely with ground truth gene expression\nprofiles and tumor annotations, underscoring TRIPLEX's potential in advancing\ncancer diagnosis and treatment.", + "Modeling and visualizing relationships between tasks or datasets is an\nimportant step towards solving various meta-tasks such as dataset discovery,\nmulti-tasking, and transfer learning. However, many relationships, such as\ncontainment and transferability, are naturally asymmetric and current\napproaches for representation and visualization (e.g., t-SNE) do not readily\nsupport this. We propose Task2Box, an approach to represent tasks using box\nembeddings -- axis-aligned hyperrectangles in low dimensional spaces -- that\ncan capture asymmetric relationships between them through volumetric overlaps.\nWe show that Task2Box accurately predicts unseen hierarchical relationships\nbetween nodes in ImageNet and iNaturalist datasets, as well as transferability\nbetween tasks in the Taskonomy benchmark. We also show that box embeddings\nestimated from task representations (e.g., CLIP, Task2Vec, or attribute based)\ncan be used to predict relationships between unseen tasks more accurately than\nclassifiers trained on the same representations, as well as handcrafted\nasymmetric distances (e.g., KL divergence). This suggests that low-dimensional\nbox embeddings can effectively capture these task relationships and have the\nadded advantage of being interpretable.", + "This suggests that low-dimensional\nbox embeddings can effectively capture these task relationships and have the\nadded advantage of being interpretable. We use the approach to visualize\nrelationships among publicly available image classification datasets on popular\ndataset hosting platform called Hugging Face.", + "In this paper, we present a novel indoor 3D reconstruction method with\noccluded surface completion, given a sequence of depth readings. Prior\nstate-of-the-art (SOTA) methods only focus on the reconstruction of the visible\nareas in a scene, neglecting the invisible areas due to the occlusions, e.g.,\nthe contact surface between furniture, occluded wall and floor. Our method\ntackles the task of completing the occluded scene surfaces, resulting in a\ncomplete 3D scene mesh. The core idea of our method is learning 3D geometry\nprior from various complete scenes to infer the occluded geometry of an unseen\nscene from solely depth measurements. We design a coarse-fine hierarchical\noctree representation coupled with a dual-decoder architecture, i.e.,\nGeo-decoder and 3D Inpainter, which jointly reconstructs the complete 3D scene\ngeometry. The Geo-decoder with detailed representation at fine levels is\noptimized online for each scene to reconstruct visible surfaces. The 3D\nInpainter with abstract representation at coarse levels is trained offline\nusing various scenes to complete occluded surfaces.", + "The Geo-decoder with detailed representation at fine levels is\noptimized online for each scene to reconstruct visible surfaces. The 3D\nInpainter with abstract representation at coarse levels is trained offline\nusing various scenes to complete occluded surfaces. 
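Task2Box, summarized above, represents tasks as axis-aligned boxes so that asymmetric relations such as containment show up as asymmetric overlap. A minimal sketch of such an overlap score is shown below; the exact scoring function used in the paper may differ.

```python
# Asymmetric box-overlap score: fraction of box A's volume that lies inside
# box B. Note score(A, B) != score(B, A) in general.
import torch

def box_overlap_score(lo_a, hi_a, lo_b, hi_b):
    """lo_*, hi_*: (..., D) lower/upper corners of axis-aligned boxes."""
    inter = (torch.minimum(hi_a, hi_b) - torch.maximum(lo_a, lo_b)).clamp_min(0)
    vol_a = (hi_a - lo_a).clamp_min(1e-8).prod(dim=-1)
    return inter.prod(dim=-1) / vol_a

# A small box sitting inside a larger one: score(A->B) is high, score(B->A) is low.
lo_a, hi_a = torch.tensor([0.2, 0.2]), torch.tensor([0.4, 0.4])
lo_b, hi_b = torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0])
print(box_overlap_score(lo_a, hi_a, lo_b, hi_b), box_overlap_score(lo_b, hi_b, lo_a, hi_a))
```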
As a result, while the\nGeo-decoder is specialized for an individual scene, the 3D Inpainter can be\ngenerally applied across different scenes. We evaluate the proposed method on\nthe 3D Completed Room Scene (3D-CRS) and iTHOR datasets, significantly\noutperforming the SOTA methods by a gain of 16.8% and 24.2% in terms of the\ncompleteness of 3D reconstruction. 3D-CRS dataset including a complete 3D mesh\nof each scene is provided at project webpage.", + "Omnidirectional cameras are extensively used in various applications to\nprovide a wide field of vision. However, they face a challenge in synthesizing\nnovel views due to the inevitable presence of dynamic objects, including the\nphotographer, in their wide field of view. In this paper, we introduce a new\napproach called Omnidirectional Local Radiance Fields (OmniLocalRF) that can\nrender static-only scene views, removing and inpainting dynamic objects\nsimultaneously. Our approach combines the principles of local radiance fields\nwith the bidirectional optimization of omnidirectional rays. Our input is an\nomnidirectional video, and we evaluate the mutual observations of the entire\nangle between the previous and current frames. To reduce ghosting artifacts of\ndynamic objects and inpaint occlusions, we devise a multi-resolution motion\nmask prediction module. Unlike existing methods that primarily separate dynamic\ncomponents through the temporal domain, our method uses multi-resolution neural\nfeature planes for precise segmentation, which is more suitable for long\n360-degree videos. Our experiments validate that OmniLocalRF outperforms\nexisting methods in both qualitative and quantitative metrics, especially in\nscenarios with complex real-world scenes.", + "Our experiments validate that OmniLocalRF outperforms\nexisting methods in both qualitative and quantitative metrics, especially in\nscenarios with complex real-world scenes. In particular, our approach\neliminates the need for manual interaction, such as drawing motion masks by\nhand and additional pose estimation, making it a highly effective and efficient\nsolution.", + "The field of 3D detailed human mesh reconstruction has made significant\nprogress in recent years. However, current methods still face challenges when\nused in industrial applications due to unstable results, low-quality meshes,\nand a lack of UV unwrapping and skinning weights. In this paper, we present\nSHERT, a novel pipeline that can reconstruct semantic human meshes with\ntextures and high-precision details. SHERT applies semantic- and normal-based\nsampling between the detailed surface (e.g. mesh and SDF) and the corresponding\nSMPL-X model to obtain a partially sampled semantic mesh and then generates the\ncomplete semantic mesh by our specifically designed self-supervised completion\nand refinement networks. Using the complete semantic mesh as a basis, we employ\na texture diffusion model to create human textures that are driven by both\nimages and texts. Our reconstructed meshes have stable UV unwrapping,\nhigh-quality triangle meshes, and consistent semantic information. The given\nSMPL-X model provides semantic information and shape priors, allowing SHERT to\nperform well even with incorrect and incomplete inputs.", + "Our reconstructed meshes have stable UV unwrapping,\nhigh-quality triangle meshes, and consistent semantic information. The given\nSMPL-X model provides semantic information and shape priors, allowing SHERT to\nperform well even with incorrect and incomplete inputs. 
The semantic\ninformation also makes it easy to substitute and animate different body parts\nsuch as the face, body, and hands. Quantitative and qualitative experiments\ndemonstrate that SHERT is capable of producing high-fidelity and robust\nsemantic meshes that outperform state-of-the-art methods.", + "Federated learning facilitates the collaborative learning of a global model\nacross multiple distributed medical institutions without centralizing data.\nNevertheless, the expensive cost of annotation on local clients remains an\nobstacle to effectively utilizing local data. To mitigate this issue, federated\nactive learning methods suggest leveraging local and global model predictions\nto select a relatively small amount of informative local data for annotation.\nHowever, existing methods mainly focus on all local data sampled from the same\ndomain, making them unreliable in realistic medical scenarios with domain\nshifts among different clients. In this paper, we make the first attempt to\nassess the informativeness of local data derived from diverse domains and\npropose a novel methodology termed Federated Evidential Active Learning (FEAL)\nto calibrate the data evaluation under domain shift. Specifically, we introduce\na Dirichlet prior distribution in both local and global models to treat the\nprediction as a distribution over the probability simplex and capture both\naleatoric and epistemic uncertainties by using the Dirichlet-based evidential\nmodel. Then we employ the epistemic uncertainty to calibrate the aleatoric\nuncertainty.", + "Then we employ the epistemic uncertainty to calibrate the aleatoric\nuncertainty. Afterward, we design a diversity relaxation strategy to reduce\ndata redundancy and maintain data diversity. Extensive experiments and analysis\non five real multi-center medical image datasets demonstrate the superiority of\nFEAL over the state-of-the-art active learning methods in federated scenarios\nwith domain shifts. The code will be available at\nhttps://github.com/JiayiChen815/FEAL.", + "Recent advances in large-scale pretraining have yielded visual foundation\nmodels with strong capabilities. Not only can recent models generalize to\narbitrary images for their training task, their intermediate representations\nare useful for other visual tasks such as detection and segmentation. Given\nthat such models can classify, delineate, and localize objects in 2D, we ask\nwhether they also represent their 3D structure? In this work, we analyze the 3D\nawareness of visual foundation models. We posit that 3D awareness implies that\nrepresentations (1) encode the 3D structure of the scene and (2) consistently\nrepresent the surface across views. We conduct a series of experiments using\ntask-specific probes and zero-shot inference procedures on frozen features. Our\nexperiments reveal several limitations of the current models. Our code and\nanalysis can be found at https://github.com/mbanani/probe3d.", + "Recent advancements in personalized text-to-image (T2I) models have\nrevolutionized content creation, empowering non-experts to generate stunning\nimages with unique styles. While promising, adding realistic motions into these\npersonalized images by text poses significant challenges in preserving distinct\nstyles, high-fidelity details, and achieving motion controllability by text. 
In\nthis paper, we present PIA, a Personalized Image Animator that excels in\naligning with condition images, achieving motion controllability by text, and\nthe compatibility with various personalized T2I models without specific tuning.\nTo achieve these goals, PIA builds upon a base T2I model with well-trained\ntemporal alignment layers, allowing for the seamless transformation of any\npersonalized T2I model into an image animation model. A key component of PIA is\nthe introduction of the condition module, which utilizes the condition frame\nand inter-frame affinity as input to transfer appearance information guided by\nthe affinity hint for individual frame synthesis in the latent space. This\ndesign mitigates the challenges of appearance-related image alignment within\nand allows for a stronger focus on aligning with motion-related guidance.", + "A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry\nand appearance of a scene. We here ask the question whether we can transfer the\nappearance from a source NeRF onto a target 3D geometry in a semantically\nmeaningful way, such that the resulting new NeRF retains the target geometry\nbut has an appearance that is an analogy to the source NeRF. To this end, we\ngeneralize classic image analogies from 2D images to NeRFs. We leverage\ncorrespondence transfer along semantic affinity that is driven by semantic\nfeatures from large, pre-trained 2D image models to achieve multi-view\nconsistent appearance transfer. Our method allows exploring the mix-and-match\nproduct space of 3D geometry and appearance. We show that our method\noutperforms traditional stylization-based methods and that a large majority of\nusers prefer our method over several typical baselines.", + "Recent breakthroughs in vision-language models (VLMs) start a new page in the\nvision community. The VLMs provide stronger and more generalizable feature\nembeddings compared to those from ImageNet-pretrained models, thanks to the\ntraining on the large-scale Internet image-text pairs. However, despite the\namazing achievement from the VLMs, vanilla Vision Transformers (ViTs) remain\nthe default choice for the image encoder. Although pure transformer proves its\neffectiveness in the text encoding area, it remains questionable whether it is\nalso the case for image encoding, especially considering that various types of\nnetworks are proposed on the ImageNet benchmark, which, unfortunately, are\nrarely studied in VLMs. Due to small data/model scale, the original conclusions\nof model design on ImageNet can be limited and biased. In this paper, we aim at\nbuilding an evaluation protocol of vision models in the vision-language era\nunder the contrastive language-image pretraining (CLIP) framework. We provide a\ncomprehensive way to benchmark different vision models, covering their\nzero-shot performance and scalability in both model and training data sizes.", + "We provide a\ncomprehensive way to benchmark different vision models, covering their\nzero-shot performance and scalability in both model and training data sizes. To\nthis end, we introduce ViTamin, a new vision models tailored for VLMs.\nViTamin-L significantly outperforms ViT-L by 2.0% ImageNet zero-shot accuracy,\nwhen using the same publicly available DataComp-1B dataset and the same\nOpenCLIP training scheme. ViTamin-L presents promising results on 60 diverse\nbenchmarks, including classification, retrieval, open-vocabulary detection and\nsegmentation, and large multi-modal models. 
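The ViTamin benchmark above rests on the standard CLIP zero-shot protocol: embed one prompt per class with the text tower, embed each image with the vision tower, and classify by cosine similarity. A generic sketch with precomputed features follows; no specific model or checkpoint is assumed.

```python
# Generic zero-shot classification over precomputed CLIP-style features.
import torch
import torch.nn.functional as F

def zero_shot_accuracy(image_feats, text_feats, labels):
    """image_feats: (N, D), text_feats: (C, D) one embedding per class prompt, labels: (N,)."""
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    pred = (img @ txt.t()).argmax(dim=-1)      # nearest class prompt in embedding space
    return (pred == labels).float().mean().item()

acc = zero_shot_accuracy(torch.randn(100, 512), torch.randn(1000, 512),
                         torch.randint(0, 1000, (100,)))
```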
When further scaling up the model\nsize, our ViTamin-XL with only 436M parameters attains 82.9% ImageNet zero-shot\naccuracy, surpassing 82.0% achieved by EVA-E that has ten times more parameters\n(4.4B).", + "Audio-visual segmentation (AVS) is a challenging task that involves\naccurately segmenting sounding objects based on audio-visual cues. The\neffectiveness of audio-visual learning critically depends on achieving accurate\ncross-modal alignment between sound and visual objects. Successful audio-visual\nlearning requires two essential components: 1) a challenging dataset with\nhigh-quality pixel-level multi-class annotated images associated with audio\nfiles, and 2) a model that can establish strong links between audio information\nand its corresponding visual object. However, these requirements are only\npartially addressed by current methods, with training sets containing biased\naudio-visual data, and models that generalise poorly beyond this biased\ntraining set. In this work, we propose a new cost-effective strategy to build\nchallenging and relatively unbiased high-quality audio-visual segmentation\nbenchmarks. We also propose a new informative sample mining method for\naudio-visual supervised contrastive learning to leverage discriminative\ncontrastive samples to enforce cross-modal understanding. We show empirical\nresults that demonstrate the effectiveness of our benchmark. Furthermore,\nexperiments conducted on existing AVS datasets and on our new benchmark show\nthat our method achieves state-of-the-art (SOTA) segmentation accuracy.", + "Federated learning (FL) promotes decentralized training while prioritizing\ndata confidentiality. However, its application on resource-constrained devices\nis challenging due to the high demand for computation and memory resources to\ntrain deep learning models. Neural network pruning techniques, such as dynamic\npruning, could enhance model efficiency, but directly adopting them in FL still\nposes substantial challenges, including post-pruning performance degradation,\nhigh activation memory usage, etc. To address these challenges, we propose\nFedMef, a novel and memory-efficient federated dynamic pruning framework.\nFedMef comprises two key components. First, we introduce the budget-aware\nextrusion that maintains pruning efficiency while preserving post-pruning\nperformance by salvaging crucial information from parameters marked for pruning\nwithin a given budget. Second, we propose scaled activation pruning to\neffectively reduce activation memory footprints, which is particularly\nbeneficial for deploying FL to memory-limited devices. Extensive experiments\ndemonstrate the effectiveness of our proposed FedMef. In particular, it\nachieves a significant reduction of 28.5% in memory footprint compared to\nstate-of-the-art methods while obtaining superior accuracy.", + "Computer vision tasks typically involve describing what is present in an\nimage (e.g. classification, detection, segmentation, and captioning). We study\na visual common sense task that requires understanding what is not present.\nSpecifically, given an image (e.g. of a living room) and name of an object\n(\"cushion\"), a vision system is asked to predict semantically-meaningful\nregions (masks or bounding boxes) in the image where that object could be\nplaced or is likely be placed by humans (e.g. on the sofa). 
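FedMef, described above, builds on dynamic neural network pruning under a parameter budget. As background only, a plain magnitude-pruning step to a fixed sparsity is sketched below; budget-aware extrusion and scaled activation pruning themselves are more involved and are not reproduced here.

```python
# Background sketch: magnitude pruning of a linear layer to a fixed sparsity.
import torch
import torch.nn as nn

@torch.no_grad()
def prune_to_budget(layer: nn.Linear, sparsity: float = 0.8):
    """Zero out the smallest-magnitude weights so that `sparsity` of them are pruned."""
    w = layer.weight
    k = int(w.numel() * sparsity)
    if k == 0:
        return torch.ones_like(w, dtype=torch.bool)
    threshold = w.abs().flatten().kthvalue(k).values
    mask = w.abs() > threshold
    w.mul_(mask)             # apply the binary mask in place
    return mask              # keep the mask to re-apply after each update

mask = prune_to_budget(nn.Linear(256, 128), sparsity=0.9)
```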
We call this task:\nSemantic Placement (SP) and believe that such common-sense visual understanding\nis critical for assistive robots (tidying a house), and AR devices\n(automatically rendering an object in the user's space). Studying the invisible\nis hard. Datasets for image description are typically constructed by curating\nrelevant images and asking humans to annotate the contents of the image;\nneither of those two steps is straightforward for objects not present in the\nimage.", + "Studying the invisible\nis hard. Datasets for image description are typically constructed by curating\nrelevant images and asking humans to annotate the contents of the image;\nneither of those two steps is straightforward for objects not present in the\nimage. We overcome this challenge by operating in the opposite direction: we\nstart with an image of an object in context from the web, and then remove that\nobject from the image via inpainting. This automated pipeline converts\nunstructured web data into a dataset comprising pairs of images with/without\nthe object. Using this, we collect a novel dataset, with ${\\sim}1.3$M images\nacross $9$ object categories, and train a SP prediction model called CLIP-UNet.\nCLIP-UNet outperforms existing VLMs and baselines that combine semantic priors\nwith object detectors on real-world and simulated images. In our user studies,\nwe find that the SP masks predicted by CLIP-UNet are favored $43.7\\%$ and\n$31.3\\%$ times when comparing against the $4$ SP baselines on real and\nsimulated images.", + "In our user studies,\nwe find that the SP masks predicted by CLIP-UNet are favored $43.7\\%$ and\n$31.3\\%$ times when comparing against the $4$ SP baselines on real and\nsimulated images. In addition, we demonstrate that leveraging SP mask predictions\nfrom CLIP-UNet enables downstream applications like building tidying robots in\nindoor environments.", + "Image-based virtual try-on is an increasingly important task for online\nshopping. It aims to synthesize images of a specific person wearing a specified\ngarment. Diffusion model-based approaches have recently become popular, as they\nare excellent at image synthesis tasks. However, these approaches usually\nemploy additional image encoders and rely on the cross-attention mechanism for\ntexture transfer from the garment to the person image, which affects the\ntry-on's efficiency and fidelity. To address these issues, we propose a\nTexture-Preserving Diffusion (TPD) model for virtual try-on, which enhances the\nfidelity of the results and introduces no additional image encoders.\nAccordingly, we make contributions from two aspects. First, we propose to\nconcatenate the masked person and reference garment images along the spatial\ndimension and utilize the resulting image as the input for the diffusion\nmodel's denoising UNet. This enables the original self-attention layers\ncontained in the diffusion model to achieve efficient and accurate texture\ntransfer. Second, we propose a novel diffusion-based method that predicts a\nprecise inpainting mask based on the person and reference garment images,\nfurther enhancing the reliability of the try-on results.
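The first TPD contribution above is essentially an input-layout choice: the masked person and the garment are concatenated along a spatial axis so the denoising UNet's own self-attention can move texture between them. A tensor-level sketch follows; the choice of the height axis and the resolutions are assumptions.

```python
# Spatial concatenation of the masked person and the reference garment, so a
# single UNet input contains both and self-attention can relate their patches.
import torch

person = torch.randn(1, 3, 512, 384)               # masked person image
garment = torch.randn(1, 3, 512, 384)              # reference garment image
unet_input = torch.cat([person, garment], dim=2)   # (1, 3, 1024, 384): stacked spatially
# After denoising, only the person half of the output would be kept:
person_half = unet_input[:, :, :512, :]
```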
In addition, we\nintegrate mask prediction and image synthesis into a single compact model. The\nexperimental results show that our approach can be applied to various try-on\ntasks, e.g., garment-to-person and person-to-person try-ons, and significantly\noutperforms state-of-the-art methods on popular VITON, VITON-HD databases.", + "Domain Generalization (DG) aims to resolve distribution shifts between source\nand target domains, and current DG methods are default to the setting that data\nfrom source and target domains share identical categories. Nevertheless, there\nexists unseen classes from target domains in practical scenarios. To address\nthis issue, Open Set Domain Generalization (OSDG) has emerged and several\nmethods have been exclusively proposed. However, most existing methods adopt\ncomplex architectures with slight improvement compared with DG methods.\nRecently, vision-language models (VLMs) have been introduced in DG following\nthe fine-tuning paradigm, but consume huge training overhead with large vision\nmodels. Therefore, in this paper, we innovate to transfer knowledge from VLMs\nto lightweight vision models and improve the robustness by introducing\nPerturbation Distillation (PD) from three perspectives, including Score, Class\nand Instance (SCI), named SCI-PD. Moreover, previous methods are oriented by\nthe benchmarks with identical and fixed splits, ignoring the divergence between\nsource domains.", + "Moreover, previous methods are oriented by\nthe benchmarks with identical and fixed splits, ignoring the divergence between\nsource domains. These methods are revealed to suffer from sharp performance\ndecay with our proposed new benchmark Hybrid Domain Generalization (HDG) and a\nnovel metric $H^{2}$-CV, which construct various splits to comprehensively\nassess the robustness of algorithms. Extensive experiments demonstrate that our\nmethod outperforms state-of-the-art algorithms on multiple datasets, especially\nimproving the robustness when confronting data scarcity.", + "We introduce SODA, a self-supervised diffusion model, designed for\nrepresentation learning. The model incorporates an image encoder, which\ndistills a source view into a compact representation, that, in turn, guides the\ngeneration of related novel views. We show that by imposing a tight bottleneck\nbetween the encoder and a denoising decoder, and leveraging novel view\nsynthesis as a self-supervised objective, we can turn diffusion models into\nstrong representation learners, capable of capturing visual semantics in an\nunsupervised manner. To the best of our knowledge, SODA is the first diffusion\nmodel to succeed at ImageNet linear-probe classification, and, at the same\ntime, it accomplishes reconstruction, editing and synthesis tasks across a wide\nrange of datasets. Further investigation reveals the disentangled nature of its\nemergent latent space, that serves as an effective interface to control and\nmanipulate the model's produced images. All in all, we aim to shed light on the\nexciting and promising potential of diffusion models, not only for image\ngeneration, but also for learning rich and robust representations.", + "Event camera has recently received much attention for low-light image\nenhancement (LIE) thanks to their distinct advantages, such as high dynamic\nrange. 
However, current research is prohibitively restricted by the lack of\nlarge-scale, real-world, and spatial-temporally aligned event-image datasets.\nTo this end, we propose a real-world (indoor and outdoor) dataset comprising\nover 30K pairs of images and events under both low and normal illumination\nconditions. To achieve this, we utilize a robotic arm that traces a consistent\nnon-linear trajectory to curate the dataset with spatial alignment precision\nunder 0.03mm. We then introduce a matching alignment strategy, rendering 90% of\nour dataset with errors less than 0.01s. Based on the dataset, we propose a\nnovel event-guided LIE approach, called EvLight, towards robust performance in\nreal-world low-light scenes. Specifically, we first design the multi-scale\nholistic fusion branch to extract holistic structural and textural information\nfrom both events and images.", + "Based on the dataset, we propose a\nnovel event-guided LIE approach, called EvLight, towards robust performance in\nreal-world low-light scenes. Specifically, we first design the multi-scale\nholistic fusion branch to extract holistic structural and textural information\nfrom both events and images. To ensure robustness against variations in the\nregional illumination and noise, we then introduce a Signal-to-Noise-Ratio\n(SNR)-guided regional feature selection to selectively fuse features of images\nfrom regions with high SNR and enhance those with low SNR by extracting\nregional structure information from events. Extensive experiments on our\ndataset and the synthetic SDSD dataset demonstrate our EvLight significantly\nsurpasses the frame-based methods. Code and datasets are available at\nhttps://vlislab22.github.io/eg-lowlight/.", + "Understanding illumination and reducing the need for supervision pose a\nsignificant challenge in low-light enhancement. Current approaches are highly\nsensitive to data usage during training and illumination-specific\nhyper-parameters, limiting their ability to handle unseen scenarios. In this\npaper, we propose a new zero-reference low-light enhancement framework\ntrainable solely with normal light images. To accomplish this, we devise an\nillumination-invariant prior inspired by the theory of physical light transfer.\nThis prior serves as the bridge between normal and low-light images. Then, we\ndevelop a prior-to-image framework trained without low-light data. During\ntesting, this framework is able to restore our illumination-invariant prior\nback to images, automatically achieving low-light enhancement. Within this\nframework, we leverage a pretrained generative diffusion model for model\nability, introduce a bypass decoder to handle detail distortion, as well as\noffer a lightweight version for practicality. Extensive experiments demonstrate\nour framework's superiority in various scenarios as well as good\ninterpretability, robustness, and efficiency. Code is available on our project\nhomepage: http://daooshee.github.io/QuadPrior-Website/", + "The emergence of Neural Radiance Fields (NeRF) has greatly impacted 3D scene\nmodeling and novel-view synthesis. As a kind of visual media for 3D scene\nrepresentation, compression with high rate-distortion performance is an eternal\ntarget. Motivated by advances in neural compression and neural field\nrepresentation, we propose NeRFCodec, an end-to-end NeRF compression framework\nthat integrates non-linear transform, quantization, and entropy coding for\nmemory-efficient scene representation. 
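EvLight, described above, selects features region-by-region according to a signal-to-noise estimate. A simplified sketch of that idea follows: compute a local SNR map from the image and use it to blend image features (trusted in clean regions) with event features (preferred in noisy, low-SNR regions). The window size and the sigmoid blending rule are assumptions, not the paper's design.

```python
# SNR-guided regional fusion of image and event features (illustrative only).
import torch
import torch.nn.functional as F

def snr_map(img, kernel=5):
    """Per-pixel mean/std ratio over a local window of the grayscale image."""
    gray = img.mean(dim=1, keepdim=True)
    mu = F.avg_pool2d(gray, kernel, stride=1, padding=kernel // 2)
    var = F.avg_pool2d(gray ** 2, kernel, stride=1, padding=kernel // 2) - mu ** 2
    return mu / var.clamp_min(1e-6).sqrt()

def fuse(img_feat, evt_feat, img):
    w = torch.sigmoid(snr_map(img))                # close to 1 where the image is clean
    w = F.interpolate(w, size=img_feat.shape[-2:], mode="bilinear", align_corners=False)
    return w * img_feat + (1 - w) * evt_feat

fused = fuse(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64), torch.rand(1, 3, 256, 256))
```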
Since training a non-linear transform\ndirectly on a large scale of NeRF feature planes is impractical, we discover\nthat pre-trained neural 2D image codec can be utilized for compressing the\nfeatures when adding content-specific parameters. Specifically, we reuse neural\n2D image codec but modify its encoder and decoder heads, while keeping the\nother parts of the pre-trained decoder frozen. This allows us to train the full\npipeline via supervision of rendering loss and entropy loss, yielding the\nrate-distortion balance by updating the content-specific parameters. At test\ntime, the bitstreams containing latent code, feature decoder head, and other\nside information are transmitted for communication.", + "This allows us to train the full\npipeline via supervision of rendering loss and entropy loss, yielding the\nrate-distortion balance by updating the content-specific parameters. At test\ntime, the bitstreams containing latent code, feature decoder head, and other\nside information are transmitted for communication. Experimental results\ndemonstrate our method outperforms existing NeRF compression methods, enabling\nhigh-quality novel view synthesis with a memory budget of 0.5 MB.", + "Dataset distillation reduces the storage and computational consumption of\ntraining a network by generating a small surrogate dataset that encapsulates\nrich information of the original large-scale one. However, previous\ndistillation methods heavily rely on the sample-wise iterative optimization\nscheme. As the images-per-class (IPC) setting or image resolution grows larger,\nthe necessary computation will demand overwhelming time and resources. In this\nwork, we intend to incorporate generative diffusion techniques for computing\nthe surrogate dataset. Observing that key factors for constructing an effective\nsurrogate dataset are representativeness and diversity, we design additional\nminimax criteria in the generative training to enhance these facets for the\ngenerated images of diffusion models. We present a theoretical model of the\nprocess as hierarchical diffusion control demonstrating the flexibility of the\ndiffusion process to target these criteria without jeopardizing the\nfaithfulness of the sample to the desired distribution. The proposed method\nachieves state-of-the-art validation performance while demanding much less\ncomputational resources. Under the 100-IPC setting on ImageWoof, our method\nrequires less than one-twentieth the distillation time of previous methods, yet\nyields even better performance.", + "The proposed method\nachieves state-of-the-art validation performance while demanding much less\ncomputational resources. Under the 100-IPC setting on ImageWoof, our method\nrequires less than one-twentieth the distillation time of previous methods, yet\nyields even better performance. Source code and generated data are available in\nhttps://github.com/vimar-gu/MinimaxDiffusion.", + "We introduce Posterior Distillation Sampling (PDS), a novel optimization\nmethod for parametric image editing based on diffusion models. Existing\noptimization-based methods, which leverage the powerful 2D prior of diffusion\nmodels to handle various parametric images, have mainly focused on generation.\nUnlike generation, editing requires a balance between conforming to the target\nattribute and preserving the identity of the source content. Recent 2D image\nediting methods have achieved this balance by leveraging the stochastic latent\nencoded in the generative process of diffusion models. 
To extend the editing\ncapabilities of diffusion models shown in pixel space to parameter space, we\nreformulate the 2D image editing method into an optimization form named PDS.\nPDS matches the stochastic latents of the source and the target, enabling the\nsampling of targets in diverse parameter spaces that align with a desired\nattribute while maintaining the source's identity. We demonstrate that this\noptimization resembles running a generative process with the target attribute,\nbut aligning this process with the trajectory of the source's generative\nprocess. Extensive editing results in Neural Radiance Fields and Scalable\nVector Graphics representations demonstrate that PDS is capable of sampling\ntargets to fulfill the aforementioned balance across various parameter spaces.", + "Human hands are highly articulated and versatile at handling objects. Jointly\nestimating the 3D poses of a hand and the object it manipulates from a\nmonocular camera is challenging due to frequent occlusions. Thus, existing\nmethods often rely on intermediate 3D shape representations to increase\nperformance. These representations are typically explicit, such as 3D point\nclouds or meshes, and thus provide information in the direct surroundings of\nthe intermediate hand pose estimate. To address this, we introduce HOISDF, a\nSigned Distance Field (SDF) guided hand-object pose estimation network, which\njointly exploits hand and object SDFs to provide a global, implicit\nrepresentation over the complete reconstruction volume. Specifically, the role\nof the SDFs is threefold: equip the visual encoder with implicit shape\ninformation, help to encode hand-object interactions, and guide the hand and\nobject pose regression via SDF-based sampling and by augmenting the feature\nrepresentations. We show that HOISDF achieves state-of-the-art results on\nhand-object pose estimation benchmarks (DexYCB and HO3Dv2). Code is available\nat https://github.com/amathislab/HOISDF", + "In video super-resolution, it is common to use a frame-wise alignment to\nsupport the propagation of information over time. The role of alignment is\nwell-studied for low-level enhancement in video, but existing works overlook a\ncritical step -- resampling. We show through extensive experiments that for\nalignment to be effective, the resampling should preserve the reference\nfrequency spectrum while minimizing spatial distortions. However, most existing\nworks simply use a default choice of bilinear interpolation for resampling even\nthough bilinear interpolation has a smoothing effect and hinders\nsuper-resolution. From these observations, we propose an implicit\nresampling-based alignment. The sampling positions are encoded by a sinusoidal\npositional encoding, while the value is estimated with a coordinate network and\na window-based cross-attention. We show that bilinear interpolation inherently\nattenuates high-frequency information while an MLP-based coordinate network can\napproximate more frequencies. Experiments on synthetic and real-world datasets\nshow that alignment with our proposed implicit resampling enhances the\nperformance of state-of-the-art frameworks with minimal impact on both compute\nand parameters.", + "We present DiffPortrait3D, a conditional diffusion model that is capable of\nsynthesizing 3D-consistent photo-realistic novel views from as few as a single\nin-the-wild portrait. 
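The implicit-resampling alignment described above replaces bilinear interpolation with a coordinate network fed by sinusoidally encoded sampling positions. A minimal sketch of such an encoder plus MLP is given below; the number of frequencies, the feature sizes, and how neighborhood features are gathered are all assumptions.

```python
# Sinusoidal encoding of fractional sampling offsets, consumed together with a
# gathered neighborhood feature by a small coordinate MLP.
import math
import torch
import torch.nn as nn

def sinusoidal_encode(coords, num_freqs=6):
    """coords: (..., 2) offsets; returns (..., 4 * num_freqs) Fourier features."""
    freqs = 2.0 ** torch.arange(num_freqs) * math.pi
    angles = coords.unsqueeze(-1) * freqs                  # (..., 2, num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

coord_net = nn.Sequential(nn.Linear(4 * 6 + 64, 128), nn.ReLU(), nn.Linear(128, 64))

offsets = torch.rand(1024, 2) - 0.5                        # fractional sampling positions
neighborhood_feat = torch.randn(1024, 64)                  # feature gathered around each position
resampled = coord_net(torch.cat([sinusoidal_encode(offsets), neighborhood_feat], dim=-1))
```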
Specifically, given a single RGB input, we aim to\nsynthesize plausible but consistent facial details rendered from novel camera\nviews with retained both identity and facial expression. In lieu of\ntime-consuming optimization and fine-tuning, our zero-shot method generalizes\nwell to arbitrary face portraits with unposed camera views, extreme facial\nexpressions, and diverse artistic depictions. At its core, we leverage the\ngenerative prior of 2D diffusion models pre-trained on large-scale image\ndatasets as our rendering backbone, while the denoising is guided with\ndisentangled attentive control of appearance and camera pose. To achieve this,\nwe first inject the appearance context from the reference image into the\nself-attention layers of the frozen UNets. The rendering view is then\nmanipulated with a novel conditional control module that interprets the camera\npose by watching a condition image of a crossed subject from the same view.", + "To achieve this,\nwe first inject the appearance context from the reference image into the\nself-attention layers of the frozen UNets. The rendering view is then\nmanipulated with a novel conditional control module that interprets the camera\npose by watching a condition image of a crossed subject from the same view.\nFurthermore, we insert a trainable cross-view attention module to enhance view\nconsistency, which is further strengthened with a novel 3D-aware noise\ngeneration process during inference. We demonstrate state-of-the-art results\nboth qualitatively and quantitatively on our challenging in-the-wild and\nmulti-view benchmarks.", + "Recent advances in unsupervised learning have demonstrated the ability of\nlarge vision models to achieve promising results on downstream tasks by\npre-training on large amount of unlabelled data. Such pre-training techniques\nhave also been explored recently in the remote sensing domain due to the\navailability of large amount of unlabelled data. Different from standard\nnatural image datasets, remote sensing data is acquired from various sensor\ntechnologies and exhibit diverse range of scale variations as well as\nmodalities. Existing satellite image pre-training methods either ignore the\nscale information present in the remote sensing imagery or restrict themselves\nto use only a single type of data modality. In this paper, we re-visit\ntransformers pre-training and leverage multi-scale information that is\neffectively utilized with multiple modalities. Our proposed approach, named\nSatMAE++, performs multi-scale pre-training and utilizes convolution based\nupsampling blocks to reconstruct the image at higher scales making it\nextensible to include more scales. Compared to existing works, the proposed\nSatMAE++ with multi-scale pre-training is equally effective for both optical as\nwell as multi-spectral imagery.", + "Compared to existing works, the proposed\nSatMAE++ with multi-scale pre-training is equally effective for both optical as\nwell as multi-spectral imagery. Extensive experiments on six datasets reveal\nthe merits of proposed contributions, leading to state-of-the-art performance\non all datasets. SatMAE++ achieves mean average precision (mAP) gain of 2.5\\%\nfor multi-label classification task on BigEarthNet dataset. Our code and\npre-trained models are available at \\url{https://github.com/techmn/satmae_pp}.", + "Weakly-Supervised Scene Graph Generation (WSSGG) research has recently\nemerged as an alternative to the fully-supervised approach that heavily relies\non costly annotations. 
In this regard, studies on WSSGG have utilized image\ncaptions to obtain unlocalized triplets while primarily focusing on grounding\nthe unlocalized triplets over image regions. However, they have overlooked the\ntwo issues involved in the triplet formation process from the captions: 1)\nSemantic over-simplification issue arises when extracting triplets from\ncaptions, where fine-grained predicates in captions are undesirably converted\ninto coarse-grained predicates, resulting in a long-tailed predicate\ndistribution, and 2) Low-density scene graph issue arises when aligning the\ntriplets in the caption with entity/predicate classes of interest, where many\ntriplets are discarded and not used in training, leading to insufficient\nsupervision.", + "To tackle the two issues, we propose a new approach, i.e., Large\nLanguage Model for weakly-supervised SGG (LLM4SGG), where we mitigate the two\nissues by leveraging the LLM's in-depth understanding of language and reasoning\nability during the extraction of triplets from captions and alignment of\nentity/predicate classes with target data. To further engage the LLM in these\nprocesses, we adopt the idea of Chain-of-Thought and the in-context few-shot\nlearning strategy. To validate the effectiveness of LLM4SGG, we conduct\nextensive experiments on Visual Genome and GQA datasets, showing significant\nimprovements in both Recall@K and mean Recall@K compared to the\nstate-of-the-art WSSGG methods. A further appeal is that LLM4SGG is\ndata-efficient, enabling effective model training with a small amount of\ntraining images.", + "Parameter-efficient fine-tuning (PEFT) is an effective methodology to unleash\nthe potential of large foundation models in novel scenarios with limited\ntraining data. In the computer vision community, PEFT has shown effectiveness\nin image classification, but little research has studied its ability for image\nsegmentation. Fine-tuning segmentation models usually require a heavier\nadjustment of parameters to align the proper projection directions in the\nparameter space for new scenarios. This raises a challenge to existing PEFT\nalgorithms, as they often inject a limited number of individual parameters into\neach block, which prevents substantial adjustment of the projection direction\nof the parameter space due to the limitation of Hidden Markov Chain along\nblocks. In this paper, we equip PEFT with a cross-block orchestration mechanism\nto enable the adaptation of the Segment Anything Model (SAM) to various\ndownstream scenarios. We introduce a novel inter-block communication module,\nwhich integrates a learnable relation matrix to facilitate communication among\ndifferent coefficient sets of each PEFT block's parameter space.", + "We introduce a novel inter-block communication module,\nwhich integrates a learnable relation matrix to facilitate communication among\ndifferent coefficient sets of each PEFT block's parameter space. Moreover, we\npropose an intra-block enhancement module, which introduces a linear projection\nhead whose weights are generated from a hyper-complex layer, further enhancing\nthe impact of the adjustment of projection directions on the entire parameter\nspace. Extensive experiments on diverse benchmarks demonstrate that our\nproposed approach consistently improves the segmentation performance\nsignificantly on novel scenarios with only around 1K additional parameters.", + "Novel-view synthesis of specular objects like shiny metals or glossy paints\nremains a significant challenge. 
Not only the glossy appearance but also global\nillumination effects, including reflections of other objects in the\nenvironment, are critical components to faithfully reproduce a scene. In this\npaper, we present Neural Directional Encoding (NDE), a view-dependent\nappearance encoding of neural radiance fields (NeRF) for rendering specular\nobjects. NDE transfers the concept of feature-grid-based spatial encoding to\nthe angular domain, significantly improving the ability to model high-frequency\nangular signals. In contrast to previous methods that use encoding functions\nwith only angular input, we additionally cone-trace spatial features to obtain\na spatially varying directional encoding, which addresses the challenging\ninterreflection effects. Extensive experiments on both synthetic and real\ndatasets show that a NeRF model with NDE (1) outperforms the state of the art\non view synthesis of specular objects, and (2) works with small networks to\nallow fast (real-time) inference. The project webpage and source code are\navailable at: \\url{https://lwwu2.github.io/nde/}.", + "We introduce a novel approach to single image denoising based on the Blind\nSpot Denoising principle, which we call MAsked and SHuffled Blind Spot\nDenoising (MASH). We focus on the case of correlated noise, which often plagues\nreal images. MASH is the result of a careful analysis to determine the\nrelationships between the level of blindness (masking) of the input and the\n(unknown) noise correlation. Moreover, we introduce a shuffling technique to\nweaken the local correlation of noise, which in turn yields an additional\ndenoising performance improvement. We evaluate MASH via extensive experiments\non real-world noisy image datasets. We demonstrate on par or better results\ncompared to existing self-supervised denoising methods.", + "Unsupervised domain adaptation (UDA) for semantic segmentation aims to\ntransfer the pixel-wise knowledge from the labeled source domain to the\nunlabeled target domain. However, current UDA methods typically assume a shared\nlabel space between source and target, limiting their applicability in\nreal-world scenarios where novel categories may emerge in the target domain. In\nthis paper, we introduce Open-Set Domain Adaptation for Semantic Segmentation\n(OSDA-SS) for the first time, where the target domain includes unknown classes.\nWe identify two major problems in the OSDA-SS scenario as follows: 1) the\nexisting UDA methods struggle to predict the exact boundary of the unknown\nclasses, and 2) they fail to accurately predict the shape of the unknown\nclasses. To address these issues, we propose Boundary and Unknown Shape-Aware\nopen-set domain adaptation, coined BUS. Our BUS can accurately discern the\nboundaries between known and unknown classes in a contrastive manner using a\nnovel dilation-erosion-based contrastive loss.", + "To address these issues, we propose Boundary and Unknown Shape-Aware\nopen-set domain adaptation, coined BUS. Our BUS can accurately discern the\nboundaries between known and unknown classes in a contrastive manner using a\nnovel dilation-erosion-based contrastive loss. In addition, we propose\nOpenReMix, a new domain mixing augmentation method that guides our model to\neffectively learn domain and size-invariant features for improving the shape\ndetection of the known and unknown classes. 
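The BUS entry above relies on a dilation-erosion construction to isolate a boundary band between known and unknown regions for its contrastive loss. On GPU this is commonly implemented with max pooling; a sketch follows (the kernel size is arbitrary, and this is not the authors' implementation).

```python
# Morphological boundary band via max pooling: dilation minus erosion of a
# binary mask leaves a thin band around its boundary.
import torch
import torch.nn.functional as F

def boundary_band(mask, kernel=5):
    """mask: (N, 1, H, W) binary mask in {0, 1}; returns a band around its boundary."""
    pad = kernel // 2
    dilated = F.max_pool2d(mask, kernel, stride=1, padding=pad)
    eroded = 1.0 - F.max_pool2d(1.0 - mask, kernel, stride=1, padding=pad)
    return dilated - eroded      # 1 inside the band, 0 elsewhere

mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
band = boundary_band(mask)
```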
Through extensive experiments, we\ndemonstrate that our proposed BUS effectively detects unknown classes in the\nchallenging OSDA-SS scenario compared to the previous methods by a large\nmargin. The code is available at https://github.com/KHU-AGI/BUS.", + "We present a method that uses a text-to-image model to generate consistent\ncontent across multiple image scales, enabling extreme semantic zooms into a\nscene, e.g., ranging from a wide-angle landscape view of a forest to a macro\nshot of an insect sitting on one of the tree branches. We achieve this through\na joint multi-scale diffusion sampling approach that encourages consistency\nacross different scales while preserving the integrity of each individual\nsampling process. Since each generated scale is guided by a different text\nprompt, our method enables deeper levels of zoom than traditional\nsuper-resolution methods that may struggle to create new contextual structure\nat vastly different scales. We compare our method qualitatively with\nalternative techniques in image super-resolution and outpainting, and show that\nour method is most effective at generating consistent multi-scale content.", + "Contrastive learning has emerged as a promising paradigm for 3D open-world\nunderstanding, i.e., aligning point cloud representation to image and text\nembedding space individually. In this paper, we introduce MixCon3D, a simple\nyet effective method aiming to sculpt holistic 3D representation in contrastive\nlanguage-image-3D pre-training. In contrast to point cloud only, we develop the\n3D object-level representation from complementary perspectives, e.g.,\nmulti-view rendered images with the point cloud. Then, MixCon3D performs\nlanguage-3D contrastive learning, comprehensively depicting real-world 3D\nobjects and bolstering text alignment. Additionally, we pioneer the first\nthorough investigation of various training recipes for the 3D contrastive\nlearning paradigm, building a solid baseline with improved performance.\nExtensive experiments conducted on three representative benchmarks reveal that\nour method significantly improves over the baseline, surpassing the previous\nstate-of-the-art performance on the challenging 1,156-category Objaverse-LVIS\ndataset by 5.7%.", + "Extensive experiments conducted on three representative benchmarks reveal that\nour method significantly improves over the baseline, surpassing the previous\nstate-of-the-art performance on the challenging 1,156-category Objaverse-LVIS\ndataset by 5.7%. The versatility of MixCon3D is showcased in applications such\nas text-to-3D retrieval and point cloud captioning, further evidencing its\nefficacy in diverse scenarios. The code is available at\nhttps://github.com/UCSC-VLAA/MixCon3D.", + "Despite diffusion models' superior capabilities in modeling complex\ndistributions, there are still non-trivial distributional discrepancies between\ngenerated and ground-truth images, which has resulted in several notable\nproblems in image generation, including missing object errors in text-to-image\ngeneration and low image quality. Existing methods that attempt to address\nthese problems mostly do not tend to address the fundamental cause behind these\nproblems, which is the distributional discrepancies, and hence achieve\nsub-optimal results. In this paper, we propose a particle filtering framework\nthat can effectively address both problems by explicitly reducing the\ndistributional discrepancies. 
Specifically, our method relies on a set of\nexternal guidance, including a small set of real images and a pre-trained\nobject detector, to gauge the distribution gap, and then design the resampling\nweight accordingly to correct the gap. Experiments show that our methods can\neffectively correct missing object errors and improve image quality in various\nimage generation tasks. Notably, our method outperforms the existing strongest\nbaseline by 5% in object occurrence and 1.0 in FID on MS-COCO.", + "Experiments show that our methods can\neffectively correct missing object errors and improve image quality in various\nimage generation tasks. Notably, our method outperforms the existing strongest\nbaseline by 5% in object occurrence and 1.0 in FID on MS-COCO. Our code is\npublicly available at\nhttps://github.com/UCSB-NLP-Chang/diffusion_resampling.git.", + "Monocular 3D object detection poses a significant challenge in 3D scene\nunderstanding due to its inherently ill-posed nature in monocular depth\nestimation. Existing methods heavily rely on supervised learning using abundant\n3D labels, typically obtained through expensive and labor-intensive annotation\non LiDAR point clouds. To tackle this problem, we propose a novel weakly\nsupervised 3D object detection framework named VSRD (Volumetric Silhouette\nRendering for Detection) to train 3D object detectors without any 3D\nsupervision but only weak 2D supervision. VSRD consists of multi-view 3D\nauto-labeling and subsequent training of monocular 3D object detectors using\nthe pseudo labels generated in the auto-labeling stage. In the auto-labeling\nstage, we represent the surface of each instance as a signed distance field\n(SDF) and render its silhouette as an instance mask through our proposed\ninstance-aware volumetric silhouette rendering.", + "In the auto-labeling\nstage, we represent the surface of each instance as a signed distance field\n(SDF) and render its silhouette as an instance mask through our proposed\ninstance-aware volumetric silhouette rendering. To directly optimize the 3D\nbounding boxes through rendering, we decompose the SDF of each instance into\nthe SDF of a cuboid and the residual distance field (RDF) that represents the\nresidual from the cuboid. This mechanism enables us to optimize the 3D bounding\nboxes in an end-to-end manner by comparing the rendered instance masks with the\nground truth instance masks. The optimized 3D bounding boxes serve as effective\ntraining data for 3D object detection. We conduct extensive experiments on the\nKITTI-360 dataset, demonstrating that our method outperforms the existing\nweakly supervised 3D object detection methods. The code is available at\nhttps://github.com/skmhrk1209/VSRD.", + "In this paper, we study the problem of generalizable synthetic image\ndetection, aiming to detect forgery images from diverse generative methods,\ne.g., GANs and diffusion models. Cutting-edge solutions start to explore the\nbenefits of pre-trained models, and mainly follow the fixed paradigm of solely\ntraining an attached classifier, e.g., combining frozen CLIP-ViT with a\nlearnable linear layer in UniFD. However, our analysis shows that such a fixed\nparadigm is prone to yield detectors with insufficient learning regarding\nforgery representations. We attribute the key challenge to the lack of forgery\nadaptation, and present a novel forgery-aware adaptive transformer approach,\nnamely FatFormer. 
Based on the pre-trained vision-language spaces of CLIP,\nFatFormer introduces two core designs for the adaption to build generalized\nforgery representations. First, motivated by the fact that both image and\nfrequency analysis are essential for synthetic image detection, we develop a\nforgery-aware adapter to adapt image features to discern and integrate local\nforgery traces within image and frequency domains.", + "First, motivated by the fact that both image and\nfrequency analysis are essential for synthetic image detection, we develop a\nforgery-aware adapter to adapt image features to discern and integrate local\nforgery traces within image and frequency domains. Second, we find that\nconsidering the contrastive objectives between adapted image features and text\nprompt embeddings, a previously overlooked aspect, results in a nontrivial\ngeneralization improvement. Accordingly, we introduce language-guided alignment\nto supervise the forgery adaptation with image and text prompts in FatFormer.\nExperiments show that, by coupling these two designs, our approach tuned on\n4-class ProGAN data attains a remarkable detection performance, achieving an\naverage of 98% accuracy to unseen GANs, and surprisingly generalizes to unseen\ndiffusion models with 95% accuracy.", + "This paper presents an innovative framework designed to train an image\ndeblurring algorithm tailored to a specific camera device. This algorithm works\nby transforming a blurry input image, which is challenging to deblur, into\nanother blurry image that is more amenable to deblurring. The transformation\nprocess, from one blurry state to another, leverages unpaired data consisting\nof sharp and blurry images captured by the target camera device. Learning this\nblur-to-blur transformation is inherently simpler than direct blur-to-sharp\nconversion, as it primarily involves modifying blur patterns rather than the\nintricate task of reconstructing fine image details. The efficacy of the\nproposed approach has been demonstrated through comprehensive experiments on\nvarious benchmarks, where it significantly outperforms state-of-the-art methods\nboth quantitatively and qualitatively. Our code and data are available at\nhttps://zero1778.github.io/blur2blur/", + "Point cloud analysis has achieved outstanding performance by transferring\npoint cloud pre-trained models. However, existing methods for model adaptation\nusually update all model parameters, i.e., full fine-tuning paradigm, which is\ninefficient as it relies on high computational costs (e.g., training GPU\nmemory) and massive storage space. In this paper, we aim to study\nparameter-efficient transfer learning for point cloud analysis with an ideal\ntrade-off between task performance and parameter efficiency. To achieve this\ngoal, we freeze the parameters of the default pre-trained models and then\npropose the Dynamic Adapter, which generates a dynamic scale for each token,\nconsidering the token significance to the downstream task. We further\nseamlessly integrate Dynamic Adapter with Prompt Tuning (DAPT) by constructing\nInternal Prompts, capturing the instance-specific features for interaction.\nExtensive experiments conducted on five challenging datasets demonstrate that\nthe proposed DAPT achieves superior performance compared to the full\nfine-tuning counterparts while significantly reducing the trainable parameters\nand training GPU memory by 95% and 35%, respectively. 
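To make the Dynamic Adapter idea above concrete, here is a minimal, hypothetical sketch of a bottleneck adapter whose residual update is scaled per token by a tiny learned gate; it is not the released DAPT implementation, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class DynamicAdapter(nn.Module):
    """Bottleneck adapter whose output is scaled per token by a small gate."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.scale = nn.Linear(dim, 1)  # predicts a significance score per token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim) features from a frozen backbone.
        gate = torch.sigmoid(self.scale(tokens))        # (B, N, 1) per-token scale
        delta = self.up(torch.relu(self.down(tokens)))  # bottleneck adapter update
        return tokens + gate * delta                    # residual, dynamically scaled

x = torch.randn(2, 128, 384)
print(DynamicAdapter(384)(x).shape)  # torch.Size([2, 128, 384])
```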
Code is available at\nhttps://github.com/LMD0311/DAPT.", + "To build a cross-modal latent space between 3D human motion and language,\nacquiring large-scale and high-quality human motion data is crucial. However,\nunlike the abundance of image data, the scarcity of motion data has limited the\nperformance of existing motion-language models. To counter this, we introduce\n\"motion patches\", a new representation of motion sequences, and propose using\nVision Transformers (ViT) as motion encoders via transfer learning, aiming to\nextract useful knowledge from the image domain and apply it to the motion\ndomain. These motion patches, created by dividing and sorting skeleton joints\nbased on body parts in motion sequences, are robust to varying skeleton\nstructures, and can be regarded as color image patches in ViT. We find that\ntransfer learning with pre-trained weights of ViT obtained through training\nwith 2D image data can boost the performance of motion analysis, presenting a\npromising direction for addressing the issue of limited motion data.", + "We find that\ntransfer learning with pre-trained weights of ViT obtained through training\nwith 2D image data can boost the performance of motion analysis, presenting a\npromising direction for addressing the issue of limited motion data. Our\nextensive experiments show that the proposed motion patches, used jointly with\nViT, achieve state-of-the-art performance in the benchmarks of text-to-motion\nretrieval, and other novel challenging tasks, such as cross-skeleton\nrecognition, zero-shot motion classification, and human interaction\nrecognition, which are currently impeded by the lack of data.", + "Eliminating image blur produced by various kinds of motion has been a\nchallenging problem. Dominant approaches rely heavily on model capacity to\nremove blurring by reconstructing residual from blurry observation in feature\nspace. These practices not only prevent the capture of spatially variable\nmotion in the real world but also ignore the tailored handling of various\nmotions in image space. In this paper, we propose a novel real-world deblurring\nfiltering model called the Motion-adaptive Separable Collaborative (MISC)\nFilter. In particular, we use a motion estimation network to capture motion\ninformation from neighborhoods, thereby adaptively estimating spatially-variant\nmotion flow, mask, kernels, weights, and offsets to obtain the MISC Filter. The\nMISC Filter first aligns the motion-induced blurring patterns to the motion\nmiddle along the predicted flow direction, and then collaboratively filters the\naligned image through the predicted kernels, weights, and offsets to generate\nthe output. This design can handle more generalized and complex motion in a\nspatially differentiated manner. Furthermore, we analyze the relationships\nbetween the motion estimation network and the residual reconstruction network.", + "This design can handle more generalized and complex motion in a\nspatially differentiated manner. Furthermore, we analyze the relationships\nbetween the motion estimation network and the residual reconstruction network.\nExtensive experiments on four widely used benchmarks demonstrate that our\nmethod provides an effective solution for real-world motion blur removal and\nachieves state-of-the-art performance. 
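The MISC Filter above additionally predicts flow, masks, and offsets; as a reduced sketch of just the spatially-variant filtering step, the code below applies a different predicted k x k kernel at every pixel. The helper name and shapes are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def apply_per_pixel_kernels(img: torch.Tensor, kernels: torch.Tensor, k: int) -> torch.Tensor:
    """img: (B, C, H, W); kernels: (B, k*k, H, W), one kernel per output pixel."""
    b, c, h, w = img.shape
    # Gather the k*k neighbours of every pixel: (B, C*k*k, H*W) -> (B, C, k*k, H, W).
    patches = F.unfold(img, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    # Weight the neighbours by the predicted kernels and sum them up.
    return (patches * kernels.unsqueeze(1)).sum(dim=2)  # (B, C, H, W)

img = torch.randn(1, 3, 32, 32)
kernels = torch.softmax(torch.randn(1, 9, 32, 32), dim=1)  # normalised 3x3 kernels
print(apply_per_pixel_kernels(img, kernels, k=3).shape)    # torch.Size([1, 3, 32, 32])
```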
Code is available at\nhttps://github.com/ChengxuLiu/MISCFilter", + "Simulation is an invaluable tool for radio-frequency system designers that\nenables rapid prototyping of various algorithms for imaging, target detection,\nclassification, and tracking. However, simulating realistic radar scans is a\nchallenging task that requires an accurate model of the scene, radio frequency\nmaterial properties, and a corresponding radar synthesis function. Rather than\nspecifying these models explicitly, we propose DART - Doppler Aided Radar\nTomography, a Neural Radiance Field-inspired method which uses radar-specific\nphysics to create a reflectance and transmittance-based rendering pipeline for\nrange-Doppler images. We then evaluate DART by constructing a custom data\ncollection platform and collecting a novel radar dataset together with accurate\nposition and instantaneous velocity measurements from lidar-based localization.\nIn comparison to state-of-the-art baselines, DART synthesizes superior radar\nrange-Doppler images from novel views across all datasets and additionally can\nbe used to generate high quality tomographic images.", + "In this work, we introduce Wonder3D, a novel method for efficiently\ngenerating high-fidelity textured meshes from single-view images. Recent methods\nbased on Score Distillation Sampling (SDS) have shown the potential to recover\n3D geometry from 2D diffusion priors, but they typically suffer from\ntime-consuming per-shape optimization and inconsistent geometry. In contrast,\ncertain works directly produce 3D information via fast network inferences, but\ntheir results are often of low quality and lack geometric details. To\nholistically improve the quality, consistency, and efficiency of image-to-3D\ntasks, we propose a cross-domain diffusion model that generates multi-view\nnormal maps and the corresponding color images. To ensure consistency, we\nemploy a multi-view cross-domain attention mechanism that facilitates\ninformation exchange across views and modalities. Lastly, we introduce a\ngeometry-aware normal fusion algorithm that extracts high-quality surfaces from\nthe multi-view 2D representations. Our extensive evaluations demonstrate that\nour method achieves high-quality reconstruction results, robust generalization,\nand reasonably good efficiency compared to prior works.", + "Real-world vision tasks frequently suffer from the appearance of unexpected\nadverse weather conditions, including rain, haze, snow, and raindrops. In the\nlast decade, convolutional neural networks and vision transformers have yielded\noutstanding results in single-weather video removal. However, due to the\nabsence of appropriate adaptation, most of them fail to generalize to other\nweather conditions. Although ViWS-Net is proposed to remove adverse weather\nconditions in videos with a single set of pre-trained weights, it is seriously\nblinded by seen weather at train-time and degenerates when coming to unseen\nweather during test-time. In this work, we introduce test-time adaptation into\nadverse weather removal in videos, and propose the first framework that\nintegrates test-time adaptation into the iterative diffusion reverse process.\nSpecifically, we devise a diffusion-based network with a novel temporal noise\nmodel to efficiently explore frame-correlated information in degraded video\nclips at training stage.
During inference stage, we introduce a proxy task\nnamed Diffusion Tubelet Self-Calibration to learn the primer distribution of\ntest video stream and optimize the model by approximating the temporal noise\nmodel for online adaptation.", + "During inference stage, we introduce a proxy task\nnamed Diffusion Tubelet Self-Calibration to learn the primer distribution of\ntest video stream and optimize the model by approximating the temporal noise\nmodel for online adaptation. Experimental results, on benchmark datasets,\ndemonstrate that our Test-Time Adaptation method with Diffusion-based\nnetwork(Diff-TTA) outperforms state-of-the-art methods in terms of restoring\nvideos degraded by seen weather conditions. Its generalizable capability is\nalso validated with unseen weather conditions in both synthesized and\nreal-world videos.", + "Protein representation learning is a challenging task that aims to capture\nthe structure and function of proteins from their amino acid sequences.\nPrevious methods largely ignored the fact that not all amino acids are equally\nimportant for protein folding and activity. In this article, we propose a\nneural clustering framework that can automatically discover the critical\ncomponents of a protein by considering both its primary and tertiary structure\ninformation. Our framework treats a protein as a graph, where each node\nrepresents an amino acid and each edge represents a spatial or sequential\nconnection between amino acids. We then apply an iterative clustering strategy\nto group the nodes into clusters based on their 1D and 3D positions and assign\nscores to each cluster. We select the highest-scoring clusters and use their\nmedoid nodes for the next iteration of clustering, until we obtain a\nhierarchical and informative representation of the protein. We evaluate on four\nprotein-related tasks: protein fold classification, enzyme reaction\nclassification, gene ontology term prediction, and enzyme commission number\nprediction. Experimental results demonstrate that our method achieves\nstate-of-the-art performance.", + "This paper presents a simple but performant semi-supervised semantic\nsegmentation approach, called CorrMatch. Previous approaches mostly employ\ncomplicated training strategies to leverage unlabeled data but overlook the\nrole of correlation maps in modeling the relationships between pairs of\nlocations. We observe that the correlation maps not only enable clustering\npixels of the same category easily but also contain good shape information,\nwhich previous works have omitted. Motivated by these, we aim to improve the\nuse efficiency of unlabeled data by designing two novel label propagation\nstrategies. First, we propose to conduct pixel propagation by modeling the\npairwise similarities of pixels to spread the high-confidence pixels and dig\nout more. Then, we perform region propagation to enhance the pseudo labels with\naccurate class-agnostic masks extracted from the correlation maps. CorrMatch\nachieves great performance on popular segmentation benchmarks. Taking the\nDeepLabV3+ with ResNet-101 backbone as our segmentation model, we receive a\n76%+ mIoU score on the Pascal VOC 2012 dataset with only 92 annotated images.\nCode is available at https://github.com/BBBBchan/CorrMatch.", + "Lifting 2D diffusion for 3D generation is a challenging problem due to the\nlack of geometric prior and the complex entanglement of materials and lighting\nin natural images. 
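Relating to the CorrMatch description a little earlier, this toy sketch (random tensors, not the paper's code) shows how a pixel-pair correlation map can be computed from normalised features and used to spread high-confidence predictions to other pixels.

```python
import torch
import torch.nn.functional as F

# Toy features and class probabilities for H*W flattened pixels.
feats = F.normalize(torch.randn(256, 64), dim=1)        # (HW, D) unit-norm features
probs = torch.softmax(torch.randn(256, 21) * 4, dim=1)  # (HW, num_classes)

corr = feats @ feats.t()                      # (HW, HW) pairwise correlation map
weights = torch.softmax(corr / 0.1, dim=1)    # sharpen and normalise per pixel

confident = probs.max(dim=1).values > 0.9     # high-confidence pixel set
propagated = weights[:, confident] @ probs[confident]  # spread their predictions
propagated = propagated / propagated.sum(dim=1, keepdim=True).clamp(min=1e-8)
print(propagated.shape)  # torch.Size([256, 21])
```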
Existing methods have shown promise by first creating the\ngeometry through score-distillation sampling (SDS) applied to rendered surface\nnormals, followed by appearance modeling. However, relying on a 2D RGB\ndiffusion model to optimize surface normals is suboptimal due to the\ndistribution discrepancy between natural images and normal maps, leading to\ninstability in optimization. In this paper, recognizing that the normal and\ndepth information effectively describe scene geometry and can be automatically\nestimated from images, we propose to learn a generalizable Normal-Depth\ndiffusion model for 3D generation. We achieve this by training on the\nlarge-scale LAION dataset together with the generalizable image-to-depth and\nnormal prior models. In an attempt to alleviate the mixed illumination effects\nin the generated materials, we introduce an albedo diffusion model to impose\ndata-driven constraints on the albedo component.", + "We achieve this by training on the\nlarge-scale LAION dataset together with the generalizable image-to-depth and\nnormal prior models. In an attempt to alleviate the mixed illumination effects\nin the generated materials, we introduce an albedo diffusion model to impose\ndata-driven constraints on the albedo component. Our experiments show that when\nintegrated into existing text-to-3D pipelines, our models significantly enhance\nthe detail richness, achieving state-of-the-art results. Our project page is\nhttps://aigc3d.github.io/richdreamer/.", + "Rigging and skinning clothed human avatars is a challenging task and\ntraditionally requires a lot of manual work and expertise. Recent methods\naddressing it either generalize across different characters or focus on\ncapturing the dynamics of a single character observed under different pose\nconfigurations. However, the former methods typically predict solely static\nskinning weights, which perform poorly for highly articulated poses, and the\nlatter ones either require dense 3D character scans in different poses or\ncannot generate an explicit mesh with vertex correspondence over time. To\naddress these challenges, we propose a fully automated approach for creating a\nfully rigged character with pose-dependent skinning weights, which can be\nsolely learned from multi-view video. Therefore, we first acquire a rigged\ntemplate, which is then statically skinned. Next, a coordinate-based MLP learns\na skinning weights field parameterized over the position in a canonical pose\nspace and the respective pose. Moreover, we introduce our pose- and\nview-dependent appearance field allowing us to differentiably render and\nsupervise the posed mesh using multi-view imagery. We show that our approach\noutperforms state-of-the-art while not relying on dense 4D scans.", + "Zero-shot referring expression comprehension aims at localizing bounding\nboxes in an image corresponding to provided textual prompts, which requires:\n(i) a fine-grained disentanglement of complex visual scene and textual context,\nand (ii) a capacity to understand relationships among disentangled entities.\nUnfortunately, existing large vision-language alignment (VLA) models, e.g.,\nCLIP, struggle with both aspects so cannot be directly used for this task.
To\nmitigate this gap, we leverage large foundation models to disentangle both\nimages and texts into triplets in the format of (subject, predicate, object).\nAfter that, grounding is accomplished by calculating the structural similarity\nmatrix between visual and textual triplets with a VLA model, and subsequently\npropagating it to an instance-level similarity matrix. Furthermore, to equip VLA\nmodels with the ability of relationship understanding, we design a\ntriplet-matching objective to fine-tune the VLA models on a collection of\ncurated datasets containing abundant entity relationships. Experiments\ndemonstrate a visual grounding performance increase of up to 19.5% over\nthe SOTA zero-shot model on RefCOCO/+/g.", + "Experiments\ndemonstrate a visual grounding performance increase of up to 19.5% over\nthe SOTA zero-shot model on RefCOCO/+/g. On the more challenging Who's Waldo\ndataset, our zero-shot approach achieves comparable accuracy to the fully\nsupervised model. Code is available at\nhttps://github.com/Show-han/Zeroshot_REC.", + "Prompt learning has emerged as an effective and data-efficient technique in\nlarge Vision-Language Models (VLMs). However, when adapting VLMs to specialized\ndomains such as remote sensing and medical imaging, domain prompt learning\nremains underexplored. While large-scale domain-specific foundation models can\nhelp tackle this challenge, their concentration on a single vision level makes\nit challenging to prompt both vision and language modalities. To overcome this,\nwe propose to leverage domain-specific knowledge from domain-specific\nfoundation models to transfer the robust recognition ability of VLMs from\ngeneralized to specialized domains, using quaternion networks. Specifically,\nthe proposed method involves using domain-specific vision features from\ndomain-specific foundation models to guide the transformation of generalized\ncontextual embeddings from the language branch into a specialized space within\nthe quaternion networks. Moreover, we present a hierarchical approach that\ngenerates vision prompt features by analyzing intermodal relationships between\nhierarchical language prompt features and domain-specific vision features. In\nthis way, quaternion networks can effectively mine the intermodal relationships\nin the specific domain, facilitating domain-specific vision-language\ncontrastive learning. Extensive experiments on domain-specific datasets show\nthat our proposed method achieves new state-of-the-art results in prompt\nlearning.", + "The systematic evaluation and understanding of computer vision models under\nvarying conditions require large amounts of data with comprehensive and\ncustomized labels, which real-world vision datasets rarely satisfy. While\ncurrent synthetic data generators offer a promising alternative, particularly\nfor embodied AI tasks, they often fall short for computer vision tasks due to\nlow asset and rendering quality, limited diversity, and unrealistic physical\nproperties. We introduce the BEHAVIOR Vision Suite (BVS), a set of tools and\nassets to generate fully customized synthetic data for systematic evaluation of\ncomputer vision models, based on the newly developed embodied AI benchmark,\nBEHAVIOR-1K. BVS supports a large number of adjustable parameters at the scene\nlevel (e.g., lighting, object placement), the object level (e.g., joint\nconfiguration, attributes such as \"filled\" and \"folded\"), and the camera level\n(e.g., field of view, focal length).
Researchers can arbitrarily vary these\nparameters during data generation to perform controlled experiments.", + "Researchers can arbitrarily vary these\nparameters during data generation to perform controlled experiments. We\nshowcase three example application scenarios: systematically evaluating the\nrobustness of models across different continuous axes of domain shift,\nevaluating scene understanding models on the same set of images, and training\nand evaluating simulation-to-real transfer for a novel vision task: unary and\nbinary state prediction. Project website:\nhttps://behavior-vision-suite.github.io/", + "Recent advancements in 3D reconstruction from single images have been driven\nby the evolution of generative models. Prominent among these are methods based\non Score Distillation Sampling (SDS) and the adaptation of diffusion models in\nthe 3D domain. Despite their progress, these techniques often face limitations\ndue to slow optimization or rendering processes, leading to extensive training\nand optimization times. In this paper, we introduce a novel approach for\nsingle-view reconstruction that efficiently generates a 3D model from a single\nimage via feed-forward inference. Our method utilizes two transformer-based\nnetworks, namely a point decoder and a triplane decoder, to reconstruct 3D\nobjects using a hybrid Triplane-Gaussian intermediate representation. This\nhybrid representation strikes a balance, achieving a faster rendering speed\ncompared to implicit representations while simultaneously delivering superior\nrendering quality than explicit representations. The point decoder is designed\nfor generating point clouds from single images, offering an explicit\nrepresentation which is then utilized by the triplane decoder to query Gaussian\nfeatures for each point. This design choice addresses the challenges associated\nwith directly regressing explicit 3D Gaussian attributes characterized by their\nnon-structural nature.", + "This design choice addresses the challenges associated\nwith directly regressing explicit 3D Gaussian attributes characterized by their\nnon-structural nature. Subsequently, the 3D Gaussians are decoded by an MLP to\nenable rapid rendering through splatting. Both decoders are built upon a\nscalable, transformer-based architecture and have been efficiently trained on\nlarge-scale 3D datasets. The evaluations conducted on both synthetic datasets\nand real-world images demonstrate that our method not only achieves higher\nquality but also ensures a faster runtime in comparison to previous\nstate-of-the-art techniques. Please see our project page at\nhttps://zouzx.github.io/TriplaneGaussian/.", + "The advances in the Neural Radiance Fields (NeRF) research offer extensive\napplications in diverse domains, but protecting their copyrights has not yet\nbeen researched in depth. Recently, NeRF watermarking has been considered one\nof the pivotal solutions for safely deploying NeRF-based 3D representations.\nHowever, existing methods are designed to apply only to implicit or explicit\nNeRF representations. In this work, we introduce an innovative watermarking\nmethod that can be employed in both representations of NeRF. This is achieved\nby fine-tuning NeRF to embed binary messages in the rendering process. In\ndetail, we propose utilizing the discrete wavelet transform in the NeRF space\nfor watermarking. 
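As a small, self-contained illustration of the discrete wavelet transform mentioned above (not the paper's embedding scheme), a single-level 2D Haar transform splits a rendered patch into one low-frequency approximation and three detail sub-bands in which a watermark bit could be hidden.

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One level of a 2D Haar DWT: returns the approximation and three detail sub-bands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # low-frequency approximation
    d1 = (a - b + c - d) / 4.0  # detail sub-bands
    d2 = (a + b - c - d) / 4.0
    d3 = (a - b - c + d) / 4.0
    return ll, d1, d2, d3

patch = np.random.rand(64, 64)   # stand-in for a rendered image patch
ll, d1, d2, d3 = haar_dwt2(patch)
print(ll.shape)                   # (32, 32); a message bit could perturb these coefficients
```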
Furthermore, we adopt a deferred back-propagation technique\nand introduce a combination with the patch-wise loss to improve rendering\nquality and bit accuracy with minimum trade-offs. We evaluate our method in\nthree different aspects: capacity, invisibility, and robustness of the embedded\nwatermarks in the 2D-rendered images. Our method achieves state-of-the-art\nperformance with faster training speed over the compared state-of-the-art\nmethods.", + "Knowledge distillation methods have recently shown to be a promising\ndirection to speedup the synthesis of large-scale diffusion models by requiring\nonly a few inference steps. While several powerful distillation methods were\nrecently proposed, the overall quality of student samples is typically lower\ncompared to the teacher ones, which hinders their practical usage. In this\nwork, we investigate the relative quality of samples produced by the teacher\ntext-to-image diffusion model and its distilled student version. As our main\nempirical finding, we discover that a noticeable portion of student samples\nexhibit superior fidelity compared to the teacher ones, despite the\n\"approximate\" nature of the student. Based on this finding, we propose an\nadaptive collaboration between student and teacher diffusion models for\neffective text-to-image synthesis. Specifically, the distilled model produces\nthe initial sample, and then an oracle decides whether it needs further\nimprovements with a slow teacher model. Extensive experiments demonstrate that\nthe designed pipeline surpasses state-of-the-art text-to-image alternatives for\nvarious inference budgets in terms of human preference. Furthermore, the\nproposed approach can be naturally used in popular applications such as\ntext-guided image editing and controllable generation.", + "Recently, efficient Vision Transformers have shown great performance with low\nlatency on resource-constrained devices. Conventionally, they use 4x4 patch\nembeddings and a 4-stage structure at the macro level, while utilizing\nsophisticated attention with multi-head configuration at the micro level. This\npaper aims to address computational redundancy at all design levels in a\nmemory-efficient manner. We discover that using larger-stride patchify stem not\nonly reduces memory access costs but also achieves competitive performance by\nleveraging token representations with reduced spatial redundancy from the early\nstages. Furthermore, our preliminary analyses suggest that attention layers in\nthe early stages can be substituted with convolutions, and several attention\nheads in the latter stages are computationally redundant. To handle this, we\nintroduce a single-head attention module that inherently prevents head\nredundancy and simultaneously boosts accuracy by parallelly combining global\nand local information. Building upon our solutions, we introduce SHViT, a\nSingle-Head Vision Transformer that obtains the state-of-the-art speed-accuracy\ntradeoff.", + "Building upon our solutions, we introduce SHViT, a\nSingle-Head Vision Transformer that obtains the state-of-the-art speed-accuracy\ntradeoff. For example, on ImageNet-1k, our SHViT-S4 is 3.3x, 8.1x, and 2.4x\nfaster than MobileViTv2 x1.0 on GPU, CPU, and iPhone12 mobile device,\nrespectively, while being 1.3% more accurate. 
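For readers unfamiliar with the single-head design mentioned above, the sketch below is a bare single-head self-attention block with no head splitting. It omits SHViT's convolutional local branch and is only a hypothetical reference for the general idea, not the released model code.

```python
import torch
import torch.nn as nn

class SingleHeadAttention(nn.Module):
    """Plain single-head self-attention: one (Q, K, V) projection, no head split."""

    def __init__(self, dim: int, qk_dim: int = 16):
        super().__init__()
        self.to_q = nn.Linear(dim, qk_dim)
        self.to_k = nn.Linear(dim, qk_dim)
        self.to_v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = qk_dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)

x = torch.randn(2, 49, 128)
print(SingleHeadAttention(128)(x).shape)  # torch.Size([2, 49, 128])
```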
For object detection and instance\nsegmentation on MS COCO using Mask-RCNN head, our model achieves performance\ncomparable to FastViT-SA12 while exhibiting 3.8x and 2.0x lower backbone\nlatency on GPU and mobile device, respectively.", + "Reconstructing High Dynamic Range (HDR) video from image sequences captured\nwith alternating exposures is challenging, especially in the presence of large\ncamera or object motion. Existing methods typically align low dynamic range\nsequences using optical flow or attention mechanism for deghosting. However,\nthey often struggle to handle large complex motions and are computationally\nexpensive. To address these challenges, we propose a robust and efficient flow\nestimator tailored for real-time HDR video reconstruction, named HDRFlow.\nHDRFlow has three novel designs: an HDR-domain alignment loss (HALoss), an\nefficient flow network with a multi-size large kernel (MLK), and a new HDR flow\ntraining scheme. The HALoss supervises our flow network to learn an\nHDR-oriented flow for accurate alignment in saturated and dark regions. The MLK\ncan effectively model large motions at a negligible cost. In addition, we\nincorporate synthetic data, Sintel, into our training dataset, utilizing both\nits provided forward flow and backward flow generated by us to supervise our\nflow network, enhancing our performance in large motion regions. Extensive\nexperiments demonstrate that our HDRFlow outperforms previous methods on\nstandard benchmarks.", + "Extensive\nexperiments demonstrate that our HDRFlow outperforms previous methods on\nstandard benchmarks. To the best of our knowledge, HDRFlow is the first\nreal-time HDR video reconstruction method for video sequences captured with\nalternating exposures, capable of processing 720p resolution inputs at 25ms.", + "Can we capture shape and reflectance in stealth? Such capability would be\nvaluable for many application domains in vision, xR, robotics, and HCI. We\nintroduce structured polarization for invisible depth and reflectance sensing\n(SPIDeRS), the first depth and reflectance sensing method using patterns of\npolarized light. The key idea is to modulate the angle of linear polarization\n(AoLP) of projected light at each pixel. The use of polarization makes it\ninvisible and lets us recover not only depth but also directly surface normals\nand even reflectance. We implement SPIDeRS with a liquid crystal spatial light\nmodulator (SLM) and a polarimetric camera. We derive a novel method for\nrobustly extracting the projected structured polarization pattern from the\npolarimetric object appearance. We evaluate the effectiveness of SPIDeRS by\napplying it to a number of real-world objects. The results show that our method\nsuccessfully reconstructs object shapes of various materials and is robust to\ndiffuse reflection and ambient light. We also demonstrate relighting using\nrecovered surface normals and reflectance. We believe SPIDeRS opens a new\navenue of polarization use in visual sensing.", + "We present SuperNormal, a fast, high-fidelity approach to multi-view 3D\nreconstruction using surface normal maps. With a few minutes, SuperNormal\nproduces detailed surfaces on par with 3D scanners. We harness volume rendering\nto optimize a neural signed distance function (SDF) powered by multi-resolution\nhash encoding. 
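The multi-resolution hash encoding mentioned above follows the well-known spatial-hash construction; as a toy sketch (nearest-corner lookup only, no trilinear interpolation, table sizes and resolutions chosen arbitrarily, and not SuperNormal's code), per-level feature tables can be indexed like this:

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_indices(coords: np.ndarray, table_size: int) -> np.ndarray:
    """coords: (N, 3) integer grid coordinates -> (N,) indices into a feature table."""
    c = coords.astype(np.uint64)
    h = (c[:, 0] * PRIMES[0]) ^ (c[:, 1] * PRIMES[1]) ^ (c[:, 2] * PRIMES[2])
    return (h % np.uint64(table_size)).astype(np.int64)

# Two resolution levels, each with its own small feature table.
tables = [np.random.randn(2 ** 14, 2).astype(np.float32) for _ in range(2)]
x = np.random.rand(5, 3)                                   # query points in [0, 1)^3
features = []
for level, table in enumerate(tables):
    res = 16 * (2 ** level)                                # grid resolution at this level
    idx = hash_indices(np.floor(x * res), table.shape[0])  # nearest-corner lookup
    features.append(table[idx])
feat = np.concatenate(features, axis=1)                    # (5, 4) concatenated features
print(feat.shape)
```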
To accelerate training, we propose directional finite difference\nand patch-based ray marching to approximate the SDF gradients numerically.\nWhile not compromising reconstruction quality, this strategy is nearly twice as\nefficient as analytical gradients and about three times faster than\naxis-aligned finite difference. Experiments on the benchmark dataset\ndemonstrate the superiority of SuperNormal in efficiency and accuracy compared\nto existing multi-view photometric stereo methods. On our captured objects,\nSuperNormal produces more fine-grained geometry than recent neural 3D\nreconstruction methods.", + "We present Image Sculpting, a new framework for editing 2D images by\nincorporating tools from 3D geometry and graphics. This approach differs\nmarkedly from existing methods, which are confined to 2D spaces and typically\nrely on textual instructions, leading to ambiguity and limited control. Image\nSculpting converts 2D objects into 3D, enabling direct interaction with their\n3D geometry. Post-editing, these objects are re-rendered into 2D, merging into\nthe original image to produce high-fidelity results through a coarse-to-fine\nenhancement process. The framework supports precise, quantifiable, and\nphysically-plausible editing options such as pose editing, rotation,\ntranslation, 3D composition, carving, and serial addition. It marks an initial\nstep towards combining the creative freedom of generative models with the\nprecision of graphics pipelines.", + "Building accurate maps is a key building block to enable reliable\nlocalization, planning, and navigation of autonomous vehicles. We propose a\nnovel approach for building accurate maps of dynamic environments utilizing a\nsequence of LiDAR scans. To this end, we propose encoding the 4D scene into a\nnovel spatio-temporal implicit neural map representation by fitting a\ntime-dependent truncated signed distance function to each point. Using our\nrepresentation, we extract the static map by filtering the dynamic parts. Our\nneural representation is based on sparse feature grids, a globally shared\ndecoder, and time-dependent basis functions, which we jointly optimize in an\nunsupervised fashion. To learn this representation from a sequence of LiDAR\nscans, we design a simple yet efficient loss function to supervise the map\noptimization in a piecewise way. We evaluate our approach on various scenes\ncontaining moving objects in terms of the reconstruction quality of static maps\nand the segmentation of dynamic point clouds. The experimental results\ndemonstrate that our method is capable of removing the dynamic part of the\ninput point clouds while reconstructing accurate and complete 3D maps,\noutperforming several state-of-the-art methods.", + "The experimental results\ndemonstrate that our method is capable of removing the dynamic part of the\ninput point clouds while reconstructing accurate and complete 3D maps,\noutperforming several state-of-the-art methods. Codes are available at:\nhttps://github.com/PRBonn/4dNDF", + "We present FoundationPose, a unified foundation model for 6D object pose\nestimation and tracking, supporting both model-based and model-free setups. Our\napproach can be instantly applied at test-time to a novel object without\nfine-tuning, as long as its CAD model is given, or a small number of reference\nimages are captured. 
We bridge the gap between these two setups with a neural\nimplicit representation that allows for effective novel view synthesis, keeping\nthe downstream pose estimation modules invariant under the same unified\nframework. Strong generalizability is achieved via large-scale synthetic\ntraining, aided by a large language model (LLM), a novel transformer-based\narchitecture, and contrastive learning formulation. Extensive evaluation on\nmultiple public datasets involving challenging scenarios and objects indicates that\nour unified approach outperforms existing methods specialized for each task by\na large margin. In addition, it even achieves comparable results to\ninstance-level methods despite the reduced assumptions. Project page:\nhttps://nvlabs.github.io/FoundationPose/", + "Recent developments in face restoration have achieved remarkable results in\nproducing high-quality and lifelike outputs. The stunning results, however, often\nfail to be faithful with respect to the identity of the person as the models\nlack necessary context. In this paper, we explore the potential of personalized\nface restoration with diffusion models. In our approach a restoration model is\npersonalized using a few images of the identity, leading to tailored\nrestoration with respect to the identity while retaining fine-grained details.\nBy using independent trainable blocks for personalization, the rich prior of a\nbase restoration model can be exploited to its fullest. To avoid the model\nrelying on parts of identity left in the conditioning low-quality images, a\ngenerative regularizer is employed. With a learnable parameter, the model\nlearns to balance between the details generated based on the input image and\nthe degree of personalization. Moreover, we improve the training pipeline of\nface restoration models to enable an alignment-free approach. We showcase the\nrobust capabilities of our approach in several real-world scenarios with\nmultiple identities, demonstrating our method's ability to generate\nfine-grained details with faithful restoration.", + "Moreover, we improve the training pipeline of\nface restoration models to enable an alignment-free approach. We showcase the\nrobust capabilities of our approach in several real-world scenarios with\nmultiple identities, demonstrating our method's ability to generate\nfine-grained details with faithful restoration. In the user study we evaluate\nthe perceptual quality and faithfulness of the generated details, with our\nmethod being voted best 61% of the time compared to the second best with 25% of\nthe votes.", + "We present TextureDreamer, a novel image-guided texture synthesis method to\ntransfer relightable textures from a small number of input images (3 to 5) to\ntarget 3D shapes across arbitrary categories. Texture creation is a pivotal\nchallenge in vision and graphics. Industrial companies hire experienced artists\nto manually craft textures for 3D assets. Classical methods require densely\nsampled views and accurately aligned geometry, while learning-based methods are\nconfined to category-specific shapes within the dataset. In contrast,\nTextureDreamer can transfer highly detailed, intricate textures from real-world\nenvironments to arbitrary objects with only a few casually captured images,\npotentially significantly democratizing texture creation.
Our core idea,\npersonalized geometry-aware score distillation (PGSD), draws inspiration from\nrecent advancements in diffusion models, including personalized modeling for\ntexture information extraction, variational score distillation for detailed\nappearance synthesis, and explicit geometry guidance with ControlNet. Our\nintegration and several essential modifications substantially improve the\ntexture quality. Experiments on real images spanning different categories show\nthat TextureDreamer can successfully transfer highly realistic, semantically\nmeaningful texture to arbitrary objects, surpassing the visual quality of\nprevious state-of-the-art.", + "Autonomous driving is a complex and challenging task that aims at safe motion\nplanning through scene understanding and reasoning. While vision-only\nautonomous driving methods have recently achieved notable performance through\nenhanced scene understanding, several key issues, including lack of reasoning,\nlow generalization performance and long-tail scenarios, still need to be\naddressed. In this paper, we present VLP, a novel Vision-Language-Planning\nframework that exploits language models to bridge the gap between linguistic\nunderstanding and autonomous driving. VLP enhances autonomous driving systems\nby strengthening both the source memory foundation and the self-driving car's\ncontextual understanding. VLP achieves state-of-the-art end-to-end planning\nperformance on the challenging NuScenes dataset by achieving 35.9\\% and 60.5\\%\nreduction in terms of average L2 error and collision rates, respectively,\ncompared to the previous best method. Moreover, VLP shows improved performance\nin challenging long-tail scenarios and strong generalization capabilities when\nfaced with new urban environments.", + "Recent thrilling progress in large-scale text-to-image (T2I) models has\nunlocked unprecedented synthesis quality of AI-generated content (AIGC)\nincluding image generation, 3D and video composition. Further, personalized\ntechniques enable appealing customized production of a novel concept given only\nseveral images as reference. However, an intriguing problem persists: Is it\npossible to capture multiple, novel concepts from one single reference image?\nIn this paper, we identify that existing approaches fail to preserve visual\nconsistency with the reference image and eliminate cross-influence from\nconcepts. To alleviate this, we propose an attention calibration mechanism to\nimprove the concept-level understanding of the T2I model. Specifically, we\nfirst introduce new learnable modifiers bound with classes to capture\nattributes of multiple concepts. Then, the classes are separated and\nstrengthened following the activation of the cross-attention operation,\nensuring comprehensive and self-contained concepts. Additionally, we suppress\nthe attention activation of different classes to mitigate mutual influence\namong concepts. Together, our proposed method, dubbed DisenDiff, can learn\ndisentangled multiple concepts from one single image and produce novel\ncustomized images with learned concepts.", + "Additionally, we suppress\nthe attention activation of different classes to mitigate mutual influence\namong concepts. Together, our proposed method, dubbed DisenDiff, can learn\ndisentangled multiple concepts from one single image and produce novel\ncustomized images with learned concepts. We demonstrate that our method\noutperforms the current state of the art in both qualitative and quantitative\nevaluations.
More importantly, our proposed techniques are compatible with LoRA\nand inpainting pipelines, enabling more interactive experiences.", + "Generative AI (GenAI) is transforming creative workflows through the\ncapability to synthesize and manipulate images via high-level prompts. Yet\ncreatives are not well supported to receive recognition or reward for the use\nof their content in GenAI training. To this end, we propose ProMark, a causal\nattribution technique to attribute a synthetically generated image to its\ntraining data concepts like objects, motifs, templates, artists, or styles. The\nconcept information is proactively embedded into the input training images\nusing imperceptible watermarks, and the diffusion models (unconditional or\nconditional) are trained to retain the corresponding watermarks in generated\nimages. We show that we can embed as many as $2^{16}$ unique watermarks into\nthe training data, and each training image can contain more than one watermark.\nProMark can maintain image quality whilst outperforming correlation-based\nattribution. Finally, several qualitative examples are presented, providing the\nconfidence that the presence of the watermark conveys a causative relationship\nbetween training data and synthetic images.", + "While GAN-based models have been successful in image stylization tasks, they\noften struggle with structure preservation while stylizing a wide range of\ninput images. Recently, diffusion models have been adopted for image\nstylization but still lack the capability to maintain the original quality of\ninput images. Building on this, we propose OSASIS: a novel one-shot stylization\nmethod that is robust in structure preservation. We show that OSASIS is able to\neffectively disentangle the semantics from the structure of an image, allowing\nit to control the level of content and style implemented to a given input. We\napply OSASIS to various experimental settings, including stylization with\nout-of-domain reference images and stylization with text-driven manipulation.\nResults show that OSASIS outperforms other stylization methods, especially for\ninput images that were rarely encountered during training, providing a\npromising solution to stylization via diffusion models.", + "Multimodal Large Language Models (MLLMs) have excelled in 2D image-text\ncomprehension and image generation, but their understanding of the 3D world is\nnotably deficient, limiting progress in 3D language understanding and\ngeneration. To solve this problem, we introduce GPT4Point, an innovative\ngroundbreaking point-language multimodal model designed specifically for\nunified 3D object understanding and generation within the MLLM framework.\nGPT4Point as a powerful 3D MLLM seamlessly can execute a variety of point-text\nreference tasks such as point-cloud captioning and Q&A. Additionally, GPT4Point\nis equipped with advanced capabilities for controllable 3D generation, it can\nget high-quality results through a low-quality point-text feature maintaining\nthe geometric shapes and colors. To support the expansive needs of 3D\nobject-text pairs, we develop Pyramid-XL, a point-language dataset annotation\nengine. It constructs a large-scale database over 1M objects of varied text\ngranularity levels from the Objaverse-XL dataset, essential for training\nGPT4Point.", + "It constructs a large-scale database over 1M objects of varied text\ngranularity levels from the Objaverse-XL dataset, essential for training\nGPT4Point. 
A comprehensive benchmark has been proposed to evaluate 3D\npoint-language understanding capabilities. In extensive evaluations, GPT4Point\nhas demonstrated superior performance in understanding and generation.", + "We present \"SemCity,\" a 3D diffusion model for semantic scene generation in\nreal-world outdoor environments. Most 3D diffusion models focus on generating a\nsingle object, synthetic indoor scenes, or synthetic outdoor scenes, while the\ngeneration of real-world outdoor scenes is rarely addressed. In this paper, we\nconcentrate on generating a real-outdoor scene through learning a diffusion\nmodel on a real-world outdoor dataset. In contrast to synthetic data,\nreal-outdoor datasets often contain more empty spaces due to sensor\nlimitations, causing challenges in learning real-outdoor distributions. To\naddress this issue, we exploit a triplane representation as a proxy form of\nscene distributions to be learned by our diffusion model. Furthermore, we\npropose a triplane manipulation that integrates seamlessly with our triplane\ndiffusion model. The manipulation improves our diffusion model's applicability\nin a variety of downstream tasks related to outdoor scene generation such as\nscene inpainting, scene outpainting, and semantic scene completion refinements.\nIn experimental results, we demonstrate that our triplane diffusion model shows\nmeaningful generation results compared with existing work in a real-outdoor\ndataset, SemanticKITTI.", + "In experimental results, we demonstrate that our triplane diffusion model shows\nmeaningful generation results compared with existing work in a real-outdoor\ndataset, SemanticKITTI. We also show our triplane manipulation facilitates\nseamlessly adding, removing, or modifying objects within a scene. Further, it\nalso enables the expansion of scenes toward a city-level scale. Finally, we\nevaluate our method on semantic scene completion refinements where our\ndiffusion model enhances predictions of semantic scene completion networks by\nlearning scene distribution. Our code is available at\nhttps://github.com/zoomin-lee/SemCity.", + "Recent progress in self-supervised representation learning has resulted in\nmodels that are capable of extracting image features that are not only\neffective at encoding image level, but also pixel-level, semantics. These\nfeatures have been shown to be effective for dense visual semantic\ncorrespondence estimation, even outperforming fully-supervised methods.\nNevertheless, current self-supervised approaches still fail in the presence of\nchallenging image characteristics such as symmetries and repeated parts. To\naddress these limitations, we propose a new approach for semantic\ncorrespondence estimation that supplements discriminative self-supervised\nfeatures with 3D understanding via a weak geometric spherical prior. Compared\nto more involved 3D pipelines, our model only requires weak viewpoint\ninformation, and the simplicity of our spherical representation enables us to\ninject informative geometric priors into the model during training. We propose\na new evaluation metric that better accounts for repeated part and\nsymmetry-induced mistakes. 
We present results on the challenging SPair-71k\ndataset, where we show that our approach is capable of\ndistinguishing between symmetric views and repeated parts across many object\ncategories, and also demonstrate that we can generalize to unseen classes on\nthe AwA dataset.", + "With the emergence of pre-trained vision-language models like CLIP, how to\nadapt them to various downstream classification tasks has garnered significant\nattention in recent research. The adaptation strategies can be typically\ncategorized into three paradigms: zero-shot adaptation, few-shot adaptation,\nand the recently-proposed training-free few-shot adaptation. Most existing\napproaches are tailored for a specific setting and can only cater to one or two\nof these paradigms. In this paper, we introduce a versatile adaptation approach\nthat can effectively work under all three settings. Specifically, we propose\nthe dual memory networks that comprise dynamic and static memory components.\nThe static memory caches training data knowledge, enabling training-free\nfew-shot adaptation, while the dynamic memory preserves historical test\nfeatures online during the testing process, allowing for the exploration of\nadditional data insights beyond the training set. This novel capability\nenhances model performance in the few-shot setting and enables model usability\nin the absence of training data. The two memory networks employ the same\nflexible memory interactive strategy, which can operate in a training-free mode\nand can be further enhanced by incorporating learnable projection layers.", + "This novel capability\nenhances model performance in the few-shot setting and enables model usability\nin the absence of training data. The two memory networks employ the same\nflexible memory interactive strategy, which can operate in a training-free mode\nand can be further enhanced by incorporating learnable projection layers. Our\napproach is tested across 11 datasets under the three task settings.\nRemarkably, in the zero-shot scenario, it outperforms existing methods by over\n3\\% and even shows superior results against methods utilizing external training\ndata. Additionally, our method exhibits robust performance against natural\ndistribution shifts. Codes are available at \\url{https://github.com/YBZh/DMN}.", + "We introduce a framework for intrinsic latent diffusion models operating\ndirectly on the surfaces of 3D shapes, with the goal of synthesizing\nhigh-quality textures. Our approach is underpinned by two contributions: field\nlatents, a latent representation encoding textures as discrete vector fields on\nthe mesh vertices, and field latent diffusion models, which learn to denoise a\ndiffusion process in the learned latent space on the surface. We consider a\nsingle-textured-mesh paradigm, where our models are trained to generate\nvariations of a given texture on a mesh. We show the synthesized textures are\nof superior fidelity compared to those from existing single-textured-mesh\ngenerative models. Our models can also be adapted for user-controlled editing\ntasks such as inpainting and label-guided generation. The efficacy of our\napproach is due in part to the equivariance of our proposed framework under\nisometries, allowing our models to seamlessly reproduce details across locally\nsimilar regions and opening the door to a notion of generative texture\ntransfer.", + "Multimodal Large Language Models (MLLMs) have endowed LLMs with the ability\nto perceive and understand multi-modal signals.
However, most of the existing\nMLLMs mainly adopt vision encoders pretrained on coarsely aligned image-text\npairs, leading to insufficient extraction and reasoning of visual knowledge. To\naddress this issue, we devise a dual-Level vIsual knOwledge eNhanced Multimodal\nLarge Language Model (LION), which empowers the MLLM by injecting visual\nknowledge in two levels. 1) Progressive incorporation of fine-grained\nspatial-aware visual knowledge. We design a vision aggregator cooperated with\nregion-level vision-language (VL) tasks to incorporate fine-grained\nspatial-aware visual knowledge into the MLLM. To alleviate the conflict between\nimage-level and region-level VL tasks during incorporation, we devise a\ndedicated stage-wise instruction-tuning strategy with mixture-of-adapters. This\nprogressive incorporation scheme contributes to the mutual promotion between\nthese two kinds of VL tasks. 2) Soft prompting of high-level semantic visual\nevidence.", + "This\nprogressive incorporation scheme contributes to the mutual promotion between\nthese two kinds of VL tasks. 2) Soft prompting of high-level semantic visual\nevidence. We facilitate the MLLM with high-level semantic visual evidence by\nleveraging diverse image tags. To mitigate the potential influence caused by\nimperfect predicted tags, we propose a soft prompting method by embedding a\nlearnable token into the tailored text instruction. Comprehensive experiments\non several multi-modal benchmarks demonstrate the superiority of our model\n(e.g., improvement of 5% accuracy on VSR and 3% CIDEr on TextCaps over\nInstructBLIP, 5% accuracy on RefCOCOg over Kosmos-2).", + "The goal of selective prediction is to allow an a model to abstain when it\nmay not be able to deliver a reliable prediction, which is important in\nsafety-critical contexts. Existing approaches to selective prediction typically\nrequire access to the internals of a model, require retraining a model or study\nonly unimodal models. However, the most powerful models (e.g. GPT-4) are\ntypically only available as black boxes with inaccessible internals, are not\nretrainable by end-users, and are frequently used for multimodal tasks. We\nstudy the possibility of selective prediction for vision-language models in a\nrealistic, black-box setting. We propose using the principle of\n\\textit{neighborhood consistency} to identify unreliable responses from a\nblack-box vision-language model in question answering tasks. We hypothesize\nthat given only a visual question and model response, the consistency of the\nmodel's responses over the neighborhood of a visual question will indicate\nreliability. It is impossible to directly sample neighbors in feature space in\na black-box setting. Instead, we show that it is possible to use a smaller\nproxy model to approximately sample from the neighborhood.", + "It is impossible to directly sample neighbors in feature space in\na black-box setting. Instead, we show that it is possible to use a smaller\nproxy model to approximately sample from the neighborhood. We find that\nneighborhood consistency can be used to identify model responses to visual\nquestions that are likely unreliable, even in adversarial settings or settings\nthat are out-of-distribution to the proxy model.", + "Advancements in 3D instance segmentation have traditionally been tethered to\nthe availability of annotated datasets, limiting their application to a narrow\nspectrum of object categories. 
Recent efforts have sought to harness\nvision-language models like CLIP for open-set semantic reasoning, yet these\nmethods struggle to distinguish between objects of the same categories and rely\non specific prompts that are not universally applicable. In this paper, we\nintroduce SAI3D, a novel zero-shot 3D instance segmentation approach that\nsynergistically leverages geometric priors and semantic cues derived from\nSegment Anything Model (SAM). Our method partitions a 3D scene into geometric\nprimitives, which are then progressively merged into 3D instance segmentations\nthat are consistent with the multi-view SAM masks. Moreover, we design a\nhierarchical region-growing algorithm with a dynamic thresholding mechanism,\nwhich largely improves the robustness of finegrained 3D scene parsing.Empirical\nevaluations on ScanNet, Matterport3D and the more challenging ScanNet++\ndatasets demonstrate the superiority of our approach.", + "Moreover, we design a\nhierarchical region-growing algorithm with a dynamic thresholding mechanism,\nwhich largely improves the robustness of finegrained 3D scene parsing.Empirical\nevaluations on ScanNet, Matterport3D and the more challenging ScanNet++\ndatasets demonstrate the superiority of our approach. Notably, SAI3D\noutperforms existing open-vocabulary baselines and even surpasses\nfully-supervised methods in class-agnostic segmentation on ScanNet++. Our\nproject page is at https://yd-yin.github.io/SAI3D.", + "Test-time adaptation (TTA) aims at adapting a model pre-trained on the\nlabeled source domain to the unlabeled target domain. Existing methods usually\nfocus on improving TTA performance under covariate shifts, while neglecting\nsemantic shifts. In this paper, we delve into a realistic open-set TTA setting\nwhere the target domain may contain samples from unknown classes. Many\nstate-of-the-art closed-set TTA methods perform poorly when applied to open-set\nscenarios, which can be attributed to the inaccurate estimation of data\ndistribution and model confidence. To address these issues, we propose a simple\nbut effective framework called unified entropy optimization (UniEnt), which is\ncapable of simultaneously adapting to covariate-shifted in-distribution (csID)\ndata and detecting covariate-shifted out-of-distribution (csOOD) data.\nSpecifically, UniEnt first mines pseudo-csID and pseudo-csOOD samples from test\ndata, followed by entropy minimization on the pseudo-csID data and entropy\nmaximization on the pseudo-csOOD data. Furthermore, we introduce UniEnt+ to\nalleviate the noise caused by hard data partition leveraging sample-level\nconfidence.", + "Furthermore, we introduce UniEnt+ to\nalleviate the noise caused by hard data partition leveraging sample-level\nconfidence. Extensive experiments on CIFAR benchmarks and Tiny-ImageNet-C show\nthe superiority of our framework. The code is available at\nhttps://github.com/gaozhengqing/UniEnt", + "Coordinate based implicit neural representations have gained rapid popularity\nin recent years as they have been successfully used in image, geometry and\nscene modeling tasks. In this work, we present a novel use case for such\nimplicit representations in the context of learning anatomically constrained\nface models. Actor specific anatomically constrained face models are the state\nof the art in both facial performance capture and performance retargeting.\nDespite their practical success, these anatomical models are slow to evaluate\nand often require extensive data capture to be built. 
We propose the anatomical\nimplicit face model; an ensemble of implicit neural networks that jointly learn\nto model the facial anatomy and the skin surface with high-fidelity, and can\nreadily be used as a drop in replacement to conventional blendshape models.\nGiven an arbitrary set of skin surface meshes of an actor and only a neutral\nshape with estimated skull and jaw bones, our method can recover a dense\nanatomical substructure which constrains every point on the facial surface. We\ndemonstrate the usefulness of our approach in several tasks ranging from shape\nfitting, shape editing, and performance retargeting.", + "Class-Incremental Learning (CIL) requires a learning system to continually\nlearn new classes without forgetting. Despite the strong performance of\nPre-Trained Models (PTMs) in CIL, a critical issue persists: learning new\nclasses often results in the overwriting of old ones. Excessive modification of\nthe network causes forgetting, while minimal adjustments lead to an inadequate\nfit for new classes. As a result, it is desired to figure out a way of\nefficient model updating without harming former knowledge. In this paper, we\npropose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model\nupdating without conflict, we train a distinct lightweight adapter module for\neach new task, aiming to create task-specific subspaces. These adapters span a\nhigh-dimensional feature space, enabling joint decision-making across multiple\nsubspaces. As data evolves, the expanding subspaces render the old class\nclassifiers incompatible with new-stage spaces. Correspondingly, we design a\nsemantic-guided prototype complement strategy that synthesizes old classes' new\nfeatures without using any old class instance.", + "As data evolves, the expanding subspaces render the old class\nclassifiers incompatible with new-stage spaces. Correspondingly, we design a\nsemantic-guided prototype complement strategy that synthesizes old classes' new\nfeatures without using any old class instance. Extensive experiments on seven\nbenchmark datasets verify EASE's state-of-the-art performance. Code is\navailable at: https://github.com/sun-hailong/CVPR24-Ease", + "Identifying the chemical structure from a graphical representation, or image,\nof a molecule is a challenging pattern recognition task that would greatly\nbenefit drug development. Yet, existing methods for chemical structure\nrecognition do not typically generalize well, and show diminished effectiveness\nwhen confronted with domains where data is sparse, or costly to generate, such\nas hand-drawn molecule images. To address this limitation, we propose a new\nchemical structure recognition tool that delivers state-of-the-art performance\nand can adapt to new domains with a limited number of data samples and\nsupervision. Unlike previous approaches, our method provides atom-level\nlocalization, and can therefore segment the image into the different atoms and\nbonds. Our model is the first model to perform OCSR with atom-level entity\ndetection with only SMILES supervision. Through rigorous and extensive\nbenchmarking, we demonstrate the preeminence of our chemical structure\nrecognition approach in terms of data efficiency, accuracy, and atom-level\nentity prediction.", + "Camera-based person re-identification (ReID) systems have been widely applied\nin the field of public security. 
However, cameras often lack the perception of\n3D morphological information of human and are susceptible to various\nlimitations, such as inadequate illumination, complex background, and personal\nprivacy. In this paper, we propose a LiDAR-based ReID framework, ReID3D, that\nutilizes pre-training strategy to retrieve features of 3D body shape and\nintroduces Graph-based Complementary Enhancement Encoder for extracting\ncomprehensive features. Due to the lack of LiDAR datasets, we build LReID, the\nfirst LiDAR-based person ReID dataset, which is collected in several outdoor\nscenes with variations in natural conditions. Additionally, we introduce\nLReID-sync, a simulated pedestrian dataset designed for pre-training encoders\nwith tasks of point cloud completion and shape parameter learning. Extensive\nexperiments on LReID show that ReID3D achieves exceptional performance with a\nrank-1 accuracy of 94.0, highlighting the significant potential of LiDAR in\naddressing person ReID tasks.", + "Extensive\nexperiments on LReID show that ReID3D achieves exceptional performance with a\nrank-1 accuracy of 94.0, highlighting the significant potential of LiDAR in\naddressing person ReID tasks. To the best of our knowledge, we are the first to\npropose a solution for LiDAR-based ReID. The code and datasets will be released\nsoon.", + "As an important pillar of underwater intelligence, Marine Animal Segmentation\n(MAS) involves segmenting animals within marine environments. Previous methods\ndon't excel in extracting long-range contextual features and overlook the\nconnectivity between discrete pixels. Recently, Segment Anything Model (SAM)\noffers a universal framework for general segmentation tasks. Unfortunately,\ntrained with natural images, SAM does not obtain the prior knowledge from\nmarine images. In addition, the single-position prompt of SAM is very\ninsufficient for prior guidance. To address these issues, we propose a novel\nfeature learning framework, named Dual-SAM for high-performance MAS. To this\nend, we first introduce a dual structure with SAM's paradigm to enhance feature\nlearning of marine images. Then, we propose a Multi-level Coupled Prompt (MCP)\nstrategy to instruct comprehensive underwater prior information, and enhance\nthe multi-level features of SAM's encoder with adapters. Subsequently, we\ndesign a Dilated Fusion Attention Module (DFAM) to progressively integrate\nmulti-level features from SAM's encoder.", + "Subsequently, we\ndesign a Dilated Fusion Attention Module (DFAM) to progressively integrate\nmulti-level features from SAM's encoder. Finally, instead of directly\npredicting the masks of marine animals, we propose a Criss-Cross Connectivity\nPrediction (C$^3$P) paradigm to capture the inter-connectivity between discrete\npixels. With dual decoders, it generates pseudo-labels and achieves mutual\nsupervision for complementary feature representations, resulting in\nconsiderable improvements over previous techniques. Extensive experiments\nverify that our proposed method achieves state-of-the-art performances on five\nwidely-used MAS datasets. The code is available at\nhttps://github.com/Drchip61/Dual_SAM.", + "Video and audio content creation serves as the core technique for the movie\nindustry and professional users. Recently, existing diffusion-based methods\ntackle video and audio generation separately, which hinders the technique\ntransfer from academia to industry. 
In this work, we aim at filling the gap,\nwith a carefully designed optimization-based framework for cross-visual-audio\nand joint-visual-audio generation. We observe the powerful generation ability\nof off-the-shelf video or audio generation models. Thus, instead of training\nthe giant models from scratch, we propose to bridge the existing strong models\nwith a shared latent representation space. Specifically, we propose a\nmultimodality latent aligner with the pre-trained ImageBind model. Our latent\naligner shares a similar core as the classifier guidance that guides the\ndiffusion denoising process during inference time. Through carefully designed\noptimization strategy and loss functions, we show the superior performance of\nour method on joint video-audio generation, visual-steered audio generation,\nand audio-steered visual generation tasks. The project website can be found at\nhttps://yzxing87.github.io/Seeing-and-Hearing/", + "We develop a theory for the representation of opaque solids as volumes.\nStarting from a stochastic representation of opaque solids as random indicator\nfunctions, we prove the conditions under which such solids can be modeled using\nexponential volumetric transport. We also derive expressions for the volumetric\nattenuation coefficient as a functional of the probability distributions of the\nunderlying indicator functions. We generalize our theory to account for\nisotropic and anisotropic scattering at different parts of the solid, and for\nrepresentations of opaque solids as stochastic implicit surfaces. We derive our\nvolumetric representation from first principles, which ensures that it\nsatisfies physical constraints such as reciprocity and reversibility. We use\nour theory to explain, compare, and correct previous volumetric\nrepresentations, as well as propose meaningful extensions that lead to improved\nperformance in 3D reconstruction tasks.", + "The pretraining-finetuning paradigm has gained popularity in various computer\nvision tasks. In this paradigm, the emergence of active finetuning arises due\nto the abundance of large-scale data and costly annotation requirements. Active\nfinetuning involves selecting a subset of data from an unlabeled pool for\nannotation, facilitating subsequent finetuning. However, the use of a limited\nnumber of training samples can lead to a biased distribution, potentially\nresulting in model overfitting. In this paper, we propose a new method called\nActiveDC for the active finetuning tasks. Firstly, we select samples for\nannotation by optimizing the distribution similarity between the subset to be\nselected and the entire unlabeled pool in continuous space. Secondly, we\ncalibrate the distribution of the selected samples by exploiting implicit\ncategory information in the unlabeled pool. The feature visualization provides\nan intuitive sense of the effectiveness of our approach to distribution\ncalibration. We conducted extensive experiments on three image classification\ndatasets with different sampling ratios. The results indicate that ActiveDC\nconsistently outperforms the baseline performance in all image classification\ntasks.", + "The feature visualization provides\nan intuitive sense of the effectiveness of our approach to distribution\ncalibration. We conducted extensive experiments on three image classification\ndatasets with different sampling ratios. The results indicate that ActiveDC\nconsistently outperforms the baseline performance in all image classification\ntasks. 
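As a rough illustration of the selection idea described above (choosing a subset whose feature distribution resembles the whole unlabeled pool), the toy sketch below greedily matches the subset's mean feature to the pool's mean. The actual ActiveDC optimization and its distribution-calibration step are richer; everything here, including the greedy criterion, is an assumption for illustration only.

```python
import numpy as np

def greedy_distribution_matching(features, budget):
    """Greedily pick `budget` samples whose running mean feature stays closest
    to the mean feature of the whole unlabeled pool.

    This is only a toy stand-in for distribution-similarity-based selection;
    it is not the continuous-space optimization used by ActiveDC.
    """
    pool_mean = features.mean(axis=0)
    selected, running_sum = [], np.zeros_like(pool_mean)
    candidates = set(range(len(features)))
    for k in range(1, budget + 1):
        best_i, best_gap = None, np.inf
        for i in candidates:
            gap = np.linalg.norm((running_sum + features[i]) / k - pool_mean)
            if gap < best_gap:
                best_i, best_gap = i, gap
        selected.append(best_i)
        running_sum += features[best_i]
        candidates.remove(best_i)
    return selected
```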
The improvement is particularly significant when the sampling ratio is\nlow, with performance gains of up to 10%. Our code will be released.", + "Machine learning holds tremendous promise for transforming the fundamental\npractice of scientific discovery by virtue of its data-driven nature. With the\never-increasing stream of research data collection, it would be appealing to\nautonomously explore patterns and insights from observational data for\ndiscovering novel classes of phenotypes and concepts. However, in the\nbiomedical domain, there are several challenges inherently presented in the\ncumulated data which hamper the progress of novel class discovery. The\nnon-i.i.d. data distribution accompanied by the severe imbalance among\ndifferent groups of classes essentially leads to ambiguous and biased semantic\nrepresentations. In this work, we present a geometry-constrained probabilistic\nmodeling treatment to resolve the identified issues. First, we propose to\nparameterize the approximated posterior of instance embedding as a marginal von\nMisesFisher distribution to account for the interference of distributional\nlatent bias. Then, we incorporate a suite of critical geometric properties to\nimpose proper constraints on the layout of constructed embedding space, which\nin turn minimizes the uncontrollable risk for unknown class learning and\nstructuring. Furthermore, a spectral graph-theoretic method is devised to\nestimate the number of potential novel classes.", + "Furthermore, a spectral graph-theoretic method is devised to\nestimate the number of potential novel classes. It inherits two intriguing\nmerits compared to existent approaches, namely high computational efficiency\nand flexibility for taxonomy-adaptive estimation. Extensive experiments across\nvarious biomedical scenarios substantiate the effectiveness and general\napplicability of our method.", + "In this era, the success of large language models and text-to-image models\ncan be attributed to the driving force of large-scale datasets. However, in the\nrealm of 3D vision, while remarkable progress has been made with models trained\non large-scale synthetic and real-captured object data like Objaverse and\nMVImgNet, a similar level of progress has not been observed in the domain of\nhuman-centric tasks partially due to the lack of a large-scale human dataset.\nExisting datasets of high-fidelity 3D human capture continue to be mid-sized\ndue to the significant challenges in acquiring large-scale high-quality 3D\nhuman data. To bridge this gap, we present MVHumanNet, a dataset that comprises\nmulti-view human action sequences of 4,500 human identities. The primary focus\nof our work is on collecting human data that features a large number of diverse\nidentities and everyday clothing using a multi-view human capture system, which\nfacilitates easily scalable data collection.", + "The primary focus\nof our work is on collecting human data that features a large number of diverse\nidentities and everyday clothing using a multi-view human capture system, which\nfacilitates easily scalable data collection. Our dataset contains 9,000 daily\noutfits, 60,000 motion sequences and 645 million frames with extensive\nannotations, including human masks, camera parameters, 2D and 3D keypoints,\nSMPL/SMPLX parameters, and corresponding textual descriptions. 
To explore the\npotential of MVHumanNet in various 2D and 3D visual tasks, we conducted pilot\nstudies on view-consistent action recognition, human NeRF reconstruction,\ntext-driven view-unconstrained human image generation, as well as 2D\nview-unconstrained human image and 3D avatar generation. Extensive experiments\ndemonstrate the performance improvements and effective applications enabled by\nthe scale provided by MVHumanNet. As the current largest-scale 3D human\ndataset, we hope that the release of MVHumanNet data with annotations will\nfoster further innovations in the domain of 3D human-centric tasks at scale.", + "Federated learning often suffers from slow and unstable convergence due to\nthe heterogeneous characteristics of participating client datasets. Such a\ntendency is aggravated when the client participation ratio is low since the\ninformation collected from the clients has large variations. To address this\nchallenge, we propose a simple but effective federated learning framework,\nwhich improves the consistency across clients and facilitates the convergence\nof the server model. This is achieved by making the server broadcast a global\nmodel with a lookahead gradient. This strategy enables the proposed approach to\nconvey the projected global update information to participants effectively\nwithout additional client memory and extra communication costs. We also\nregularize local updates by aligning each client with the overshot global model\nto reduce bias and improve the stability of our algorithm. We provide the\ntheoretical convergence rate of our algorithm and demonstrate remarkable\nperformance gains in terms of accuracy and communication efficiency compared to\nthe state-of-the-art methods, especially with low client participation rates.\nThe source code is available at our project page.", + "Skeleton-based action recognition has attracted lots of research attention.\nRecently, to build an accurate skeleton-based action recognizer, a variety of\nworks have been proposed. Among them, some works use large model architectures\nas backbones of their recognizers to boost the skeleton data representation\ncapability, while some other works pre-train their recognizers on external data\nto enrich the knowledge. In this work, we observe that large language models\nwhich have been extensively used in various natural language processing tasks\ngenerally hold both large model architectures and rich implicit knowledge.\nMotivated by this, we propose a novel LLM-AR framework, in which we investigate\ntreating the Large Language Model as an Action Recognizer. In our framework, we\npropose a linguistic projection process to project each input action signal\n(i.e., each skeleton sequence) into its ``sentence format'' (i.e., an ``action\nsentence''). Moreover, we also incorporate our framework with several designs\nto further facilitate this linguistic projection process. Extensive experiments\ndemonstrate the efficacy of our proposed framework.", + "Generative models have been very popular in the recent years for their image\ngeneration capabilities. GAN-based models are highly regarded for their\ndisentangled latent space, which is a key feature contributing to their success\nin controlled image editing. On the other hand, diffusion models have emerged\nas powerful tools for generating high-quality images. However, the latent space\nof diffusion models is not as thoroughly explored or understood. 
Existing\nmethods that aim to explore the latent space of diffusion models usually relies\non text prompts to pinpoint specific semantics. However, this approach may be\nrestrictive in areas such as art, fashion, or specialized fields like medicine,\nwhere suitable text prompts might not be available or easy to conceive thus\nlimiting the scope of existing work. In this paper, we propose an unsupervised\nmethod to discover latent semantics in text-to-image diffusion models without\nrelying on text prompts. Our method takes a small set of unlabeled images from\nspecific domains, such as faces or cats, and a pre-trained diffusion model, and\ndiscovers diverse semantics in unsupervised fashion using a contrastive\nlearning objective.", + "Our method takes a small set of unlabeled images from\nspecific domains, such as faces or cats, and a pre-trained diffusion model, and\ndiscovers diverse semantics in unsupervised fashion using a contrastive\nlearning objective. Moreover, the learned directions can be applied\nsimultaneously, either within the same domain (such as various types of facial\nedits) or across different domains (such as applying cat and face edits within\nthe same image) without interfering with each other. Our extensive experiments\nshow that our method achieves highly disentangled edits, outperforming existing\napproaches in both diffusion-based and GAN-based latent space editing methods.", + "Neural radiance fields have achieved remarkable performance in modeling the\nappearance of 3D scenes. However, existing approaches still struggle with the\nview-dependent appearance of glossy surfaces, especially under complex lighting\nof indoor environments. Unlike existing methods, which typically assume distant\nlighting like an environment map, we propose a learnable Gaussian directional\nencoding to better model the view-dependent effects under near-field lighting\nconditions. Importantly, our new directional encoding captures the\nspatially-varying nature of near-field lighting and emulates the behavior of\nprefiltered environment maps. As a result, it enables the efficient evaluation\nof preconvolved specular color at any 3D location with varying roughness\ncoefficients. We further introduce a data-driven geometry prior that helps\nalleviate the shape radiance ambiguity in reflection modeling. We show that our\nGaussian directional encoding and geometry prior significantly improve the\nmodeling of challenging specular reflections in neural radiance fields, which\nhelps decompose appearance into more physically meaningful components.", + "In subject-driven text-to-image synthesis, the synthesis process tends to be\nheavily influenced by the reference images provided by users, often overlooking\ncrucial attributes detailed in the text prompt. In this work, we propose\nSubject-Agnostic Guidance (SAG), a simple yet effective solution to remedy the\nproblem. We show that through constructing a subject-agnostic condition and\napplying our proposed dual classifier-free guidance, one could obtain outputs\nconsistent with both the given subject and input text prompts. We validate the\nefficacy of our approach through both optimization-based and encoder-based\nmethods. Additionally, we demonstrate its applicability in second-order\ncustomization methods, where an encoder-based model is fine-tuned with\nDreamBooth. 
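The abstract does not give the exact form of the dual classifier-free guidance, so the sketch below shows one plausible way two guidance terms, one conditioned on the subject-specific embedding and one on a subject-agnostic embedding, might be combined at sampling time. The weights, the decomposition, and the `model` call signature are assumptions rather than SAG's actual formula.

```python
def dual_cfg_eps(model, x_t, t, cond_subject, cond_agnostic, w_subj=5.0, w_text=7.5):
    """Hypothetical combination of two classifier-free guidance terms.

    cond_subject : conditioning that includes the reference subject.
    cond_agnostic: the same conditioning with subject information neutralized.
    """
    eps_uncond = model(x_t, t, cond=None)
    eps_agnostic = model(x_t, t, cond=cond_agnostic)
    eps_subject = model(x_t, t, cond=cond_subject)
    # Guidance toward the subject-agnostic prompt (text attributes), plus an
    # extra push from the subject-specific condition on top of it.
    return (eps_uncond
            + w_text * (eps_agnostic - eps_uncond)
            + w_subj * (eps_subject - eps_agnostic))
```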
Our approach is conceptually simple and requires only minimal code\nmodifications, but leads to substantial quality improvements, as evidenced by\nour evaluations and user studies.", + "Large language models (LLMs) are fine-tuned using human comparison data with\nReinforcement Learning from Human Feedback (RLHF) methods to make them better\naligned with users' preferences. In contrast to LLMs, human preference learning\nhas not been widely explored in text-to-image diffusion models; the best\nexisting approach is to fine-tune a pretrained model using carefully curated\nhigh quality images and captions to improve visual appeal and text alignment.\nWe propose Diffusion-DPO, a method to align diffusion models to human\npreferences by directly optimizing on human comparison data. Diffusion-DPO is\nadapted from the recently developed Direct Preference Optimization (DPO), a\nsimpler alternative to RLHF which directly optimizes a policy that best\nsatisfies human preferences under a classification objective. We re-formulate\nDPO to account for a diffusion model notion of likelihood, utilizing the\nevidence lower bound to derive a differentiable objective. Using the Pick-a-Pic\ndataset of 851K crowdsourced pairwise preferences, we fine-tune the base model\nof the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with\nDiffusion-DPO.", + "Using the Pick-a-Pic\ndataset of 851K crowdsourced pairwise preferences, we fine-tune the base model\nof the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with\nDiffusion-DPO. Our fine-tuned base model significantly outperforms both base\nSDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement\nmodel in human evaluation, improving visual appeal and prompt alignment. We\nalso develop a variant that uses AI feedback and has comparable performance to\ntraining on human preferences, opening the door for scaling of diffusion model\nalignment methods.", + "Advanced life forms, sustained by the synergistic interaction of neural\ncognitive mechanisms, continually acquire and transfer knowledge throughout\ntheir lifespan. In contrast, contemporary machine learning paradigms exhibit\nlimitations in emulating the facets of continual learning (CL). Nonetheless,\nthe emergence of large language models (LLMs) presents promising avenues for\nrealizing CL via interactions with these models. Drawing on Complementary\nLearning System theory, this paper presents a novel Interactive Continual\nLearning (ICL) framework, enabled by collaborative interactions among models of\nvarious sizes. Specifically, we assign the ViT model as System1 and multimodal\nLLM as System2. To enable the memory module to deduce tasks from class\ninformation and enhance Set2Set retrieval, we propose the Class-Knowledge-Task\nMulti-Head Attention (CKT-MHA). Additionally, to improve memory retrieval in\nSystem1 through enhanced geometric representation, we introduce the CL-vMF\nmechanism, based on the von Mises-Fisher (vMF) distribution.", + "Additionally, to improve memory retrieval in\nSystem1 through enhanced geometric representation, we introduce the CL-vMF\nmechanism, based on the von Mises-Fisher (vMF) distribution. Meanwhile, we\nintroduce the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI)\nstrategy to identify hard examples, thus enhancing collaboration between\nSystem1 and System2 for complex reasoning realization. 
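As a loose illustration of how a vMF-based criterion can flag hard examples for System2, the sketch below scores each System1 embedding by its cosine alignment with the closest class mean direction and routes poorly aligned samples onward. The threshold and the use of a plain cosine score are assumptions; the abstract does not specify vMF-ODI at this level of detail.

```python
import torch
import torch.nn.functional as F

def route_hard_examples(features, class_dirs, tau=0.6):
    """Flag samples that no vMF class mode explains well and send them to System2.

    features   : (B, D) System1 embeddings.
    class_dirs : (C, D) mean directions of per-class vMF components.
    tau        : alignment threshold (an assumed value, not from the paper).
    """
    z = F.normalize(features, dim=-1)
    mu = F.normalize(class_dirs, dim=-1)
    cos = z @ mu.t()              # (B, C) cosine alignment with each class direction
    best, _ = cos.max(dim=-1)     # alignment with the closest class
    return best < tau             # True = hard example, route to System2
```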
Comprehensive evaluation\nof our proposed ICL demonstrates significant resistance to forgetting and\nsuperior performance relative to existing methods. Code is available at\ngithub.com/ICL.", + "We introduce a 3D-aware diffusion model, ZeroNVS, for single-image novel view\nsynthesis for in-the-wild scenes. While existing methods are designed for\nsingle objects with masked backgrounds, we propose new techniques to address\nchallenges introduced by in-the-wild multi-object scenes with complex\nbackgrounds. Specifically, we train a generative prior on a mixture of data\nsources that capture object-centric, indoor, and outdoor scenes. To address\nissues from data mixture such as depth-scale ambiguity, we propose a novel\ncamera conditioning parameterization and normalization scheme. Further, we\nobserve that Score Distillation Sampling (SDS) tends to truncate the\ndistribution of complex backgrounds during distillation of 360-degree scenes,\nand propose \"SDS anchoring\" to improve the diversity of synthesized novel\nviews. Our model sets a new state-of-the-art result in LPIPS on the DTU dataset\nin the zero-shot setting, even outperforming methods specifically trained on\nDTU. We further adapt the challenging Mip-NeRF 360 dataset as a new benchmark\nfor single-image novel view synthesis, and demonstrate strong performance in\nthis setting.", + "We further adapt the challenging Mip-NeRF 360 dataset as a new benchmark\nfor single-image novel view synthesis, and demonstrate strong performance in\nthis setting. Our code and data are at http://kylesargent.github.io/zeronvs/", + "The inherent generative power of denoising diffusion models makes them\nwell-suited for image restoration tasks where the objective is to find the\noptimal high-quality image within the generative space that closely resembles\nthe input image. We propose a method to adapt a pretrained diffusion model for\nimage restoration by simply adding noise to the input image to be restored and\nthen denoise. Our method is based on the observation that the space of a\ngenerative model needs to be constrained. We impose this constraint by\nfinetuning the generative model with a set of anchor images that capture the\ncharacteristics of the input image. With the constrained space, we can then\nleverage the sampling strategy used for generation to do image restoration. We\nevaluate against previous methods and show superior performances on multiple\nreal-world restoration datasets in preserving identity and image quality. We\nalso demonstrate an important and practical application on personalized\nrestoration, where we use a personal album as the anchor images to constrain\nthe generative space. This approach allows us to produce results that\naccurately preserve high-frequency details, which previous works are unable to\ndo. Project webpage: https://gen2res.github.io.", + "Continual Learning (CL) enables machine learning models to learn from\ncontinuously shifting new training data in absence of data from old tasks.\nRecently, pretrained vision transformers combined with prompt tuning have shown\npromise for overcoming catastrophic forgetting in CL. These approaches rely on\na pool of learnable prompts which can be inefficient in sharing knowledge\nacross tasks leading to inferior performance. In addition, the lack of\nfine-grained layer specific prompts does not allow these to fully express the\nstrength of the prompts for CL. 
We address these limitations by proposing\nConvPrompt, a novel convolutional prompt creation mechanism that maintains\nlayer-wise shared embeddings, enabling both layer-specific learning and better\nconcept transfer across tasks. The intelligent use of convolution enables us to\nmaintain a low parameter overhead without compromising performance. We further\nleverage Large Language Models to generate fine-grained text descriptions of\neach category, which are used to get task similarity and dynamically decide the\nnumber of prompts to be learned. Extensive experiments demonstrate the\nsuperiority of ConvPrompt, which improves the SOTA by ~3% with significantly less\nparameter overhead. We also perform strong ablation over various modules to\ndisentangle the importance of different components.", + "Building a generalist agent that can interact with the world is the\nintriguing target of AI systems, thus spurring research on embodied\nnavigation, where an agent is required to navigate according to instructions or\nrespond to queries. Despite the major progress attained, previous works\nprimarily focus on task-specific agents and lack generalizability to unseen\nscenarios. Recently, LLMs have presented remarkable capabilities across various\nfields, and provided a promising opportunity for embodied navigation. Drawing\non this, we propose the first generalist model for embodied navigation,\nNaviLLM. It adapts LLMs to embodied navigation by introducing schema-based\ninstruction. The schema-based instruction flexibly casts various tasks into\ngeneration problems, thereby unifying a wide range of tasks. This approach\nallows us to integrate diverse data sources from various datasets into the\ntraining, equipping NaviLLM with a wide range of capabilities required by\nembodied navigation. We conduct extensive experiments to evaluate the\nperformance and generalizability of our model. The experimental results\ndemonstrate that our unified model achieves state-of-the-art performance on\nCVDN, SOON, and ScanQA.", + "We conduct extensive experiments to evaluate the\nperformance and generalizability of our model. The experimental results\ndemonstrate that our unified model achieves state-of-the-art performance on\nCVDN, SOON, and ScanQA. Specifically, it surpasses the previous\nstate-of-the-art method by a significant margin of 29% in goal progress on\nCVDN. Moreover, our model also demonstrates strong generalizability and\npresents impressive results on unseen tasks, e.g., embodied question answering\nand 3D captioning.", + "Motion capture from a limited number of body-worn sensors, such as inertial\nmeasurement units (IMUs) and pressure insoles, has important applications in\nhealth, human performance, and entertainment. Recent work has focused on\naccurately reconstructing whole-body motion from a specific sensor\nconfiguration using six IMUs. While a common goal across applications is to use\nthe minimal number of sensors to achieve required accuracy, the optimal\narrangement of the sensors might differ from application to application. We\npropose a single diffusion model, DiffusionPoser, which reconstructs human\nmotion in real-time from an arbitrary combination of sensors, including IMUs\nplaced at specified locations and pressure insoles. Unlike existing methods,\nour model grants users the flexibility to determine the number and arrangement\nof sensors tailored to the specific activity of interest, without the need for\nretraining. 
A novel autoregressive inferencing scheme ensures real-time motion\nreconstruction that closely aligns with measured sensor signals. The generative\nnature of DiffusionPoser ensures realistic behavior, even for\ndegrees-of-freedom not directly measured. Qualitative results can be found on\nour website: https://diffusionposer.github.io/.", + "Understanding how we grasp objects with our hands has important applications\nin areas like robotics and mixed reality. However, this challenging problem\nrequires accurate modeling of the contact between hands and objects. To capture\ngrasps, existing methods use skeletons, meshes, or parametric models that do\nnot represent hand shape accurately, resulting in inaccurate contacts. We\npresent MANUS, a method for Markerless Hand-Object Grasp Capture using\nArticulated 3D Gaussians. We build a novel articulated 3D Gaussians\nrepresentation that extends 3D Gaussian splatting for high-fidelity\nrepresentation of articulating hands. Since our representation uses Gaussian\nprimitives, it enables us to efficiently and accurately estimate contacts\nbetween the hand and the object. For the most accurate results, our method\nrequires tens of camera views that current datasets do not provide. We\ntherefore build MANUS-Grasps, a new dataset that contains hand-object grasps\nviewed from 50+ cameras across 30+ scenes, 3 subjects, and comprising over 7M\nframes.", + "We\ntherefore build MANUS-Grasps, a new dataset that contains hand-object grasps\nviewed from 50+ cameras across 30+ scenes, 3 subjects, and comprising over 7M\nframes. In addition to extensive qualitative results, we also show that our\nmethod outperforms others on a quantitative contact evaluation method that uses\npaint transfer from the object to the hand.", + "In image restoration (IR), leveraging semantic priors from segmentation\nmodels has been a common approach to improve performance. The recent segment\nanything model (SAM) has emerged as a powerful tool for extracting advanced\nsemantic priors to enhance IR tasks. However, the computational cost of SAM is\nprohibitive for IR, compared to existing smaller IR models. The incorporation\nof SAM for extracting semantic priors considerably hampers the model inference\nefficiency. To address this issue, we propose a general framework to distill\nSAM's semantic knowledge to boost existing IR models without interfering with\ntheir inference process. Specifically, our proposed framework consists of the\nsemantic priors fusion (SPF) scheme and the semantic priors distillation (SPD)\nscheme. SPF fuses two kinds of information between the restored image predicted\nby the original IR model and the semantic mask predicted by SAM for the refined\nrestored image. SPD leverages a self-distillation manner to distill the fused\nsemantic priors to boost the performance of original IR models. 
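A minimal sketch of the fusion-plus-self-distillation idea described above, under several assumptions: the gating form, channel layout, and L1 distillation loss are placeholders, not the paper's SPF/SPD design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticPriorFusion(nn.Module):
    """Gate restored-image features with a SAM-derived semantic mask (toy SPF)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels + 1, channels, kernel_size=1)

    def forward(self, restored_feat, sam_mask):
        mask = F.interpolate(sam_mask, size=restored_feat.shape[-2:], mode="nearest")
        g = torch.sigmoid(self.gate(torch.cat([restored_feat, mask], dim=1)))
        return restored_feat * g

def self_distillation_loss(student_feat, fused_feat):
    """Toy SPD objective: pull the plain IR features toward the prior-fused ones,
    so SAM is no longer needed at inference time."""
    return F.l1_loss(student_feat, fused_feat.detach())
```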
Additionally,\nwe design a semantic-guided relation (SGR) module for SPD, which ensures\nsemantic feature representation space consistency to fully distill the priors.\nWe demonstrate the effectiveness of our framework across multiple IR models and\ntasks, including deraining, deblurring, and denoising.", + "Geometric knowledge has been shown to be beneficial for the stereo matching\ntask. However, prior attempts to integrate geometric insights into stereo\nmatching algorithms have largely focused on geometric knowledge from single\nimages while crucial cross-view factors such as occlusion and matching\nuniqueness have been overlooked. To address this gap, we propose a novel\nIntra-view and Cross-view Geometric knowledge learning Network (ICGNet),\nspecifically crafted to assimilate both intra-view and cross-view geometric\nknowledge. ICGNet harnesses the power of interest points to serve as a channel\nfor intra-view geometric understanding. Simultaneously, it employs the\ncorrespondences among these points to capture cross-view geometric\nrelationships. This dual incorporation empowers the proposed ICGNet to leverage\nboth intra-view and cross-view geometric knowledge in its learning process,\nsubstantially improving its ability to estimate disparities. Our extensive\nexperiments demonstrate the superiority of the ICGNet over contemporary leading\nmodels.", + "Domain generalization aims to solve the challenge of Out-of-Distribution\n(OOD) generalization by leveraging common knowledge learned from multiple\ntraining domains to generalize to unseen test domains. To accurately evaluate\nthe OOD generalization ability, it is required that test data information is\nunavailable. However, the current domain generalization protocol may still have\npotential test data information leakage. This paper examines the risks of test\ndata information leakage from two aspects of the current evaluation protocol:\nsupervised pretraining on ImageNet and oracle model selection. We propose\nmodifications to the current protocol that we should employ self-supervised\npretraining or train from scratch instead of employing the current supervised\npretraining, and we should use multiple test domains. These would result in a\nmore precise evaluation of OOD generalization ability. We also rerun the\nalgorithms with the modified protocol and introduce new leaderboards to\nencourage future research in domain generalization with a fairer comparison.", + "Black-Box Knowledge Distillation (B2KD) is a formulated problem for\ncloud-to-edge model compression with invisible data and models hosted on the\nserver. B2KD faces challenges such as limited Internet exchange and edge-cloud\ndisparity of data distributions. In this paper, we formalize a two-step\nworkflow consisting of deprivatization and distillation, and theoretically\nprovide a new optimization direction from logits to cell boundary different\nfrom direct logits alignment. With its guidance, we propose a new method\nMapping-Emulation KD (MEKD) that distills a black-box cumbersome model into a\nlightweight one. Our method does not differentiate between treating soft or\nhard responses, and consists of: 1) deprivatization: emulating the inverse\nmapping of the teacher function with a generator, and 2) distillation: aligning\nlow-dimensional logits of the teacher and student models by reducing the\ndistance of high-dimensional image points. 
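A highly simplified sketch of the two-step deprivatization/distillation workflow described above, assuming a differentiable generator, a black-box `teacher_api` that only returns logits, and MSE losses; none of these choices are claimed to match MEKD's actual objectives.

```python
import torch
import torch.nn.functional as F

def deprivatization_step(generator, teacher_api, opt_g, images):
    """Step 1 (sketch): fit a generator that emulates the inverse mapping of the
    black-box teacher, i.e. G(teacher(x)) should reproduce the query image x."""
    with torch.no_grad():
        y = teacher_api(images)              # black-box logits, no gradients needed
    recon = generator(y)
    loss = F.mse_loss(recon, images)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

def distillation_step(generator, teacher_api, student, opt_s, images):
    """Step 2 (sketch): align teacher and student logits indirectly, by pulling
    together the image points the (now fixed) generator maps their logits to."""
    with torch.no_grad():
        x_teacher = generator(teacher_api(images))
    x_student = generator(student(images))   # generator is frozen; only opt_s steps
    loss = F.mse_loss(x_student, x_teacher)
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()
```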
For different teacher-student pairs,\nour method yields inspiring distillation performance on various benchmarks, and\noutperforms the previous state-of-the-art approaches.", + "Large-scale 3D scenes cannot be generated by simply applying existing 3D object\nsynthesis techniques, since 3D scenes usually hold complex spatial configurations\nand consist of a number of objects at varying scales. We thus propose a\npractical and efficient 3D representation that incorporates an equivariant\nradiance field with the guidance of a bird's-eye view (BEV) map. Concretely,\nobjects of synthesized 3D scenes could be easily manipulated through steering\nthe corresponding BEV maps. Moreover, by adequately incorporating positional\nencoding and low-pass filters into the generator, the representation becomes\nequivariant to the given BEV map. Such equivariance allows us to produce\nlarge-scale, even infinite-scale, 3D scenes via synthesizing local scenes and\nthen stitching them with smooth consistency. Extensive experiments on 3D scene\ndatasets demonstrate the effectiveness of our approach. Our project website is\nat https://zqh0253.github.io/BerfScene/.", + "While existing methods for 3D face reconstruction from in-the-wild images\nexcel at recovering the overall face shape, they commonly miss subtle, extreme,\nasymmetric, or rarely observed expressions. We improve upon these methods with\nSMIRK (Spatial Modeling for Image-based Reconstruction of Kinesics), which\nfaithfully reconstructs expressive 3D faces from images. We identify two key\nlimitations in existing methods: shortcomings in their self-supervised training\nformulation, and a lack of expression diversity in the training images. For\ntraining, most methods employ differentiable rendering to compare a predicted\nface mesh with the input image, along with a plethora of additional loss\nfunctions. This differentiable rendering loss not only has to provide\nsupervision to optimize for 3D face geometry, camera, albedo, and lighting,\nwhich is an ill-posed optimization problem, but the domain gap between\nrendering and input image further hinders the learning process. Instead, SMIRK\nreplaces the differentiable rendering with a neural rendering module that,\ngiven the rendered predicted mesh geometry, and sparsely sampled pixels of the\ninput image, generates a face image.", + "Instead, SMIRK\nreplaces the differentiable rendering with a neural rendering module that,\ngiven the rendered predicted mesh geometry, and sparsely sampled pixels of the\ninput image, generates a face image. As the neural rendering gets color\ninformation from sampled image pixels, supervising with neural rendering-based\nreconstruction loss can focus solely on the geometry. Further, it enables us to\ngenerate images of the input identity with varying expressions while training.\nThese are then utilized as input to the reconstruction model and used as\nsupervision with ground truth geometry. This effectively augments the training\ndata and enhances the generalization for diverse expressions. Our qualitative,\nquantitative and particularly our perceptual evaluations demonstrate that SMIRK\nachieves the new state-of-the-art performance on accurate expression\nreconstruction. Project webpage: https://georgeretsi.github.io/smirk/.", + "Vehicle-to-everything (V2X) has been a popular topic in the field of Autonomous\nDriving in recent years. Vehicle-infrastructure cooperation (VIC) has become one\nof the important research areas. 
The complexity of traffic conditions,\nsuch as blind spots and occlusion, greatly limits the perception\ncapabilities of single-view roadside sensing systems. To further enhance the\naccuracy of roadside perception and provide better information to the vehicle\nside, in this paper, we constructed holographic intersections with various\nlayouts to build a large-scale multi-sensor holographic vehicle-infrastructure\ncooperation dataset, called HoloVIC. Our dataset includes 3 different types of\nsensors (Camera, Lidar, Fisheye) and employs 4 sensor-layouts based on the\ndifferent intersections. Each intersection is equipped with 6-18 sensors to\ncapture synchronous data. Autonomous vehicles pass through these\nintersections to collect VIC data. HoloVIC contains in total 100k+\nsynchronous frames from different sensors. Additionally, we annotated 3D\nbounding boxes based on Camera, Fisheye, and Lidar.", + "Autonomous vehicles pass through these\nintersections to collect VIC data. HoloVIC contains in total 100k+\nsynchronous frames from different sensors. Additionally, we annotated 3D\nbounding boxes based on Camera, Fisheye, and Lidar. We also associate the IDs\nof the same objects across different devices and consecutive frames in\nsequence. Based on HoloVIC, we formulated four tasks to facilitate the\ndevelopment of related research. We also provide benchmarks for these tasks.", + "The Segment Anything Model (SAM) has garnered significant attention for its\nversatile segmentation abilities and intuitive prompt-based interface. However,\nits application in medical imaging presents challenges, requiring either\nsubstantial training costs and extensive medical datasets for full model\nfine-tuning or high-quality prompts for optimal performance. This paper\nintroduces H-SAM: a prompt-free adaptation of SAM tailored for efficient\nfine-tuning of medical images via a two-stage hierarchical decoding procedure.\nIn the initial stage, H-SAM employs SAM's original decoder to generate a prior\nprobabilistic mask, guiding a more intricate decoding process in the second\nstage. Specifically, we propose two key designs: 1) A class-balanced,\nmask-guided self-attention mechanism addressing the unbalanced label\ndistribution, enhancing image embedding; 2) A learnable mask cross-attention\nmechanism spatially modulating the interplay among different image regions\nbased on the prior mask. Moreover, the inclusion of a hierarchical pixel\ndecoder in H-SAM enhances its proficiency in capturing fine-grained and\nlocalized details. This approach enables SAM to effectively integrate learned\nmedical priors, facilitating enhanced adaptation for medical image segmentation\nwith limited samples.", + "Moreover, the inclusion of a hierarchical pixel\ndecoder in H-SAM enhances its proficiency in capturing fine-grained and\nlocalized details. This approach enables SAM to effectively integrate learned\nmedical priors, facilitating enhanced adaptation for medical image segmentation\nwith limited samples. Our H-SAM demonstrates a 4.78% improvement in average\nDice compared to existing prompt-free SAM variants for multi-organ segmentation\nusing only 10% of 2D slices. Notably, without using any unlabeled data, H-SAM\neven outperforms state-of-the-art semi-supervised models relying on extensive\nunlabeled training data across various medical datasets. 
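As a schematic illustration of the two-stage hierarchical decoding described above, the sketch below lets a first decoder produce a prior probabilistic mask that modulates the image embedding before a second, finer decoder runs. The decoder interfaces, tensor shapes, and the modulation form are placeholders, not H-SAM's implementation.

```python
import torch
import torch.nn as nn

class TwoStageHierarchicalDecoder(nn.Module):
    """Schematic two-stage decoding: a first decoder yields a prior probabilistic
    mask that modulates the image embedding fed to a second, finer decoder."""
    def __init__(self, stage1_decoder, stage2_decoder, embed_dim):
        super().__init__()
        self.stage1 = stage1_decoder      # e.g. SAM's original mask decoder
        self.stage2 = stage2_decoder      # finer, prior-guided decoder
        self.proj = nn.Conv2d(1, embed_dim, kernel_size=1)

    def forward(self, image_embedding):              # (B, embed_dim, H, W)
        prior_logits = self.stage1(image_embedding)  # assumed to return (B, 1, H, W)
        prior_prob = prior_logits.sigmoid()
        # Emphasize regions the prior mask trusts before the second decoding pass.
        guided = image_embedding * (1 + self.proj(prior_prob))
        refined_logits = self.stage2(guided)
        return prior_logits, refined_logits
```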
Our code is available\nat https://github.com/Cccccczh404/H-SAM.", + "Adjusting camera exposure in arbitrary lighting conditions is the first step\nto ensure the functionality of computer vision applications. Poorly adjusted\ncamera exposure often leads to critical failure and performance degradation.\nTraditional camera exposure control methods require multiple convergence steps\nand time-consuming processes, making them unsuitable for dynamic lighting\nconditions. In this paper, we propose a new camera exposure control framework\nthat rapidly controls camera exposure while performing real-time processing by\nexploiting deep reinforcement learning. The proposed framework consists of four\ncontributions: 1) a simplified training ground to simulate the real world's diverse\nand dynamic lighting changes, 2) flickering and image attribute-aware reward\ndesign, along with lightweight state design for real-time processing, 3) a\nstatic-to-dynamic lighting curriculum to gradually improve the agent's\nexposure-adjusting capability, and 4) domain randomization techniques to\nalleviate the limitation of the training ground and achieve seamless\ngeneralization in the wild. As a result, our proposed method rapidly reaches a\ndesired exposure level within five steps with real-time processing (1 ms).\nAlso, the acquired images are well-exposed and show superiority in various\ncomputer vision tasks, such as feature extraction and object detection.", + "We introduce Splatter Image, an ultra-efficient approach for monocular 3D object\nreconstruction. Splatter Image is based on Gaussian Splatting, which allows\nfast and high-quality reconstruction of 3D scenes from multiple images. We\napply Gaussian Splatting to monocular reconstruction by learning a neural\nnetwork that, at test time, performs reconstruction in a feed-forward manner,\nat 38 FPS. Our main innovation is the surprisingly straightforward design of\nthis network, which, using 2D operators, maps the input image to one 3D\nGaussian per pixel. The resulting set of Gaussians thus has the form of an image,\nthe Splatter Image. We further extend the method to take several images as input\nvia cross-view attention. Owing to the speed of the renderer (588 FPS), we use\na single GPU for training while generating entire images at each iteration to\noptimize perceptual metrics like LPIPS. On several synthetic, real,\nmulti-category and large-scale benchmark datasets, we achieve better results in\nterms of PSNR, LPIPS, and other metrics while training and evaluating much\nfaster than prior works.", + "On several synthetic, real,\nmulti-category and large-scale benchmark datasets, we achieve better results in\nterms of PSNR, LPIPS, and other metrics while training and evaluating much\nfaster than prior works. Code, models, demo and more results are available at\nhttps://szymanowiczs.github.io/splatter-image.", + "From content moderation to wildlife conservation, the number of applications\nthat require models to recognize nuanced or subjective visual concepts is\ngrowing. Traditionally, developing classifiers for such concepts requires\nsubstantial manual effort measured in hours, days, or even months to identify\nand annotate data needed for training. Even with recently proposed Agile\nModeling techniques, which enable rapid bootstrapping of image classifiers,\nusers are still required to spend 30 minutes or more of monotonous, repetitive\ndata labeling just to train a single classifier. 
Drawing on Fiske's Cognitive\nMiser theory, we propose a new framework that alleviates manual effort by\nreplacing human labeling with natural language interactions, reducing the total\neffort required to define a concept by an order of magnitude: from labeling\n2,000 images to only 100 plus some natural language interactions. Our framework\nleverages recent advances in foundation models, both large language models and\nvision-language models, to carve out the concept space through conversation and\nby automatically labeling training data points. Most importantly, our framework\neliminates the need for crowd-sourced annotations. Moreover, our framework\nultimately produces lightweight classification models that are deployable in\ncost-sensitive scenarios.", + "Most importantly, our framework\neliminates the need for crowd-sourced annotations. Moreover, our framework\nultimately produces lightweight classification models that are deployable in\ncost-sensitive scenarios. Across 15 subjective concepts and across 2 public\nimage classification datasets, our trained models outperform traditional Agile\nModeling as well as state-of-the-art zero-shot classification models like\nALIGN, CLIP, CuPL, and large visual question-answering models like PaLI-X.", + "Backdoor attack poses a significant security threat to Deep Learning\napplications. Existing attacks are often not evasive to established backdoor\ndetection techniques. This susceptibility primarily stems from the fact that\nthese attacks typically leverage a universal trigger pattern or transformation\nfunction, such that the trigger can cause misclassification for any input. In\nresponse to this, recent papers have introduced attacks using sample-specific\ninvisible triggers crafted through special transformation functions. While\nthese approaches manage to evade detection to some extent, they reveal\nvulnerability to existing backdoor mitigation techniques. To address and\nenhance both evasiveness and resilience, we introduce a novel backdoor attack\nLOTUS. Specifically, it leverages a secret function to separate samples in the\nvictim class into a set of partitions and applies unique triggers to different\npartitions. Furthermore, LOTUS incorporates an effective trigger focusing\nmechanism, ensuring only the trigger corresponding to the partition can induce\nthe backdoor behavior. Extensive experimental results show that LOTUS can\nachieve high attack success rate across 4 datasets and 7 model structures, and\neffectively evading 13 backdoor detection and mitigation techniques. The code\nis available at https://github.com/Megum1/LOTUS.", + "Object pose refinement is essential for robust object pose estimation.\nPrevious work has made significant progress towards instance-level object pose\nrefinement. Yet, category-level pose refinement is a more challenging problem\ndue to large shape variations within a category and the discrepancies between\nthe target object and the shape prior. To address these challenges, we\nintroduce a novel architecture for category-level object pose refinement. Our\napproach integrates an HS-layer and learnable affine transformations, which\naims to enhance the extraction and alignment of geometric information.\nAdditionally, we introduce a cross-cloud transformation mechanism that\nefficiently merges diverse data sources. Finally, we push the limits of our\nmodel by incorporating the shape prior information for translation and size\nerror prediction. 
We conducted extensive experiments to demonstrate the\neffectiveness of the proposed framework. Quantitative results show that our\nmethod improves over the baseline method by a large margin across all metrics.", +  "Confronting the challenges of data scarcity and advanced motion synthesis in\nhuman-scene interaction modeling, we introduce the TRUMANS dataset alongside a\nnovel HSI motion synthesis method. TRUMANS stands as the most comprehensive\nmotion-captured HSI dataset currently available, encompassing over 15 hours of\nhuman interactions across 100 indoor scenes. It intricately captures whole-body\nhuman motions and part-level object dynamics, focusing on the realism of\ncontact. This dataset is further scaled up by transforming physical\nenvironments into exact virtual models and applying extensive augmentations to\nappearance and motion for both humans and objects while maintaining interaction\nfidelity. Utilizing TRUMANS, we devise a diffusion-based autoregressive model\nthat efficiently generates HSI sequences of any length, taking into account\nboth scene context and intended actions. In experiments, our approach shows\nremarkable zero-shot generalizability on a range of 3D scene datasets (e.g.,\nPROX, Replica, ScanNet, ScanNet++), producing motions that closely mimic\noriginal motion-captured sequences, as confirmed by quantitative experiments\nand human studies.", +  "Single-point annotation in visual tasks, with the goal of minimizing\nlabelling costs, is becoming increasingly prominent in research. Recently,\nvisual foundation models, such as Segment Anything (SAM), have gained\nwidespread usage due to their robust zero-shot capabilities and exceptional\nannotation performance. However, SAM's class-agnostic output and high\nconfidence in local segmentation introduce 'semantic ambiguity', posing a\nchallenge for precise category-specific segmentation. In this paper, we\nintroduce a cost-effective category-specific segmenter using SAM. To tackle\nthis challenge, we have devised a Semantic-Aware Instance Segmentation Network\n(SAPNet) that integrates Multiple Instance Learning (MIL) with matching\ncapability and SAM with point prompts. SAPNet strategically selects the most\nrepresentative mask proposals generated by SAM to supervise segmentation, with\na specific focus on object category information. Moreover, we introduce the\nPoint Distance Guidance and Box Mining Strategy to mitigate inherent\nchallenges: 'group' and 'local' issues in weakly supervised segmentation. These\nstrategies serve to further enhance the overall segmentation performance.", +  "Moreover, we introduce the\nPoint Distance Guidance and Box Mining Strategy to mitigate inherent\nchallenges: 'group' and 'local' issues in weakly supervised segmentation. These\nstrategies serve to further enhance the overall segmentation performance. The\nexperimental results on Pascal VOC and COCO demonstrate the promising\nperformance of our proposed SAPNet, emphasizing its semantic matching\ncapabilities and its potential to advance point-prompted instance segmentation.\nThe code will be made publicly available.", +  "This paper proposes Group Activity Feature (GAF) learning in which features\nof multi-person activity are learned as a compact latent vector. Unlike prior\nwork in which the manual annotation of group activities is required for\nsupervised learning, our method learns the GAF through person attribute\nprediction without group activity annotations.
By learning the whole network in\nan end-to-end manner so that the GAF is required for predicting the person\nattributes of people in a group, the GAF is trained as the features of\nmulti-person activity. As a person attribute, we propose to use a person's\naction class and appearance features because the former is easy to annotate due\nto its simplicity, and the latter requires no manual annotation. In addition,\nwe introduce a location-guided attribute prediction to disentangle the complex\nGAF for extracting the features of each target person properly. Various\nexperimental results validate that our method outperforms SOTA methods\nquantitatively and qualitatively on two public datasets. Visualization of our\nGAF also demonstrates that our method learns the GAF representing fine-grained\ngroup activity classes. Code: https://github.com/chihina/GAFL-CVPR2024.", +  "Human-centric 3D scene understanding has recently drawn increasing attention,\ndriven by its critical impact on robotics. However, human-centric real-life\nscenarios are extremely diverse and complicated, and humans have intricate\nmotions and interactions. With limited labeled data, supervised methods have\ndifficulty generalizing to general scenarios, hindering real-life applications.\nMimicking human intelligence, we propose an unsupervised 3D detection method\nfor human-centric scenarios by transferring the knowledge from synthetic human\ninstances to real scenes. To bridge the gap between the distinct data\nrepresentations and feature distributions of synthetic models and real point\nclouds, we introduce novel modules for effective instance-to-scene\nrepresentation transfer and synthetic-to-real feature alignment. Remarkably,\nour method exhibits superior performance compared to current state-of-the-art\ntechniques, achieving 87.8% improvement in mAP and closely approaching the\nperformance of fully supervised methods (62.15 mAP vs. 69.02 mAP) on the HuCenLife\nDataset.", +  "Brain decoding, a pivotal field in neuroscience, aims to reconstruct stimuli\nfrom acquired brain signals, primarily utilizing functional magnetic resonance\nimaging (fMRI). Currently, brain decoding is confined to a\nper-subject-per-model paradigm, limiting its applicability to the same\nindividual for whom the decoding model is trained. This constraint stems from\nthree key challenges: 1) the inherent variability in input dimensions across\nsubjects due to differences in brain size; 2) the unique intrinsic neural\npatterns, influencing how different individuals perceive and process sensory\ninformation; 3) limited data availability for new subjects in real-world\nscenarios hampers the performance of decoding models. In this paper, we present\na novel approach, MindBridge, that achieves cross-subject brain decoding by\nemploying only one model. Our proposed framework establishes a generic paradigm\ncapable of addressing these challenges by introducing a biologically inspired\naggregation function and a novel cyclic fMRI reconstruction mechanism for\nsubject-invariant representation learning. Notably, by cycle reconstruction of\nfMRI, MindBridge can enable novel fMRI synthesis, which also can serve as\npseudo data augmentation.", +  "Notably, by cycle reconstruction of\nfMRI, MindBridge can enable novel fMRI synthesis, which also can serve as\npseudo data augmentation.
Within the framework, we also devise a novel\nreset-tuning method for adapting a pretrained model to a new subject.\nExperimental results demonstrate MindBridge's ability to reconstruct images for\nmultiple subjects, which is competitive with dedicated subject-specific models.\nFurthermore, with limited data for a new subject, we achieve a high level of\ndecoding accuracy, surpassing that of subject-specific models. This advancement\nin cross-subject brain decoding suggests promising directions for wider\napplications in neuroscience and indicates potential for more efficient\nutilization of limited fMRI data in real-world scenarios. Project page:\nhttps://littlepure2333.github.io/MindBridge", + "Creating high-dynamic videos such as motion-rich actions and sophisticated\nvisual effects poses a significant challenge in the field of artificial\nintelligence. Unfortunately, current state-of-the-art video generation methods,\nprimarily focusing on text-to-video generation, tend to produce video clips\nwith minimal motions despite maintaining high fidelity. We argue that relying\nsolely on text instructions is insufficient and suboptimal for video\ngeneration. In this paper, we introduce PixelDance, a novel approach based on\ndiffusion models that incorporates image instructions for both the first and\nlast frames in conjunction with text instructions for video generation.\nComprehensive experimental results demonstrate that PixelDance trained with\npublic data exhibits significantly better proficiency in synthesizing videos\nwith complex scenes and intricate motions, setting a new standard for video\ngeneration.", + "Recent advances in generative diffusion models have enabled the previously\nunfeasible capability of generating 3D assets from a single input image or a\ntext prompt. In this work, we aim to enhance the quality and functionality of\nthese models for the task of creating controllable, photorealistic human\navatars. We achieve this by integrating a 3D morphable model into the\nstate-of-the-art multi-view-consistent diffusion approach. We demonstrate that\naccurate conditioning of a generative pipeline on the articulated 3D model\nenhances the baseline model performance on the task of novel view synthesis\nfrom a single image. More importantly, this integration facilitates a seamless\nand accurate incorporation of facial expression and body pose control into the\ngeneration process. To the best of our knowledge, our proposed framework is the\nfirst diffusion model to enable the creation of fully 3D-consistent,\nanimatable, and photorealistic human avatars from a single image of an unseen\nsubject; extensive quantitative and qualitative evaluations demonstrate the\nadvantages of our approach over existing state-of-the-art avatar creation\nmodels on both novel view and novel expression synthesis tasks. The code for\nour project is publicly available.", + "In magnetic resonance imaging (MRI), slice-to-volume reconstruction (SVR)\nrefers to computational reconstruction of an unknown 3D magnetic resonance\nvolume from stacks of 2D slices corrupted by motion. While promising, current\nSVR methods require multiple slice stacks for accurate 3D reconstruction,\nleading to long scans and limiting their use in time-sensitive applications\nsuch as fetal fMRI. Here, we propose a SVR method that overcomes the\nshortcomings of previous work and produces state-of-the-art reconstructions in\nthe presence of extreme inter-slice motion. 
Inspired by the recent success of\nsingle-view depth estimation methods, we formulate SVR as a single-stack motion\nestimation task and train a fully convolutional network to predict a motion\nstack for a given slice stack, producing a 3D reconstruction as a byproduct of\nthe predicted motion. Extensive experiments on the SVR of adult and fetal\nbrains demonstrate that our fully convolutional method is twice as accurate as\nprevious SVR methods. Our code is available at github.com/seannz/svr.", + "Text-to-image (T2I) generative models have recently emerged as a powerful\ntool, enabling the creation of photo-realistic images and giving rise to a\nmultitude of applications. However, the effective integration of T2I models\ninto fundamental image classification tasks remains an open question. A\nprevalent strategy to bolster image classification performance is through\naugmenting the training set with synthetic images generated by T2I models. In\nthis study, we scrutinize the shortcomings of both current generative and\nconventional data augmentation techniques. Our analysis reveals that these\nmethods struggle to produce images that are both faithful (in terms of\nforeground objects) and diverse (in terms of background contexts) for\ndomain-specific concepts. To tackle this challenge, we introduce an innovative\ninter-class data augmentation method known as Diff-Mix\n(https://github.com/Zhicaiwww/Diff-Mix), which enriches the dataset by\nperforming image translations between classes. Our empirical results\ndemonstrate that Diff-Mix achieves a better balance between faithfulness and\ndiversity, leading to a marked improvement in performance across diverse image\nclassification scenarios, including few-shot, conventional, and long-tail\nclassifications for domain-specific datasets.", + "We present a generative approach to forecast long-term future human behavior\nin 3D, requiring only weak supervision from readily available 2D human action\ndata. This is a fundamental task enabling many downstream applications. The\nrequired ground-truth data is hard to capture in 3D (mocap suits, expensive\nsetups) but easy to acquire in 2D (simple RGB cameras). Thus, we design our\nmethod to only require 2D RGB data at inference time while being able to\ngenerate 3D human motion sequences. We use a differentiable 2D projection\nscheme in an autoregressive manner for weak supervision, and an adversarial\nloss for 3D regularization. Our method predicts long and complex human behavior\nsequences (e.g., cooking, assembly) consisting of multiple sub-actions. We\ntackle this in a semantically hierarchical manner, jointly predicting\nhigh-level coarse action labels together with their low-level fine-grained\nrealizations as characteristic 3D human poses. We observe that these two action\nrepresentations are coupled in nature, and joint prediction benefits both\naction and pose forecasting.", + "We observe that these two action\nrepresentations are coupled in nature, and joint prediction benefits both\naction and pose forecasting. Our experiments demonstrate the complementary\nnature of joint action and 3D pose prediction: our joint approach outperforms\neach task treated individually, enables robust longer-term sequence prediction,\nand improves over alternative approaches to forecast actions and characteristic\n3D poses.", + "In the realm of computer vision, Neural Fields have gained prominence as a\ncontemporary tool harnessing neural networks for signal representation. 
Despite\nthe remarkable progress in adapting these networks to solve a variety of\nproblems, the field still lacks a comprehensive theoretical framework. This\narticle aims to address this gap by delving into the intricate interplay\nbetween initialization and activation, providing a foundational basis for the\nrobust optimization of Neural Fields. Our theoretical insights reveal a\ndeep-seated connection among network initialization, architectural choices, and\nthe optimization process, emphasizing the need for a holistic approach when\ndesigning cutting-edge Neural Fields.", + "3D instance segmentation is fundamental to geometric understanding of the\nworld around us. Existing methods for instance segmentation of 3D scenes rely\non supervision from expensive, manual 3D annotations. We propose UnScene3D, the\nfirst fully unsupervised 3D learning approach for class-agnostic 3D instance\nsegmentation of indoor scans. UnScene3D first generates pseudo masks by\nleveraging self-supervised color and geometry features to find potential object\nregions. We operate on a basis of geometric oversegmentation, enabling\nefficient representation and learning on high-resolution 3D data. The coarse\nproposals are then refined through self-training our model on its predictions.\nOur approach improves over state-of-the-art unsupervised 3D instance\nsegmentation methods by more than 300% Average Precision score, demonstrating\neffective instance segmentation even in challenging, cluttered 3D scenes.", + "Model quantization is widely used to compress and accelerate deep neural\nnetworks. However, recent studies have revealed the feasibility of weaponizing\nmodel quantization via implanting quantization-conditioned backdoors (QCBs).\nThese special backdoors stay dormant on released full-precision models but will\ncome into effect after standard quantization. Due to the peculiarity of QCBs,\nexisting defenses have minor effects on reducing their threats or are even\ninfeasible. In this paper, we conduct the first in-depth analysis of QCBs. We\nreveal that the activation of existing QCBs primarily stems from the nearest\nrounding operation and is closely related to the norms of neuron-wise\ntruncation errors (i.e., the difference between the continuous full-precision\nweights and its quantized version). Motivated by these insights, we propose\nError-guided Flipped Rounding with Activation Preservation (EFRAP), an\neffective and practical defense against QCBs. Specifically, EFRAP learns a\nnon-nearest rounding strategy with neuron-wise error norm and layer-wise\nactivation preservation guidance, flipping the rounding strategies of neurons\ncrucial for backdoor effects but with minimal impact on clean accuracy.", + "Specifically, EFRAP learns a\nnon-nearest rounding strategy with neuron-wise error norm and layer-wise\nactivation preservation guidance, flipping the rounding strategies of neurons\ncrucial for backdoor effects but with minimal impact on clean accuracy.\nExtensive evaluations on benchmark datasets demonstrate that our EFRAP can\ndefeat state-of-the-art QCB attacks under various settings. Code is available\nat https://github.com/AntigoneRandy/QuantBackdoor_EFRAP.", + "The realism of digital avatars is crucial in enabling telepresence\napplications with self-expression and customization. 
While physical simulations\ncan produce realistic motions for clothed humans, they require high-quality\ngarment assets with associated physical parameters for cloth simulations.\nHowever, manually creating these assets and calibrating their parameters is\nlabor-intensive and requires specialized expertise. Current methods focus on\nreconstructing geometry, but don't generate complete assets for physics-based\napplications. To address this gap, we propose \\papername,~a novel approach that\nperforms body and garment co-optimization using differentiable simulation. By\nintegrating physical simulation into the optimization loop and accounting for\nthe complex nonlinear behavior of cloth and its intricate interaction with the\nbody, our framework recovers body and garment geometry and extracts important\nmaterial parameters in a physically plausible way. Our experiments demonstrate\nthat our approach generates realistic clothing and body shape suitable for\ndownstream applications. We provide additional insights and results on our\nwebpage: https://people.csail.mit.edu/liyifei/publication/diffavatar/", + "Generalization to new domains not seen during training is one of the\nlong-standing challenges in deploying neural networks in real-world\napplications. Existing generalization techniques either necessitate external\nimages for augmentation, and/or aim at learning invariant representations by\nimposing various alignment constraints. Large-scale pretraining has recently\nshown promising generalization capabilities, along with the potential of\nbinding different modalities. For instance, the advent of vision-language\nmodels like CLIP has opened the doorway for vision models to exploit the\ntextual modality. In this paper, we introduce a simple framework for\ngeneralizing semantic segmentation networks by employing language as the source\nof randomization. Our recipe comprises three key ingredients: (i) the\npreservation of the intrinsic CLIP robustness through minimal fine-tuning, (ii)\nlanguage-driven local style augmentation, and (iii) randomization by locally\nmixing the source and augmented styles during training. Extensive experiments\nreport state-of-the-art results on various generalization benchmarks. Code is\naccessible at https://github.com/astra-vision/FAMix .", + "Diffusion models are just at a tipping point for image super-resolution task.\nNevertheless, it is not trivial to capitalize on diffusion models for video\nsuper-resolution which necessitates not only the preservation of visual\nappearance from low-resolution to high-resolution videos, but also the temporal\nconsistency across video frames. In this paper, we propose a novel approach,\npursuing Spatial Adaptation and Temporal Coherence (SATeCo), for video\nsuper-resolution. SATeCo pivots on learning spatial-temporal guidance from\nlow-resolution videos to calibrate both latent-space high-resolution video\ndenoising and pixel-space video reconstruction. Technically, SATeCo freezes all\nthe parameters of the pre-trained UNet and VAE, and only optimizes two\ndeliberately-designed spatial feature adaptation (SFA) and temporal feature\nalignment (TFA) modules, in the decoder of UNet and VAE. SFA modulates frame\nfeatures via adaptively estimating affine parameters for each pixel,\nguaranteeing pixel-wise guidance for high-resolution frame synthesis.", + "SFA modulates frame\nfeatures via adaptively estimating affine parameters for each pixel,\nguaranteeing pixel-wise guidance for high-resolution frame synthesis. 
TFA\ndelves into feature interaction within a 3D local window (tubelet) through\nself-attention, and executes cross-attention between tubelet and its\nlow-resolution counterpart to guide temporal feature alignment. Extensive\nexperiments conducted on the REDS4 and Vid4 datasets demonstrate the\neffectiveness of our approach.", + "Monocular Depth Estimation (MDE) is a fundamental problem in computer vision\nwith numerous applications. Recently, LIDAR-supervised methods have achieved\nremarkable per-pixel depth accuracy in outdoor scenes. However, significant\nerrors are typically found in the proximity of depth discontinuities, i.e.,\ndepth edges, which often hinder the performance of depth-dependent applications\nthat are sensitive to such inaccuracies, e.g., novel view synthesis and\naugmented reality. Since direct supervision for the location of depth edges is\ntypically unavailable in sparse LIDAR-based scenes, encouraging the MDE model\nto produce correct depth edges is not straightforward. To the best of our\nknowledge this paper is the first attempt to address the depth edges issue for\nLIDAR-supervised scenes. In this work we propose to learn to detect the\nlocation of depth edges from densely-supervised synthetic data, and use it to\ngenerate supervision for the depth edges in the MDE training. To quantitatively\nevaluate our approach, and due to the lack of depth edges GT in LIDAR-based\nscenes, we manually annotated subsets of the KITTI and the DDAD datasets with\ndepth edges ground truth.", + "To quantitatively\nevaluate our approach, and due to the lack of depth edges GT in LIDAR-based\nscenes, we manually annotated subsets of the KITTI and the DDAD datasets with\ndepth edges ground truth. We demonstrate significant gains in the accuracy of\nthe depth edges with comparable per-pixel depth accuracy on several challenging\ndatasets. Code and datasets are available at\n\\url{https://github.com/liortalker/MindTheEdge}.", + "Diffusion Models (DMs) have exhibited superior performance in generating\nhigh-quality and diverse images. However, this exceptional performance comes at\nthe cost of expensive architectural design, particularly due to the attention\nmodule heavily used in leading models. Existing works mainly adopt a retraining\nprocess to enhance DM efficiency. This is computationally expensive and not\nvery scalable. To this end, we introduce the Attention-driven Training-free\nEfficient Diffusion Model (AT-EDM) framework that leverages attention maps to\nperform run-time pruning of redundant tokens, without the need for any\nretraining. Specifically, for single-denoising-step pruning, we develop a novel\nranking algorithm, Generalized Weighted Page Rank (G-WPR), to identify\nredundant tokens, and a similarity-based recovery method to restore tokens for\nthe convolution operation. In addition, we propose a Denoising-Steps-Aware\nPruning (DSAP) approach to adjust the pruning budget across different denoising\ntimesteps for better generation quality.", + "In addition, we propose a Denoising-Steps-Aware\nPruning (DSAP) approach to adjust the pruning budget across different denoising\ntimesteps for better generation quality. Extensive evaluations show that AT-EDM\nperforms favorably against prior art in terms of efficiency (e.g., 38.8% FLOPs\nsaving and up to 1.53x speed-up over Stable Diffusion XL) while maintaining\nnearly the same FID and CLIP scores as the full model. 
Project webpage:\nhttps://atedm.github.io.", +  "Retrieval Augmented Generation (RAG) is emerging as a flexible and robust\ntechnique to adapt models to private user data without training, to handle\ncredit attribution, and to allow efficient machine unlearning at scale.\nHowever, RAG techniques for image generation may lead to parts of the retrieved\nsamples being copied in the model's output. To reduce risks of leaking private\ninformation contained in the retrieved set, we introduce Copy-Protected\ngeneration with Retrieval (CPR), a new method for RAG with strong copyright\nprotection guarantees in a mixed-private setting for diffusion models. CPR\nallows conditioning the output of diffusion models on a set of retrieved\nimages, while also guaranteeing that uniquely identifiable information about\nthose examples is not exposed in the generated outputs. In particular, it does\nso by sampling from a mixture of a public (safe) distribution and a private (user)\ndistribution by merging their diffusion scores at inference. We prove that CPR\nsatisfies Near Access Freeness (NAF), which bounds the amount of information an\nattacker may be able to extract from the generated images. We provide two\nalgorithms for copyright protection, CPR-KL and CPR-Choose.", +  "We prove that CPR\nsatisfies Near Access Freeness (NAF), which bounds the amount of information an\nattacker may be able to extract from the generated images. We provide two\nalgorithms for copyright protection, CPR-KL and CPR-Choose. Unlike previously\nproposed rejection-sampling-based NAF methods, our methods enable efficient\ncopyright-protected sampling with a single run of backward diffusion. We show\nthat our method can be applied to any pre-trained conditional diffusion model,\nsuch as Stable Diffusion or unCLIP. In particular, we empirically show that\napplying CPR on top of unCLIP improves quality and text-to-image alignment of\nthe generated results (81.4 to 83.17 on the TIFA benchmark), while enabling credit\nattribution, copyright protection, and deterministic, constant-time\nunlearning.", +  "To serve the intricate and varied demands of image editing, precise and\nflexible manipulation of image content is indispensable. Recently, drag-based\nediting methods have achieved impressive performance. However, these methods\npredominantly center on point dragging, resulting in two noteworthy drawbacks,\nnamely \"miss tracking\", where difficulties arise in accurately tracking the\npredetermined handle points, and \"ambiguous tracking\", where tracked points are\npotentially positioned in wrong regions that closely resemble the handle\npoints. To address the above issues, we propose FreeDrag, a feature dragging\nmethodology designed to relieve the burden of point tracking. FreeDrag\nincorporates two key designs, i.e., template feature via adaptive updating and\nline search with backtracking: the former improves stability against\ndrastic content changes by elaborately controlling the feature updating scale after\neach dragging step, while the latter alleviates the misguidance from similar points\nby actively restricting the search area in a line.
These two technologies\ntogether contribute to a more stable semantic dragging with higher efficiency.\nComprehensive experimental results substantiate that our approach significantly\noutperforms pre-existing methodologies, offering reliable point-based editing\neven in various complex scenarios.", + "This paper addresses text-supervised semantic segmentation, aiming to learn a\nmodel capable of segmenting arbitrary visual concepts within images by using\nonly image-text pairs without dense annotations. Existing methods have\ndemonstrated that contrastive learning on image-text pairs effectively aligns\nvisual segments with the meanings of texts. We notice that there is a\ndiscrepancy between text alignment and semantic segmentation: A text often\nconsists of multiple semantic concepts, whereas semantic segmentation strives\nto create semantically homogeneous segments. To address this issue, we propose\na novel framework, Image-Text Co-Decomposition (CoDe), where the paired image\nand text are jointly decomposed into a set of image regions and a set of word\nsegments, respectively, and contrastive learning is developed to enforce\nregion-word alignment. To work with a vision-language model, we present a\nprompt learning mechanism that derives an extra representation to highlight an\nimage segment or a word segment of interest, with which more effective features\ncan be extracted from that segment. Comprehensive experimental results\ndemonstrate that our method performs favorably against existing text-supervised\nsemantic segmentation methods on six benchmark datasets.", + "To accommodate real-world dynamics, artificial intelligence systems need to\ncope with sequentially arriving content in an online manner. Beyond regular\nContinual Learning (CL) attempting to address catastrophic forgetting with\noffline training of each task, Online Continual Learning (OCL) is a more\nchallenging yet realistic setting that performs CL in a one-pass data stream.\nCurrent OCL methods primarily rely on memory replay of old training samples.\nHowever, a notable gap from CL to OCL stems from the additional\noverfitting-underfitting dilemma associated with the use of rehearsal buffers:\nthe inadequate learning of new training samples (underfitting) and the repeated\nlearning of a few old training samples (overfitting). To this end, we introduce\na novel approach, Multi-level Online Sequential Experts (MOSE), which\ncultivates the model as stacked sub-experts, integrating multi-level\nsupervision and reverse self-distillation. Supervision signals across multiple\nstages facilitate appropriate convergence of the new task while gathering\nvarious strengths from experts by knowledge distillation mitigates the\nperformance decline of old tasks.", + "Supervision signals across multiple\nstages facilitate appropriate convergence of the new task while gathering\nvarious strengths from experts by knowledge distillation mitigates the\nperformance decline of old tasks. MOSE demonstrates remarkable efficacy in\nlearning new samples and preserving past knowledge through multi-level experts,\nthereby significantly advancing OCL performance over state-of-the-art baselines\n(e.g., up to 7.3% on Split CIFAR-100 and 6.1% on Split Tiny-ImageNet).", + "In the pursuit of robust and generalizable environment perception and\nlanguage understanding, the ubiquitous challenge of dataset bias continues to\nplague vision-and-language navigation (VLN) agents, hindering their performance\nin unseen environments. 
This paper introduces the generalized cross-modal\ncausal transformer (GOAT), a pioneering solution rooted in the paradigm of\ncausal inference. By delving into both observable and unobservable confounders\nwithin vision, language, and history, we propose the back-door and front-door\nadjustment causal learning (BACL and FACL) modules to promote unbiased learning\nby comprehensively mitigating potential spurious correlations. Additionally, to\ncapture global confounder features, we propose a cross-modal feature pooling\n(CFP) module supervised by contrastive learning, which is also shown to be\neffective in improving cross-modal representations during pre-training.\nExtensive experiments across multiple VLN datasets (R2R, REVERIE, RxR, and\nSOON) underscore the superiority of our proposed method over previous\nstate-of-the-art approaches. Code is available at\nhttps://github.com/CrystalSixone/VLN-GOAT.", + "In the realm of point cloud scene understanding, particularly in indoor\nscenes, objects are arranged following human habits, resulting in objects of\ncertain semantics being closely positioned and displaying notable inter-object\ncorrelations. This can create a tendency for neural networks to exploit these\nstrong dependencies, bypassing the individual object patterns. To address this\nchallenge, we introduce a novel self-supervised learning (SSL) strategy. Our\napproach leverages both object patterns and contextual cues to produce robust\nfeatures. It begins with the formulation of an object-exchanging strategy,\nwhere pairs of objects with comparable sizes are exchanged across different\nscenes, effectively disentangling the strong contextual dependencies.\nSubsequently, we introduce a context-aware feature learning strategy, which\nencodes object patterns without relying on their specific context by\naggregating object features across various scenes. Our extensive experiments\ndemonstrate the superiority of our method over existing SSL techniques, further\nshowing its better robustness to environmental changes. Moreover, we showcase\nthe applicability of our approach by transferring pre-trained models to diverse\npoint cloud datasets.", + "Addressing pose ambiguity in 6D object pose estimation from single RGB images\npresents a significant challenge, particularly due to object symmetries or\nocclusions. In response, we introduce a novel score-based diffusion method\napplied to the $SE(3)$ group, marking the first application of diffusion models\nto $SE(3)$ within the image domain, specifically tailored for pose estimation\ntasks. Extensive evaluations demonstrate the method's efficacy in handling pose\nambiguity, mitigating perspective-induced ambiguity, and showcasing the\nrobustness of our surrogate Stein score formulation on $SE(3)$. This\nformulation not only improves the convergence of denoising process but also\nenhances computational efficiency. Thus, we pioneer a promising strategy for 6D\nobject pose estimation.", + "We address the problem of synthesizing multi-view optical illusions: images\nthat change appearance upon a transformation, such as a flip or rotation. We\npropose a simple, zero-shot method for obtaining these illusions from\noff-the-shelf text-to-image diffusion models. During the reverse diffusion\nprocess, we estimate the noise from different views of a noisy image, and then\ncombine these noise estimates together and denoise the image. 
A theoretical\nanalysis suggests that this method works precisely for views that can be\nwritten as orthogonal transformations, of which permutations are a subset. This\nleads to the idea of a visual anagram--an image that changes appearance under\nsome rearrangement of pixels. This includes rotations and flips, but also more\nexotic pixel permutations such as a jigsaw rearrangement. Our approach also\nnaturally extends to illusions with more than two views. We provide both\nqualitative and quantitative results demonstrating the effectiveness and\nflexibility of our method. Please see our project webpage for additional\nvisualizations and results: https://dangeng.github.io/visual_anagrams/", +  "Referring expression segmentation (RES) aims at segmenting the foreground\nmasks of the entities that match the descriptive natural language expression.\nPrevious datasets and methods for the classic RES task heavily rely on the prior\nassumption that one expression must refer to object-level targets. In this\npaper, we take a step further to the finer-grained part-level RES task. To promote\nthe object-level RES task towards finer-grained vision-language understanding,\nwe put forward a new multi-granularity referring expression segmentation (MRES)\ntask and construct an evaluation benchmark called RefCOCOm by manual\nannotations. By employing our automatic model-assisted data engine, we build\nthe largest visual grounding dataset, namely MRES-32M, which comprises over\n32.2M high-quality masks and captions on the provided 1M images. Besides, a\nsimple yet strong model named UniRES is designed to accomplish the unified\nobject-level and part-level grounding task. Extensive experiments on our\nRefCOCOm for MRES and three datasets (i.e., RefCOCO(+/g)) for the classic RES task\ndemonstrate the superiority of our method over previous state-of-the-art\nmethods.", +  "Extensive experiments on our\nRefCOCOm for MRES and three datasets (i.e., RefCOCO(+/g)) for the classic RES task\ndemonstrate the superiority of our method over previous state-of-the-art\nmethods. To foster future research into fine-grained visual grounding, our\nbenchmark RefCOCOm, the MRES-32M dataset and model UniRES will be publicly\navailable at https://github.com/Rubics-Xuan/MRES", +  "We present DiffInDScene, a novel framework for tackling the problem of\nhigh-quality 3D indoor scene generation, which is challenging due to the\ncomplexity and diversity of the indoor scene geometry. Although diffusion-based\ngenerative models have previously demonstrated impressive performance in image\ngeneration and object-level 3D generation, they have not yet been applied to\nroom-level 3D generation due to their computationally intensive costs. In\nDiffInDScene, we propose a cascaded 3D diffusion pipeline that is efficient and\npossesses strong generative performance for Truncated Signed Distance Function\n(TSDF). The whole pipeline is designed to run on a sparse occupancy space in a\ncoarse-to-fine fashion. Inspired by KinectFusion's incremental alignment and\nfusion of local TSDF volumes, we propose a diffusion-based SDF fusion approach\nthat iteratively diffuses and fuses local TSDF volumes, facilitating the\ngeneration of an entire room environment.
The generated results demonstrate\nthat our work is capable of achieving high-quality room generation directly in\nthree-dimensional space, starting from scratch.", +  "The generated results demonstrate\nthat our work is capable of achieving high-quality room generation directly in\nthree-dimensional space, starting from scratch. In addition to the scene\ngeneration, the final part of DiffInDScene can be used as a post-processing\nmodule to refine the 3D reconstruction results from multi-view stereo.\nAccording to the user study, the mesh quality generated by our DiffInDScene can\neven outperform the ground truth mesh provided by ScanNet. Please visit our\nproject page for the latest progress and demonstrations:\nhttps://github.com/AkiraHero/diffindscene.", +  "Robust segmentation is critical for deriving quantitative measures from\nlarge-scale, multi-center, and longitudinal medical scans. Manually annotating\nmedical scans, however, is expensive and labor-intensive and may not always be\navailable in every domain. Unsupervised domain adaptation (UDA) is a\nwell-studied technique that alleviates this label-scarcity problem by\nleveraging available labels from another domain. In this study, we introduce\nMasked Autoencoding and Pseudo-Labeling Segmentation (MAPSeg), a\n$\\textbf{unified}$ UDA framework with great versatility and superior\nperformance for heterogeneous and volumetric medical image segmentation. To the\nbest of our knowledge, this is the first study that systematically reviews and\ndevelops a framework to tackle four different domain shifts in medical image\nsegmentation. More importantly, MAPSeg is the first framework that can be\napplied to $\\textbf{centralized}$, $\\textbf{federated}$, and\n$\\textbf{test-time}$ UDA while maintaining comparable performance.", +  "More importantly, MAPSeg is the first framework that can be\napplied to $\\textbf{centralized}$, $\\textbf{federated}$, and\n$\\textbf{test-time}$ UDA while maintaining comparable performance. We compare\nMAPSeg with previous state-of-the-art methods on a private infant brain MRI\ndataset and a public cardiac CT-MRI dataset, and MAPSeg outperforms others by a\nlarge margin (10.5 Dice improvement on the private MRI dataset and 5.7 on the\npublic CT-MRI dataset). MAPSeg has great practical value and can be applied\nto real-world problems. GitHub: https://github.com/XuzheZ/MAPSeg/.", +  "Addressing the intricate challenge of modeling and re-rendering dynamic\nscenes, most recent approaches have sought to simplify these complexities using\nplane-based explicit representations, overcoming the slow training time issues\nassociated with methods like Neural Radiance Fields (NeRF) and implicit\nrepresentations. However, the straightforward decomposition of 4D dynamic\nscenes into multiple 2D plane-based representations proves insufficient for\nre-rendering high-fidelity scenes with complex motions. In response, we present\na novel direction-aware representation (DaRe) approach that captures scene\ndynamics from six different directions. This learned representation undergoes\nan inverse dual-tree complex wavelet transformation (DTCWT) to recover\nplane-based information. DaReNeRF computes features for each space-time point\nby fusing vectors from these recovered planes. Combining DaReNeRF with a tiny\nMLP for color regression and leveraging volume rendering in training yield\nstate-of-the-art performance in novel view synthesis for complex dynamic\nscenes.
Notably, to address redundancy introduced by the six real and six\nimaginary direction-aware wavelet coefficients, we introduce a trainable\nmasking approach, mitigating storage issues without significant performance\ndecline.", + "Notably, to address redundancy introduced by the six real and six\nimaginary direction-aware wavelet coefficients, we introduce a trainable\nmasking approach, mitigating storage issues without significant performance\ndecline. Moreover, DaReNeRF maintains a 2x reduction in training time compared\nto prior art while delivering superior performance.", + "Photometric stereo leverages variations in illumination conditions to\nreconstruct surface normals. Display photometric stereo, which employs a\nconventional monitor as an illumination source, has the potential to overcome\nlimitations often encountered in bulky and difficult-to-use conventional\nsetups. In this paper, we present differentiable display photometric stereo\n(DDPS), addressing an often overlooked challenge in display photometric stereo:\nthe design of display patterns. Departing from using heuristic display\npatterns, DDPS learns the display patterns that yield accurate normal\nreconstruction for a target system in an end-to-end manner. To this end, we\npropose a differentiable framework that couples basis-illumination image\nformation with analytic photometric-stereo reconstruction. The differentiable\nframework facilitates the effective learning of display patterns via\nauto-differentiation. Also, for training supervision, we propose to use 3D\nprinting for creating a real-world training dataset, enabling accurate\nreconstruction on the target real-world setup. Finally, we exploit that\nconventional LCD monitors emit polarized light, which allows for the optical\nseparation of diffuse and specular reflections when combined with a\npolarization camera, leading to accurate normal reconstruction.", + "Finally, we exploit that\nconventional LCD monitors emit polarized light, which allows for the optical\nseparation of diffuse and specular reflections when combined with a\npolarization camera, leading to accurate normal reconstruction. Extensive\nevaluation of DDPS shows improved normal-reconstruction accuracy compared to\nheuristic patterns and demonstrates compelling properties such as robustness to\npattern initialization, calibration errors, and simplifications in image\nformation and reconstruction.", + "Autonomous systems need to process large-scale, sparse, and irregular point\nclouds with limited compute resources. Consequently, it is essential to develop\nLiDAR perception methods that are both efficient and effective. Although\nnaively enlarging 3D kernel size can enhance performance, it will also lead to\na cubically-increasing overhead. Therefore, it is crucial to develop\nstreamlined 3D large kernel designs that eliminate redundant weights and work\neffectively with larger kernels. In this paper, we propose an efficient and\neffective Large Sparse Kernel 3D Neural Network (LSK3DNet) that leverages\ndynamic pruning to amplify the 3D kernel size. Our method comprises two core\ncomponents: Spatial-wise Dynamic Sparsity (SDS) and Channel-wise Weight\nSelection (CWS). SDS dynamically prunes and regrows volumetric weights from the\nbeginning to learn a large sparse 3D kernel. It not only boosts performance but\nalso significantly reduces model size and computational cost.", + "SDS dynamically prunes and regrows volumetric weights from the\nbeginning to learn a large sparse 3D kernel. 
It not only boosts performance but\nalso significantly reduces model size and computational cost. Moreover, CWS\nselects the most important channels for 3D convolution during training and\nsubsequently prunes the redundant channels to accelerate inference for 3D\nvision tasks. We demonstrate the effectiveness of LSK3DNet on three benchmark\ndatasets and five tracks compared with classical models and large kernel\ndesigns. Notably, LSK3DNet achieves the state-of-the-art performance on\nSemanticKITTI (i.e., 75.6% on single-scan and 63.4% on multi-scan), with\nroughly 40% model size reduction and 60% computing operations reduction\ncompared to the naive large 3D kernel model.", + "The goal of this work is to simultaneously generate natural talking faces and\nspeech outputs from text. We achieve this by integrating Talking Face\nGeneration (TFG) and Text-to-Speech (TTS) systems into a unified framework. We\naddress the main challenges of each task: (1) generating a range of head poses\nrepresentative of real-world scenarios, and (2) ensuring voice consistency\ndespite variations in facial motion for the same identity. To tackle these\nissues, we introduce a motion sampler based on conditional flow matching, which\nis capable of high-quality motion code generation in an efficient way.\nMoreover, we introduce a novel conditioning method for the TTS system, which\nutilises motion-removed features from the TFG model to yield uniform speech\noutputs. Our extensive experiments demonstrate that our method effectively\ncreates natural-looking talking faces and speech that accurately match the\ninput text. To our knowledge, this is the first effort to build a multimodal\nsynthesis system that can generalise to unseen identities.", + "Annotation ambiguity due to inherent data uncertainties such as blurred\nboundaries in medical scans and different observer expertise and preferences\nhas become a major obstacle for training deep-learning based medical image\nsegmentation models. To address it, the common practice is to gather multiple\nannotations from different experts, leading to the setting of multi-rater\nmedical image segmentation. Existing works aim to either merge different\nannotations into the \"groundtruth\" that is often unattainable in numerous\nmedical contexts, or generate diverse results, or produce personalized results\ncorresponding to individual expert raters. Here, we bring up a more ambitious\ngoal for multi-rater medical image segmentation, i.e., obtaining both\ndiversified and personalized results. Specifically, we propose a two-stage\nframework named D-Persona (first Diversification and then Personalization). In\nStage I, we exploit multiple given annotations to train a Probabilistic U-Net\nmodel, with a bound-constrained loss to improve the prediction diversity. In\nthis way, a common latent space is constructed in Stage I, where different\nlatent codes denote diversified expert opinions.", + "In\nStage I, we exploit multiple given annotations to train a Probabilistic U-Net\nmodel, with a bound-constrained loss to improve the prediction diversity. In\nthis way, a common latent space is constructed in Stage I, where different\nlatent codes denote diversified expert opinions. Then, in Stage II, we design\nmultiple attention-based projection heads to adaptively query the corresponding\nexpert prompts from the shared latent space, and then perform the personalized\nmedical image segmentation. 
We evaluated the proposed model on our in-house\nNasopharyngeal Carcinoma dataset and the public lung nodule dataset (i.e.,\nLIDC-IDRI). Extensive experiments demonstrated that our D-Persona can provide\ndiversified and personalized results at the same time, achieving new SOTA\nperformance for multi-rater medical image segmentation. Our code will be\nreleased at https://github.com/ycwu1997/D-Persona.", +  "We conduct a comprehensive study on a new task named power battery detection\n(PBD), which aims to localize the dense cathode and anode plate endpoints from\nX-ray images to evaluate the quality of power batteries. Existing manufacturers\nusually rely on human eye observation to complete PBD, which makes it difficult\nto balance the accuracy and efficiency of detection. To address this issue and\ndraw more attention to this meaningful task, we first elaborately collect a\ndataset, called X-ray PBD, which has $1,500$ diverse X-ray images selected from\nthousands of power batteries of $5$ manufacturers, with $7$ different types of visual\ninterference. Then, we propose a novel segmentation-based solution for PBD,\ntermed multi-dimensional collaborative network (MDCNet). With the help of line\nand counting predictors, the representation of the point segmentation branch\ncan be improved in both semantic and detail aspects. Besides, we design an\neffective distance-adaptive mask generation strategy, which can alleviate the\nvisual challenge caused by the inconsistent distribution density of plates to\nprovide MDCNet with stable supervision.", +  "Without any bells and whistles, our\nsegmentation-based MDCNet consistently outperforms various other corner\ndetection, crowd counting and general/tiny object detection-based solutions,\nmaking it a strong baseline that can help facilitate future research in PBD.\nFinally, we discuss some potential difficulties and directions for future research.\nThe source code and datasets will be publicly available at\n\\href{https://github.com/Xiaoqi-Zhao-DLUT/X-ray-PBD}{X-ray PBD}.", +  "Machine learning models can perform well on in-distribution data but often\nfail on biased subgroups that are underrepresented in the training data,\nhindering the robustness of models for reliable applications. Such subgroups\nare typically unknown due to the absence of subgroup labels. Discovering biased\nsubgroups is the key to understanding models' failure modes and further\nimproving models' robustness. Most previous works of subgroup discovery make an\nimplicit assumption that models only underperform on a single biased subgroup,\nwhich does not hold for in-the-wild data where multiple biased subgroups exist.\n In this work, we propose Decomposition, Interpretation, and Mitigation (DIM),\na novel method to address a more challenging but also more practical problem of\ndiscovering multiple biased subgroups in image classifiers. Our approach\ndecomposes the image features into multiple components that represent multiple\nsubgroups. This decomposition is achieved via a bilinear dimension reduction\nmethod, Partial Least Square (PLS), guided by useful supervision from the image\nclassifier. We further interpret the semantic meaning of each subgroup\ncomponent by generating natural language descriptions using vision-language\nfoundation models.", +  "This decomposition is achieved via a bilinear dimension reduction\nmethod, Partial Least Square (PLS), guided by useful supervision from the image\nclassifier.
We further interpret the semantic meaning of each subgroup\ncomponent by generating natural language descriptions using vision-language\nfoundation models. Finally, DIM mitigates multiple biased subgroups\nsimultaneously via two strategies, namely data-centric and model-centric\nstrategies. Extensive experiments on CIFAR-100 and Breeds datasets demonstrate\nthe effectiveness of DIM in discovering and mitigating multiple biased\nsubgroups. Furthermore, DIM uncovers the failure modes of the classifier on\nHard ImageNet, showcasing its broader applicability to understanding model bias\nin image classifiers. The code is available at\nhttps://github.com/ZhangAIPI/DIM.", +  "Deep functional maps have emerged in recent years as a prominent\nlearning-based framework for non-rigid shape matching problems. While early\nmethods in this domain only focused on learning in the functional domain, the\nlatest techniques have demonstrated that promoting consistency between\nfunctional and pointwise maps leads to significant improvements in accuracy.\nUnfortunately, existing approaches rely heavily on the computation of large\ndense matrices arising from soft pointwise maps, which compromises their\nefficiency and scalability. To address this limitation, we introduce a novel\nmemory-scalable and efficient functional map learning pipeline. By leveraging\nthe specific structure of functional maps, we offer the possibility of achieving\nidentical results without ever storing the pointwise map in memory.\nFurthermore, based on the same approach, we present a differentiable map\nrefinement layer adapted from an existing axiomatic refinement algorithm.\nUnlike many functional map learning methods, which use this algorithm as a\npost-processing step, ours can be easily used at train time, enabling us to\nenforce consistency between the refined and initial versions of the map. Our\nresulting approach is simpler, more efficient and more numerically stable,\nas it avoids differentiation through a linear system, while achieving close to\nstate-of-the-art results in challenging scenarios.", +  "Group robustness strategies aim to mitigate learned biases in deep learning\nmodels that arise from spurious correlations present in their training\ndatasets. However, most existing methods rely on access to the label\ndistribution of the groups, which is time-consuming and expensive to obtain. As\na result, unsupervised group robustness strategies are sought. Based on the\ninsight that a trained model's classification strategies can be inferred\naccurately based on explainability heatmaps, we introduce ExMap, an\nunsupervised two-stage mechanism designed to enhance group robustness in\ntraditional classifiers. ExMap utilizes a clustering module to infer\npseudo-labels based on a model's explainability heatmaps, which are then used\nduring training in lieu of actual labels. Our empirical studies validate the\nefficacy of ExMap: we demonstrate that it bridges the performance gap with its\nsupervised counterparts and outperforms existing partially supervised and\nunsupervised methods. Additionally, ExMap can be seamlessly integrated with\nexisting group robustness learning strategies. Finally, we demonstrate its\npotential in tackling the emerging issue of multiple shortcut\nmitigation\\footnote{Code available at \\url{https://github.com/rwchakra/exmap}}.", +  "Creating high-fidelity 3D head avatars has always been a research hotspot,\nbut there remains a great challenge under lightweight sparse view setups.
In\nthis paper, we propose Gaussian Head Avatar represented by controllable 3D\nGaussians for high-fidelity head avatar modeling. We optimize the neutral 3D\nGaussians and a fully learned MLP-based deformation field to capture complex\nexpressions. The two parts benefit each other, thereby our method can model\nfine-grained dynamic details while ensuring expression accuracy. Furthermore,\nwe devise a well-designed geometry-guided initialization strategy based on\nimplicit SDF and Deep Marching Tetrahedra for the stability and convergence of\nthe training procedure. Experiments show our approach outperforms other\nstate-of-the-art sparse-view methods, achieving ultra high-fidelity rendering\nquality at 2K resolution even under exaggerated expressions.", + "Estimating 3D full-body avatars from AR/VR devices is essential for creating\nimmersive experiences in AR/VR applications. This task is challenging due to\nthe limited input from Head Mounted Devices, which capture only sparse\nobservations from the head and hands. Predicting the full-body avatars,\nparticularly the lower body, from these sparse observations presents\nsignificant difficulties. In this paper, we are inspired by the inherent\nproperty of the kinematic tree defined in the Skinned Multi-Person Linear\n(SMPL) model, where the upper body and lower body share only one common\nancestor node, bringing the potential of decoupled reconstruction. We propose a\nstratified approach to decouple the conventional full-body avatar\nreconstruction pipeline into two stages, with the reconstruction of the upper\nbody first and a subsequent reconstruction of the lower body conditioned on the\nprevious stage. To implement this straightforward idea, we leverage the latent\ndiffusion model as a powerful probabilistic generator, and train it to follow\nthe latent distribution of decoupled motions explored by a VQ-VAE\nencoder-decoder model. Extensive experiments on AMASS mocap dataset demonstrate\nour state-of-the-art performance in the reconstruction of full-body motions.", + "Recent studies have drawn attention to the untapped potential of the \"star\noperation\" (element-wise multiplication) in network design. While intuitive\nexplanations abound, the foundational rationale behind its application remains\nlargely unexplored. Our study attempts to reveal the star operation's ability\nto map inputs into high-dimensional, non-linear feature spaces -- akin to\nkernel tricks -- without widening the network. We further introduce StarNet, a\nsimple yet powerful prototype, demonstrating impressive performance and low\nlatency under compact network structure and efficient budget. Like stars in the\nsky, the star operation appears unremarkable but holds a vast universe of\npotential. Our work encourages further exploration across tasks, with codes\navailable at https://github.com/ma-xu/Rewrite-the-Stars.", + "Recent advancements in large-scale visual-language pre-trained models have\nled to significant progress in zero-/few-shot anomaly detection within natural\nimage domains. However, the substantial domain divergence between natural and\nmedical images limits the effectiveness of these methodologies in medical\nanomaly detection. This paper introduces a novel lightweight multi-level\nadaptation and comparison framework to repurpose the CLIP model for medical\nanomaly detection. Our approach integrates multiple residual adapters into the\npre-trained visual encoder, enabling a stepwise enhancement of visual features\nacross different levels. 
This multi-level adaptation is guided by multi-level,\npixel-wise visual-language feature alignment loss functions, which recalibrate\nthe model's focus from object semantics in natural imagery to anomaly\nidentification in medical images. The adapted features exhibit improved\ngeneralization across various medical data types, even in zero-shot scenarios\nwhere the model encounters unseen medical modalities and anatomical regions\nduring training.", + "The adapted features exhibit improved\ngeneralization across various medical data types, even in zero-shot scenarios\nwhere the model encounters unseen medical modalities and anatomical regions\nduring training. Our experiments on medical anomaly detection benchmarks\ndemonstrate that our method significantly surpasses current state-of-the-art\nmodels, with an average AUC improvement of 6.24% and 7.33% for anomaly\nclassification, 2.03% and 2.37% for anomaly segmentation, under the zero-shot\nand few-shot settings, respectively. Source code is available at:\nhttps://github.com/MediaBrain-SJTU/MVFA-AD", + "Zero-shot Video Object Segmentation (ZSVOS) aims at segmenting the primary\nmoving object without any human annotations. Mainstream solutions mainly focus\non learning a single model on large-scale video datasets, which struggle to\ngeneralize to unseen videos. In this work, we introduce a test-time training\n(TTT) strategy to address the problem. Our key insight is to enforce the model\nto predict consistent depth during the TTT process. In detail, we first train a\nsingle network to perform both segmentation and depth prediction tasks. This\ncan be effectively learned with our specifically designed depth modulation\nlayer. Then, for the TTT process, the model is updated by predicting consistent\ndepth maps for the same frame under different data augmentations. In addition,\nwe explore different TTT weight updating strategies. Our empirical results\nsuggest that the momentum-based weight initialization and looping-based\ntraining scheme lead to more stable improvements. Experiments show that the\nproposed method achieves clear improvements on ZSVOS. Our proposed video TTT\nstrategy provides significant superiority over state-of-the-art TTT methods.\nOur code is available at: https://nifangbaage.github.io/DATTT.", + "Recent advancements in personalized image generation using diffusion models\nhave been noteworthy. However, existing methods suffer from inefficiencies due\nto the requirement for subject-specific fine-tuning. This computationally\nintensive process hinders efficient deployment, limiting practical usability.\nMoreover, these methods often grapple with identity distortion and limited\nexpression diversity. In light of these challenges, we propose PortraitBooth,\nan innovative approach designed for high efficiency, robust identity\npreservation, and expression-editable text-to-image generation, without the\nneed for fine-tuning. PortraitBooth leverages subject embeddings from a face\nrecognition model for personalized image generation without fine-tuning. It\neliminates computational overhead and mitigates identity distortion. The\nintroduced dynamic identity preservation strategy further ensures close\nresemblance to the original image identity. Moreover, PortraitBooth\nincorporates emotion-aware cross-attention control for diverse facial\nexpressions in generated images, supporting text-driven expression editing. Its\nscalability enables efficient and high-quality image creation, including\nmulti-subject generation. 
Extensive results demonstrate superior performance\nover other state-of-the-art methods in both single and multiple image\ngeneration scenarios.", + "Human-centric video frame interpolation has great potential for improving\npeople's entertainment experiences and finding commercial applications in the\nsports analysis industry, e.g., synthesizing slow-motion videos. Although there\nare multiple benchmark datasets available in the community, none of them is\ndedicated for human-centric scenarios. To bridge this gap, we introduce\nSportsSloMo, a benchmark consisting of more than 130K video clips and 1M video\nframes of high-resolution ($\\geq$720p) slow-motion sports videos crawled from\nYouTube. We re-train several state-of-the-art methods on our benchmark, and the\nresults show a decrease in their accuracy compared to other datasets. It\nhighlights the difficulty of our benchmark and suggests that it poses\nsignificant challenges even for the best-performing methods, as human bodies\nare highly deformable and occlusions are frequent in sports videos. To improve\nthe accuracy, we introduce two loss terms considering the human-aware priors,\nwhere we add auxiliary supervision to panoptic segmentation and human keypoints\ndetection, respectively. The loss terms are model agnostic and can be easily\nplugged into any video frame interpolation approaches.", + "To improve\nthe accuracy, we introduce two loss terms considering the human-aware priors,\nwhere we add auxiliary supervision to panoptic segmentation and human keypoints\ndetection, respectively. The loss terms are model agnostic and can be easily\nplugged into any video frame interpolation approaches. Experimental results\nvalidate the effectiveness of our proposed loss terms, leading to consistent\nperformance improvement over 5 existing models, which establish strong baseline\nmodels on our benchmark. The dataset and code can be found at:\nhttps://neu-vi.github.io/SportsSlomo/.", + "This paper introduces the first text-guided work for generating the sequence\nof hand-object interaction in 3D. The main challenge arises from the lack of\nlabeled data where existing ground-truth datasets are nowhere near\ngeneralizable in interaction type and object category, which inhibits the\nmodeling of diverse 3D hand-object interaction with the correct physical\nimplication (e.g., contacts and semantics) from text prompts. To address this\nchallenge, we propose to decompose the interaction generation task into two\nsubtasks: hand-object contact generation; and hand-object motion generation.\nFor contact generation, a VAE-based network takes as input a text and an object\nmesh, and generates the probability of contacts between the surfaces of hands\nand the object during the interaction. The network learns a variety of local\ngeometry structure of diverse objects that is independent of the objects'\ncategory, and thus, it is applicable to general objects.", + "The network learns a variety of local\ngeometry structure of diverse objects that is independent of the objects'\ncategory, and thus, it is applicable to general objects. For motion generation,\na Transformer-based diffusion model utilizes this 3D contact map as a strong\nprior for generating physically plausible hand-object motion as a function of\ntext prompts by learning from the augmented labeled dataset; where we annotate\ntext labels from many existing 3D hand and object motion data. 
Finally, we\nfurther introduce a hand refiner module that minimizes the distance between the\nobject surface and hand joints to improve the temporal stability of the\nobject-hand contacts and to suppress the penetration artifacts. In the\nexperiments, we demonstrate that our method can generate more realistic and\ndiverse interactions compared to other baseline methods. We also show that our\nmethod is applicable to unseen objects. We will release our model and newly\nlabeled data as a strong foundation for future research. Codes and data are\navailable in: https://github.com/JunukCha/Text2HOI.", + "Continual learning (CL) aims to empower models to learn new tasks without\nforgetting previously acquired knowledge. Most prior works concentrate on the\ntechniques of architectures, replay data, regularization, \\etc. However, the\ncategory name of each class is largely neglected. Existing methods commonly\nutilize the one-hot labels and randomly initialize the classifier head. We\nargue that the scarce semantic information conveyed by the one-hot labels\nhampers the effective knowledge transfer across tasks. In this paper, we\nrevisit the role of the classifier head within the CL paradigm and replace the\nclassifier with semantic knowledge from pretrained language models (PLMs).\nSpecifically, we use PLMs to generate semantic targets for each class, which\nare frozen and serve as supervision signals during training. Such targets fully\nconsider the semantic correlation between all classes across tasks. Empirical\nstudies show that our approach mitigates forgetting by alleviating\nrepresentation drifting and facilitating knowledge transfer across tasks. The\nproposed method is simple to implement and can seamlessly be plugged into\nexisting methods with negligible adjustments. Extensive experiments based on\neleven mainstream baselines demonstrate the effectiveness and generalizability\nof our approach to various protocols.", + "The\nproposed method is simple to implement and can seamlessly be plugged into\nexisting methods with negligible adjustments. Extensive experiments based on\neleven mainstream baselines demonstrate the effectiveness and generalizability\nof our approach to various protocols. For example, under the class-incremental\nlearning setting on ImageNet-100, our method significantly improves the Top-1\naccuracy by 3.2\\% to 6.1\\% while reducing the forgetting rate by 2.6\\% to\n13.1\\%.", + "The rapid expansion of large-scale text-to-image diffusion models has raised\ngrowing concerns regarding their potential misuse in creating harmful or\nmisleading content. In this paper, we introduce MACE, a finetuning framework\nfor the task of mass concept erasure. This task aims to prevent models from\ngenerating images that embody unwanted concepts when prompted. Existing concept\nerasure methods are typically restricted to handling fewer than five concepts\nsimultaneously and struggle to find a balance between erasing concept synonyms\n(generality) and maintaining unrelated concepts (specificity). In contrast,\nMACE differs by successfully scaling the erasure scope up to 100 concepts and\nby achieving an effective balance between generality and specificity. This is\nachieved by leveraging closed-form cross-attention refinement along with LoRA\nfinetuning, collectively eliminating the information of undesirable concepts.\nFurthermore, MACE integrates multiple LoRAs without mutual interference. 
We\nconduct extensive evaluations of MACE against prior methods across four\ndifferent tasks: object erasure, celebrity erasure, explicit content erasure,\nand artistic style erasure. Our results reveal that MACE surpasses prior\nmethods in all evaluated tasks.", + "We\nconduct extensive evaluations of MACE against prior methods across four\ndifferent tasks: object erasure, celebrity erasure, explicit content erasure,\nand artistic style erasure. Our results reveal that MACE surpasses prior\nmethods in all evaluated tasks. Code is available at\nhttps://github.com/Shilin-LU/MACE.", + "We present Dive Into the BoundarieS (DIBS), a novel pretraining framework for\ndense video captioning (DVC) that focuses on improving the quality of the\ngenerated event captions and their associated pseudo event boundaries from\nunlabeled videos. By leveraging the capabilities of diverse large language\nmodels (LLMs), we generate rich DVC-oriented caption candidates and optimize\nthe corresponding pseudo boundaries under several meticulously designed\nobjectives, considering diversity, event-centricity, temporal ordering, and\ncoherence. Moreover, we further introduce a novel online boundary refinement\nstrategy that iteratively improves the quality of pseudo boundaries during\ntraining. Comprehensive experiments have been conducted to examine the\neffectiveness of the proposed technique components. By leveraging a substantial\namount of unlabeled video data, such as HowTo100M, we achieve a remarkable\nadvancement on standard DVC datasets like YouCook2 and ActivityNet. We\noutperform the previous state-of-the-art Vid2Seq across a majority of metrics,\nachieving this with just 0.4% of the unlabeled video data used for pre-training\nby Vid2Seq.", + "Recently, some large kernel convnets strike back with appealing performance\nand efficiency. However, given the quadratic complexity of convolution, scaling up\nkernels can bring about an enormous number of parameters, and the proliferated\nparameters can induce severe optimization problems. Due to these issues, current\nCNNs compromise to scale up to 51x51 in the form of stripe convolution (i.e.,\n51x5 + 5x51) and start to saturate as the kernel size continues growing. In\nthis paper, we delve into addressing these vital issues and explore whether we\ncan continue scaling up kernels for more performance gains. Inspired by human\nvision, we propose a human-like peripheral convolution that efficiently reduces\nover 90% of the parameter count of dense grid convolution through parameter sharing,\nand manage to scale up the kernel size to extremely large values. Our peripheral\nconvolution behaves highly similarly to human vision, reducing the complexity of\nconvolution from O(K^2) to O(logK) without hurting performance. Built on\nthis, we propose the Parameter-efficient Large Kernel Network (PeLK).", + "Our peripheral\nconvolution behaves highly similarly to human vision, reducing the complexity of\nconvolution from O(K^2) to O(logK) without hurting performance. Built on\nthis, we propose the Parameter-efficient Large Kernel Network (PeLK). Our PeLK\noutperforms modern vision Transformers and ConvNet architectures like Swin,\nConvNeXt, RepLKNet and SLaK on various vision tasks including ImageNet\nclassification, semantic segmentation on ADE20K and object detection on MS\nCOCO. For the first time, we successfully scale up the kernel size of CNNs to\nan unprecedented 101x101 and demonstrate consistent improvements.", + "Expressive human pose and shape estimation (a.k.a. 
3D whole-body mesh\nrecovery) involves human body, hand, and expression estimation. Most\nexisting methods have tackled this task in a two-stage manner, first detecting\nthe human body part with an off-the-shelf detection model and then inferring the\ndifferent human body parts individually. Despite the impressive results\nachieved, these methods suffer from 1) the loss of valuable contextual information\nvia cropping, 2) the introduction of distractions, and 3) the lack of inter-association\namong different persons and body parts, inevitably causing performance\ndegradation, especially for crowded scenes. To address these issues, we\nintroduce a novel all-in-one-stage framework, AiOS, for multiple expressive\nhuman pose and shape recovery without an additional human detection step.\nSpecifically, our method is built upon DETR, which treats the multi-person\nwhole-body mesh recovery task as a progressive set prediction problem with\nvarious sequential detections. We devise the decoder tokens and extend them to\nour task.", + "Specifically, our method is built upon DETR, which treats the multi-person\nwhole-body mesh recovery task as a progressive set prediction problem with\nvarious sequential detections. We devise the decoder tokens and extend them to\nour task. Specifically, we first employ a human token to probe a human location\nin the image and encode global features for each instance, which provides a\ncoarse location for the later transformer block. Then, we introduce a\njoint-related token to probe the human joint in the image and encode a\nfine-grained local feature, which collaborates with the global feature to\nregress the whole-body mesh. This straightforward but effective model\noutperforms previous state-of-the-art methods by a 9% reduction in NMVE on\nAGORA, a 30% reduction in PVE on EHF, a 10% reduction in PVE on ARCTIC, and a\n3% reduction in PVE on EgoBody.", + "Deep learning models, particularly those based on transformers, often employ\nnumerous stacked structures, which possess identical architectures and perform\nsimilar functions. While effective, this stacking paradigm leads to a\nsubstantial increase in the number of parameters, posing challenges for\npractical applications. In today's landscape of increasingly large models,\nstacking depth can even reach dozens, further exacerbating this issue. To\nmitigate this problem, we introduce LORS (LOw-rank Residual Structure). LORS\nallows stacked modules to share the majority of parameters, requiring a much\nsmaller number of unique ones per module to match or even surpass the\nperformance of using entirely distinct ones, thereby significantly reducing\nparameter usage. We validate our method by applying it to the stacked decoders\nof a query-based object detector, and conduct extensive experiments on the\nwidely used MS COCO dataset. Experimental results demonstrate the effectiveness\nof our method, as even with a 70\\% reduction in the parameters of the decoder,\nour method still enables the model to achieve comparable or", + "In recent years, there has been a significant shift in the field of digital\navatar research, towards modeling, animating and reconstructing clothed human\nrepresentations, as a key step towards creating realistic avatars. However,\ncurrent 3D cloth generation methods are garment-specific or trained completely\non synthetic data, hence lacking fine details and realism.
In this work, we\nmake a step towards automatic realistic garment design and propose\nDesign2Cloth, a high fidelity 3D generative model trained on a real world\ndataset from more than 2000 subject scans. To provide vital contribution to the\nfashion industry, we developed a user-friendly adversarial model capable of\ngenerating diverse and detailed clothes simply by drawing a 2D cloth mask.\nUnder a series of both qualitative and quantitative experiments, we showcase\nthat Design2Cloth outperforms current state-of-the-art cloth generative models\nby a large margin. In addition to the generative properties of our network, we\nshowcase that the proposed method can be used to achieve high quality\nreconstructions from single in-the-wild images and 3D scans. Dataset, code and\npre-trained model will become publicly available.", + "Scene text recognition (STR) in the wild frequently encounters challenges\nwhen coping with domain variations, font diversity, shape deformations, etc. A\nstraightforward solution is performing model fine-tuning tailored to a specific\nscenario, but it is computationally intensive and requires multiple model\ncopies for various scenarios. Recent studies indicate that large language\nmodels (LLMs) can learn from a few demonstration examples in a training-free\nmanner, termed \"In-Context Learning\" (ICL). Nevertheless, applying LLMs as a\ntext recognizer is unacceptably resource-consuming. Moreover, our pilot\nexperiments on LLMs show that ICL fails in STR, mainly attributed to the\ninsufficient incorporation of contextual information from diverse samples in\nthe training stage. To this end, we introduce E$^2$STR, a STR model trained\nwith context-rich scene text sequences, where the sequences are generated via\nour proposed in-context training strategy. E$^2$STR demonstrates that a\nregular-sized model is sufficient to achieve effective ICL capabilities in STR.", + "To this end, we introduce E$^2$STR, a STR model trained\nwith context-rich scene text sequences, where the sequences are generated via\nour proposed in-context training strategy. E$^2$STR demonstrates that a\nregular-sized model is sufficient to achieve effective ICL capabilities in STR.\nExtensive experiments show that E$^2$STR exhibits remarkable training-free\nadaptation in various scenarios and outperforms even the fine-tuned\nstate-of-the-art approaches on public benchmarks. The code is released at\nhttps://github.com/bytedance/E2STR .", + "Our brain can effortlessly recognize objects even when partially hidden from\nview. Seeing the visible of the hidden is called amodal completion; however,\nthis task remains a challenge for generative AI despite rapid progress. We\npropose to sidestep many of the difficulties of existing approaches, which\ntypically involve a two-step process of predicting amodal masks and then\ngenerating pixels. Our method involves thinking outside the box, literally! We\ngo outside the object bounding box to use its context to guide a pre-trained\ndiffusion inpainting model, and then progressively grow the occluded object and\ntrim the extra background. We overcome two technical challenges: 1) how to be\nfree of unwanted co-occurrence bias, which tends to regenerate similar\noccluders, and 2) how to judge if an amodal completion has succeeded. Our\namodal completion method exhibits improved photorealistic completion results\ncompared to existing approaches in numerous successful completion cases. And\nthe best part? 
It doesn't require any special training or fine-tuning of\nmodels.", + "We present Diff3F as a simple, robust, and class-agnostic feature descriptor\nthat can be computed for untextured input shapes (meshes or point clouds). Our\nmethod distills diffusion features from image foundational models onto input\nshapes. Specifically, we use the input shapes to produce depth and normal maps\nas guidance for conditional image synthesis. In the process, we produce\n(diffusion) features in 2D that we subsequently lift and aggregate on the\noriginal surface. Our key observation is that even if the conditional image\ngenerations obtained from multi-view rendering of the input shapes are\ninconsistent, the associated image features are robust and, hence, can be\ndirectly aggregated across views. This produces semantic features on the input\nshapes, without requiring additional data or training. We perform extensive\nexperiments on multiple benchmarks (SHREC'19, SHREC'20, FAUST, and TOSCA) and\ndemonstrate that our features, being semantic instead of geometric, produce\nreliable correspondence across both isometric and non-isometrically related\nshape families. Code is available via the project page at\nhttps://diff3f.github.io/", + "Microscopic traffic simulation plays a crucial role in transportation\nengineering by providing insights into individual vehicle behavior and overall\ntraffic flow. However, creating a realistic simulator that accurately\nreplicates human driving behaviors in various traffic conditions presents\nsignificant challenges. Traditional simulators relying on heuristic models\noften fail to deliver accurate simulations due to the complexity of real-world\ntraffic environments. Due to the covariate shift issue, existing imitation\nlearning-based simulators often fail to generate stable long-term simulations.\nIn this paper, we propose a novel approach called learner-aware supervised\nimitation learning to address the covariate shift problem in multi-agent\nimitation learning. By leveraging a variational autoencoder simultaneously\nmodeling the expert and learner state distribution, our approach augments\nexpert states such that the augmented state is aware of learner state\ndistribution. Our method, applied to urban traffic simulation, demonstrates\nsignificant improvements over existing state-of-the-art baselines in both\nshort-term microscopic and long-term macroscopic realism when evaluated on the\nreal-world dataset pNEUMA.", + "Deep learning-based methods have achieved significant successes on solving\nthe blind super-resolution (BSR) problem. However, most of them request\nsupervised pre-training on labelled datasets. This paper proposes an\nunsupervised kernel estimation model, named dynamic kernel prior (DKP), to\nrealize an unsupervised and pre-training-free learning-based algorithm for\nsolving the BSR problem. DKP can adaptively learn dynamic kernel priors to\nrealize real-time kernel estimation, and thereby enables superior HR image\nrestoration performances. This is achieved by a Markov chain Monte Carlo\nsampling process on random kernel distributions. The learned kernel prior is\nthen assigned to optimize a blur kernel estimation network, which entails a\nnetwork-based Langevin dynamic optimization strategy. These two techniques\nensure the accuracy of the kernel estimation. 
DKP can be easily used to replace\nthe kernel estimation models in the existing methods, such as Double-DIP and\nFKP-DIP, or be added to the off-the-shelf image restoration model, such as\ndiffusion model.", + "These two techniques\nensure the accuracy of the kernel estimation. DKP can be easily used to replace\nthe kernel estimation models in the existing methods, such as Double-DIP and\nFKP-DIP, or be added to the off-the-shelf image restoration model, such as\ndiffusion model. In this paper, we incorporate our DKP model with DIP and\ndiffusion model, referring to DIP-DKP and Diff-DKP, for validations. Extensive\nsimulations on Gaussian and motion kernel scenarios demonstrate that the\nproposed DKP model can significantly improve the kernel estimation with\ncomparable runtime and memory usage, leading to state-of-the-art BSR results.\nThe code is available at https://github.com/XYLGroup/DKP.", + "In the evolving landscape of digital media and video production, the precise\nmanipulation and reproduction of visual elements like camera movements and\ncharacter actions are highly desired. Existing SLAM methods face limitations in\ndynamic scenes and human pose estimation often focuses on 2D projections,\nneglecting 3D statuses. To address these issues, we first introduce a reverse\nfilming behavior estimation technique. It optimizes camera trajectories by\nleveraging NeRF as a differentiable renderer and refining SMPL tracks. We then\nintroduce a cinematic transfer pipeline that is able to transfer various shot\ntypes to a new 2D video or a 3D virtual environment. The incorporation of 3D\nengine workflow enables superior rendering and control abilities, which also\nachieves a higher rating in the user study.", + "Language has emerged as a natural interface for image editing. In this paper,\nwe introduce a method for region-based image editing driven by textual prompts,\nwithout the need for user-provided masks or sketches. Specifically, our\napproach leverages an existing pre-trained text-to-image model and introduces a\nbounding box generator to identify the editing regions that are aligned with\nthe textual prompts. We show that this simple approach enables flexible editing\nthat is compatible with current image generation models, and is able to handle\ncomplex prompts featuring multiple objects, complex sentences, or lengthy\nparagraphs. We conduct an extensive user study to compare our method against\nstate-of-the-art methods. The experiments demonstrate the competitive\nperformance of our method in manipulating images with high fidelity and realism\nthat correspond to the provided language descriptions. Our project webpage can\nbe found at: https://yuanze-lin.me/LearnableRegions_page.", + "Despite their exceptional generative abilities, large text-to-image diffusion\nmodels, much like skilled but careless artists, often struggle with accurately\ndepicting visual relationships between objects. This issue, as we uncover\nthrough careful analysis, arises from a misaligned text encoder that struggles\nto interpret specific relationships and differentiate the logical order of\nassociated objects. To resolve this, we introduce a novel task termed Relation\nRectification, aiming to refine the model to accurately represent a given\nrelationship it initially fails to generate. To address this, we propose an\ninnovative solution utilizing a Heterogeneous Graph Convolutional Network\n(HGCN). 
It models the directional relationships between relation terms and\ncorresponding objects within the input prompts. Specifically, we optimize the\nHGCN on a pair of prompts with identical relational words but reversed object\norders, supplemented by a few reference images. The lightweight HGCN adjusts\nthe text embeddings generated by the text encoder, ensuring the accurate\nreflection of the textual relation in the embedding space. Crucially, our\nmethod retains the parameters of the text encoder and diffusion model,\npreserving the model's robust performance on unrelated descriptions.", + "The lightweight HGCN adjusts\nthe text embeddings generated by the text encoder, ensuring the accurate\nreflection of the textual relation in the embedding space. Crucially, our\nmethod retains the parameters of the text encoder and diffusion model,\npreserving the model's robust performance on unrelated descriptions. We\nvalidated our approach on a newly curated dataset of diverse relational data,\ndemonstrating both quantitative and qualitative enhancements in generating\nimages with precise visual relations. Project page:\nhttps://wuyinwei-hah.github.io/rrnet.github.io/.", + "We present a lightweight and affordable motion capture method based on two\nsmartwatches and a head-mounted camera. In contrast to the existing approaches\nthat use six or more expert-level IMU devices, our approach is much more\ncost-effective and convenient. Our method can make wearable motion capture\naccessible to everyone everywhere, enabling 3D full-body motion capture in\ndiverse environments. As a key idea to overcome the extreme sparsity and\nambiguities of sensor inputs with different modalities, we integrate 6D head\nposes obtained from the head-mounted cameras for motion estimation. To enable\ncapture in expansive indoor and outdoor scenes, we propose an algorithm to\ntrack and update floor level changes to define head poses, coupled with a\nmulti-stage Transformer-based regression module. We also introduce novel\nstrategies leveraging visual cues of egocentric images to further enhance the\nmotion capture quality while reducing ambiguities. We demonstrate the\nperformance of our method on various challenging scenarios, including complex\noutdoor environments and everyday motions including object interactions and\nsocial interactions among multiple individuals.", + "Sampling from diffusion models can be treated as solving the corresponding\nordinary differential equations (ODEs), with the aim of obtaining an accurate\nsolution with as few number of function evaluations (NFE) as possible.\nRecently, various fast samplers utilizing higher-order ODE solvers have emerged\nand achieved better performance than the initial first-order one. However,\nthese numerical methods inherently result in certain approximation errors,\nwhich significantly degrades sample quality with extremely small NFE (e.g.,\naround 5). In contrast, based on the geometric observation that each sampling\ntrajectory almost lies in a two-dimensional subspace embedded in the ambient\nspace, we propose Approximate MEan-Direction Solver (AMED-Solver) that\neliminates truncation errors by directly learning the mean direction for fast\ndiffusion sampling. Besides, our method can be easily used as a plugin to\nfurther improve existing ODE-based samplers. 
Extensive experiments on image\nsynthesis with the resolution ranging from 32 to 512 demonstrate the\neffectiveness of our method.", + "Besides, our method can be easily used as a plugin to\nfurther improve existing ODE-based samplers. Extensive experiments on image\nsynthesis with the resolution ranging from 32 to 512 demonstrate the\neffectiveness of our method. With only 5 NFE, we achieve 6.61 FID on CIFAR-10,\n10.74 FID on ImageNet 64$\\times$64, and 13.20 FID on LSUN Bedroom. Our code is\navailable at https://github.com/zju-pi/diff-sampler.", + "Automatic web navigation aims to build a web agent that can follow language\ninstructions to execute complex and diverse tasks on real-world websites.\nExisting work primarily takes HTML documents as input, which define the\ncontents and action spaces (i.e., actionable elements and operations) of\nwebpages. Nevertheless, HTML documents may not provide a clear task-related\ncontext for each element, making it hard to select the right (sequence of)\nactions. In this paper, we propose to contextualize HTML elements through their\n\"dual views\" in webpage screenshots: each HTML element has its corresponding\nbounding box and visual content in the screenshot. We build upon the insight --\nweb developers tend to arrange task-related elements nearby on webpages to\nenhance user experiences -- and propose to contextualize each element with its\nneighbor elements, using both textual and visual features. The resulting\nrepresentations of HTML elements are more informative for the agent to take\naction. We validate our method on the recently released Mind2Web dataset, which\nfeatures diverse navigation domains and tasks on real-world websites. Our\nmethod consistently outperforms the baseline in all the scenarios, including\ncross-task, cross-website, and cross-domain ones.", + "Noisy correspondence that refers to mismatches in cross-modal data pairs, is\nprevalent on human-annotated or web-crawled datasets. Prior approaches to\nleverage such data mainly consider the application of uni-modal noisy label\nlearning without amending the impact on both cross-modal and intra-modal\ngeometrical structures in multimodal learning. Actually, we find that both\nstructures are effective to discriminate noisy correspondence through\nstructural differences when being well-established. Inspired by this\nobservation, we introduce a Geometrical Structure Consistency (GSC) method to\ninfer the true correspondence. Specifically, GSC ensures the preservation of\ngeometrical structures within and between modalities, allowing for the accurate\ndiscrimination of noisy samples based on structural differences. Utilizing\nthese inferred true correspondence labels, GSC refines the learning of\ngeometrical structures by filtering out the noisy samples. Experiments across\nfour cross-modal datasets confirm that GSC effectively identifies noisy samples\nand significantly outperforms the current leading methods.", + "This paper addresses the challenge of learning a local visual pattern of an\nobject from one image, and generating images depicting objects with that\npattern. Learning a localized concept and placing it on an object in a target\nimage is a nontrivial task, as the objects may have different orientations and\nshapes. Our approach builds upon recent advancements in visual concept\nlearning. It involves acquiring a visual concept (e.g., an ornament) from a\nsource image and subsequently applying it to an object (e.g., a chair) in a\ntarget image. 
Our key idea is to perform in-context concept learning, acquiring\nthe local visual concept within the broader context of the objects they belong\nto. To localize the concept learning, we employ soft masks that contain both\nthe concept within the mask and the surrounding image area. We demonstrate our\napproach through object generation within an image, showcasing plausible\nembedding of in-context learned concepts. We also introduce methods for\ndirecting acquired concepts to specific locations within target images,\nemploying cross-attention mechanisms, and establishing correspondences between\nsource and target objects. The effectiveness of our method is demonstrated\nthrough quantitative and qualitative experiments, along with comparisons\nagainst baseline techniques.", + "We present an approach to pose object recognition as next token prediction.\nThe idea is to apply a language decoder that auto-regressively predicts the\ntext tokens from image embeddings to form labels. To ground this prediction\nprocess in auto-regression, we customize a non-causal attention mask for the\ndecoder, incorporating two key features: modeling tokens from different labels\nto be independent, and treating image tokens as a prefix. This masking\nmechanism inspires an efficient method - one-shot sampling - to simultaneously\nsample tokens of multiple labels in parallel and rank generated labels by their\nprobabilities during inference. To further enhance the efficiency, we propose a\nsimple strategy to construct a compact decoder by simply discarding the\nintermediate blocks of a pretrained language model. This approach yields a\ndecoder that matches the full model's performance while being notably more\nefficient. The code is available at https://github.com/kaiyuyue/nxtp", + "Determining the relative pose of an object between two images is pivotal to\nthe success of generalizable object pose estimation. Existing approaches\ntypically approximate the continuous pose representation with a large number of\ndiscrete pose hypotheses, which incurs a computationally expensive process of\nscoring each hypothesis at test time. By contrast, we present a Deep Voxel\nMatching Network (DVMNet) that eliminates the need for pose hypotheses and\ncomputes the relative object pose in a single pass. To this end, we map the two\ninput RGB images, reference and query, to their respective voxelized 3D\nrepresentations. We then pass the resulting voxels through a pose estimation\nmodule, where the voxels are aligned and the pose is computed in an end-to-end\nfashion by solving a least-squares problem. To enhance robustness, we introduce\na weighted closest voxel algorithm capable of mitigating the impact of noisy\nvoxels. We conduct extensive experiments on the CO3D, LINEMOD, and Objaverse\ndatasets, demonstrating that our method delivers more accurate relative pose\nestimates for novel objects at a lower computational cost compared to\nstate-of-the-art methods.", + "We conduct extensive experiments on the CO3D, LINEMOD, and Objaverse\ndatasets, demonstrating that our method delivers more accurate relative pose\nestimates for novel objects at a lower computational cost compared to\nstate-of-the-art methods. 
Our code is released at:\nhttps://github.com/sailor-z/DVMNet/.", + "Self-supervised learning (SSL) has been successful in building patch\nembeddings of small histology images (e.g., 224x224 pixels), but scaling these\nmodels to learn slide embeddings from the entirety of giga-pixel whole-slide\nimages (WSIs) remains challenging. Here, we leverage complementary information\nfrom gene expression profiles to guide slide representation learning using\nmultimodal pre-training. Expression profiles constitute highly detailed\nmolecular descriptions of a tissue that we hypothesize offer a strong\ntask-agnostic training signal for learning slide embeddings. Our slide and\nexpression (S+E) pre-training strategy, called Tangle, employs\nmodality-specific encoders, the outputs of which are aligned via contrastive\nlearning. Tangle was pre-trained on samples from three different organs: liver\n(n=6,597 S+E pairs), breast (n=1,020), and lung (n=1,012) from two different\nspecies (Homo sapiens and Rattus norvegicus).", + "Tangle was pre-trained on samples from three different organs: liver\n(n=6,597 S+E pairs), breast (n=1,020), and lung (n=1,012) from two different\nspecies (Homo sapiens and Rattus norvegicus). Across three independent test\ndatasets consisting of 1,265 breast WSIs, 1,946 lung WSIs, and 4,584 liver\nWSIs, Tangle shows significantly better few-shot performance compared to\nsupervised and SSL baselines. When assessed using prototype-based\nclassification and slide retrieval, Tangle also shows a substantial performance\nimprovement over all baselines. Code available at\nhttps://github.com/mahmoodlab/TANGLE.", + "Diffusion models have achieved remarkable results in generating high-quality,\ndiverse, and creative images. However, when it comes to text-based image\ngeneration, they often fail to capture the intended meaning presented in the\ntext. For instance, a specified object may not be generated, an unnecessary\nobject may be generated, and an adjective may alter objects it was not intended\nto modify. Moreover, we found that relationships indicating possession between\nobjects are often overlooked. While users' intentions in text are diverse,\nexisting methods tend to specialize in only some aspects of these. In this\npaper, we propose Predicated Diffusion, a unified framework to express users'\nintentions. We consider that the root of the above issues lies in the text\nencoder, which often focuses only on individual words and neglects the logical\nrelationships between them. The proposed method does not solely rely on the\ntext encoder, but instead, represents the intended meaning in the text as\npropositions using predicate logic and treats the pixels in the attention maps\nas the fuzzy predicates. This enables us to obtain a differentiable loss\nfunction that makes the image fulfill the proposition by minimizing it.", + "This enables us to obtain a differentiable loss\nfunction that makes the image fulfill the proposition by minimizing it. When\ncompared to several existing methods, we demonstrated that Predicated Diffusion\ncan generate images that are more faithful to various text prompts, as verified\nby human evaluators and pretrained image-text models.", + "We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward\napproach to solving sparse view synthesis under multiple different baseline\nsettings (small and large baselines, and different number of input views). 
To\nrender a target novel view, we discretize the 3D space into planes parallel to\nthe target image plane, and accordingly construct a target view frustum volume.\nSuch a target volume representation is spatially aligned with the target view,\nwhich effectively aggregates relevant information from the input views for\nhigh-quality rendering. It also facilitates subsequent radiance field\nregression with a convolutional network thanks to its axis-aligned nature. The\n3D context modeled by the convolutional network enables our method to synthesis\nsharper scene structures than prior works. Our MuRF achieves state-of-the-art\nperformance across multiple different baseline settings and diverse scenarios\nranging from simple objects (DTU) to complex indoor and outdoor scenes\n(RealEstate10K and LLFF). We also show promising zero-shot generalization\nabilities on the Mip-NeRF 360 dataset, demonstrating the general applicability\nof MuRF.", + "Utilizing large language models (LLMs) to compose off-the-shelf visual tools\nrepresents a promising avenue of research for developing robust visual\nassistants capable of addressing diverse visual tasks. However, these methods\noften overlook the potential for continual learning, typically by freezing the\nutilized tools, thus limiting their adaptation to environments requiring new\nknowledge. To tackle this challenge, we propose CLOVA, a Closed-Loop Visual\nAssistant, which operates within a framework encompassing inference,\nreflection, and learning phases. During the inference phase, LLMs generate\nprograms and execute corresponding tools to complete assigned tasks. In the\nreflection phase, a multimodal global-local reflection scheme analyzes human\nfeedback to determine which tools require updating. Lastly, the learning phase\nemploys three flexible approaches to automatically gather training data and\nintroduces a novel prompt tuning scheme to update the tools, allowing CLOVA to\nefficiently acquire new knowledge. Experimental findings demonstrate that CLOVA\nsurpasses existing tool-usage methods by 5% in visual question answering and\nmultiple-image reasoning, by 10% in knowledge tagging, and by 20% in image\nediting. These results underscore the significance of the continual learning\ncapability in general visual assistants.", + "Dense depth maps have been used as a key element of visual perception tasks.\nThere have been tremendous efforts to enhance the depth quality, ranging from\noptimization-based to learning-based methods. Despite the remarkable progress\nfor a long time, their applicability in the real world is limited due to\nsystematic measurement biases such as density, sensing pattern, and scan range.\nIt is well-known that the biases make it difficult for these methods to achieve\ntheir generalization. We observe that learning a joint representation for input\nmodalities (e.g., images and depth), which most recent methods adopt, is\nsensitive to the biases. In this work, we disentangle those modalities to\nmitigate the biases with prompt engineering. For this, we design a novel depth\nprompt module to allow the desirable feature representation according to new\ndepth distributions from either sensor types or scene configurations. Our depth\nprompt can be embedded into foundation models for monocular depth estimation.\nThrough this embedding process, our method helps the pretrained model to be\nfree from restraint of depth scan range and to provide absolute scale depth\nmaps. 
We demonstrate the effectiveness of our method through extensive\nevaluations.", + "Our depth\nprompt can be embedded into foundation models for monocular depth estimation.\nThrough this embedding process, our method helps the pretrained model to be\nfree from restraint of depth scan range and to provide absolute scale depth\nmaps. We demonstrate the effectiveness of our method through extensive\nevaluations. Source code is publicly available at\nhttps://github.com/JinhwiPark/DepthPrompting .", + "We introduce a novel 3D generative method, Generative 3D Reconstruction\n(G3DR) in ImageNet, capable of generating diverse and high-quality 3D objects\nfrom single images, addressing the limitations of existing methods. At the\nheart of our framework is a novel depth regularization technique that enables\nthe generation of scenes with high-geometric fidelity. G3DR also leverages a\npretrained language-vision model, such as CLIP, to enable reconstruction in\nnovel views and improve the visual realism of generations. Additionally, G3DR\ndesigns a simple but effective sampling procedure to further improve the\nquality of generations. G3DR offers diverse and efficient 3D asset generation\nbased on class or text conditioning. Despite its simplicity, G3DR is able to\nbeat state-of-the-art methods, improving over them by up to 22% in perceptual\nmetrics and 90% in geometry scores, while needing only half of the training\ntime. Code is available at https://github.com/preddy5/G3DR", + "Aiming to enhance the utilization of metric space by the parametric softmax\nclassifier, recent studies suggest replacing it with a non-parametric\nalternative. Although a non-parametric classifier may provide better metric\nspace utilization, it introduces the challenge of capturing inter-class\nrelationships. A shared characteristic among prior non-parametric classifiers\nis the static assignment of labels to prototypes during the training, i.e., each\nprototype consistently represents a class throughout the training course.\nOrthogonal to previous works, we present a simple yet effective method to\noptimize the category assigned to each prototype (label-to-prototype\nassignment) during the training. To this aim, we formalize the problem as a\ntwo-step optimization objective over network parameters and label-to-prototype\nassignment mapping. We solve this optimization using a sequential combination\nof gradient descent and bipartite matching. We demonstrate the benefits of the\nproposed approach by conducting experiments on balanced and long-tail\nclassification problems using different backbone network architectures.", + "We solve this optimization using a sequential combination\nof gradient descent and bipartite matching. We demonstrate the benefits of the\nproposed approach by conducting experiments on balanced and long-tail\nclassification problems using different backbone network architectures. In\nparticular, our method outperforms its competitors by 1.22\\% in accuracy on\nCIFAR-100 and by 2.15\\% on ImageNet-200, using a metric space dimension half\nthe size of that of its competitors. Code:\nhttps://github.com/msed-Ebrahimi/DL2PA_CVPR24", + "Large language models (LLMs) have shown remarkable text understanding\ncapabilities, which have been extended as Video LLMs to handle video data for\ncomprehending visual details. However, existing Video LLMs can only provide a\ncoarse description of the entire video, failing to capture the precise start\nand end time boundary of specific events.
In this paper, we solve this issue\nvia proposing VTimeLLM, a novel Video LLM designed for fine-grained video\nmoment understanding and reasoning with respect to time boundary. Specifically,\nour VTimeLLM adopts a boundary-aware three-stage training strategy, which\nrespectively utilizes image-text pairs for feature alignment, multiple-event\nvideos to increase temporal-boundary awareness, and high-quality\nvideo-instruction tuning to further improve temporal understanding ability as\nwell as align with human intents. Extensive experiments demonstrate that in\nfine-grained time-related comprehension tasks for videos such as Temporal Video\nGrounding and Dense Video Captioning, VTimeLLM significantly outperforms\nexisting Video LLMs.", + "Extensive experiments demonstrate that in\nfine-grained time-related comprehension tasks for videos such as Temporal Video\nGrounding and Dense Video Captioning, VTimeLLM significantly outperforms\nexisting Video LLMs. Besides, benefits from the fine-grained temporal\nunderstanding of the videos further enable VTimeLLM to beat existing Video LLMs\nin video dialogue benchmark, showing its superior cross-modal understanding and\nreasoning abilities.", + "The modern surge in camera usage alongside widespread computer vision\ntechnology applications poses significant privacy and security concerns.\nCurrent artificial intelligence (AI) technologies aid in recognizing relevant\nevents and assisting in daily tasks in homes, offices, hospitals, etc. The need\nto access or process personal information for these purposes raises privacy\nconcerns. While software-level solutions like face de-identification provide a\ngood privacy/utility trade-off, they present vulnerabilities to sniffing\nattacks. In this paper, we propose a hardware-level face de-identification\nmethod to solve this vulnerability. Specifically, our approach first learns an\noptical encoder along with a regression model to obtain a face heatmap while\nhiding the face identity from the source image. We also propose an\nanonymization framework that generates a new face using the privacy-preserving\nimage, face heatmap, and a reference face image from a public dataset as input.\nWe validate our approach with extensive simulations and hardware experiments.", + "Predicting the future motion of surrounding agents is essential for\nautonomous vehicles (AVs) to operate safely in dynamic, human-robot-mixed\nenvironments. Context information, such as road maps and surrounding agents'\nstates, provides crucial geometric and semantic information for motion behavior\nprediction. To this end, recent works explore two-stage prediction frameworks\nwhere coarse trajectories are first proposed, and then used to select critical\ncontext information for trajectory refinement. However, they either incur a\nlarge amount of computation or bring limited improvement, if not both. In this\npaper, we introduce a novel scenario-adaptive refinement strategy, named\nSmartRefine, to refine prediction with minimal additional computation.\nSpecifically, SmartRefine can comprehensively adapt refinement configurations\nbased on each scenario's properties, and smartly chooses the number of\nrefinement iterations by introducing a quality score to measure the prediction\nquality and remaining refinement potential of each scenario. 
SmartRefine is\ndesigned as a generic and flexible approach that can be seamlessly integrated\ninto most state-of-the-art motion prediction models.", + "SmartRefine is\ndesigned as a generic and flexible approach that can be seamlessly integrated\ninto most state-of-the-art motion prediction models. Experiments on Argoverse\n(1 & 2) show that our method consistently improves the prediction accuracy of\nmultiple state-of-the-art prediction models. Specifically, by adding\nSmartRefine to QCNet, we outperform all published ensemble-free works on the\nArgoverse 2 leaderboard (single agent track) at submission. Comprehensive\nstudies are also conducted to ablate design choices and explore the mechanism\nbehind multi-iteration refinement. Codes are available at\nhttps://github.com/opendilab/SmartRefine/", + "With the rapid development of Multi-modal Large Language Models (MLLMs), a\nnumber of diagnostic benchmarks have recently emerged to evaluate the\ncomprehension capabilities of these models. However, most benchmarks\npredominantly assess spatial understanding in the static image tasks, while\noverlooking temporal understanding in the dynamic video tasks. To alleviate\nthis issue, we introduce a comprehensive Multi-modal Video understanding\nBenchmark, namely MVBench, which covers 20 challenging video tasks that cannot\nbe effectively solved with a single frame. Specifically, we first introduce a\nnovel static-to-dynamic method to define these temporal-related tasks. By\ntransforming various static tasks into dynamic ones, we enable the systematic\ngeneration of video tasks that require a broad spectrum of temporal skills,\nranging from perception to cognition. Then, guided by the task definition, we\nautomatically convert public video annotations into multiple-choice QA to\nevaluate each task. On one hand, such a distinct paradigm allows us to build\nMVBench efficiently, without much manual intervention. On the other hand, it\nguarantees evaluation fairness with ground-truth video annotations, avoiding\nthe biased scoring of LLMs.", + "On one hand, such a distinct paradigm allows us to build\nMVBench efficiently, without much manual intervention. On the other hand, it\nguarantees evaluation fairness with ground-truth video annotations, avoiding\nthe biased scoring of LLMs. Moreover, we further develop a robust video MLLM\nbaseline, i.e., VideoChat2, by progressive multi-modal training with diverse\ninstruction-tuning data. The extensive results on our MVBench reveal that, the\nexisting MLLMs are far from satisfactory in temporal understanding, while our\nVideoChat2 largely surpasses these leading models by over 15% on MVBench. All\nmodels and data are available at https://github.com/OpenGVLab/Ask-Anything.", + "The performance of Federated Learning (FL) hinges on the effectiveness of\nutilizing knowledge from distributed datasets. Traditional FL methods adopt an\naggregate-then-adapt framework, where clients update local models based on a\nglobal model aggregated by the server from the previous training round. This\nprocess can cause client drift, especially with significant cross-client data\nheterogeneity, impacting model performance and convergence of the FL algorithm.\nTo address these challenges, we introduce FedAF, a novel aggregation-free FL\nalgorithm. In this framework, clients collaboratively learn condensed data by\nleveraging peer knowledge, the server subsequently trains the global model\nusing the condensed data and soft labels received from the clients. 
FedAF\ninherently avoids the issue of client drift, enhances the quality of condensed\ndata amid notable data heterogeneity, and improves the global model\nperformance. Extensive numerical studies on several popular benchmark datasets\nshow FedAF surpasses various state-of-the-art FL algorithms in handling\nlabel-skew and feature-skew data heterogeneity, leading to superior global\nmodel accuracy and faster convergence.", + "The human ability to easily solve multimodal tasks in context (i.e., with\nonly a few demonstrations or simple instructions), is what current multimodal\nsystems have largely struggled to imitate. In this work, we demonstrate that\nthe task-agnostic in-context learning capabilities of large multimodal models\ncan be significantly enhanced by effective scaling-up. We introduce Emu2, a\ngenerative multimodal model with 37 billion parameters, trained on large-scale\nmultimodal sequences with a unified autoregressive objective. Emu2 exhibits\nstrong multimodal in-context learning abilities, even emerging to solve tasks\nthat require on-the-fly reasoning, such as visual prompting and object-grounded\ngeneration. The model sets a new record on multiple multimodal understanding\ntasks in few-shot settings. When instruction-tuned to follow specific\ninstructions, Emu2 further achieves new state-of-the-art on challenging tasks\nsuch as question answering benchmarks for large multimodal models and\nopen-ended subject-driven generation. These achievements demonstrate that Emu2\ncan serve as a base model and general-purpose interface for a wide range of\nmultimodal tasks. Code and models are publicly available to facilitate future\nresearch.", + "Remarkable strides have been made in reconstructing static scenes or human\nbodies from monocular videos. Yet, the two problems have largely been\napproached independently, without much synergy. Most visual SLAM methods can\nonly reconstruct camera trajectories and scene structures up to scale, while\nmost HMR methods reconstruct human meshes in metric scale but fall short in\nreasoning with cameras and scenes. This work introduces Synergistic Camera and\nHuman Reconstruction (SynCHMR) to marry the best of both worlds. Specifically,\nwe design Human-aware Metric SLAM to reconstruct metric-scale camera poses and\nscene point clouds using camera-frame HMR as a strong prior, addressing depth,\nscale, and dynamic ambiguities. Conditioning on the dense scene recovered, we\nfurther learn a Scene-aware SMPL Denoiser to enhance world-frame HMR by\nincorporating spatio-temporal coherency and dynamic scene constraints.\nTogether, they lead to consistent reconstructions of camera trajectories, human\nmeshes, and dense scene point clouds in a common world frame. Project page:\nhttps://paulchhuang.github.io/synchmr", + "Audio-visual saliency prediction can draw support from diverse modality\ncomplements, but further performance enhancement is still challenged by\ncustomized architectures as well as task-specific loss functions. In recent\nstudies, denoising diffusion models have shown more promising in unifying task\nframeworks owing to their inherent ability of generalization. Following this\nmotivation, a novel Diffusion architecture for generalized audio-visual\nSaliency prediction (DiffSal) is proposed in this work, which formulates the\nprediction problem as a conditional generative task of the saliency map by\nutilizing input audio and video as the conditions. 
Based on the spatio-temporal\naudio-visual features, an extra network Saliency-UNet is designed to perform\nmulti-modal attention modulation for progressive refinement of the ground-truth\nsaliency map from the noisy map. Extensive experiments demonstrate that the\nproposed DiffSal can achieve excellent performance across six challenging\naudio-visual benchmarks, with an average relative improvement of 6.3\\% over the\nprevious state-of-the-art results by six metrics.", + "This research focuses on the issue of single-image reflection removal (SIRR)\nin real-world conditions, examining it from two angles: the collection pipeline\nof real reflection pairs and the perception of real reflection locations. We\ndevise an advanced reflection collection pipeline that is highly adaptable to a\nwide range of real-world reflection scenarios and incurs reduced costs in\ncollecting large-scale aligned reflection pairs. In the process, we develop a\nlarge-scale, high-quality reflection dataset named Reflection Removal in the\nWild (RRW). RRW contains over 14,950 high-resolution real-world reflection\npairs, a dataset forty-five times larger than its predecessors. Regarding\nperception of reflection locations, we identify that numerous virtual\nreflection objects visible in reflection images are not present in the\ncorresponding ground-truth images. This observation, drawn from the aligned\npairs, leads us to conceive the Maximum Reflection Filter (MaxRF). The MaxRF\ncould accurately and explicitly characterize reflection locations from pairs of\nimages. Building upon this, we design a reflection location-aware cascaded\nframework, specifically tailored for SIRR.", + "This observation, drawn from the aligned\npairs, leads us to conceive the Maximum Reflection Filter (MaxRF). The MaxRF\ncould accurately and explicitly characterize reflection locations from pairs of\nimages. Building upon this, we design a reflection location-aware cascaded\nframework, specifically tailored for SIRR. Powered by these innovative\ntechniques, our solution achieves superior performance than current leading\nmethods across multiple real-world benchmarks. Codes and datasets will be\npublicly available.", + "3D Morphable Models (3DMMs) provide promising 3D face reconstructions in\nvarious applications. However, existing methods struggle to reconstruct faces\nwith extreme expressions due to deficiencies in supervisory signals, such as\nsparse or inaccurate landmarks. Segmentation information contains effective\ngeometric contexts for face reconstruction. Certain attempts intuitively depend\non differentiable renderers to compare the rendered silhouettes of\nreconstruction with segmentation, which is prone to issues like local optima\nand gradient instability. In this paper, we fully utilize the facial part\nsegmentation geometry by introducing Part Re-projection Distance Loss (PRDL).\nSpecifically, PRDL transforms facial part segmentation into 2D points and\nre-projects the reconstruction onto the image plane. Subsequently, by\nintroducing grid anchors and computing different statistical distances from\nthese anchors to the point sets, PRDL establishes geometry descriptors to\noptimize the distribution of the point sets for face reconstruction. PRDL\nexhibits a clear gradient compared to the renderer-based methods and presents\nstate-of-the-art reconstruction performance in extensive quantitative and\nqualitative experiments. 
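To make the anchor-based descriptor idea more tangible, here is a minimal sketch that places grid anchors on the image plane and compares per-anchor distance statistics between the segmentation point set and the re-projected reconstruction points; the particular statistics (minimum and mean distance) and the L2 comparison are assumptions for illustration, not the exact PRDL formulation.

```python
# Illustrative sketch of a grid-anchor geometry descriptor in the spirit of PRDL:
# compare a point set from part segmentation with re-projected reconstruction
# points via per-anchor distance statistics.
import torch

def grid_anchors(h, w, step):
    ys, xs = torch.meshgrid(torch.arange(0, h, step, dtype=torch.float32),
                            torch.arange(0, w, step, dtype=torch.float32),
                            indexing="ij")
    return torch.stack([xs.flatten(), ys.flatten()], dim=1)   # (A, 2) anchor coords

def descriptor(anchors, points):
    d = torch.cdist(anchors, points)                          # (A, N) anchor-to-point distances
    return torch.cat([d.min(dim=1).values, d.mean(dim=1)])    # per-anchor statistics

def prdl_like_loss(seg_points, reproj_points, h=224, w=224, step=16):
    anchors = grid_anchors(h, w, step)
    return torch.nn.functional.mse_loss(descriptor(anchors, seg_points),
                                        descriptor(anchors, reproj_points))

seg = torch.rand(500, 2) * 224          # 2D points sampled from part segmentation
reproj = torch.rand(450, 2) * 224       # re-projected reconstruction vertices
print(prdl_like_loss(seg, reproj))
```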
Our project is available at
https://github.com/wang-zidu/3DDFA-V3 .", + "Weakly supervised video anomaly detection (WSVAD) is a challenging task.
Generating fine-grained pseudo-labels based on weak labels and then
self-training a classifier is currently a promising solution. However, the
existing methods use only the RGB visual modality and neglect the category
text information, which limits the generation of more accurate pseudo-labels
and affects the performance of self-training. Inspired
by the manual labeling process based on the event description, in this paper,
we propose a novel pseudo-label generation and self-training framework based on
Text Prompt with Normality Guidance (TPWNG) for WSVAD. Our idea is to transfer
the rich language-visual knowledge of the contrastive language-image
pre-training (CLIP) model for aligning the video event description text and
corresponding video frames to generate pseudo-labels. Specifically, we first
fine-tune CLIP for domain adaptation by designing two ranking losses and a
distributional inconsistency loss. Further, we propose a learnable text prompt
mechanism with the assistance of a normality visual prompt to further improve the
matching accuracy of video event description text and video frames.", + "Further, we propose a learnable text prompt
mechanism with the assistance of a normality visual prompt to further improve the
matching accuracy of video event description text and video frames. Then, we
design a pseudo-label generation module based on the normality guidance to
infer reliable frame-level pseudo-labels. Finally, we introduce a temporal
context self-adaptive learning module to learn the temporal dependencies of
different video events more flexibly and accurately. Extensive experiments show
that our method achieves state-of-the-art performance on two benchmark
datasets, UCF-Crime and XD-Violence.", + "Vision-based perception for autonomous driving requires an explicit modeling
of a 3D space, where 2D latent representations are mapped and subsequent 3D
operators are applied. However, operating on dense latent spaces introduces a
cubic time and space complexity, which limits scalability in terms of
perception range or spatial resolution. Existing approaches compress the dense
representation using projections like Bird's Eye View (BEV) or Tri-Perspective
View (TPV). Although efficient, these projections result in information loss,
especially for tasks like semantic occupancy prediction. To address this, we
propose SparseOcc, an efficient occupancy network inspired by sparse point
cloud processing. It utilizes a lossless sparse latent representation with
three key innovations. Firstly, a 3D sparse diffuser performs latent completion
using spatially decomposed 3D sparse convolutional kernels. Secondly, a feature
pyramid and sparse interpolation enhance scales with information from others.
Finally, the transformer head is redesigned as a sparse variant. SparseOcc
achieves a remarkable 74.9% reduction on FLOPs over the dense baseline.", + "Secondly, a feature
pyramid and sparse interpolation enhance scales with information from others.
Finally, the transformer head is redesigned as a sparse variant.
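A toy comparison can make the appeal of a lossless sparse latent representation concrete: store only occupied voxel coordinates and their features instead of a dense grid. The grid size, channel count, and occupancy rate below are arbitrary placeholders, not SparseOcc's configuration.

```python
# Toy illustration of why a sparse occupancy latent scales better than a dense one.
import torch

X, Y, Z, C = 200, 200, 16, 32                    # perception grid and feature channels
occupancy_rate = 0.05                            # most voxels in driving scenes are empty

dense = torch.zeros(X, Y, Z, C)                  # dense latent: memory is O(X*Y*Z*C)
num_occupied = int(X * Y * Z * occupancy_rate)
coords = torch.stack([torch.randint(0, X, (num_occupied,)),
                      torch.randint(0, Y, (num_occupied,)),
                      torch.randint(0, Z, (num_occupied,))], dim=1)  # (N, 3) voxel indices
feats = torch.randn(num_occupied, C)             # (N, C) features of occupied voxels only

dense_mb = dense.numel() * 4 / 2**20
sparse_mb = (coords.numel() * 8 + feats.numel() * 4) / 2**20
print(f"dense latent:  {dense_mb:.1f} MiB")
print(f"sparse latent: {sparse_mb:.1f} MiB (coordinates + features, no empty voxels)")
```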
SparseOcc\nachieves a remarkable 74.9% reduction on FLOPs over the dense baseline.\nInterestingly, it also improves accuracy, from 12.8% to 14.1% mIOU, which in\npart can be attributed to the sparse representation's ability to avoid\nhallucinations on empty voxels.", + "While super-resolution (SR) methods based on diffusion models exhibit\npromising results, their practical application is hindered by the substantial\nnumber of required inference steps. Recent methods utilize degraded images in\nthe initial state, thereby shortening the Markov chain. Nevertheless, these\nsolutions either rely on a precise formulation of the degradation process or\nstill necessitate a relatively lengthy generation path (e.g., 15 iterations).\nTo enhance inference speed, we propose a simple yet effective method for\nachieving single-step SR generation, named SinSR. Specifically, we first derive\na deterministic sampling process from the most recent state-of-the-art (SOTA)\nmethod for accelerating diffusion-based SR. This allows the mapping between the\ninput random noise and the generated high-resolution image to be obtained in a\nreduced and acceptable number of inference steps during training. We show that\nthis deterministic mapping can be distilled into a student model that performs\nSR within only one inference step.", + "This allows the mapping between the\ninput random noise and the generated high-resolution image to be obtained in a\nreduced and acceptable number of inference steps during training. We show that\nthis deterministic mapping can be distilled into a student model that performs\nSR within only one inference step. Additionally, we propose a novel\nconsistency-preserving loss to simultaneously leverage the ground-truth image\nduring the distillation process, ensuring that the performance of the student\nmodel is not solely bound by the feature manifold of the teacher model,\nresulting in further performance improvement. Extensive experiments conducted\non synthetic and real-world datasets demonstrate that the proposed method can\nachieve comparable or even superior performance compared to both previous SOTA\nmethods and the teacher model, in just one sampling step, resulting in a\nremarkable up to x10 speedup for inference. Our code will be released at\nhttps://github.com/wyf0912/SinSR", + "Video Motion Magnification (VMM) aims to reveal subtle and imperceptible\nmotion information of objects in the macroscopic world. Prior methods directly\nmodel the motion field from the Eulerian perspective by Representation Learning\nthat separates shape and texture or Multi-domain Learning from phase\nfluctuations. Inspired by the frequency spectrum, we observe that the\nlow-frequency components with stable energy always possess spatial structure\nand less noise, making them suitable for modeling the subtle motion field. To\nthis end, we present FD4MM, a new paradigm of Frequency Decoupling for Motion\nMagnification with a Multi-level Isomorphic Architecture to capture multi-level\nhigh-frequency details and a stable low-frequency structure (motion field) in\nvideo space. Since high-frequency details and subtle motions are susceptible to\ninformation degradation due to their inherent subtlety and unavoidable external\ninterference from noise, we carefully design Sparse High/Low-pass Filters to\nenhance the integrity of details and motion structures, and a Sparse Frequency\nMixer to promote seamless recoupling. 
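The frequency-decoupling idea can be illustrated with a plain FFT low-pass split of a single frame into a stable low-frequency structure and a high-frequency detail residual; the hard radial cutoff here is only a stand-in for the learned sparse high/low-pass filters described above.

```python
# Minimal sketch of frequency decoupling on one frame: separate a low-frequency
# structure from high-frequency detail with an FFT low-pass mask.
import torch

def decouple_frequencies(frame, cutoff=0.1):
    """frame: (H, W) grayscale tensor; cutoff: low-pass radius as a fraction of min(H, W)."""
    h, w = frame.shape
    spec = torch.fft.fftshift(torch.fft.fft2(frame))
    yy, xx = torch.meshgrid(torch.arange(h, dtype=torch.float32) - h / 2,
                            torch.arange(w, dtype=torch.float32) - w / 2,
                            indexing="ij")
    lowpass = ((xx**2 + yy**2).sqrt() <= cutoff * min(h, w)).float()
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * lowpass)).real
    return low, frame - low          # stable structure, fine-detail residual

frame = torch.rand(128, 128)
low, high = decouple_frequencies(frame)
print(low.shape, high.shape)
```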
Besides, we innovatively design a\ncontrastive regularization for this task to strengthen the model's ability to\ndiscriminate irrelevant features, reducing undesired motion magnification.", + "Besides, we innovatively design a\ncontrastive regularization for this task to strengthen the model's ability to\ndiscriminate irrelevant features, reducing undesired motion magnification.\nExtensive experiments on both Real-world and Synthetic Datasets show that our\nFD4MM outperforms SOTA methods. Meanwhile, FD4MM reduces FLOPs by 1.63$\\times$\nand boosts inference speed by 1.68$\\times$ than the latest method. Our code is\navailable at https://github.com/Jiafei127/FD4MM.", + "In typical medical image classification problems, labeled data is scarce\nwhile unlabeled data is more available. Semi-supervised learning and\nself-supervised learning are two different research directions that can improve\naccuracy by learning from extra unlabeled data. Recent methods from both\ndirections have reported significant gains on traditional benchmarks. Yet past\nbenchmarks do not focus on medical tasks and rarely compare self- and semi-\nmethods together on an equal footing. Furthermore, past benchmarks often handle\nhyperparameter tuning suboptimally. First, they may not tune hyperparameters at\nall, leading to underfitting. Second, when tuning does occur, it often\nunrealistically uses a labeled validation set that is much larger than the\ntraining set. Therefore currently published rankings might not always\ncorroborate with their practical utility This study contributes a systematic\nevaluation of self- and semi- methods with a unified experimental protocol\nintended to guide a practitioner with scarce overall labeled data and a limited\ncompute budget. We answer two key questions: Can hyperparameter tuning be\neffective with realistic-sized validation sets?", + "We answer two key questions: Can hyperparameter tuning be\neffective with realistic-sized validation sets? If so, when all methods are\ntuned well, which self- or semi-supervised methods achieve the best accuracy?\nOur study compares 13 representative semi- and self-supervised methods to\nstrong labeled-set-only baselines on 4 medical datasets. From 20000+ GPU hours\nof computation, we provide valuable best practices to resource-constrained\npractitioners: hyperparameter tuning is effective, and the semi-supervised\nmethod known as MixMatch delivers the most reliable gains across 4 datasets.", + "Open-world detection poses significant challenges, as it requires the\ndetection of any object using either object class labels or free-form texts.\nExisting related works often use large-scale manual annotated caption datasets\nfor training, which are extremely expensive to collect. Instead, we propose to\ntransfer knowledge from vision-language models (VLMs) to enrich the\nopen-vocabulary descriptions automatically. Specifically, we bootstrap dense\nsynthetic captions using pre-trained VLMs to provide rich descriptions on\ndifferent regions in images, and incorporate these captions to train a novel\ndetector that generalizes to novel concepts. To mitigate the noise caused by\nhallucination in synthetic captions, we also propose a novel hyperbolic\nvision-language learning approach to impose a hierarchy between visual and\ncaption embeddings. We call our detector ``HyperLearner''. 
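As background for the hyperbolic vision-language idea, the sketch below computes a Poincare-ball distance between region and caption embeddings and a simple margin loss over matched versus mismatched pairs; the projection, clipping constant, and loss are generic hyperbolic-embedding ingredients, not HyperLearner's exact objective.

```python
# Generic sketch of a hyperbolic (Poincare-ball) distance between visual and
# caption embeddings, the kind of geometry used to impose a hierarchy between them.
import torch

def project_to_ball(x, eps=1e-5):
    # Keep embeddings strictly inside the unit ball so distances stay finite.
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    max_norm = 1.0 - eps
    return torch.where(norm > max_norm, x / norm * max_norm, x)

def poincare_distance(u, v, eps=1e-5):
    u, v = project_to_ball(u), project_to_ball(v)
    sq = ((u - v) ** 2).sum(-1)
    den = (1 - (u**2).sum(-1)) * (1 - (v**2).sum(-1))
    return torch.acosh(1 + 2 * sq / den.clamp_min(eps))

region_emb = torch.randn(8, 64) * 0.1      # visual region embeddings (toy)
caption_emb = torch.randn(8, 64) * 0.1     # matching caption embeddings (toy)
pos = poincare_distance(region_emb, caption_emb)
neg = poincare_distance(region_emb, caption_emb.roll(1, dims=0))  # mismatched pairs
loss = torch.relu(0.5 + pos - neg).mean()  # margin loss pulling matched pairs closer
print(loss)
```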
We conduct extensive\nexperiments on a wide variety of open-world detection benchmarks (COCO, LVIS,\nObject Detection in the Wild, RefCOCO) and our results show that our model\nconsistently outperforms existing state-of-the-art methods, such as GLIP,\nGLIPv2 and Grounding DINO, when using the same backbone.", + "In recent advancements in high-fidelity image generation, Denoising Diffusion\nProbabilistic Models (DDPMs) have emerged as a key player. However, their\napplication at high resolutions presents significant computational challenges.\nCurrent methods, such as patchifying, expedite processes in UNet and\nTransformer architectures but at the expense of representational capacity.\nAddressing this, we introduce the Diffusion State Space Model (DiffuSSM), an\narchitecture that supplants attention mechanisms with a more scalable state\nspace model backbone. This approach effectively handles higher resolutions\nwithout resorting to global compression, thus preserving detailed image\nrepresentation throughout the diffusion process. Our focus on FLOP-efficient\narchitectures in diffusion training marks a significant step forward.\nComprehensive evaluations on both ImageNet and LSUN datasets at two resolutions\ndemonstrate that DiffuSSMs are on par or even outperform existing diffusion\nmodels with attention modules in FID and Inception Score metrics while\nsignificantly reducing total FLOP usage.", + "Quantifying the degree of similarity between images is a key copyright issue\nfor image-based machine learning. In legal doctrine however, determining the\ndegree of similarity between works requires subjective analysis, and\nfact-finders (judges and juries) can demonstrate considerable variability in\nthese subjective judgement calls. Images that are structurally similar can be\ndeemed dissimilar, whereas images of completely different scenes can be deemed\nsimilar enough to support a claim of copying. We seek to define and compute a\nnotion of \"conceptual similarity\" among images that captures high-level\nrelations even among images that do not share repeated elements or visually\nsimilar components. The idea is to use a base multi-modal model to generate\n\"explanations\" (captions) of visual data at increasing levels of complexity.\nThen, similarity can be measured by the length of the caption needed to\ndiscriminate between the two images: Two highly dissimilar images can be\ndiscriminated early in their description, whereas conceptually dissimilar ones\nwill need more detail to be distinguished.", + "Then, similarity can be measured by the length of the caption needed to\ndiscriminate between the two images: Two highly dissimilar images can be\ndiscriminated early in their description, whereas conceptually dissimilar ones\nwill need more detail to be distinguished. We operationalize this definition\nand show that it correlates with subjective (averaged human evaluation)\nassessment, and beats existing baselines on both image-to-image and\ntext-to-text similarity benchmarks. Beyond just providing a number, our method\nalso offers interpretability by pointing to the specific level of granularity\nof the description where the source data are differentiated.", + "Content-aware graphic layout generation aims to automatically arrange visual\nelements along with a given content, such as an e-commerce product image. In\nthis paper, we argue that the current layout generation approaches suffer from\nthe limited training data for the high-dimensional layout structure. 
We show\nthat a simple retrieval augmentation can significantly improve the generation\nquality. Our model, which is named Retrieval-Augmented Layout Transformer\n(RALF), retrieves nearest neighbor layout examples based on an input image and\nfeeds these results into an autoregressive generator. Our model can apply\nretrieval augmentation to various controllable generation tasks and yield\nhigh-quality layouts within a unified architecture. Our extensive experiments\nshow that RALF successfully generates content-aware layouts in both constrained\nand unconstrained settings and significantly outperforms the baselines.", + "Online Continual Learning (CL) solves the problem of learning the\never-emerging new classification tasks from a continuous data stream. Unlike\nits offline counterpart, in online CL, the training data can only be seen once.\nMost existing online CL research regards catastrophic forgetting (i.e., model\nstability) as almost the only challenge. In this paper, we argue that the\nmodel's capability to acquire new knowledge (i.e., model plasticity) is another\nchallenge in online CL. While replay-based strategies have been shown to be\neffective in alleviating catastrophic forgetting, there is a notable gap in\nresearch attention toward improving model plasticity. To this end, we propose\nCollaborative Continual Learning (CCL), a collaborative learning based strategy\nto improve the model's capability in acquiring new concepts. Additionally, we\nintroduce Distillation Chain (DC), a collaborative learning scheme to boost the\ntraining of the models. We adapt CCL-DC to existing representative online CL\nworks.", + "Additionally, we\nintroduce Distillation Chain (DC), a collaborative learning scheme to boost the\ntraining of the models. We adapt CCL-DC to existing representative online CL\nworks. Extensive experiments demonstrate that even if the learners are\nwell-trained with state-of-the-art online CL methods, our strategy can still\nimprove model plasticity dramatically, and thereby improve the overall\nperformance by a large margin. The source code of our work is available at\nhttps://github.com/maorong-wang/CCL-DC.", + "Recent advances in personalized image generation allow a pre-trained\ntext-to-image model to learn a new concept from a set of images. However,\nexisting personalization approaches usually require heavy test-time finetuning\nfor each concept, which is time-consuming and difficult to scale. We propose\nInstantBooth, a novel approach built upon pre-trained text-to-image models that\nenables instant text-guided image personalization without any test-time\nfinetuning. We achieve this with several major components. First, we learn the\ngeneral concept of the input images by converting them to a textual token with\na learnable image encoder. Second, to keep the fine details of the identity, we\nlearn rich visual feature representation by introducing a few adapter layers to\nthe pre-trained model. We train our components only on text-image pairs without\nusing paired images of the same concept. Compared to test-time finetuning-based\nmethods like DreamBooth and Textual-Inversion, our model can generate\ncompetitive results on unseen concepts concerning language-image alignment,\nimage fidelity, and identity preservation while being 100 times faster.", + "N:M sparsity has received increasing attention due to its remarkable\nperformance and latency trade-off compared with structured and unstructured\nsparsity. 
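To ground the N:M notation before the discussion continues, the sketch below imposes a hard 2:4 pattern by keeping the two largest-magnitude weights in every block of four; this magnitude-based top-k is deliberately simpler than the soft, importance-aware masks discussed next.

```python
# Sketch of what an N:M sparsity pattern means: within every contiguous block of
# M weights, only N may be non-zero. A hard 2:4 mask keeps the two
# largest-magnitude weights per block.
import torch

def nm_prune(weight, n=2, m=4):
    rows, cols = weight.shape
    assert cols % m == 0, "columns must be divisible by M"
    blocks = weight.abs().reshape(rows, cols // m, m)
    topk = blocks.topk(n, dim=-1).indices                      # positions kept per block
    mask = torch.zeros_like(blocks).scatter_(-1, topk, 1.0).reshape(rows, cols)
    return weight * mask, mask

w = torch.randn(4, 8)
sparse_w, mask = nm_prune(w)
print(mask.reshape(4, 2, 4))                         # every length-4 block has exactly two ones
print(f"kept fraction: {mask.mean().item():.2f}")    # equals N / M = 0.5
```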
However, existing N:M sparsity methods do not differentiate the\nrelative importance of weights among blocks and leave important weights\nunderappreciated. Besides, they directly apply N:M sparsity to the whole\nnetwork, which will cause severe information loss. Thus, they are still\nsub-optimal. In this paper, we propose an efficient and effective Multi-Axis\nQuery methodology, dubbed as MaxQ, to rectify these problems. During the\ntraining, MaxQ employs a dynamic approach to generate soft N:M masks,\nconsidering the weight importance across multiple axes. This method enhances\nthe weights with more importance and ensures more effective updates. Meanwhile,\na sparsity strategy that gradually increases the percentage of N:M weight\nblocks is applied, which allows the network to heal from the pruning-induced\ndamage progressively. During the runtime, the N:M soft masks can be precomputed\nas constants and folded into weights without causing any distortion to the\nsparse pattern and incurring additional computational overhead.", + "During the runtime, the N:M soft masks can be precomputed\nas constants and folded into weights without causing any distortion to the\nsparse pattern and incurring additional computational overhead. Comprehensive\nexperiments demonstrate that MaxQ achieves consistent improvements across\ndiverse CNN architectures in various computer vision tasks, including image\nclassification, object detection and instance segmentation. For ResNet50 with\n1:16 sparse pattern, MaxQ can achieve 74.6\\% top-1 accuracy on ImageNet and\nimprove by over 2.8\\% over the state-of-the-art. Codes and checkpoints are\navailable at \\url{https://github.com/JingyangXiang/MaxQ}.", + "We introduce multimodal story summarization by leveraging TV episode recaps -\nshort video sequences interweaving key story moments from previous episodes to\nbring viewers up to speed. We propose PlotSnap, a dataset featuring two crime\nthriller TV shows with rich recaps and long episodes of 40 minutes. Story\nsummarization labels are unlocked by matching recap shots to corresponding\nsub-stories in the episode. We propose a hierarchical model TaleSumm that\nprocesses entire episodes by creating compact shot and dialog representations,\nand predicts importance scores for each video shot and dialog utterance by\nenabling interactions between local story groups. Unlike traditional\nsummarization, our method extracts multiple plot points from long videos. We\npresent a thorough evaluation on story summarization, including promising\ncross-series generalization. TaleSumm also shows good results on classic video\nsummarization benchmarks.", + "Image datasets are essential not only in validating existing methods in\ncomputer vision but also in developing new methods. Most existing image\ndatasets focus on trichromatic intensity images to mimic human vision. However,\npolarization and spectrum, the wave properties of light that animals in harsh\nenvironments and with limited brain capacity often rely on, remain\nunderrepresented in existing datasets. Although spectro-polarimetric datasets\nexist, these datasets have insufficient object diversity, limited illumination\nconditions, linear-only polarization data, and inadequate image count. Here, we\nintroduce two spectro-polarimetric datasets: trichromatic Stokes images and\nhyperspectral Stokes images. 
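For readers unfamiliar with the representation, a Stokes image stores per pixel the four Stokes parameters describing both linear and circular polarization; the sketch below applies the standard textbook relations to idealized polarizer and circular-analyzer measurements, and is not a description of these datasets' actual capture pipeline (the hyperspectral variant simply adds a spectral axis).

```python
# Per-pixel Stokes vector from idealized polarimetric measurements (textbook relations):
#   I0, I45, I90, I135 : intensities behind a linear polarizer at those angles
#   I_rcp, I_lcp       : intensities behind right/left circular analyzers
import numpy as np

def stokes_image(i0, i45, i90, i135, i_rcp, i_lcp):
    s0 = i0 + i90               # total intensity
    s1 = i0 - i90               # horizontal vs. vertical linear polarization
    s2 = i45 - i135             # +45 vs. -45 linear polarization
    s3 = i_rcp - i_lcp          # right vs. left circular polarization
    return np.stack([s0, s1, s2, s3], axis=-1)      # (H, W, 4) Stokes image

h, w = 4, 4
measurements = [np.random.rand(h, w) for _ in range(6)]   # toy intensity frames
stokes = stokes_image(*measurements)
dolp = np.sqrt(stokes[..., 1]**2 + stokes[..., 2]**2) / np.clip(stokes[..., 0], 1e-6, None)
print(stokes.shape, dolp.mean())    # degree of linear polarization as a sanity check
```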
These novel datasets encompass both linear and\ncircular polarization; they introduce multiple spectral channels; and they\nfeature a broad selection of real-world scenes. With our dataset in hand, we\nanalyze the spectro-polarimetric image statistics, develop efficient\nrepresentations of such high-dimensional data, and evaluate spectral dependency\nof shape-from-polarization methods. As such, the proposed dataset promises a\nfoundation for data-driven spectro-polarimetric imaging and vision research.\nDataset and code will be publicly available.", + "Generative vision-language models (VLMs) have shown impressive performance in\nzero-shot vision-language tasks like image captioning and visual question\nanswering. However, improving their zero-shot reasoning typically requires\nsecond-stage instruction tuning, which relies heavily on human-labeled or large\nlanguage model-generated annotation, incurring high labeling costs. To tackle\nthis challenge, we introduce Image-Conditioned Caption Correction (ICCC), a\nnovel pre-training task designed to enhance VLMs' zero-shot performance without\nthe need for labeled task-aware data. The ICCC task compels VLMs to rectify\nmismatches between visual and language concepts, thereby enhancing instruction\nfollowing and text generation conditioned on visual inputs. Leveraging language\nstructure and a lightweight dependency parser, we construct data samples of\nICCC task from image-text datasets with low labeling and computation costs.\nExperimental results on BLIP-2 and InstructBLIP demonstrate significant\nimprovements in zero-shot image-text generation-based VL tasks through ICCC\ninstruction tuning.", + "Automating visual inspection in industrial production lines is essential for\nincreasing product quality across various industries. Anomaly detection (AD)\nmethods serve as robust tools for this purpose. However, existing public\ndatasets primarily consist of images without anomalies, limiting the practical\napplication of AD methods in production settings. To address this challenge, we\npresent (1) the Valeo Anomaly Dataset (VAD), a novel real-world industrial\ndataset comprising 5000 images, including 2000 instances of challenging real\ndefects across more than 20 subclasses. Acknowledging that traditional AD\nmethods struggle with this dataset, we introduce (2) Segmentation-based Anomaly\nDetector (SegAD). First, SegAD leverages anomaly maps as well as segmentation\nmaps to compute local statistics. Next, SegAD uses these statistics and an\noptional supervised classifier score as input features for a Boosted Random\nForest (BRF) classifier, yielding the final anomaly score. Our SegAD achieves\nstate-of-the-art performance on both VAD (+2.1% AUROC) and the VisA dataset\n(+0.4% AUROC). The code and the models are publicly available.", + "Current approaches for 3D scene graph prediction rely on labeled datasets to\ntrain models for a fixed set of known object classes and relationship\ncategories. We present Open3DSG, an alternative approach to learn 3D scene\ngraph prediction in an open world without requiring labeled scene graph data.\nWe co-embed the features from a 3D scene graph prediction backbone with the\nfeature space of powerful open world 2D vision language foundation models. 
This\nenables us to predict 3D scene graphs from 3D point clouds in a zero-shot\nmanner by querying object classes from an open vocabulary and predicting the\ninter-object relationships from a grounded LLM with scene graph features and\nqueried object classes as context. Open3DSG is the first 3D point cloud method\nto predict not only explicit open-vocabulary object classes, but also open-set\nrelationships that are not limited to a predefined label set, making it\npossible to express rare as well as specific objects and relationships in the\npredicted 3D scene graph. Our experiments show that Open3DSG is effective at\npredicting arbitrary object classes as well as their complex inter-object\nrelationships describing spatial, supportive, semantic and comparative\nrelationships.", + "In this paper, we revisit techniques for uncertainty estimation within deep\nneural networks and consolidate a suite of techniques to enhance their\nreliability. Our investigation reveals that an integrated application of\ndiverse techniques--spanning model regularization, classifier and\noptimization--substantially improves the accuracy of uncertainty predictions in\nimage classification tasks. The synergistic effect of these techniques\nculminates in our novel SURE approach. We rigorously evaluate SURE against the\nbenchmark of failure prediction, a critical testbed for uncertainty estimation\nefficacy. Our results showcase a consistently better performance than models\nthat individually deploy each technique, across various datasets and model\narchitectures. When applied to real-world challenges, such as data corruption,\nlabel noise, and long-tailed class distribution, SURE exhibits remarkable\nrobustness, delivering results that are superior or on par with current\nstate-of-the-art specialized methods. Particularly on Animal-10N and Food-101N\nfor learning with noisy labels, SURE achieves state-of-the-art performance\nwithout any task-specific adjustments.", + "Particularly on Animal-10N and Food-101N\nfor learning with noisy labels, SURE achieves state-of-the-art performance\nwithout any task-specific adjustments. This work not only sets a new benchmark\nfor robust uncertainty estimation but also paves the way for its application in\ndiverse, real-world scenarios where reliability is paramount. Our code is\navailable at \\url{https://yutingli0606.github.io/SURE/}.", + "In text-to-image personalization, a timely and crucial challenge is the\ntendency of generated images overfitting to the biases present in the reference\nimages. We initiate our study with a comprehensive categorization of the biases\ninto background, nearby-object, tied-object, substance (in style\nre-contextualization), and pose biases. These biases manifest in the generated\nimages due to their entanglement into the subject embedding. This undesired\nembedding entanglement not only results in the reflection of biases from the\nreference images into the generated images but also notably diminishes the\nalignment of the generated images with the given generation prompt. To address\nthis challenge, we propose SID~(Selectively Informative Description), a text\ndescription strategy that deviates from the prevalent approach of only\ncharacterizing the subject's class identification. SID is generated utilizing\nmultimodal GPT-4 and can be seamlessly integrated into optimization-based\nmodels. 
We present comprehensive experimental results along with analyses of\ncross-attention maps, subject-alignment, non-subject-disentanglement, and\ntext-alignment.", + "We study object interaction anticipation in egocentric videos. This task\nrequires an understanding of the spatio-temporal context formed by past actions\non objects, coined action context. We propose TransFusion, a multimodal\ntransformer-based architecture. It exploits the representational power of\nlanguage by summarizing the action context. TransFusion leverages pre-trained\nimage captioning and vision-language models to extract the action context from\npast video frames. This action context together with the next video frame is\nprocessed by the multimodal fusion module to forecast the next object\ninteraction. Our model enables more efficient end-to-end learning. The large\npre-trained language models add common sense and a generalisation capability.\nExperiments on Ego4D and EPIC-KITCHENS-100 show the effectiveness of our\nmultimodal fusion model. They also highlight the benefits of using\nlanguage-based context summaries in a task where vision seems to suffice. Our\nmethod outperforms state-of-the-art approaches by 40.4% in relative terms in\noverall mAP on the Ego4D test set.", + "They also highlight the benefits of using\nlanguage-based context summaries in a task where vision seems to suffice. Our\nmethod outperforms state-of-the-art approaches by 40.4% in relative terms in\noverall mAP on the Ego4D test set. We validate the effectiveness of TransFusion\nvia experiments on EPIC-KITCHENS-100. Video and code are available at\nhttps://eth-ait.github.io/transfusion-proj/.", + "Image denoising is a fundamental task in computer vision. While prevailing\ndeep learning-based supervised and self-supervised methods have excelled in\neliminating in-distribution noise, their susceptibility to out-of-distribution\n(OOD) noise remains a significant challenge. The recent emergence of\ncontrastive language-image pre-training (CLIP) model has showcased exceptional\ncapabilities in open-world image recognition and segmentation. Yet, the\npotential for leveraging CLIP to enhance the robustness of low-level tasks\nremains largely unexplored. This paper uncovers that certain dense features\nextracted from the frozen ResNet image encoder of CLIP exhibit\ndistortion-invariant and content-related properties, which are highly desirable\nfor generalizable denoising. Leveraging these properties, we devise an\nasymmetrical encoder-decoder denoising network, which incorporates dense\nfeatures including the noisy image and its multi-scale features from the frozen\nResNet encoder of CLIP into a learnable image decoder to achieve generalizable\ndenoising. The progressive feature augmentation strategy is further proposed to\nmitigate feature overfitting and improve the robustness of the learnable\ndecoder.", + "The progressive feature augmentation strategy is further proposed to\nmitigate feature overfitting and improve the robustness of the learnable\ndecoder. Extensive experiments and comparisons conducted across diverse OOD\nnoises, including synthetic noise, real-world sRGB noise, and low-dose CT image\nnoise, demonstrate the superior generalization ability of our method.", + "Recently, diffusion models have made remarkable progress in text-to-image\n(T2I) generation, synthesizing images with high fidelity and diverse contents.\nDespite this advancement, latent space smoothness within diffusion models\nremains largely unexplored. 
Smooth latent spaces ensure that a perturbation on\nan input latent corresponds to a steady change in the output image. This\nproperty proves beneficial in downstream tasks, including image interpolation,\ninversion, and editing. In this work, we expose the non-smoothness of diffusion\nlatent spaces by observing noticeable visual fluctuations resulting from minor\nlatent variations. To tackle this issue, we propose Smooth Diffusion, a new\ncategory of diffusion models that can be simultaneously high-performing and\nsmooth. Specifically, we introduce Step-wise Variation Regularization to\nenforce the proportion between the variations of an arbitrary input latent and\nthat of the output image is a constant at any diffusion training step. In\naddition, we devise an interpolation standard deviation (ISTD) metric to\neffectively assess the latent space smoothness of a diffusion model.", + "In\naddition, we devise an interpolation standard deviation (ISTD) metric to\neffectively assess the latent space smoothness of a diffusion model. Extensive\nquantitative and qualitative experiments demonstrate that Smooth Diffusion\nstands out as a more desirable solution not only in T2I generation but also\nacross various downstream tasks. Smooth Diffusion is implemented as a\nplug-and-play Smooth-LoRA to work with various community models. Code is\navailable at https://github.com/SHI-Labs/Smooth-Diffusion.", + "The task of Visual Place Recognition (VPR) aims to match a query image\nagainst references from an extensive database of images from different places,\nrelying solely on visual cues. State-of-the-art pipelines focus on the\naggregation of features extracted from a deep backbone, in order to form a\nglobal descriptor for each image. In this context, we introduce SALAD (Sinkhorn\nAlgorithm for Locally Aggregated Descriptors), which reformulates NetVLAD's\nsoft-assignment of local features to clusters as an optimal transport problem.\nIn SALAD, we consider both feature-to-cluster and cluster-to-feature relations\nand we also introduce a 'dustbin' cluster, designed to selectively discard\nfeatures deemed non-informative, enhancing the overall descriptor quality.\nAdditionally, we leverage and fine-tune DINOv2 as a backbone, which provides\nenhanced description power for the local features, and dramatically reduces the\nrequired training time. As a result, our single-stage method not only surpasses\nsingle-stage baselines in public VPR datasets, but also surpasses two-stage\nmethods that add a re-ranking with significantly higher cost. Code and models\nare available at https://github.com/serizba/salad.", + "Vision foundation models have been explored recently to build general-purpose\nvision systems. However, predominant paradigms, driven by casting\ninstance-level tasks as an object-word alignment, bring heavy cross-modality\ninteraction, which is not effective in prompting object detection and visual\ngrounding. Another line of work that focuses on pixel-level tasks often\nencounters a large annotation gap of things and stuff, and suffers from mutual\ninterference between foreground-object and background-class segmentation. In\nstark contrast to the prevailing methods, we present APE, a universal visual\nperception model for aligning and prompting everything all at once in an image\nto perform diverse tasks, i.e., detection, segmentation, and grounding, as an\ninstance-level sentence-object matching paradigm. 
Specifically, APE advances\nthe convergence of detection and grounding by reformulating language-guided\ngrounding as open-vocabulary detection, which efficiently scales up model\nprompting to thousands of category vocabularies and region descriptions while\nmaintaining the effectiveness of cross-modality fusion. To bridge the\ngranularity gap of different pixel-level tasks, APE equalizes semantic and\npanoptic segmentation to proxy instance learning by considering any isolated\nregions as individual instances.", + "To bridge the\ngranularity gap of different pixel-level tasks, APE equalizes semantic and\npanoptic segmentation to proxy instance learning by considering any isolated\nregions as individual instances. APE aligns vision and language representation\non broad data with natural and challenging characteristics all at once without\ntask-specific fine-tuning. The extensive experiments on over 160 datasets\ndemonstrate that, with only one-suit of weights, APE outperforms (or is on par\nwith) the state-of-the-art models, proving that an effective yet universal\nperception for anything aligning and prompting is indeed feasible. Codes and\ntrained models are released at https://github.com/shenyunhang/APE.", + "Multimodal sentiment analysis (MSA) aims to understand human sentiment\nthrough multimodal data. Most MSA efforts are based on the assumption of\nmodality completeness. However, in real-world applications, some practical\nfactors cause uncertain modality missingness, which drastically degrades the\nmodel's performance. To this end, we propose a Correlation-decoupled Knowledge\nDistillation (CorrKD) framework for the MSA task under uncertain missing\nmodalities. Specifically, we present a sample-level contrastive distillation\nmechanism that transfers comprehensive knowledge containing cross-sample\ncorrelations to reconstruct missing semantics. Moreover, a category-guided\nprototype distillation mechanism is introduced to capture cross-category\ncorrelations using category prototypes to align feature distributions and\ngenerate favorable joint representations. Eventually, we design a\nresponse-disentangled consistency distillation strategy to optimize the\nsentiment decision boundaries of the student network through response\ndisentanglement and mutual information maximization. Comprehensive experiments\non three datasets indicate that our framework can achieve favorable\nimprovements compared with several baselines.", + "The machine learning community has witnessed a drastic change in the training\npipeline, pivoted by those ''foundation models'' with unprecedented scales.\nHowever, the field of adversarial training is lagging behind, predominantly\ncentered around small model sizes like ResNet-50, and tiny and low-resolution\ndatasets like CIFAR-10. To bridge this transformation gap, this paper provides\na modern re-examination with adversarial training, investigating its potential\nbenefits when applied at scale. Additionally, we introduce an efficient and\neffective training strategy to enable adversarial training with giant models\nand web-scale data at an affordable computing cost. We denote this newly\nintroduced framework as AdvXL.\n Empirical results demonstrate that AdvXL establishes new state-of-the-art\nrobust accuracy records under AutoAttack on ImageNet-1K. 
For example, by\ntraining on DataComp-1B dataset, our AdvXL empowers a vanilla ViT-g model to\nsubstantially surpass the previous records of $l_{\\infty}$-, $l_{2}$-, and\n$l_{1}$-robust accuracy by margins of 11.4%, 14.2% and 12.9%, respectively.", + "This achievement posits AdvXL as a pioneering approach, charting a new\ntrajectory for the efficient training of robust visual representations at\nsignificantly larger scales. Our code is available at\nhttps://github.com/UCSC-VLAA/AdvXL.", + "Although adversarial training (AT) has proven effective in enhancing the\nmodel's robustness, the recently revealed issue of fairness in robustness has\nnot been well addressed, i.e. the robust accuracy varies significantly among\ndifferent categories. In this paper, instead of uniformly evaluating the\nmodel's average class performance, we delve into the issue of robust fairness,\nby considering the worst-case distribution across various classes. We propose a\nnovel learning paradigm, named Fairness-Aware Adversarial Learning (FAAL). As a\ngeneralization of conventional AT, we re-define the problem of adversarial\ntraining as a min-max-max framework, to ensure both robustness and fairness of\nthe trained model. Specifically, by taking advantage of distributional robust\noptimization, our method aims to find the worst distribution among different\ncategories, and the solution is guaranteed to obtain the upper bound\nperformance with high probability. In particular, FAAL can fine-tune an unfair\nrobust model to be fair within only two epochs, without compromising the\noverall clean and robust accuracies. Extensive experiments on various image\ndatasets validate the superior performance and efficiency of the proposed FAAL\ncompared to other state-of-the-art methods.", + "Referring video object segmentation (RVOS) aims to segment the target\ninstance referred by a given text expression in a video clip. The text\nexpression normally contains sophisticated description of the instance's\nappearance, action, and relation with others. It is therefore rather difficult\nfor a RVOS model to capture all these attributes correspondingly in the video;\nin fact, the model often favours more on the action- and relation-related\nvisual attributes of the instance. This can end up with partial or even\nincorrect mask prediction of the target instance. We tackle this problem by\ntaking a subject-centric short text expression from the original long text\nexpression. The short one retains only the appearance-related information of\nthe target instance so that we can use it to focus the model's attention on the\ninstance's appearance. We let the model make joint predictions using both long\nand short text expressions; and insert a long-short cross-attention module to\ninteract the joint features and a long-short predictions intersection loss to\nregulate the joint predictions.", + "We let the model make joint predictions using both long\nand short text expressions; and insert a long-short cross-attention module to\ninteract the joint features and a long-short predictions intersection loss to\nregulate the joint predictions. Besides the improvement on the linguistic part,\nwe also introduce a forward-backward visual consistency loss, which utilizes\noptical flows to warp visual features between the annotated frames and their\ntemporal neighbors for consistency. We build our method on top of two state of\nthe art pipelines. 
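A minimal sketch of the flow-based consistency idea just described: warp the neighbouring frame's features back to the annotated frame with optical flow and penalize the difference. The random flow and the plain L1 penalty below are placeholders, not the paper's exact consistency loss.

```python
# Illustrative flow-based feature consistency term.
import torch
import torch.nn.functional as F

def warp_features(feat, flow):
    """feat: (B, C, H, W) features of the neighbour frame; flow: (B, 2, H, W) in pixels."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs, ys]).unsqueeze(0) + flow          # target sampling locations
    grid_x = 2 * grid[:, 0] / (w - 1) - 1                     # normalize to [-1, 1]
    grid_y = 2 * grid[:, 1] / (h - 1) - 1
    return F.grid_sample(feat, torch.stack([grid_x, grid_y], dim=-1),
                         align_corners=True)

feat_annotated = torch.randn(1, 64, 32, 32)
feat_neighbour = torch.randn(1, 64, 32, 32)
flow = torch.randn(1, 2, 32, 32)                              # annotated -> neighbour flow
consistency_loss = F.l1_loss(warp_features(feat_neighbour, flow), feat_annotated)
print(consistency_loss)
```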
Extensive experiments on A2D-Sentences, Refer-YouTube-VOS,\nJHMDB-Sentences and Refer-DAVIS17 show impressive improvements of our\nmethod.Code is available at https://github.com/LinfengYuan1997/Losh.", + "Dual-Camera Compressed Hyperspectral Imaging (DCCHI) offers the capability to\nreconstruct 3D Hyperspectral Image (HSI) by fusing compressive and Panchromatic\n(PAN) image, which has shown great potential for snapshot hyperspectral imaging\nin practice. In this paper, we introduce a novel DCCHI reconstruction network,\nthe Intra-Inter Similarity Exploiting Transformer (In2SET). Our key insight is\nto make full use of the PAN image to assist the reconstruction. To this end, we\npropose using the intra-similarity within the PAN image as a proxy for\napproximating the intra-similarity in the original HSI, thereby offering an\nenhanced content prior for more accurate HSI reconstruction. Furthermore, we\naim to align the features from the underlying HSI with those of the PAN image,\nmaintaining semantic consistency and introducing new contextual information for\nthe reconstruction process. By integrating In2SET into a PAN-guided unrolling\nframework, our method substantially enhances the spatial-spectral fidelity and\ndetail of the reconstructed images, providing a more comprehensive and accurate\ndepiction of the scene.", + "By integrating In2SET into a PAN-guided unrolling\nframework, our method substantially enhances the spatial-spectral fidelity and\ndetail of the reconstructed images, providing a more comprehensive and accurate\ndepiction of the scene. Extensive experiments conducted on both real and\nsimulated datasets demonstrate that our approach consistently outperforms\nexisting state-of-the-art methods in terms of reconstruction quality and\ncomputational complexity. Code will be released.", + "Unsupervised video object segmentation (VOS) aims to detect and segment the\nmost salient object in videos. The primary techniques used in unsupervised VOS\nare 1) the collaboration of appearance and motion information; and 2) temporal\nfusion between different frames. This paper proposes two novel prototype-based\nattention mechanisms, inter-modality attention (IMA) and inter-frame attention\n(IFA), to incorporate these techniques via dense propagation across different\nmodalities and frames. IMA densely integrates context information from\ndifferent modalities based on a mutual refinement. IFA injects global context\nof a video to the query frame, enabling a full utilization of useful properties\nfrom multiple frames. Experimental results on public benchmark datasets\ndemonstrate that our proposed approach outperforms all existing methods by a\nsubstantial margin. The proposed two components are also thoroughly validated\nvia ablative study.", + "We introduce in-context matting, a novel task setting of image matting. Given\na reference image of a certain foreground and guided priors such as points,\nscribbles, and masks, in-context matting enables automatic alpha estimation on\na batch of target images of the same foreground category, without additional\nauxiliary input. This setting marries good performance in auxiliary input-based\nmatting and ease of use in automatic matting, which finds a good trade-off\nbetween customization and automation. To overcome the key challenge of accurate\nforeground matching, we introduce IconMatting, an in-context matting model\nbuilt upon a pre-trained text-to-image diffusion model. 
Conditioned on inter-\nand intra-similarity matching, IconMatting can make full use of reference\ncontext to generate accurate target alpha mattes. To benchmark the task, we\nalso introduce a novel testing dataset ICM-$57$, covering 57 groups of\nreal-world images. Quantitative and qualitative results on the ICM-57 testing\nset show that IconMatting rivals the accuracy of trimap-based matting while\nretaining the automation level akin to automatic matting. Code is available at\nhttps://github.com/tiny-smart/in-context-matting", + "Recent studies have noted an intriguing phenomenon termed Neural Collapse,\nthat is, when the neural networks establish the right correlation between\nfeature spaces and the training targets, their last-layer features, together\nwith the classifier weights, will collapse into a stable and symmetric\nstructure. In this paper, we extend the investigation of Neural Collapse to the\nbiased datasets with imbalanced attributes. We observe that models will easily\nfall into the pitfall of shortcut learning and form a biased, non-collapsed\nfeature space at the early period of training, which is hard to reverse and\nlimits the generalization capability. To tackle the root cause of biased\nclassification, we follow the recent inspiration of prime training, and propose\nan avoid-shortcut learning framework without additional training complexity.\nWith well-designed shortcut primes based on Neural Collapse structure, the\nmodels are encouraged to skip the pursuit of simple shortcuts and naturally\ncapture the intrinsic correlations. Experimental results demonstrate that our\nmethod induces better convergence properties during training, and achieves\nstate-of-the-art generalization performance on both synthetic and real-world\nbiased datasets.", + "While previous studies have demonstrated successful 3D object shape\ncompletion with a sufficient number of points, they often fail in scenarios\nwhen a few points, e.g. tens of points, are observed. Surprisingly, via entropy\nanalysis, we find that even a few points, e.g. 64 points, could retain\nsubstantial information to help recover the 3D shape of the object. To address\nthe challenge of shape completion with very sparse point clouds, we then\npropose Few-point Shape Completion (FSC) model, which contains a novel\ndual-branch feature extractor for handling extremely sparse inputs, coupled\nwith an extensive branch for maximal point utilization with a saliency branch\nfor dynamic importance assignment. This model is further bolstered by a\ntwo-stage revision network that refines both the extracted features and the\ndecoder output, enhancing the detail and authenticity of the completed point\ncloud. Our experiments demonstrate the feasibility of recovering 3D shapes from\na few points. The proposed Few-point Shape Completion (FSC) model outperforms\nprevious methods on both few-point inputs and many-point inputs, and shows good\ngeneralizability to different object categories.", + "The increased demand for 3D data in AR/VR, robotics and gaming applications,\ngave rise to powerful generative pipelines capable of synthesizing high-quality\n3D objects. 
Most of these models rely on the Score Distillation Sampling (SDS)\nalgorithm to optimize a 3D representation such that the rendered image\nmaintains a high likelihood as evaluated by a pre-trained diffusion model.\nHowever, finding a correct mode in the high-dimensional distribution produced\nby the diffusion model is challenging and often leads to issues such as\nover-saturation, over-smoothing, and Janus-like artifacts. In this paper, we\npropose a novel learning paradigm for 3D synthesis that utilizes pre-trained\ndiffusion models. Instead of focusing on mode-seeking, our method directly\nmodels the distribution discrepancy between multi-view renderings and diffusion\npriors in an adversarial manner, which unlocks the generation of high-fidelity\nand photorealistic 3D content, conditioned on a single image and prompt.", + "Instead of focusing on mode-seeking, our method directly\nmodels the distribution discrepancy between multi-view renderings and diffusion\npriors in an adversarial manner, which unlocks the generation of high-fidelity\nand photorealistic 3D content, conditioned on a single image and prompt.\nMoreover, by harnessing the latent space of GANs and expressive diffusion model\npriors, our method facilitates a wide variety of 3D applications including\nsingle-view reconstruction, high diversity generation and continuous 3D\ninterpolation in the open domain. The experiments demonstrate the superiority\nof our pipeline compared to previous works in terms of generation quality and\ndiversity.", + "We propose Strongly Supervised pre-training with ScreenShots (S4) - a novel\npre-training paradigm for Vision-Language Models using data from large-scale\nweb screenshot rendering. Using web screenshots unlocks a treasure trove of\nvisual and textual cues that are not present in using image-text pairs. In S4,\nwe leverage the inherent tree-structured hierarchy of HTML elements and the\nspatial localization to carefully design 10 pre-training tasks with large scale\nannotated data. These tasks resemble downstream tasks across different domains\nand the annotations are cheap to obtain. We demonstrate that, compared to\ncurrent screenshot pre-training objectives, our innovative pre-training method\nsignificantly enhances performance of image-to-text model in nine varied and\npopular downstream tasks - up to 76.1% improvements on Table Detection, and at\nleast 1% on Widget Captioning.", + "In this paper, we democratise caricature generation, empowering individuals\nto effortlessly craft personalised caricatures with just a photo and a\nconceptual sketch. Our objective is to strike a delicate balance between\nabstraction and identity, while preserving the creativity and subjectivity\ninherent in a sketch. To achieve this, we present Explicit Rank-1 Model Editing\nalongside single-image personalisation, selectively applying nuanced edits to\ncross-attention layers for a seamless merge of identity and style.\nAdditionally, we propose Random Mask Reconstruction to enhance robustness,\ndirecting the model to focus on distinctive identity and style features.\nCrucially, our aim is not to replace artists but to eliminate accessibility\nbarriers, allowing enthusiasts to engage in the artistry.", + "We concentrate on a novel human-centric image synthesis task, that is, given\nonly one reference facial photograph, it is expected to generate specific\nindividual images with diverse head positions, poses, facial expressions, and\nilluminations in different contexts. 
To accomplish this goal, we argue that our\ngenerative model should be capable of the following favorable characteristics:\n(1) a strong visual and semantic understanding of our world and human society\nfor basic object and human image generation. (2) generalizable identity\npreservation ability. (3) flexible and fine-grained head control. Recently,\nlarge pre-trained text-to-image diffusion models have shown remarkable results,\nserving as a powerful generative foundation. As a basis, we aim to unleash the\nabove two capabilities of the pre-trained model. In this work, we present a new\nframework named CapHuman. We embrace the \"encode then learn to align\" paradigm,\nwhich enables generalizable identity preservation for new individuals without\ncumbersome tuning at inference. CapHuman encodes identity features and then\nlearns to align them into the latent space.", + "We embrace the \"encode then learn to align\" paradigm,\nwhich enables generalizable identity preservation for new individuals without\ncumbersome tuning at inference. CapHuman encodes identity features and then\nlearns to align them into the latent space. Moreover, we introduce the 3D\nfacial prior to equip our model with control over the human head in a flexible\nand 3D-consistent manner. Extensive qualitative and quantitative analyses\ndemonstrate our CapHuman can produce well-identity-preserved, photo-realistic,\nand high-fidelity portraits with content-rich representations and various head\nrenditions, superior to established baselines. Code and checkpoint will be\nreleased at https://github.com/VamosC/CapHuman.", + "Recently, transformer-based methods have achieved state-of-the-art prediction\nquality on human pose estimation(HPE). Nonetheless, most of these\ntop-performing transformer-based models are too computation-consuming and\nstorage-demanding to deploy on edge computing platforms. Those\ntransformer-based models that require fewer resources are prone to\nunder-fitting due to their smaller scale and thus perform notably worse than\ntheir larger counterparts. Given this conundrum, we introduce SDPose, a new\nself-distillation method for improving the performance of small\ntransformer-based models. To mitigate the problem of under-fitting, we design a\ntransformer module named Multi-Cycled Transformer(MCT) based on multiple-cycled\nforwards to more fully exploit the potential of small model parameters.\nFurther, in order to prevent the additional inference compute-consuming brought\nby MCT, we introduce a self-distillation scheme, extracting the knowledge from\nthe MCT module to a naive forward model. Specifically, on the MSCOCO validation\ndataset, SDPose-T obtains 69.7% mAP with 4.4M parameters and 1.8 GFLOPs.", + "Specifically, on the MSCOCO validation\ndataset, SDPose-T obtains 69.7% mAP with 4.4M parameters and 1.8 GFLOPs.\nFurthermore, SDPose-S-V2 obtains 73.5% mAP on the MSCOCO validation dataset\nwith 6.2M parameters and 4.7 GFLOPs, achieving a new state-of-the-art among\npredominant tiny neural network methods. Our code is available at\nhttps://github.com/MartyrPenink/SDPose.", + "The authentic 3D hand avatar with every identifiable information, such as\nhand shapes and textures, is necessary for immersive experiences in AR/VR. In\nthis paper, we present a universal hand model (UHM), which 1) can universally\nrepresent high-fidelity 3D hand meshes of arbitrary identities (IDs) and 2) can\nbe adapted to each person with a short phone scan for the authentic hand\navatar. 
+ "An authentic 3D hand avatar with all identifiable information, such as\nhand shapes and textures, is necessary for immersive experiences in AR/VR. In\nthis paper, we present a universal hand model (UHM), which 1) can universally\nrepresent high-fidelity 3D hand meshes of arbitrary identities (IDs) and 2) can\nbe adapted to each person with a short phone scan for the authentic hand\navatar. For effective universal hand modeling, we perform tracking and modeling\nat the same time, while previous 3D hand models perform them separately. The\nconventional separate pipeline suffers from accumulated errors from the\ntracking stage, which cannot be recovered in the modeling stage. On the other\nhand, ours does not suffer from the accumulated errors while having a much more\nconcise overall pipeline. We additionally introduce a novel image matching loss\nfunction to address skin sliding during tracking and modeling, an issue that\nexisting works have largely overlooked. Finally, using learned priors from\nour UHM, we effectively adapt our UHM to each person's short phone scan for the\nauthentic hand avatar.", + "Humans possess the remarkable skill of Visual Perception, the ability to see\nand understand the seen, helping them make sense of the visual world and, in\nturn, reason. Multimodal Large Language Models (MLLMs) have recently achieved\nimpressive performance on vision-language tasks ranging from visual\nquestion-answering and image captioning to visual reasoning and image\ngeneration. However, when prompted to identify or count (perceive) the entities\nin a given image, existing MLLM systems fail. Working towards developing an\naccurate MLLM system for perception and reasoning, we propose using Versatile\nvision enCoders (VCoder) as perception eyes for Multimodal LLMs. We feed the\nVCoder with perception modalities such as segmentation or depth maps, improving\nthe MLLM's perception abilities. Secondly, we leverage the images from COCO and\noutputs from off-the-shelf vision perception models to create our COCO\nSegmentation Text (COST) dataset for training and evaluating MLLMs on the\nobject perception task. Thirdly, we introduce metrics to assess the object\nperception abilities in MLLMs on our COST dataset.", + "Thirdly, we introduce metrics to assess the object\nperception abilities in MLLMs on our COST dataset. Lastly, we provide extensive\nexperimental evidence proving the VCoder's improved object-level perception\nskills over existing Multimodal LLMs, including GPT-4V. We open-source our\ndataset, code, and models to promote research; our code is available at\nhttps://github.com/SHI-Labs/VCoder", + "Interpreting camera data is key for autonomously acting systems, such as\nautonomous vehicles. Vision systems that operate in real-world environments\nmust be able to understand their surroundings and need the ability to deal with\nnovel situations. This paper tackles open-world semantic segmentation, i.e.,\nthe variant of interpreting image data in which objects occur that have not\nbeen seen during training. We propose a novel approach that performs accurate\nclosed-world semantic segmentation and, at the same time, can identify new\ncategories without requiring any additional training data. Our approach\nadditionally provides a similarity measure for every newly discovered class in\nan image to a known category, which can be useful information in downstream\ntasks such as planning or mapping. Through extensive experiments, we show that\nour model achieves state-of-the-art results on classes known from training data\nas well as for anomaly segmentation and can distinguish between different\nunknown classes.", + "We propose a lightweight and scalable Regional Point-Language Contrastive\nlearning framework, namely \textbf{RegionPLC}, for open-world 3D scene\nunderstanding, aiming to identify and recognize open-set objects and\ncategories. 
Specifically, based on our empirical studies, we introduce a\n3D-aware SFusion strategy that fuses 3D vision-language pairs derived from\nmultiple 2D foundation models, yielding high-quality, dense region-level\nlanguage descriptions without human 3D annotations. Subsequently, we devise a\nregion-aware point-discriminative contrastive learning objective to enable\nrobust and effective 3D learning from dense regional language supervision. We\ncarry out extensive experiments on ScanNet, ScanNet200, and nuScenes datasets,\nand our model outperforms prior 3D open-world scene understanding approaches by\nan average of 17.2\% and 9.1\% for semantic and instance segmentation,\nrespectively, while maintaining greater scalability and lower resource demands.\nFurthermore, our method has the flexibility to be effortlessly integrated with\nlanguage models to enable open-ended grounded 3D reasoning without extra\ntask-specific training. Code is available at https://github.com/CVMI-Lab/PLA.", + "Visual-inertial odometry (VIO) has demonstrated remarkable success due to its\nlow-cost and complementary sensors. However, existing VIO methods lack the\ngeneralization ability to adjust to different environments and sensor\nattributes. In this paper, we propose Adaptive VIO, a new monocular\nvisual-inertial odometry that combines online continual learning with\ntraditional nonlinear optimization. Adaptive VIO comprises two networks to\npredict visual correspondence and IMU bias. Unlike end-to-end approaches that\nuse networks to fuse the features from two modalities (camera and IMU) and\npredict poses directly, we combine neural networks with visual-inertial bundle\nadjustment in our VIO system. The optimized estimates are then fed back to the\nvisual and IMU bias networks, refining the networks in a self-supervised\nmanner. Such a learning-optimization-combined framework and feedback mechanism\nenable the system to perform online continual learning. Experiments demonstrate\nthat our Adaptive VIO manifests adaptive capability on the EuRoC and TUM-VI\ndatasets. The overall performance exceeds the currently known learning-based\nVIO methods and is comparable to the state-of-the-art optimization-based\nmethods.", + "Pretrained diffusion models and their outputs are widely accessible due to\ntheir exceptional capacity for synthesizing high-quality images and their\nopen-source nature. The users, however, may face litigation risks owing to the\nmodels' tendency to memorize and regurgitate training data during inference. To\naddress this, we introduce Anti-Memorization Guidance (AMG), a novel framework\nemploying three targeted guidance strategies for the main causes of\nmemorization: image and caption duplication, and highly specific user prompts.\nConsequently, AMG ensures memorization-free outputs while maintaining high\nimage quality and text alignment, leveraging the synergy of its guidance\nmethods, each indispensable in its own right. AMG also features an innovative\nautomatic detection system for potential memorization during each step of the\ninference process, allowing selective application of the guidance strategies while\nminimally interfering with the original sampling process to preserve output\nutility. We applied AMG to pretrained Denoising Diffusion Probabilistic Models\n(DDPM) and Stable Diffusion across various generation tasks.", + "We applied AMG to pretrained Denoising Diffusion Probabilistic Models\n(DDPM) and Stable Diffusion across various generation tasks. 
The results\ndemonstrate that AMG is the first approach to successfully eradicate all\ninstances of memorization with no or marginal impacts on image quality and\ntext alignment, as evidenced by FID and CLIP scores.", + "The lightweight \"local-match-global\" matching introduced by SRe2L\nsuccessfully creates a distilled dataset with comprehensive information on the\nfull 224x224 ImageNet-1k. However, this one-sided approach is limited to a\nparticular backbone, layer, and statistics, which limits the generalization of the\ndistilled dataset. We suggest that sufficient and varied\n\"local-match-global\" matchings are more precise and effective than a single one\nand can create a distilled dataset with richer information and\nbetter generalization. We call this perspective \"generalized matching\" and\npropose Generalized Various Backbone and Statistical Matching (G-VBSM) in this\nwork, which aims to create a synthetic dataset with densities, ensuring\nconsistency with the complete dataset across various backbones, layers, and\nstatistics. As experimentally demonstrated, G-VBSM is the first algorithm to\nobtain strong performance across both small-scale and large-scale datasets.", + "As experimentally demonstrated, G-VBSM is the first algorithm to\nobtain strong performance across both small-scale and large-scale datasets.\nSpecifically, G-VBSM achieves a performance of 38.7% on CIFAR-100 with a\n128-width ConvNet, 47.6% on Tiny-ImageNet with ResNet18, and 31.4% on the full\n224x224 ImageNet-1k with ResNet18, under images per class (IPC) 10, 50, and 10,\nrespectively. These results surpass all SOTA methods by margins of 3.9%, 6.5%,\nand 10.1%, respectively.", + "Self-supervised image backbones can be used to address complex 2D tasks\n(e.g., semantic segmentation, object discovery) very efficiently and with\nlittle or no downstream supervision. Ideally, 3D backbones for lidar should be\nable to inherit these properties after distillation of these powerful 2D\nfeatures. The most recent methods for image-to-lidar distillation on autonomous\ndriving data show promising results, obtained thanks to distillation methods\nthat keep improving. Yet, we still notice a large performance gap when\nmeasuring the quality of distilled and fully supervised features by linear\nprobing. In this work, instead of focusing only on the distillation method, we\nstudy the effect of three pillars for distillation: the 3D backbone, the\npretrained 2D backbones, and the pretraining dataset. In particular, thanks to\nour scalable distillation method named ScaLR, we show that scaling the 2D and\n3D backbones and pretraining on diverse datasets leads to a substantial\nimprovement of the feature quality.", + "In particular, thanks to\nour scalable distillation method named ScaLR, we show that scaling the 2D and\n3D backbones and pretraining on diverse datasets leads to a substantial\nimprovement of the feature quality. This allows us to significantly reduce the\ngap between the quality of distilled and fully-supervised 3D features, and to\nimprove the robustness of the pretrained backbones to domain gaps and\nperturbations.", + "How important is it for training and evaluation sets to not have class\noverlap in image retrieval? We revisit Google Landmarks v2 clean, the most\npopular training set, by identifying and removing class overlap with Revisited\nOxford and Paris [34], the most popular evaluation set. 
Comparing the\noriginal and the new RGLDv2-clean on a benchmark of reproduced state-of-the-art\nmethods yields striking findings. Not only is there a dramatic drop in\nperformance, but it is inconsistent across methods, changing the ranking. What\ndoes it take to focus on objects of interest and ignore background clutter when\nindexing? Do we need to train an object detector and the representation\nseparately? Do we need location supervision? We introduce Single-stage\nDetect-to-Retrieve (CiDeR), an end-to-end, single-stage pipeline to detect\nobjects of interest and extract a global image representation. We outperform\nthe previous state-of-the-art on both existing training sets and the new\nRGLDv2-clean. Our dataset is available at\nhttps://github.com/dealicious-inc/RGLDv2-clean.", + "In this paper, we address the challenge of making ViT models more robust to\nunseen affine transformations. Such robustness becomes useful in various\nrecognition tasks such as face recognition when image alignment failures occur.\nWe propose a novel method called KP-RPE, which leverages key points\n(e.g., facial landmarks) to make ViT more resilient to scale, translation, and\npose variations. We begin with the observation that Relative Position Encoding\n(RPE) is a good way to bring affine transform generalization to ViTs. RPE,\nhowever, can only inject the model with prior knowledge that nearby pixels are\nmore important than far pixels. Keypoint RPE (KP-RPE) is an extension of this\nprinciple, where the significance of pixels is not solely dictated by their\nproximity but also by their relative positions to specific keypoints within the\nimage. By anchoring the significance of pixels around keypoints, the model can\nmore effectively retain spatial relationships, even when those relationships\nare disrupted by affine transformations. We show the merit of KP-RPE in face\nand gait recognition.", + "By anchoring the significance of pixels around keypoints, the model can\nmore effectively retain spatial relationships, even when those relationships\nare disrupted by affine transformations. We show the merit of KP-RPE in face\nand gait recognition. The experimental results demonstrate its effectiveness in\nimproving face recognition performance from low-quality images, particularly\nwhere alignment is prone to failure. Code and pre-trained models are available.", + "Object State Changes (OSCs) are pivotal for video understanding. While humans\ncan effortlessly generalize OSC understanding from familiar to unknown objects,\ncurrent approaches are confined to a closed vocabulary. Addressing this gap, we\nintroduce a novel open-world formulation for the video OSC problem. The goal is\nto temporally localize the three stages of an OSC -- the object's initial\nstate, its transitioning state, and its end state -- whether or not the object\nhas been observed during training. Towards this end, we develop VidOSC, a\nholistic learning approach that: (1) leverages text and vision-language models\nfor supervisory signals to obviate manually labeling OSC training data, and (2)\nabstracts fine-grained shared state representations from objects to enhance\ngeneralization. Furthermore, we present HowToChange, the first open-world\nbenchmark for video OSC localization, which offers an order of magnitude\nincrease in the label space and annotation volume compared to the best existing\nbenchmark. 
Experimental results demonstrate the efficacy of our approach, in\nboth traditional closed-world and open-world scenarios.", + "Sampling from the posterior distribution poses a major computational\nchallenge in solving inverse problems using latent diffusion models. Common\nmethods rely on Tweedie's first-order moments, which are known to induce a\nquality-limiting bias. Existing second-order approximations are impractical due\nto prohibitive computational costs, making standard reverse diffusion processes\nintractable for posterior sampling. This paper introduces Second-order Tweedie\nsampler from Surrogate Loss (STSL), a novel sampler that offers efficiency\ncomparable to first-order Tweedie with a tractable reverse process using\nsecond-order approximation. Our theoretical results reveal that the\nsecond-order approximation is lower bounded by our surrogate loss that only\nrequires $O(1)$ compute using the trace of the Hessian, and by the lower bound\nwe derive a new drift term to make the reverse process tractable. Our method\nsurpasses SoTA solvers PSLD and P2L, achieving 4X and 8X reduction in neural\nfunction evaluations, respectively, while notably enhancing sampling quality on\nFFHQ, ImageNet, and COCO benchmarks.", + "Our method\nsurpasses SoTA solvers PSLD and P2L, achieving 4X and 8X reduction in neural\nfunction evaluations, respectively, while notably enhancing sampling quality on\nFFHQ, ImageNet, and COCO benchmarks. In addition, we show STSL extends to\ntext-guided image editing and addresses residual distortions present from\ncorrupted images in leading text-guided image editing methods. To our best\nknowledge, this is the first work to offer an efficient second-order\napproximation in solving inverse problems using latent diffusion and editing\nreal-world images with corruptions.", + "Vector-Quantized (VQ-based) generative models usually consist of two basic\ncomponents, i.e., VQ tokenizers and generative transformers. Prior research\nfocuses on improving the reconstruction fidelity of VQ tokenizers but rarely\nexamines how the improvement in reconstruction affects the generation ability\nof generative transformers. In this paper, we surprisingly find that improving\nthe reconstruction fidelity of VQ tokenizers does not necessarily improve the\ngeneration. Instead, learning to compress semantic features within VQ\ntokenizers significantly improves generative transformers' ability to capture\ntextures and structures. We thus highlight two competing objectives of VQ\ntokenizers for image synthesis: semantic compression and details preservation.\nDifferent from previous work that only pursues better details preservation, we\npropose Semantic-Quantized GAN (SeQ-GAN) with two learning phases to balance\nthe two objectives. In the first phase, we propose a semantic-enhanced\nperceptual loss for better semantic compression. In the second phase, we fix\nthe encoder and codebook, but enhance and finetune the decoder to achieve\nbetter details preservation.", + "In the first phase, we propose a semantic-enhanced\nperceptual loss for better semantic compression. In the second phase, we fix\nthe encoder and codebook, but enhance and finetune the decoder to achieve\nbetter details preservation. The proposed SeQ-GAN greatly improves VQ-based\ngenerative models and surpasses the GAN and Diffusion Models on both\nunconditional and conditional image generation. 
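The STSL abstract above notes that its surrogate loss needs only the trace of the Hessian at O(1) cost per sample. A standard, generic way to estimate such a trace is a Hutchinson estimator built from Hessian-vector products; the sketch below illustrates only that building block and is not the authors' sampler.

```python
import torch

# Hutchinson-style estimate of tr(Hessian of f at x): one Hessian-vector
# product per Rademacher probe, never materialising the full Hessian.
def hessian_trace(f, x, num_probes=8):
    x = x.detach().requires_grad_(True)
    y = f(x)                                                      # scalar output
    (grad,) = torch.autograd.grad(y, x, create_graph=True)
    est = 0.0
    for _ in range(num_probes):
        v = (torch.randint(0, 2, x.shape) * 2 - 1).to(x.dtype)    # +/-1 probe
        (hvp,) = torch.autograd.grad(grad, x, grad_outputs=v, retain_graph=True)
        est = est + (v * hvp).sum()
    return est / num_probes

# Sanity check: f(x) = sum(x^2) has Hessian 2*I, so the trace is 2 * dim.
x = torch.randn(5)
print(hessian_trace(lambda z: (z ** 2).sum(), x))                 # ~10
```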
Our SeQ-GAN (364M) achieves\nFrechet Inception Distance (FID) of 6.25 and Inception Score (IS) of 140.9 on\n256x256 ImageNet generation, a remarkable improvement over VIT-VQGAN (714M),\nwhich obtains 11.2 FID and 97.2 IS.", + "Feature matching is a crucial task in the field of computer vision, which\ninvolves finding correspondences between images. Previous studies achieve\nremarkable performance using learning-based feature comparison. However, the\npervasive presence of matching redundancy between images gives rise to\nunnecessary and error-prone computations in these methods, imposing limitations\non their accuracy. To address this issue, we propose MESA, a novel approach to\nestablish precise area (or region) matches for efficient matching redundancy\nreduction. MESA first leverages the advanced image understanding capability of\nSAM, a state-of-the-art foundation model for image segmentation, to obtain\nimage areas with implicit semantic. Then, a multi-relational graph is proposed\nto model the spatial structure of these areas and construct their scale\nhierarchy. Based on graphical models derived from the graph, the area matching\nis reformulated as an energy minimization task and effectively resolved.\nExtensive experiments demonstrate that MESA yields substantial precision\nimprovement for multiple point matchers in indoor and outdoor downstream tasks,\ne.g. +13.61% for DKM in indoor pose estimation.", + "Image signal processing (ISP) pipeline plays a fundamental role in digital\ncameras, which converts raw Bayer sensor data to RGB images. However,\nISP-generated images usually suffer from imperfections due to the compounded\ndegradations that stem from sensor noises, demosaicing noises, compression\nartifacts, and possibly adverse effects of erroneous ISP hyperparameter\nsettings such as ISO and gamma values. In a general sense, these ISP\nimperfections can be considered as degradations. The highly complex mechanisms\nof ISP degradations, some of which are even unknown, pose great challenges to\nthe generalization capability of deep neural networks (DNN) for image\nrestoration and to their adaptability to downstream tasks. To tackle the\nissues, we propose a novel DNN approach to learn degradation-independent\nrepresentations (DiR) through the refinement of a self-supervised learned\nbaseline representation. The proposed DiR learning technique has remarkable\ndomain generalization capability and consequently, it outperforms\nstate-of-the-art methods across various downstream tasks, including blind image\nrestoration, object detection, and instance segmentation, as verified in our\nexperiments.", + "Accurate representation in media is known to improve the well-being of the\npeople who consume it. Generative image models trained on large web-crawled\ndatasets such as LAION are known to produce images with harmful stereotypes and\nmisrepresentations of cultures. We improve inclusive representation in\ngenerated images by (1) engaging with communities to collect a culturally\nrepresentative dataset that we call the Cross-Cultural Understanding Benchmark\n(CCUB) and (2) proposing a novel Self-Contrastive Fine-Tuning (SCoFT) method\nthat leverages the model's known biases to self-improve. SCoFT is designed to\nprevent overfitting on small datasets, encode only high-level information from\nthe data, and shift the generated distribution away from misrepresentations\nencoded in a pretrained model. 
Our user study, conducted on 51 participants from\n5 different countries based on their self-selected national cultural\naffiliation, shows that fine-tuning on CCUB consistently generates images with\nhigher cultural relevance and fewer stereotypes than the Stable\nDiffusion baseline, and that the results are further improved with our SCoFT technique.", + "In this paper, we showcase the effectiveness of optimizing monocular camera\nposes as a continuous function of time. The camera poses are represented using\nan implicit neural function which maps the given time to the corresponding\ncamera pose. The mapped camera poses are then used for downstream tasks\nwhere joint camera pose optimization is also required. While doing so, the\nnetwork parameters -- that implicitly represent camera poses -- are optimized.\nWe exploit the proposed method in four diverse experimental settings, namely,\n(1) NeRF from noisy poses; (2) NeRF from asynchronous Events; (3) Visual\nSimultaneous Localization and Mapping (vSLAM); and (4) vSLAM with IMUs. In all\nfour settings, the proposed method performs significantly better than the\ncompared baselines and the state-of-the-art methods. Additionally, under the\nassumption of continuous motion, we realize that changes in pose may actually live in a\nmanifold with fewer than 6 degrees of freedom (DOF). We\ncall this low-DOF motion representation the \emph{intrinsic motion} and use\nthe approach in vSLAM settings, showing impressive camera tracking performance.", + "The image matching field has been witnessing a continuous emergence of novel\nlearnable feature matching techniques, with ever-improving performance on\nconventional benchmarks. However, our investigation shows that despite these\ngains, their potential for real-world applications is restricted by their\nlimited generalization capabilities to novel image domains. In this paper, we\nintroduce OmniGlue, the first learnable image matcher that is designed with\ngeneralization as a core principle. OmniGlue leverages broad knowledge from a\nvision foundation model to guide the feature matching process, boosting\ngeneralization to domains not seen at training time. Additionally, we propose a\nnovel keypoint position-guided attention mechanism which disentangles spatial\nand appearance information, leading to enhanced matching descriptors. We\nperform comprehensive experiments on a suite of $7$ datasets with varied image\ndomains, including scene-level, object-centric and aerial images.", + "Additionally, we propose a\nnovel keypoint position-guided attention mechanism which disentangles spatial\nand appearance information, leading to enhanced matching descriptors. We\nperform comprehensive experiments on a suite of $7$ datasets with varied image\ndomains, including scene-level, object-centric and aerial images. OmniGlue's\nnovel components lead to relative gains on unseen domains of $20.9\%$ with\nrespect to a directly comparable reference model, while also outperforming the\nrecent LightGlue method by $9.5\%$ relatively. Code and model can be found at\nhttps://hwjiang1510.github.io/OmniGlue",
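The continuous-time pose abstract above represents camera poses as an implicit neural function of time. A minimal sketch of one possible parameterisation, a small MLP emitting a translation plus an axis-angle rotation converted to a matrix with Rodrigues' formula, is given below; the architecture and pose parameterisation are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

# Sketch of an implicit neural function mapping time t to a camera pose,
# parameterised here as a translation plus an axis-angle rotation.
class TimeToPose(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 6),               # 3 translation + 3 axis-angle
        )

    def forward(self, t):
        out = self.mlp(t.unsqueeze(-1))
        return out[..., :3], out[..., 3:]       # translation, axis-angle

def axis_angle_to_matrix(aa):
    """Rodrigues' formula: axis-angle (..., 3) -> rotation matrices (..., 3, 3)."""
    theta = aa.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    k = aa / theta
    K = torch.zeros(*aa.shape[:-1], 3, 3)
    K[..., 0, 1], K[..., 0, 2] = -k[..., 2], k[..., 1]
    K[..., 1, 0], K[..., 1, 2] = k[..., 2], -k[..., 0]
    K[..., 2, 0], K[..., 2, 1] = -k[..., 1], k[..., 0]
    eye = torch.eye(3).expand_as(K)
    theta = theta.unsqueeze(-1)
    return eye + theta.sin() * K + (1 - theta.cos()) * (K @ K)

net = TimeToPose()
t = torch.linspace(0.0, 1.0, 4)                 # query four timestamps
trans, aa = net(t)
R = axis_angle_to_matrix(aa)
print(trans.shape, R.shape)                     # (4, 3) and (4, 3, 3)
```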
+ "We present a method to reconstruct indoor and outdoor static scene geometry\nand appearance from an omnidirectional video moving in a small circular sweep.\nThis setting is challenging because of the small baseline and large depth\nranges, making it difficult to find ray crossings. To better constrain the\noptimization, we estimate geometry as a signed distance field within a\nspherical binoctree data structure and use a complementary efficient tree\ntraversal strategy based on a breadth-first search for sampling. Unlike regular\ngrids or trees, the shape of this structure well matches the camera setting,\ncreating a better memory-quality trade-off. From an initial depth estimate, the\nbinoctree is adaptively subdivided throughout the optimization; previous\nmethods use a fixed depth that leaves the scene undersampled. In comparison\nwith three neural optimization methods and two non-neural methods, ours shows\ndecreased geometry error on average, especially in a detailed scene, while\nsignificantly reducing the required number of voxels to represent such details.", + "Recovering ghost-free High Dynamic Range (HDR) images from multiple Low\nDynamic Range (LDR) images becomes challenging when the LDR images exhibit\nsaturation and significant motion. Recent Diffusion Models (DMs) have been\nintroduced in the HDR imaging field, demonstrating promising performance,\nparticularly in achieving visually perceptible results compared to previous\nDNN-based methods. However, DMs require extensive iterations with large models\nto estimate entire images, resulting in inefficiency that hinders their\npractical application. To address this challenge, we propose the Low-Frequency\naware Diffusion (LF-Diff) model for ghost-free HDR imaging. The key idea of\nLF-Diff is to implement the DMs in a highly compacted latent space and\nintegrate them into a regression-based model to enhance the details of\nreconstructed images. Specifically, as low-frequency information is closely\nrelated to human visual perception, we propose to utilize DMs to create compact\nlow-frequency priors for the reconstruction process. In addition, to take full\nadvantage of the above low-frequency priors, the Dynamic HDR Reconstruction\nNetwork (DHRNet) is carried out in a regression-based manner to obtain final\nHDR images.", + "In addition, to take full\nadvantage of the above low-frequency priors, the Dynamic HDR Reconstruction\nNetwork (DHRNet) is carried out in a regression-based manner to obtain final\nHDR images. Extensive experiments conducted on synthetic and real-world\nbenchmark datasets demonstrate that our LF-Diff performs favorably against\nseveral state-of-the-art methods and is 10$\times$ faster than previous\nDM-based methods.", + "A fundamental characteristic common to both human vision and natural language\nis their compositional nature. Yet, despite the performance gains contributed\nby large vision and language pretraining, recent investigations find that\nmost, if not all, of our state-of-the-art vision-language models struggle at\ncompositionality. They are unable to distinguish between images of \"a girl in\nwhite facing a man in black\" and \"a girl in black facing a man in white\".\nMoreover, prior work suggests that compositionality doesn't arise with scale:\nlarger model sizes or training data don't help. This paper develops a new\niterated training algorithm that incentivizes compositionality. We draw on\ndecades of cognitive science research that identifies cultural transmission, the\nneed to teach a new generation, as a necessary inductive prior that incentivizes\nhumans to develop compositional languages. 
Specifically, we reframe\nvision-language contrastive learning as the Lewis Signaling Game between a\nvision agent and a language agent, and operationalize cultural transmission by\niteratively resetting one of the agents' weights during training. After every\niteration, this training paradigm induces representations that become \"easier\nto learn\", a property of compositional languages: e.g.", + "After every\niteration, this training paradigm induces representations that become \"easier\nto learn\", a property of compositional languages: e.g. our model trained on\nCC3M and CC12M improves standard CLIP by 4.7% and 4.0%, respectively, in the\nSugarCrepe benchmark.", + "Tracking using bio-inspired event cameras has drawn increasing attention\nin recent years. Existing works either utilize aligned RGB and event data for\naccurate tracking or directly learn an event-based tracker. The first category\nincurs a higher inference cost, and the second one may be easily influenced by\nnoisy events or sparse spatial resolution. In this paper, we propose a novel\nhierarchical knowledge distillation framework that can fully utilize\nmulti-modal / multi-view information during training to facilitate knowledge\ntransfer, enabling us to achieve high-speed and low-latency visual tracking\nduring testing by using only event signals. Specifically, a teacher\nTransformer-based multi-modal tracking framework is first trained by feeding\nthe RGB frame and event stream simultaneously. Then, we design a new\nhierarchical knowledge distillation strategy which includes pairwise\nsimilarity, feature representation, and response map-based knowledge\ndistillation to guide the learning of the student Transformer network.\nMoreover, since existing event-based tracking datasets are all low-resolution\n($346 \times 260$), we propose the first large-scale high-resolution ($1280\n\times 720$) dataset named EventVOT.", + "Moreover, since existing event-based tracking datasets are all low-resolution\n($346 \times 260$), we propose the first large-scale high-resolution ($1280\n\times 720$) dataset named EventVOT. It contains 1141 videos and covers a wide\nrange of categories such as pedestrians, vehicles, UAVs, ping pongs, etc.\nExtensive experiments on both the low-resolution (FE240hz, VisEvent, COESOT) and\nour newly proposed high-resolution EventVOT datasets fully validate the\neffectiveness of our proposed method. The dataset, evaluation toolkit, and\nsource code are available at\n\url{https://github.com/Event-AHU/EventVOT_Benchmark}",
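As a rough illustration of the cultural-transmission training loop described a few entries above, the sketch below runs CLIP-style contrastive learning between a vision agent and a language agent and re-initialises the language agent at the start of each generation. Which agent is reset, how often, and all encoders and data here are assumptions; this is not the paper's training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CLIP-style contrastive training in which the language agent is
# re-initialised at the start of every "generation". Encoders and data are
# dummies; this only illustrates the iterated-resetting mechanic.
def make_encoder(in_dim, out_dim=32):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

def clip_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

vision_agent = make_encoder(512)
images, texts = torch.randn(64, 512), torch.randn(64, 256)   # dummy features

for generation in range(3):
    language_agent = make_encoder(256)                        # reset one agent
    opt = torch.optim.Adam(list(vision_agent.parameters()) +
                           list(language_agent.parameters()), lr=1e-3)
    for step in range(100):
        opt.zero_grad()
        loss = clip_loss(vision_agent(images), language_agent(texts))
        loss.backward()
        opt.step()
```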
+ "Temporal Action Detection (TAD) aims to identify the action boundaries and\nthe corresponding category within untrimmed videos. Inspired by the success of\nDETR in object detection, several methods have adapted the query-based\nframework to the TAD task. However, these approaches primarily followed DETR to\npredict actions at the instance level (i.e., identify each action by its center\npoint), leading to sub-optimal boundary localization. To address this issue, we\npropose a new Dual-level query-based TAD framework, namely DualDETR, to detect\nactions at both the instance level and the boundary level. Decoding at different\nlevels requires semantics of different granularity; therefore, we introduce a\ntwo-branch decoding structure. This structure builds distinctive decoding\nprocesses for different levels, facilitating explicit capture of temporal cues\nand semantics at each level. On top of the two-branch design, we present a\njoint query initialization strategy to align queries from both levels.\nSpecifically, we leverage encoder proposals to match queries from each level in\na one-to-one manner. Then, the matched queries are initialized using position\nand content priors from the matched action proposal.", + "Specifically, we leverage encoder proposals to match queries from each level in\na one-to-one manner. Then, the matched queries are initialized using position\nand content priors from the matched action proposal. The aligned dual-level\nqueries can refine the matched proposal with complementary cues during\nsubsequent decoding. We evaluate DualDETR on three challenging multi-label TAD\nbenchmarks. The experimental results demonstrate the superior performance of\nDualDETR over the existing state-of-the-art methods, achieving a substantial\nimprovement under det-mAP and delivering impressive results under seg-mAP.", + "Recent Text-to-Image (T2I) generation models such as Stable Diffusion and\nImagen have made significant progress in generating high-resolution images\nbased on text descriptions. However, many generated images still suffer from\nissues such as artifacts/implausibility, misalignment with text descriptions,\nand low aesthetic quality. Inspired by the success of Reinforcement Learning\nwith Human Feedback (RLHF) for large language models, prior works collected\nhuman-provided scores as feedback on generated images and trained a reward\nmodel to improve the T2I generation. In this paper, we enrich the feedback\nsignal by (i) marking image regions that are implausible or misaligned with the\ntext, and (ii) annotating which words in the text prompt are misrepresented or\nmissing on the image. We collect such rich human feedback on 18K generated\nimages (RichHF-18K) and train a multimodal transformer to predict the rich\nfeedback automatically.", + "We collect such rich human feedback on 18K generated\nimages (RichHF-18K) and train a multimodal transformer to predict the rich\nfeedback automatically. We show that the predicted rich human feedback can be\nleveraged to improve image generation, for example, by selecting high-quality\ntraining data to finetune and improve the generative models, or by creating\nmasks with predicted heatmaps to inpaint the problematic regions. Notably, the\nimprovements generalize to models (Muse) beyond those used to generate the\nimages on which human feedback data were collected (Stable Diffusion variants).\nThe RichHF-18K data set will be released in our GitHub repository:\nhttps://github.com/google-research/google-research/tree/master/richhf_18k.", + "Panoramic video has recently attracted growing interest in both research and\napplications, courtesy of its immersive experience. Due to the high cost of capturing\n360-degree panoramic videos, generating desirable panoramic videos from prompts is\nurgently needed. Lately, emerging text-to-video (T2V) diffusion methods have\ndemonstrated notable effectiveness in standard video generation. However, due to\nthe significant gap in content and motion patterns between panoramic and\nstandard videos, these methods encounter challenges in yielding satisfactory\n360-degree panoramic videos. In this paper, we propose a pipeline named\n360-Degree Video Diffusion model (360DVD) for generating 360-degree panoramic\nvideos based on the given prompts and motion conditions. 
Specifically, we\nintroduce a lightweight 360-Adapter accompanied by 360 Enhancement Techniques\nto transform pre-trained T2V models for panorama video generation. We further\npropose a new panorama dataset named WEB360 consisting of panoramic video-text\npairs for training 360DVD, addressing the absence of captioned panoramic video\ndatasets. Extensive experiments demonstrate the superiority and effectiveness\nof 360DVD for panorama video generation. Our project page is at\nhttps://akaneqwq.github.io/360DVD/.", + "Pose regression networks predict the camera pose of a query image relative to\na known environment. Within this family of methods, absolute pose regression\n(APR) has recently shown promising accuracy in the range of a few centimeters\nin position error. APR networks encode the scene geometry implicitly in their\nweights. To achieve high accuracy, they require vast amounts of training data\nthat, realistically, can only be created using novel view synthesis in a\ndays-long process. This process has to be repeated for each new scene again and\nagain. We present a new approach to pose regression, map-relative pose\nregression (marepo), that satisfies the data hunger of the pose regression\nnetwork in a scene-agnostic fashion. We condition the pose regressor on a\nscene-specific map representation such that its pose predictions are relative\nto the scene map. This allows us to train the pose regressor across hundreds of\nscenes to learn the generic relation between a scene-specific map\nrepresentation and the camera pose. Our map-relative pose regressor can be\napplied to new map representations immediately or after mere minutes of\nfine-tuning for the highest accuracy.", + "This allows us to train the pose regressor across hundreds of\nscenes to learn the generic relation between a scene-specific map\nrepresentation and the camera pose. Our map-relative pose regressor can be\napplied to new map representations immediately or after mere minutes of\nfine-tuning for the highest accuracy. Our approach outperforms previous pose\nregression methods by far on two public datasets, indoor and outdoor. Code is\navailable: https://nianticlabs.github.io/marepo", + "Implicit neural SLAM has achieved remarkable progress recently. Nevertheless,\nexisting methods face significant challenges in non-ideal scenarios, such as\nmotion blur or lighting variation, which often leads to issues like convergence\nfailures, localization drifts, and distorted mapping. To address these\nchallenges, we propose EN-SLAM, the first event-RGBD implicit neural SLAM\nframework, which effectively leverages the high rate and high dynamic range\nadvantages of event data for tracking and mapping. Specifically, EN-SLAM\nproposes a differentiable CRF (Camera Response Function) rendering technique to\ngenerate distinct RGB and event camera data via a shared radiance field, which\nis optimized by learning a unified implicit representation with the captured\nevent and RGBD supervision. Moreover, based on the temporal difference property\nof events, we propose a temporal aggregating optimization strategy for the\nevent joint tracking and global bundle adjustment, capitalizing on the\nconsecutive difference constraints of events, significantly enhancing tracking\naccuracy and robustness. 
Finally, we construct the simulated dataset\nDEV-Indoors and real captured dataset DEV-Reals containing 6 scenes, 17\nsequences with practical motion blur and lighting changes for evaluations.", + "Finally, we construct the simulated dataset\nDEV-Indoors and real captured dataset DEV-Reals containing 6 scenes, 17\nsequences with practical motion blur and lighting changes for evaluations.\nExperimental results show that our method outperforms the SOTA methods in both\ntracking ATE and mapping ACC with a real-time 17 FPS in various challenging\nenvironments. Project page: https://delinqu.github.io/EN-SLAM.", + "In this paper, we introduce a novel approach that harnesses both 2D and 3D\nattentions to enable highly accurate depth completion without requiring\niterative spatial propagations. Specifically, we first enhance a baseline\nconvolutional depth completion model by applying attention to 2D features in\nthe bottleneck and skip connections. This effectively improves the performance\nof this simple network and sets it on par with the latest, complex\ntransformer-based models. Leveraging the initial depths and features from this\nnetwork, we uplift the 2D features to form a 3D point cloud and construct a 3D\npoint transformer to process it, allowing the model to explicitly learn and\nexploit 3D geometric features. In addition, we propose normalization techniques\nto process the point cloud, which improves learning and leads to better\naccuracy than directly using point transformers off the shelf. Furthermore, we\nincorporate global attention on downsampled point cloud features, which enables\nlong-range context while still being computationally feasible.", + "In addition, we propose normalization techniques\nto process the point cloud, which improves learning and leads to better\naccuracy than directly using point transformers off the shelf. Furthermore, we\nincorporate global attention on downsampled point cloud features, which enables\nlong-range context while still being computationally feasible. We evaluate our\nmethod, DeCoTR, on established depth completion benchmarks, including NYU Depth\nV2 and KITTI, showcasing that it sets new state-of-the-art performance. We\nfurther conduct zero-shot evaluations on ScanNet and DDAD benchmarks and\ndemonstrate that DeCoTR has superior generalizability compared to existing\napproaches.", + "When building classification systems with demographic fairness\nconsiderations, there are two objectives to satisfy: 1) maximizing utility for\nthe specific task and 2) ensuring fairness w.r.t. a known demographic\nattribute. These objectives often compete, so optimizing both can lead to a\ntrade-off between utility and fairness. While existing works acknowledge the\ntrade-offs and study their limits, two questions remain unanswered: 1) What are\nthe optimal trade-offs between utility and fairness? and 2) How can we\nnumerically quantify these trade-offs from data for a desired prediction task\nand demographic attribute of interest? This paper addresses these questions. We\nintroduce two utility-fairness trade-offs: the Data-Space and Label-Space\nTrade-off. The trade-offs reveal three regions within the utility-fairness\nplane, delineating what is fully and partially possible and impossible. We\npropose U-FaTE, a method to numerically quantify the trade-offs for a given\nprediction task and group fairness definition from data samples. 
Based on the\ntrade-offs, we introduce a new scheme for evaluating representations.", + "We\npropose U-FaTE, a method to numerically quantify the trade-offs for a given\nprediction task and group fairness definition from data samples. Based on the\ntrade-offs, we introduce a new scheme for evaluating representations. An\nextensive evaluation of fair representation learning methods and\nrepresentations from over 1000 pre-trained models revealed that most current\napproaches are far from the estimated and achievable fairness-utility\ntrade-offs across multiple datasets and prediction tasks.", + "Test-time adaptation (TTA) aims to adapt a pre-trained model to a new test\ndomain without access to source data after deployment. Existing approaches\ntypically rely on self-training with pseudo-labels since ground-truth cannot be\nobtained from test data. Although the quality of pseudo labels is important for\nstable and accurate long-term adaptation, it has not been previously addressed.\nIn this work, we propose DPLOT, a simple yet effective TTA framework that\nconsists of two components: (1) domain-specific block selection and (2)\npseudo-label generation using paired-view images. Specifically, we select\nblocks that involve domain-specific feature extraction and train these blocks\nby entropy minimization. After blocks are adjusted for current test domain, we\ngenerate pseudo-labels by averaging given test images and corresponding flipped\ncounterparts. By simply using flip augmentation, we prevent a decrease in the\nquality of the pseudo-labels, which can be caused by the domain gap resulting\nfrom strong augmentation.", + "After blocks are adjusted for current test domain, we\ngenerate pseudo-labels by averaging given test images and corresponding flipped\ncounterparts. By simply using flip augmentation, we prevent a decrease in the\nquality of the pseudo-labels, which can be caused by the domain gap resulting\nfrom strong augmentation. Our experimental results demonstrate that DPLOT\noutperforms previous TTA methods in CIFAR10-C, CIFAR100-C, and ImageNet-C\nbenchmarks, reducing error by up to 5.4%, 9.1%, and 2.9%, respectively. Also,\nwe provide an extensive analysis to demonstrate effectiveness of our framework.\nCode is available at\nhttps://github.com/gist-ailab/domain-specific-block-selection-and-paired-view-pseudo-labeling-for-online-TTA.", + "We present a neural radiance field method for urban-scale semantic and\nbuilding-level instance segmentation from aerial images by lifting noisy 2D\nlabels to 3D. This is a challenging problem due to two primary reasons.\nFirstly, objects in urban aerial images exhibit substantial variations in size,\nincluding buildings, cars, and roads, which pose a significant challenge for\naccurate 2D segmentation. Secondly, the 2D labels generated by existing\nsegmentation methods suffer from the multi-view inconsistency problem,\nespecially in the case of aerial images, where each image captures only a small\nportion of the entire scene. To overcome these limitations, we first introduce\na scale-adaptive semantic label fusion strategy that enhances the segmentation\nof objects of varying sizes by combining labels predicted from different\naltitudes, harnessing the novel-view synthesis capabilities of NeRF. We then\nintroduce a novel cross-view instance label grouping strategy based on the 3D\nscene representation to mitigate the multi-view inconsistency problem in the 2D\ninstance labels. 
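Two mechanisms in the test-time adaptation abstract above, entropy minimisation and pseudo-labels from an image and its flipped counterpart, are simple enough to sketch generically. The version below averages the two predictions (one plausible reading of the abstract) and omits the domain-specific block selection; the classifier, losses, and weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of flip-averaged pseudo-labels plus entropy minimisation for
# test-time adaptation. Block selection is omitted; the model is a placeholder.
def entropy_loss(logits):
    p = logits.softmax(dim=-1)
    return -(p * p.log().clamp_min(-100)).sum(dim=-1).mean()

def flip_averaged_pseudo_labels(model, x):
    with torch.no_grad():
        p = model(x).softmax(dim=-1)
        p_flip = model(torch.flip(x, dims=[3])).softmax(dim=-1)  # flip width axis
    return (0.5 * (p + p_flip)).argmax(dim=-1)

def adaptation_step(model, x, optimizer, lam=1.0):
    pseudo = flip_averaged_pseudo_labels(model, x)
    logits = model(x)
    loss = entropy_loss(logits) + lam * F.cross_entropy(logits, pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(8, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
print(adaptation_step(model, torch.rand(16, 3, 32, 32), opt))
```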
Furthermore, we exploit multi-view reconstructed depth priors\nto improve the geometric quality of the reconstructed radiance field, resulting\nin enhanced segmentation results.", + "Furthermore, we exploit multi-view reconstructed depth priors\nto improve the geometric quality of the reconstructed radiance field, resulting\nin enhanced segmentation results. Experiments on multiple real-world\nurban-scale datasets demonstrate that our approach outperforms existing\nmethods, highlighting its effectiveness.", + "We introduce SAOR, a novel approach for estimating the 3D shape, texture, and\nviewpoint of an articulated object from a single image captured in the wild.\nUnlike prior approaches that rely on pre-defined category-specific 3D templates\nor tailored 3D skeletons, SAOR learns to articulate shapes from single-view\nimage collections with a skeleton-free part-based model without requiring any\n3D object shape priors. To prevent ill-posed solutions, we propose a\ncross-instance consistency loss that exploits disentangled object shape\ndeformation and articulation. This is helped by a new silhouette-based sampling\nmechanism to enhance viewpoint diversity during training. Our method only\nrequires estimated object silhouettes and relative depth maps from\noff-the-shelf pre-trained networks during training. At inference time, given a\nsingle-view image, it efficiently outputs an explicit mesh representation. We\nobtain improved qualitative and quantitative results on challenging quadruped\nanimals compared to relevant existing work.", + "Referring multi-object tracking (RMOT) aims to track multiple objects based\non input textual descriptions. Previous works realize it by simply integrating\nan extra textual module into the multi-object tracker. However, they typically\nneed to retrain the entire framework and have difficulties in optimization. In\nthis work, we propose an insertable Knowledge Unification Network, termed iKUN,\nto enable communication with off-the-shelf trackers in a plug-and-play manner.\nConcretely, a knowledge unification module (KUM) is designed to adaptively\nextract visual features based on textual guidance. Meanwhile, to improve the\nlocalization accuracy, we present a neural version of Kalman filter (NKF) to\ndynamically adjust process noise and observation noise based on the current\nmotion status. Moreover, to address the problem of open-set long-tail\ndistribution of textual descriptions, a test-time similarity calibration method\nis proposed to refine the confidence score with pseudo frequency. Extensive\nexperiments on Refer-KITTI dataset verify the effectiveness of our framework.", + "Moreover, to address the problem of open-set long-tail\ndistribution of textual descriptions, a test-time similarity calibration method\nis proposed to refine the confidence score with pseudo frequency. Extensive\nexperiments on Refer-KITTI dataset verify the effectiveness of our framework.\nFinally, to speed up the development of RMOT, we also contribute a more\nchallenging dataset, Refer-Dance, by extending public DanceTrack dataset with\nmotion and dressing descriptions. The codes and dataset are available at\nhttps://github.com/dyhBUPT/iKUN.", + "We propose RoHM, an approach for robust 3D human motion reconstruction from\nmonocular RGB(-D) videos in the presence of noise and occlusions. Most previous\napproaches either train neural networks to directly regress motion in 3D or\nlearn data-driven motion priors and combine them with optimization at test\ntime. 
The former do not recover globally coherent motion and fail under\nocclusions; the latter are time-consuming, prone to local minima, and require\nmanual tuning. To overcome these shortcomings, we exploit the iterative,\ndenoising nature of diffusion models. RoHM is a novel diffusion-based motion\nmodel that, conditioned on noisy and occluded input data, reconstructs\ncomplete, plausible motions in consistent global coordinates. Given the\ncomplexity of the problem -- requiring one to address different tasks\n(denoising and infilling) in different solution spaces (local and global\nmotion) -- we decompose it into two sub-tasks and learn two models, one for\nglobal trajectory and one for local motion. To capture the correlations between\nthe two, we then introduce a novel conditioning module, combining it with an\niterative inference scheme.", + "To capture the correlations between\nthe two, we then introduce a novel conditioning module, combining it with an\niterative inference scheme. We apply RoHM to a variety of tasks -- from motion\nreconstruction and denoising to spatial and temporal infilling. Extensive\nexperiments on three popular datasets show that our method outperforms\nstate-of-the-art approaches qualitatively and quantitatively, while being\nfaster at test time. The code is available at\nhttps://sanweiliti.github.io/ROHM/ROHM.html.", + "There has been significant attention to the research on dense video\ncaptioning, which aims to automatically localize and caption all events within\nuntrimmed video. Several studies introduce methods by designing dense video\ncaptioning as a multitasking problem of event localization and event captioning\nto consider inter-task relations. However, addressing both tasks using only\nvisual input is challenging due to the lack of semantic content. In this study,\nwe address this by proposing a novel framework inspired by the cognitive\ninformation processing of humans. Our model utilizes external memory to\nincorporate prior knowledge. The memory retrieval method is proposed with\ncross-modal video-to-text matching. To effectively incorporate retrieved text\nfeatures, the versatile encoder and the decoder with visual and textual\ncross-attention modules are designed. Comparative experiments have been\nconducted to show the effectiveness of the proposed method on ActivityNet\nCaptions and YouCook2 datasets. Experimental results show promising performance\nof our model without extensive pretraining from a large video dataset.", + "Recently, One-stage Weakly Supervised Semantic Segmentation (WSSS) with\nimage-level labels has gained increasing interest due to simplification over\nits cumbersome multi-stage counterpart. Limited by the inherent ambiguity of\nClass Activation Map (CAM), we observe that one-stage pipelines often encounter\nconfirmation bias caused by incorrect CAM pseudo-labels, impairing their final\nsegmentation performance. Although recent works discard many unreliable\npseudo-labels to implicitly alleviate this issue, they fail to exploit\nsufficient supervision for their models. To this end, we propose a dual student\nframework with trustworthy progressive learning (DuPL). Specifically, we\npropose a dual student network with a discrepancy loss to yield diverse CAMs\nfor each sub-net. The two sub-nets generate supervision for each other,\nmitigating the confirmation bias caused by learning their own incorrect\npseudo-labels. 
In this process, we progressively introduce more trustworthy\npseudo-labels to be involved in the supervision through dynamic threshold\nadjustment with an adaptive noise filtering strategy. Moreover, we believe that\nevery pixel, even discarded from supervision due to its unreliability, is\nimportant for WSSS.", + "In this process, we progressively introduce more trustworthy\npseudo-labels to be involved in the supervision through dynamic threshold\nadjustment with an adaptive noise filtering strategy. Moreover, we believe that\nevery pixel, even discarded from supervision due to its unreliability, is\nimportant for WSSS. Thus, we develop consistency regularization on these\ndiscarded regions, providing supervision of every pixel. Experiment results\ndemonstrate the superiority of the proposed DuPL over the recent\nstate-of-the-art alternatives on PASCAL VOC 2012 and MS COCO datasets. Code is\navailable at https://github.com/Wu0409/DuPL.", + "Dynamic human rendering from video sequences has achieved remarkable progress\nby formulating the rendering as a mapping from static poses to human images.\nHowever, existing methods focus on the human appearance reconstruction of every\nsingle frame while the temporal motion relations are not fully explored. In\nthis paper, we propose a new 4D motion modeling paradigm, SurMo, that jointly\nmodels the temporal dynamics and human appearances in a unified framework with\nthree key designs: 1) Surface-based motion encoding that models 4D human\nmotions with an efficient compact surface-based triplane. It encodes both\nspatial and temporal motion relations on the dense surface manifold of a\nstatistical body template, which inherits body topology priors for\ngeneralizable novel view synthesis with sparse training observations. 2)\nPhysical motion decoding that is designed to encourage physical motion learning\nby decoding the motion triplane features at timestep t to predict both spatial\nderivatives and temporal derivatives at the next timestep t+1 in the training\nstage. 3) 4D appearance decoding that renders the motion triplanes into images\nby an efficient volumetric surface-conditioned renderer that focuses on the\nrendering of body surfaces with motion learning conditioning.", + "3) 4D appearance decoding that renders the motion triplanes into images\nby an efficient volumetric surface-conditioned renderer that focuses on the\nrendering of body surfaces with motion learning conditioning. Extensive\nexperiments validate the state-of-the-art performance of our new paradigm and\nillustrate the expressiveness of surface-based motion triplanes for rendering\nhigh-fidelity view-consistent humans with fast motions and even\nmotion-dependent shadows. Our project page is at:\nhttps://taohuumd.github.io/projects/SurMo/", + "Class-Incremental Learning (CIL) trains a model to continually recognize new\nclasses from non-stationary data while retaining learned knowledge. A major\nchallenge of CIL arises when applying to real-world data characterized by\nnon-uniform distribution, which introduces a dual imbalance problem involving\n(i) disparities between stored exemplars of old tasks and new class data\n(inter-phase imbalance), and (ii) severe class imbalances within each\nindividual task (intra-phase imbalance). We show that this dual imbalance issue\ncauses skewed gradient updates with biased weights in FC layers, thus inducing\nover/under-fitting and catastrophic forgetting in CIL. 
Our method addresses it\nby reweighting the gradients towards balanced optimization and unbiased\nclassifier learning. Additionally, we observe imbalanced forgetting where\nparadoxically the instance-rich classes suffer higher performance degradation\nduring CIL due to a larger amount of training data becoming unavailable in\nsubsequent learning phases. To tackle this, we further introduce a\ndistribution-aware knowledge distillation loss to mitigate forgetting by\naligning output logits proportionally with the distribution of lost training\ndata.", + "To tackle this, we further introduce a\ndistribution-aware knowledge distillation loss to mitigate forgetting by\naligning output logits proportionally with the distribution of lost training\ndata. We validate our method on CIFAR-100, ImageNetSubset, and Food101 across\nvarious evaluation protocols and demonstrate consistent improvements compared\nto existing works, showing great potential to apply CIL in real-world scenarios\nwith enhanced robustness and effectiveness.", + "Despite diffusion models having shown powerful abilities to generate\nphotorealistic images, generating videos that are realistic and diverse still\nremains in its infancy. One of the key reasons is that current methods\nintertwine spatial content and temporal dynamics together, leading to a notably\nincreased complexity of text-to-video generation (T2V). In this work, we\npropose HiGen, a diffusion model-based method that improves performance by\ndecoupling the spatial and temporal factors of videos from two perspectives,\ni.e., structure level and content level. At the structure level, we decompose\nthe T2V task into two steps, including spatial reasoning and temporal\nreasoning, using a unified denoiser. Specifically, we generate spatially\ncoherent priors using text during spatial reasoning and then generate\ntemporally coherent motions from these priors during temporal reasoning. At the\ncontent level, we extract two subtle cues from the content of the input video\nthat can express motion and appearance changes, respectively. These two cues\nthen guide the model's training for generating videos, enabling flexible\ncontent variations and enhancing temporal stability.", + "At the\ncontent level, we extract two subtle cues from the content of the input video\nthat can express motion and appearance changes, respectively. These two cues\nthen guide the model's training for generating videos, enabling flexible\ncontent variations and enhancing temporal stability. Through the decoupled\nparadigm, HiGen can effectively reduce the complexity of this task and generate\nrealistic videos with semantics accuracy and motion stability. Extensive\nexperiments demonstrate the superior performance of HiGen over the\nstate-of-the-art T2V methods.", + "Recent advancements in large-scale pre-trained text-to-image models have led\nto remarkable progress in semantic image synthesis. Nevertheless, synthesizing\nhigh-quality images with consistent semantics and layout remains a challenge.\nIn this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE)\nthat harnesses pre-trained models to alleviate the aforementioned issues.\nSpecifically, we first employ the layout control map to faithfully represent\nlayouts in the feature space. Subsequently, we combine the layout and semantic\nfeatures in a timestep-adaptive manner to synthesize images with realistic\ndetails. During fine-tuning, we propose the Semantic Alignment (SA) loss to\nfurther enhance layout alignment. 
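One loose, generic reading of the distribution-aware knowledge distillation loss mentioned above is a per-class weighted KD term in which classes that lost more training data receive proportionally more weight. The sketch below encodes only that reading; the weighting scheme, temperature, and names are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

# Generic per-class weighted distillation: KL terms between teacher (old model)
# and student (new model) predictions are weighted by the share of training
# data each class has lost.
def distribution_aware_kd(student_logits, teacher_logits, lost_counts, T=2.0):
    w = lost_counts / lost_counts.sum()                      # lost-data distribution
    p_t = F.softmax(teacher_logits / T, dim=-1)
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    kl = p_t * (p_t.clamp_min(1e-8).log() - log_p_s)         # (batch, classes)
    return (w * kl).sum(dim=-1).mean() * (T * T)

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
lost_counts = torch.tensor([500., 300., 50., 20., 10., 5., 200., 80., 40., 15.])
loss = distribution_aware_kd(student_logits, teacher_logits, lost_counts)
loss.backward()
```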
Additionally, we introduce the Layout-Free\nPrior Preservation (LFP) loss, which leverages unlabeled data to maintain the\npriors of pre-trained models, thereby improving the visual quality and semantic\nconsistency of synthesized images. Extensive experiments demonstrate that our\napproach performs favorably in terms of visual quality, semantic consistency,\nand layout alignment. The source code and model are available at\nhttps://github.com/cszy98/PLACE/tree/main.", + "Self-supervised denoising has attracted widespread attention due to its\nability to train without clean images. However, noise in real-world scenarios\nis often spatially correlated, which causes many self-supervised algorithms\nthat assume pixel-wise independent noise to perform poorly. Recent works have\nattempted to break noise correlation with downsampling or neighborhood masking.\nHowever, denoising on downsampled subgraphs can lead to aliasing effects and\nloss of details due to a lower sampling rate. Furthermore, the neighborhood\nmasking methods either come with high computational complexity or do not\nconsider local spatial preservation during inference. Through the analysis of\nexisting methods, we point out that the key to obtaining high-quality and\ntexture-rich results in real-world self-supervised denoising tasks is to train\nat the original input resolution structure and use asymmetric operations during\ntraining and inference. Based on this, we propose Asymmetric Tunable Blind-Spot\nNetwork (AT-BSN), where the blind-spot size can be freely adjusted, thus better\nbalancing noise correlation suppression and image local spatial destruction\nduring training and inference.", + "Based on this, we propose Asymmetric Tunable Blind-Spot\nNetwork (AT-BSN), where the blind-spot size can be freely adjusted, thus better\nbalancing noise correlation suppression and image local spatial destruction\nduring training and inference. In addition, we regard the pre-trained AT-BSN as\na meta-teacher network capable of generating various teacher networks by\nsampling different blind-spots. We propose a blind-spot based multi-teacher\ndistillation strategy to distill a lightweight network, significantly improving\nperformance. Experimental results on multiple datasets prove that our method\nachieves state-of-the-art, and is superior to other self-supervised algorithms\nin terms of computational overhead and visual effects.", + "We present the first application of 3D Gaussian Splatting in monocular SLAM,\nthe most fundamental but the hardest setup for Visual SLAM. Our method, which\nruns live at 3fps, utilises Gaussians as the only 3D representation, unifying\nthe required representation for accurate, efficient tracking, mapping, and\nhigh-quality rendering. Designed for challenging monocular settings, our\napproach is seamlessly extendable to RGB-D SLAM when an external depth sensor\nis available. Several innovations are required to continuously reconstruct 3D\nscenes with high fidelity from a live camera. First, to move beyond the\noriginal 3DGS algorithm, which requires accurate poses from an offline\nStructure from Motion (SfM) system, we formulate camera tracking for 3DGS using\ndirect optimisation against the 3D Gaussians, and show that this enables fast\nand robust tracking with a wide basin of convergence. 
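A minimal sketch of what "direct optimisation against the 3D Gaussians" can look like: refine an SE(3) increment on the previous camera pose by backpropagating a photometric error through a differentiable rasteriser. The `render` callable is a placeholder for such a rasteriser, and the L1 loss, Adam optimiser, and iteration count are assumptions rather than the paper's recipe.

```python
import torch

def se3_exp(xi):
    """Map a 6-vector twist (v, w) to a 4x4 SE(3) matrix via the matrix exponential."""
    v, w = xi[:3], xi[3:]
    z = xi.new_zeros(())
    hat = torch.stack([
        torch.stack([z, -w[2], w[1], v[0]]),
        torch.stack([w[2], z, -w[0], v[1]]),
        torch.stack([-w[1], w[0], z, v[2]]),
        torch.stack([z, z, z, z]),
    ])
    return torch.linalg.matrix_exp(hat)

def track_frame(render, gaussians, image, T_prev, iters=50, lr=1e-3):
    """Direct photometric pose optimisation: refine the previous camera pose so
    the rendered Gaussians match the live frame. `render(gaussians, T)` stands
    in for a differentiable 3DGS rasteriser returning an image tensor."""
    xi = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([xi], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        T = se3_exp(xi) @ T_prev                              # left-multiplied pose increment
        loss = (render(gaussians, T) - image).abs().mean()    # L1 photometric error
        loss.backward()
        opt.step()
    return (se3_exp(xi) @ T_prev).detach()
```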
Second, by utilising the\nexplicit nature of the Gaussians, we introduce geometric verification and\nregularisation to handle the ambiguities occurring in incremental 3D dense\nreconstruction.", + "Second, by utilising the\nexplicit nature of the Gaussians, we introduce geometric verification and\nregularisation to handle the ambiguities occurring in incremental 3D dense\nreconstruction. Finally, we introduce a full SLAM system which not only\nachieves state-of-the-art results in novel view synthesis and trajectory\nestimation but also reconstruction of tiny and even transparent objects.", + "Understanding long, real-world videos requires modeling of long-range visual\ndependencies. To this end, we explore video-first architectures, building on\nthe common paradigm of transferring large-scale, image--text models to video\nvia shallow temporal fusion. However, we expose two limitations to the\napproach: (1) decreased spatial capabilities, likely due to poor\nvideo--language alignment in standard video datasets, and (2) higher memory\nconsumption, bottlenecking the number of frames that can be processed. To\nmitigate the memory bottleneck, we systematically analyze the memory/accuracy\ntrade-off of various efficient methods: factorized attention,\nparameter-efficient image-to-video adaptation, input masking, and\nmulti-resolution patchification. Surprisingly, simply masking large portions of\nthe video (up to 75%) during contrastive pre-training proves to be one of the\nmost robust ways to scale encoders to videos up to 4.3 minutes at 1 FPS.", + "Surprisingly, simply masking large portions of\nthe video (up to 75%) during contrastive pre-training proves to be one of the\nmost robust ways to scale encoders to videos up to 4.3 minutes at 1 FPS. Our\nsimple approach for training long video-to-text models, which scales to 1B\nparameters, does not add new architectural complexity and is able to outperform\nthe popular paradigm of using much larger LLMs as an information aggregator\nover segment-based information on benchmarks with long-range temporal\ndependencies (YouCook2, EgoSchema).", + "This paper introduces Hierarchical Diffusion Policy (HDP), a hierarchical\nagent for multi-task robotic manipulation. HDP factorises a manipulation policy\ninto a hierarchical structure: a high-level task-planning agent which predicts\na distant next-best end-effector pose (NBP), and a low-level goal-conditioned\ndiffusion policy which generates optimal motion trajectories. The factorised\npolicy representation allows HDP to tackle both long-horizon task planning\nwhile generating fine-grained low-level actions. To generate context-aware\nmotion trajectories while satisfying robot kinematics constraints, we present a\nnovel kinematics-aware goal-conditioned control agent, Robot Kinematics\nDiffuser (RK-Diffuser). Specifically, RK-Diffuser learns to generate both the\nend-effector pose and joint position trajectories, and distill the accurate but\nkinematics-unaware end-effector pose diffuser to the kinematics-aware but less\naccurate joint position diffuser via differentiable kinematics. 
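The distillation through differentiable kinematics just described can be summarised as a single loss term; `fk` is a placeholder for a differentiable forward-kinematics routine and the squared-error form is an assumption.

```python
import torch

def kinematics_distillation_loss(joint_traj, ee_traj_target, fk):
    """Distill the (accurate, kinematics-unaware) end-effector trajectory into
    the joint-space diffuser: forward kinematics of the predicted joint
    trajectory should match the target end-effector poses.

    joint_traj:     (T, num_joints) predicted joint positions, requires grad.
    ee_traj_target: (T, pose_dim) end-effector targets from the pose diffuser.
    fk:             placeholder differentiable forward-kinematics function.
    """
    return (fk(joint_traj) - ee_traj_target).pow(2).mean()
```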
Empirically, we\nshow that HDP achieves a significantly higher success rate than the\nstate-of-the-art methods in both simulation and real-world.", + "Coarse-to-fine schemes are widely used in traditional single-image motion\ndeblur; however, in the context of deep learning, existing multi-scale\nalgorithms not only require the use of complex modules for feature fusion of\nlow-scale RGB images and deep semantics, but also manually generate\nlow-resolution pairs of images that do not have sufficient confidence. In this\nwork, we propose a multi-scale network based on single-input and\nmultiple-outputs(SIMO) for motion deblurring. This simplifies the complexity of\nalgorithms based on a coarse-to-fine scheme. To alleviate restoration defects\nimpacting detail information brought about by using a multi-scale architecture,\nwe combine the characteristics of real-world blurring trajectories with a\nlearnable wavelet transform module to focus on the directional continuity and\nfrequency features of the step-by-step transitions between blurred images to\nsharp images. In conclusion, we propose a multi-scale network with a learnable\ndiscrete wavelet transform (MLWNet), which exhibits state-of-the-art\nperformance on multiple real-world deblurred datasets, in terms of both\nsubjective and objective quality as well as computational efficiency.", + "Temporal action detection (TAD) aims to locate action positions and recognize\naction categories in long-term untrimmed videos. Although many methods have\nachieved promising results, their robustness has not been thoroughly studied.\nIn practice, we observe that temporal information in videos can be occasionally\ncorrupted, such as missing or blurred frames. Interestingly, existing methods\noften incur a significant performance drop even if only one frame is affected.\nTo formally evaluate the robustness, we establish two temporal corruption\nrobustness benchmarks, namely THUMOS14-C and ActivityNet-v1.3-C. In this paper,\nwe extensively analyze the robustness of seven leading TAD methods and obtain\nsome interesting findings: 1) Existing methods are particularly vulnerable to\ntemporal corruptions, and end-to-end methods are often more susceptible than\nthose with a pre-trained feature extractor; 2) Vulnerability mainly comes from\nlocalization error rather than classification error; 3) When corruptions occur\nin the middle of an action instance, TAD models tend to yield the largest\nperformance drop.", + "Besides building a benchmark, we further develop a simple but\neffective robust training method to defend against temporal corruptions,\nthrough the FrameDrop augmentation and Temporal-Robust Consistency loss.\nRemarkably, our approach not only improves robustness but also yields promising\nimprovements on clean data. We believe that this study will serve as a\nbenchmark for future research in robust video analysis. Source code and models\nare available at https://github.com/Alvin-Zeng/temporal-robustness-benchmark.", + "Realizing unified monocular 3D object detection, including both indoor and\noutdoor scenes, holds great importance in applications like robot navigation.\nHowever, involving various scenarios of data to train models poses challenges\ndue to their significantly different characteristics, e.g., diverse geometry\nproperties and heterogeneous domain distributions. 
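For the temporal-corruption robustness entry above, a simple stand-in for the FrameDrop augmentation is to replace random frames with their predecessor during training; the paper's exact corruption recipe may differ.

```python
import torch

def frame_drop(video, drop_prob=0.1):
    """Randomly 'drop' frames by repeating the previous one, mimicking missing
    or corrupted frames during training. video: (T, C, H, W)."""
    out = video.clone()
    for t in range(1, out.shape[0]):
        if torch.rand(()) < drop_prob:
            out[t] = out[t - 1]
    return out

clip = torch.randn(16, 3, 112, 112)
aug = frame_drop(clip, drop_prob=0.2)
```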
To address these challenges,\nwe build a detector based on the bird's-eye-view (BEV) detection paradigm,\nwhere the explicit feature projection is beneficial to addressing the geometry\nlearning ambiguity when employing multiple scenarios of data to train\ndetectors. Then, we split the classical BEV detection architecture into two\nstages and propose an uneven BEV grid design to handle the convergence\ninstability caused by the aforementioned challenges. Moreover, we develop a\nsparse BEV feature projection strategy to reduce computational cost and a\nunified domain alignment method to handle heterogeneous domains. Combining\nthese techniques, a unified detector UniMODE is derived, which surpasses the\nprevious state-of-the-art on the challenging Omni3D dataset (a large-scale\ndataset including both indoor and outdoor scenes) by 4.9% AP_3D, revealing the\nfirst successful generalization of a BEV detector to unified 3D object\ndetection.", + "Recently, 3D content creation from text prompts has demonstrated remarkable\nprogress by utilizing 2D and 3D diffusion models. While 3D diffusion models\nensure great multi-view consistency, their ability to generate high-quality and\ndiverse 3D assets is hindered by the limited 3D data. In contrast, 2D diffusion\nmodels find a distillation approach that achieves excellent generalization and\nrich details without any 3D data. However, 2D lifting methods suffer from\ninherent view-agnostic ambiguity thereby leading to serious multi-face Janus\nissues, where text prompts fail to provide sufficient guidance to learn\ncoherent 3D results. Instead of retraining a costly viewpoint-aware model, we\nstudy how to fully exploit easily accessible coarse 3D knowledge to enhance the\nprompts and guide 2D lifting optimization for refinement. In this paper, we\npropose Sherpa3D, a new text-to-3D framework that achieves high-fidelity,\ngeneralizability, and geometric consistency simultaneously.", + "In this paper, we\npropose Sherpa3D, a new text-to-3D framework that achieves high-fidelity,\ngeneralizability, and geometric consistency simultaneously. Specifically, we\ndesign a pair of guiding strategies derived from the coarse 3D prior generated\nby the 3D diffusion model: a structural guidance for geometric fidelity and a\nsemantic guidance for 3D coherence. Employing the two types of guidance, the 2D\ndiffusion model enriches the 3D content with diversified and high-quality\nresults. Extensive experiments show the superiority of our Sherpa3D over the\nstate-of-the-art text-to-3D methods in terms of quality and 3D consistency.", + "In the past few decades, Japanese comics, commonly referred to as Manga, have\ntranscended both cultural and linguistic boundaries to become a true worldwide\nsensation. Yet, the inherent reliance on visual cues and illustration within\nmanga renders it largely inaccessible to individuals with visual impairments.\nIn this work, we seek to address this substantial barrier, with the aim of\nensuring that manga can be appreciated and actively engaged by everyone.\nSpecifically, we tackle the problem of diarisation i.e. 
generating a\ntranscription of who said what and when, in a fully automatic way.\n To this end, we make the following contributions: (1) we present a unified\nmodel, Magi, that is able to (a) detect panels, text boxes and character boxes,\n(b) cluster characters by identity (without knowing the number of clusters\napriori), and (c) associate dialogues to their speakers; (2) we propose a novel\napproach that is able to sort the detected text boxes in their reading order\nand generate a dialogue transcript; (3) we annotate an evaluation benchmark for\nthis task using publicly available [English] manga pages.", + "The code, evaluation\ndatasets and the pre-trained model can be found at:\nhttps://github.com/ragavsachdeva/magi.", + "Recently, integrating video foundation models and large language models to\nbuild a video understanding system can overcome the limitations of specific\npre-defined vision tasks. Yet, existing systems can only handle videos with\nvery few frames. For long videos, the computation complexity, memory cost, and\nlong-term temporal connection impose additional challenges. Taking advantage of\nthe Atkinson-Shiffrin memory model, with tokens in Transformers being employed\nas the carriers of memory in combination with our specially designed memory\nmechanism, we propose the MovieChat to overcome these challenges. MovieChat\nachieves state-of-the-art performance in long video understanding, along with\nthe released MovieChat-1K benchmark with 1K long video and 14K manual\nannotations for validation of the effectiveness of our method.", + "In order to gain insights about the decision-making of different visual\nrecognition backbones, we propose two methodologies, sub-explanation counting\nand cross-testing, that systematically applies deep explanation algorithms on a\ndataset-wide basis, and compares the statistics generated from the amount and\nnature of the explanations. These methodologies reveal the difference among\nnetworks in terms of two properties called compositionality and disjunctivism.\nTransformers and ConvNeXt are found to be more compositional, in the sense that\nthey jointly consider multiple parts of the image in building their decisions,\nwhereas traditional CNNs and distilled transformers are less compositional and\nmore disjunctive, which means that they use multiple diverse but smaller set of\nparts to achieve a confident prediction. Through further experiments, we\npinpointed the choice of normalization to be especially important in the\ncompositionality of a model, in that batch normalization leads to less\ncompositionality while group and layer normalization lead to more. Finally, we\nalso analyze the features shared by different backbones and plot a landscape of\ndifferent models based on their feature-use similarity.", + "Estimating full-body human motion via sparse tracking signals from\nhead-mounted displays and hand controllers in 3D scenes is crucial to\napplications in AR/VR. One of the biggest challenges to this task is the\none-to-many mapping from sparse observations to dense full-body motions, which\nendowed inherent ambiguities. To help resolve this ambiguous problem, we\nintroduce a new framework to combine rich contextual information provided by\nscenes to benefit full-body motion tracking from sparse observations. 
To\nestimate plausible human motions given sparse tracking signals and 3D scenes,\nwe develop $\\text{S}^2$Fusion, a unified framework fusing \\underline{S}cene and\nsparse \\underline{S}ignals with a conditional dif\\underline{Fusion} model.\n$\\text{S}^2$Fusion first extracts the spatial-temporal relations residing in\nthe sparse signals via a periodic autoencoder, and then produces time-alignment\nfeature embedding as additional inputs. Subsequently, by drawing initial noisy\nmotion from a pre-trained prior, $\\text{S}^2$Fusion utilizes conditional\ndiffusion to fuse scene geometry and sparse tracking signals to generate\nfull-body scene-aware motions.", + "Subsequently, by drawing initial noisy\nmotion from a pre-trained prior, $\\text{S}^2$Fusion utilizes conditional\ndiffusion to fuse scene geometry and sparse tracking signals to generate\nfull-body scene-aware motions. The sampling procedure of $\\text{S}^2$Fusion is\nfurther guided by a specially designed scene-penetration loss and\nphase-matching loss, which effectively regularizes the motion of the lower body\neven in the absence of any tracking signals, making the generated motion much\nmore plausible and coherent. Extensive experimental results have demonstrated\nthat our $\\text{S}^2$Fusion outperforms the state-of-the-art in terms of\nestimation quality and smoothness.", + "Due to its promising results, density map regression has been widely employed\nfor image-based crowd counting. The approach, however, often suffers from\nsevere performance degradation when tested on data from unseen scenarios, the\nso-called \"domain shift\" problem. To address the problem, we investigate in\nthis work single domain generalization (SDG) for crowd counting. The existing\nSDG approaches are mainly for image classification and segmentation, and can\nhardly be extended to our case due to its regression nature and label ambiguity\n(i.e., ambiguous pixel-level ground truths). We propose MPCount, a novel\neffective SDG approach even for narrow source distribution. MPCount stores\ndiverse density values for density map regression and reconstructs\ndomain-invariant features by means of only one memory bank, a content error\nmask and attention consistency loss. By partitioning the image into grids, it\nemploys patch-wise classification as an auxiliary task to mitigate label\nambiguity. Through extensive experiments on different datasets, MPCount is\nshown to significantly improve counting accuracy compared to the state of the\nart under diverse scenarios unobserved in the training data characterized by\nnarrow source distribution.", + "Through extensive experiments on different datasets, MPCount is\nshown to significantly improve counting accuracy compared to the state of the\nart under diverse scenarios unobserved in the training data characterized by\nnarrow source distribution. Code is available at\nhttps://github.com/Shimmer93/MPCount.", + "Monocular depth estimation has experienced significant progress on\nterrestrial images in recent years, largely due to deep learning advancements.\nHowever, it remains inadequate for underwater scenes, primarily because of data\nscarcity. Given the inherent challenges of light attenuation and backscattering\nin water, acquiring clear underwater images or precise depth information is\nnotably difficult and costly. Consequently, learning-based approaches often\nrely on synthetic data or turn to unsupervised or self-supervised methods to\nmitigate this lack of data. 
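For the MPCount entry above, the auxiliary patch-wise classification task needs binary patch targets; one straightforward way to derive them from a ground-truth density map is sketched below, with the patch size and threshold chosen arbitrarily.

```python
import torch
import torch.nn.functional as F

def patchwise_targets(density, patch=16, thresh=1e-3):
    """Binary patch labels ('does this patch contain people?') derived from a
    ground-truth density map, for an auxiliary patch-wise classification head.

    density: (B, 1, H, W) density map whose per-patch sum approximates counts.
    """
    counts = F.avg_pool2d(density, patch) * patch * patch   # per-patch head count
    return (counts > thresh).float()                        # (B, 1, H/patch, W/patch)
```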
Nonetheless, the performance of these methods is\noften constrained by the domain gap and looser constraints. In this paper, we\npropose a novel pipeline for generating photorealistic underwater images using\naccurate terrestrial depth data. This approach facilitates the training of\nsupervised models for underwater depth estimation, effectively reducing the\nperformance disparity between terrestrial and underwater environments. Contrary\nto prior synthetic datasets that merely apply style transfer to terrestrial\nimages without altering the scene content, our approach uniquely creates\nvibrant, non-existent underwater scenes by leveraging terrestrial depth data\nthrough the innovative Stable Diffusion model.", + "Contrary\nto prior synthetic datasets that merely apply style transfer to terrestrial\nimages without altering the scene content, our approach uniquely creates\nvibrant, non-existent underwater scenes by leveraging terrestrial depth data\nthrough the innovative Stable Diffusion model. Specifically, we introduce a\nunique Depth2Underwater ControlNet, trained on specially prepared \\{Underwater,\nDepth, Text\\} data triplets, for this generation task. Our newly developed\ndataset enables terrestrial depth estimation models to achieve considerable\nimprovements, both quantitatively and qualitatively, on unseen underwater\nimages, surpassing their terrestrial pre-trained counterparts. Moreover, the\nenhanced depth accuracy for underwater scenes also aids underwater image\nrestoration techniques that rely on depth maps, further demonstrating our\ndataset's utility. The dataset will be available at\nhttps://github.com/zkawfanx/Atlantis.", + "Prior research on deep video compression (DVC) for machine tasks typically\nnecessitates training a unique codec for each specific task, mandating a\ndedicated decoder per task. In contrast, traditional video codecs employ a\nflexible encoder controller, enabling the adaptation of a single codec to\ndifferent tasks through mechanisms like mode prediction. Drawing inspiration\nfrom this, we introduce an innovative encoder controller for deep video\ncompression for machines. This controller features a mode prediction and a\nGroup of Pictures (GoP) selection module. Our approach centralizes control at\nthe encoding stage, allowing for adaptable encoder adjustments across different\ntasks, such as detection and tracking, while maintaining compatibility with a\nstandard pre-trained DVC decoder. Empirical evidence demonstrates that our\nmethod is applicable across multiple tasks with various existing pre-trained\nDVCs. Moreover, extensive experiments demonstrate that our method outperforms\nprevious DVC by about 25% bitrate for different tasks, with only one\npre-trained decoder.", + "Human facial action units (AUs) are mutually related in a hierarchical\nmanner, as not only they are associated with each other in both spatial and\ntemporal domains but also AUs located in the same/close facial regions show\nstronger relationships than those of different facial regions. While none of\nexisting approach thoroughly model such hierarchical inter-dependencies among\nAUs, this paper proposes to comprehensively model multi-scale AU-related\ndynamic and hierarchical spatio-temporal relationship among AUs for their\noccurrences recognition. 
Specifically, we first propose a novel multi-scale\ntemporal differencing network with an adaptive weighting block to explicitly\ncapture facial dynamics across frames at different spatial scales, which\nspecifically considers the heterogeneity of range and magnitude in different\nAUs' activation. Then, a two-stage strategy is introduced to hierarchically\nmodel the relationship among AUs based on their spatial distribution (i.e.,\nlocal and cross-region AU relationship modelling). Experimental results\nachieved on BP4D and DISFA show that our approach is the new state-of-the-art\nin the field of AU occurrence recognition. Our code is publicly available at\nhttps://github.com/CVI-SZU/MDHR.", + "We delve into pseudo-labeling for semi-supervised monocular 3D object\ndetection (SSM3OD) and discover two primary issues: a misalignment between the\nprediction quality of 3D and 2D attributes and the tendency of depth\nsupervision derived from pseudo-labels to be noisy, leading to significant\noptimization conflicts with other reliable forms of supervision. We introduce a\nnovel decoupled pseudo-labeling (DPL) approach for SSM3OD. Our approach\nfeatures a Decoupled Pseudo-label Generation (DPG) module, designed to\nefficiently generate pseudo-labels by separately processing 2D and 3D\nattributes. This module incorporates a unique homography-based method for\nidentifying dependable pseudo-labels in BEV space, specifically for 3D\nattributes. Additionally, we present a DepthGradient Projection (DGP) module to\nmitigate optimization conflicts caused by noisy depth supervision of\npseudo-labels, effectively decoupling the depth gradient and removing\nconflicting gradients. This dual decoupling strategy-at both the pseudo-label\ngeneration and gradient levels-significantly improves the utilization of\npseudo-labels in SSM3OD.", + "This dual decoupling strategy-at both the pseudo-label\ngeneration and gradient levels-significantly improves the utilization of\npseudo-labels in SSM3OD. Our comprehensive experiments on the KITTI benchmark\ndemonstrate the superiority of our method over existing approaches.", + "We propose a novel approach to the action segmentation task for long,\nuntrimmed videos, based on solving an optimal transport problem. By encoding a\ntemporal consistency prior into a Gromov-Wasserstein problem, we are able to\ndecode a temporally consistent segmentation from a noisy affinity/matching cost\nmatrix between video frames and action classes. Unlike previous approaches, our\nmethod does not require knowing the action order for a video to attain temporal\nconsistency. Furthermore, our resulting (fused) Gromov-Wasserstein problem can\nbe efficiently solved on GPUs using a few iterations of projected mirror\ndescent. We demonstrate the effectiveness of our method in an unsupervised\nlearning setting, where our method is used to generate pseudo-labels for\nself-training. We evaluate our segmentation approach and unsupervised learning\npipeline on the Breakfast, 50-Salads, YouTube Instructions and Desktop Assembly\ndatasets, yielding state-of-the-art results for the unsupervised video action\nsegmentation task.", + "Existing prompt learning methods have shown certain capabilities in\nOut-of-Distribution (OOD) detection, but the lack of OOD images in the target\ndataset in their training can lead to mismatches between OOD images and\nIn-Distribution (ID) categories, resulting in a high false positive rate. 
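For the decoupled pseudo-labeling entry above, "removing conflicting gradients" can be read as a PCGrad-style projection of the noisy depth gradient onto the half-space that agrees with a reliable reference gradient; the abstract does not spell out the DGP module's exact form, so treat this as an assumption.

```python
import torch

def project_out_conflict(g_depth, g_ref, eps=1e-12):
    """If the noisy depth gradient points against the reliable gradient,
    remove its conflicting component (PCGrad-style projection)."""
    dot = torch.dot(g_depth.flatten(), g_ref.flatten())
    if dot < 0:
        g_depth = g_depth - (dot / (g_ref.norm() ** 2 + eps)) * g_ref
    return g_depth
```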
To\naddress this issue, we introduce a novel OOD detection method, named\n'NegPrompt', to learn a set of negative prompts, each representing a negative\nconnotation of a given class label, for delineating the boundaries between ID\nand OOD images. It learns such negative prompts with ID data only, without any\nreliance on external outlier data. Further, current methods assume the\navailability of samples of all ID classes, rendering them ineffective in\nopen-vocabulary learning scenarios where the inference stage can contain novel\nID classes not present during training. In contrast, our learned negative\nprompts are transferable to novel class labels. Experiments on various ImageNet\nbenchmarks show that NegPrompt surpasses state-of-the-art prompt-learning-based\nOOD detection methods and maintains a consistent lead in hard OOD detection in\nclosed- and open-vocabulary classification scenarios.", + "Experiments on various ImageNet\nbenchmarks show that NegPrompt surpasses state-of-the-art prompt-learning-based\nOOD detection methods and maintains a consistent lead in hard OOD detection in\nclosed- and open-vocabulary classification scenarios. Code is available at\nhttps://github.com/mala-lab/negprompt.", + "We propose a voxel-based optimization framework, ReVoRF, for few-shot\nradiance fields that strategically address the unreliability in pseudo novel\nview synthesis. Our method pivots on the insight that relative depth\nrelationships within neighboring regions are more reliable than the absolute\ncolor values in disoccluded areas. Consequently, we devise a bilateral\ngeometric consistency loss that carefully navigates the trade-off between color\nfidelity and geometric accuracy in the context of depth consistency for\nuncertain regions. Moreover, we present a reliability-guided learning strategy\nto discern and utilize the variable quality across synthesized views,\ncomplemented by a reliability-aware voxel smoothing algorithm that smoothens\nthe transition between reliable and unreliable data patches. Our approach\nallows for a more nuanced use of all available data, promoting enhanced\nlearning from regions previously considered unsuitable for high-quality\nreconstruction. Extensive experiments across diverse datasets reveal that our\napproach attains significant gains in efficiency and accuracy, delivering\nrendering speeds of 3 FPS, 7 mins to train a $360^\\circ$ scene, and a 5\\%\nimprovement in PSNR over existing few-shot methods. Code is available at\nhttps://github.com/HKCLynn/ReVoRF.", + "Monocular egocentric 3D human motion capture is a challenging and actively\nresearched problem. Existing methods use synchronously operating visual sensors\n(e.g. RGB cameras) and often fail under low lighting and fast motions, which\ncan be restricting in many applications involving head-mounted devices. In\nresponse to the existing limitations, this paper 1) introduces a new problem,\ni.e., 3D human motion capture from an egocentric monocular event camera with a\nfisheye lens, and 2) proposes the first approach to it called EventEgo3D\n(EE3D). Event streams have high temporal resolution and provide reliable cues\nfor 3D human motion capture under high-speed human motions and rapidly changing\nillumination. The proposed EE3D framework is specifically tailored for learning\nwith event streams in the LNES representation, enabling high 3D reconstruction\naccuracy. 
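A schematic scoring rule for the NegPrompt entry above: an image is flagged as OOD when it sits closer to some learned negative prompt than to any in-distribution class prompt. L2-normalised embeddings are assumed, and the paper's actual criterion may differ.

```python
import torch

def negative_prompt_ood_score(image_emb, id_prompt_emb, neg_prompt_emb):
    """Higher score = more OOD-like: close to some negative prompt and far
    from every in-distribution class prompt. All embeddings L2-normalised."""
    sim_id = image_emb @ id_prompt_emb.t()     # (N, num_id_classes)
    sim_neg = image_emb @ neg_prompt_emb.t()   # (N, num_negative_prompts)
    return sim_neg.max(dim=1).values - sim_id.max(dim=1).values
```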
We also design a prototype of a mobile head-mounted device with an\nevent camera and record a real dataset with event observations and the\nground-truth 3D human poses (in addition to the synthetic dataset).", + "We also design a prototype of a mobile head-mounted device with an\nevent camera and record a real dataset with event observations and the\nground-truth 3D human poses (in addition to the synthetic dataset). Our EE3D\ndemonstrates robustness and superior 3D accuracy compared to existing solutions\nacross various challenging experiments while supporting real-time 3D pose\nupdate rates of 140Hz.", + "Co-salient object detection (CoSOD) aims to identify the common and salient\n(usually in the foreground) regions across a given group of images. Although\nachieving significant progress, state-of-the-art CoSODs could be easily\naffected by some adversarial perturbations, leading to substantial accuracy\nreduction. The adversarial perturbations can mislead CoSODs but do not change\nthe high-level semantic information (e.g., concept) of the co-salient objects.\nIn this paper, we propose a novel robustness enhancement framework by first\nlearning the concept of the co-salient objects based on the input group images\nand then leveraging this concept to purify adversarial perturbations, which are\nsubsequently fed to CoSODs for robustness enhancement. Specifically, we propose\nCosalPure containing two modules, i.e., group-image concept learning and\nconcept-guided diffusion purification. For the first module, we adopt a\npre-trained text-to-image diffusion model to learn the concept of co-salient\nobjects within group images where the learned concept is robust to adversarial\nexamples.", + "For the first module, we adopt a\npre-trained text-to-image diffusion model to learn the concept of co-salient\nobjects within group images where the learned concept is robust to adversarial\nexamples. For the second module, we map the adversarial image to the latent\nspace and then perform diffusion generation by embedding the learned concept\ninto the noise prediction function as an extra condition. Our method can\neffectively alleviate the influence of the SOTA adversarial attack containing\ndifferent adversarial patterns, including exposure and noise. The extensive\nresults demonstrate that our method could enhance the robustness of CoSODs\nsignificantly.", + "Existing diffusion-based video editing models have made gorgeous advances for\nediting attributes of a source video over time but struggle to manipulate the\nmotion information while preserving the original protagonist's appearance and\nbackground. To address this, we propose MotionEditor, a diffusion model for\nvideo motion editing. MotionEditor incorporates a novel content-aware motion\nadapter into ControlNet to capture temporal motion correspondence. While\nControlNet enables direct generation based on skeleton poses, it encounters\nchallenges when modifying the source motion in the inverted noise due to\ncontradictory signals between the noise (source) and the condition (reference).\nOur adapter complements ControlNet by involving source content to transfer\nadapted control signals seamlessly. 
Further, we build up a two-branch\narchitecture (a reconstruction branch and an editing branch) with a\nhigh-fidelity attention injection mechanism facilitating branch interaction.\nThis mechanism enables the editing branch to query the key and value from the\nreconstruction branch in a decoupled manner, making the editing branch retain\nthe original background and protagonist appearance. We also propose a skeleton\nalignment algorithm to address the discrepancies in pose size and position.\nExperiments demonstrate the promising motion editing ability of MotionEditor,\nboth qualitatively and quantitatively.", + "It is a well-known fact that the performance of deep learning models\ndeteriorates when they encounter a distribution shift at test time. Test-time\nadaptation (TTA) algorithms have been proposed to adapt the model online while\ninferring test data. However, existing research predominantly focuses on\nclassification tasks through the optimization of batch normalization layers or\nclassification heads, but this approach limits its applicability to various\nmodel architectures like Transformers and makes it challenging to apply to\nother tasks, such as object detection. In this paper, we propose a novel online\nadaption approach for object detection in continually changing test domains,\nconsidering which part of the model to update, how to update it, and when to\nperform the update. By introducing architecture-agnostic and lightweight\nadaptor modules and only updating these while leaving the pre-trained backbone\nunchanged, we can rapidly adapt to new test domains in an efficient way and\nprevent catastrophic forgetting. Furthermore, we present a practical and\nstraightforward class-wise feature aligning method for object detection to\nresolve domain shifts. Additionally, we enhance efficiency by determining when\nthe model is sufficiently adapted or when additional adaptation is needed due\nto changes in the test distribution.", + "Furthermore, we present a practical and\nstraightforward class-wise feature aligning method for object detection to\nresolve domain shifts. Additionally, we enhance efficiency by determining when\nthe model is sufficiently adapted or when additional adaptation is needed due\nto changes in the test distribution. Our approach surpasses baselines on widely\nused benchmarks, achieving improvements of up to 4.9\\%p and 7.9\\%p in mAP for\nCOCO $\\rightarrow$ COCO-corrupted and SHIFT, respectively, while maintaining\nabout 20 FPS or higher.", + "Large foundation models, known for their strong zero-shot generalization,\nhave excelled in visual and language applications. However, applying them to\nmedical image segmentation, a domain with diverse imaging types and target\nlabels, remains an open challenge. Current approaches, such as adapting\ninteractive segmentation models like Segment Anything Model (SAM), require user\nprompts for each sample during inference. Alternatively, transfer learning\nmethods like few/one-shot models demand labeled samples, leading to high costs.\nThis paper introduces a new paradigm toward the universal medical image\nsegmentation, termed 'One-Prompt Segmentation.' One-Prompt Segmentation\ncombines the strengths of one-shot and interactive methods. In the inference\nstage, with just \\textbf{one prompted sample}, it can adeptly handle the unseen\ntask in a single forward pass. We train One-Prompt Model on 64 open-source\nmedical datasets, accompanied by the collection of over 3,000 clinician-labeled\nprompts. 
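The decoupled attention injection in the MotionEditor entry above boils down to letting editing-branch queries attend to reconstruction-branch keys and values; the single-head, projection-free form below is a simplification.

```python
import torch

def cross_branch_attention(q_edit, k_recon, v_recon):
    """Editing-branch queries attend to keys/values taken from the
    reconstruction branch, so the edit can reuse the original background and
    appearance. Shapes: (B, N, D); single head, no projections, for clarity."""
    d = q_edit.shape[-1]
    attn = torch.softmax(q_edit @ k_recon.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v_recon
```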
Tested on 14 previously unseen datasets, the One-Prompt Model\nshowcases superior zero-shot segmentation capabilities, outperforming a wide\nrange of related methods. The code and data is released as\nhttps://github.com/KidsWithTokens/one-prompt.", + "Low-shot image classification is a fundamental task in computer vision, and\nthe emergence of large-scale vision-language models such as CLIP has greatly\nadvanced the forefront of research in this field. However, most existing\nCLIP-based methods lack the flexibility to effectively incorporate other\npre-trained models that encompass knowledge distinct from CLIP. To bridge the\ngap, this work proposes a simple and effective probabilistic model ensemble\nframework based on Gaussian processes, which have previously demonstrated\nremarkable efficacy in processing small data. We achieve the integration of\nprior knowledge by specifying the mean function with CLIP and the kernel\nfunction with an ensemble of deep kernels built upon various pre-trained\nmodels. By regressing the classification label directly, our framework enables\nanalytical inference, straightforward uncertainty quantification, and\nprincipled hyper-parameter tuning. Through extensive experiments on standard\nbenchmarks, we demonstrate that our method consistently outperforms competitive\nensemble baselines regarding predictive performance. Additionally, we assess\nthe robustness of our method and the quality of the yielded uncertainty\nestimates on out-of-distribution datasets. We also illustrate that our method,\ndespite relying on label regression, still enjoys superior model calibration\ncompared to most deterministic baselines.", + "Most multimodal large language models (MLLMs) learn language-to-object\ngrounding through causal language modeling where grounded objects are captured\nby bounding boxes as sequences of location tokens. This paradigm lacks\npixel-level representations that are important for fine-grained visual\nunderstanding and diagnosis. In this work, we introduce GROUNDHOG, an MLLM\ndeveloped by grounding Large Language Models to holistic segmentation.\nGROUNDHOG incorporates a masked feature extractor and converts extracted\nfeatures into visual entity tokens for the MLLM backbone, which then connects\ngroundable phrases to unified grounding masks by retrieving and merging the\nentity masks. To train GROUNDHOG, we carefully curated M3G2, a grounded visual\ninstruction tuning dataset with Multi-Modal Multi-Grained Grounding, by\nharvesting a collection of segmentation-grounded datasets with rich\nannotations. Our experimental results show that GROUNDHOG achieves superior\nperformance on various language grounding tasks without task-specific\nfine-tuning, and significantly reduces object hallucination. GROUNDHOG also\ndemonstrates better grounding towards complex forms of visual input and\nprovides easy-to-understand diagnosis in failure cases.", + "We study text-based image editing (TBIE) of a single image by counterfactual\ninference because it is an elegant formulation to precisely address the\nrequirement: the edited image should retain the fidelity of the original one.\nThrough the lens of the formulation, we find that the crux of TBIE is that\nexisting techniques hardly achieve a good trade-off between editability and\nfidelity, mainly due to the overfitting of the single-image fine-tuning. To\nthis end, we propose a Doubly Abductive Counterfactual inference framework\n(DAC). 
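The "analytical inference" in the Gaussian-process ensemble entry above is standard exact GP regression with a non-zero prior mean (standing in for CLIP zero-shot logits) and a precomputed ensemble kernel; kernel construction itself is omitted here.

```python
import numpy as np

def gp_regression(K_tr, K_cross, k_te_diag, y, mean_tr, mean_te, noise=1e-2):
    """Exact GP posterior with prior mean m(x) (e.g. CLIP zero-shot logits) and
    kernel K (e.g. a sum of deep kernels), in the standard Cholesky form.

    K_tr: (n, n), K_cross: (n, m), k_te_diag: (m,), y / mean_tr: (n, ...),
    mean_te: (m, ...).
    """
    n = K_tr.shape[0]
    L = np.linalg.cholesky(K_tr + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - mean_tr))
    mu = mean_te + K_cross.T @ alpha                   # posterior mean
    v = np.linalg.solve(L, K_cross)
    var = k_te_diag - np.sum(v ** 2, axis=0)           # posterior variance
    return mu, var
```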
We first parameterize an exogenous variable as a UNet LoRA, whose\nabduction can encode all the image details. Second, we abduct another exogenous\nvariable parameterized by a text encoder LoRA, which recovers the lost\neditability caused by the overfitted first abduction. Thanks to the second\nabduction, which exclusively encodes the visual transition from post-edit to\npre-edit, its inversion -- subtracting the LoRA -- effectively reverts pre-edit\nback to post-edit, thereby accomplishing the edit. Through extensive\nexperiments, our DAC achieves a good trade-off between editability and\nfidelity.", + "Through extensive\nexperiments, our DAC achieves a good trade-off between editability and\nfidelity. Thus, we can support a wide spectrum of user editing intents,\nincluding addition, removal, manipulation, replacement, style transfer, and\nfacial change, which are extensively validated in both qualitative and\nquantitative evaluations. Codes are in https://github.com/xuesong39/DAC.", + "We tackle semi-supervised object detection based on motion cues. Recent\nresults suggest that heuristic-based clustering methods in conjunction with\nobject trackers can be used to pseudo-label instances of moving objects and use\nthese as supervisory signals to train 3D object detectors in Lidar data without\nmanual supervision. We re-think this approach and suggest that both, object\ndetection, as well as motion-inspired pseudo-labeling, can be tackled in a\ndata-driven manner. We leverage recent advances in scene flow estimation to\nobtain point trajectories from which we extract long-term, class-agnostic\nmotion patterns. Revisiting correlation clustering in the context of message\npassing networks, we learn to group those motion patterns to cluster points to\nobject instances. By estimating the full extent of the objects, we obtain\nper-scan 3D bounding boxes that we use to supervise a Lidar object detection\nnetwork. Our method not only outperforms prior heuristic-based approaches (57.5\nAP, +14 improvement over prior work), more importantly, we show we can\npseudo-label and train object detectors across datasets.", + "The boundless possibility of neural networks which can be used to solve a\nproblem -- each with different performance -- leads to a situation where a Deep\nLearning expert is required to identify the best neural network. This goes\nagainst the hope of removing the need for experts. Neural Architecture Search\n(NAS) offers a solution to this by automatically identifying the best\narchitecture. However, to date, NAS work has focused on a small set of datasets\nwhich we argue are not representative of real-world problems. We introduce\neight new datasets created for a series of NAS Challenges: AddNIST, Language,\nMultNIST, CIFARTile, Gutenberg, Isabella, GeoClassing, and Chesseract. These\ndatasets and challenges are developed to direct attention to issues in NAS\ndevelopment and to encourage authors to consider how their models will perform\non datasets unknown to them at development time. We present experimentation\nusing standard Deep Learning methods as well as the best results from challenge\nparticipants.", + "Spatio-temporal video grounding (or STVG) task aims at locating a\nspatio-temporal tube for a specific instance given a text query. Despite\nadvancements, current methods easily suffer the distractors or heavy object\nappearance variations in videos due to insufficient object information from the\ntext, leading to degradation. 
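"Subtracting the LoRA" in the doubly abductive framework above can be read as merging the low-rank update with a negative scale; the merge formula below is the standard LoRA one, and the tie to DAC's exact procedure is an interpretation.

```python
import torch

def merge_lora(weight, lora_A, lora_B, scale=1.0):
    """Merge a LoRA update into a frozen weight: W' = W + scale * (B @ A).
    With scale = -1.0 the update is inverted, i.e. the LoRA is 'subtracted'."""
    return weight + scale * (lora_B @ lora_A)

W = torch.randn(512, 512)
A = torch.randn(8, 512) * 0.01     # rank-8 factors
B = torch.randn(512, 8) * 0.01
W_edit = merge_lora(W, A, B, scale=-1.0)
```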
Addressing this, we propose a novel framework,\ncontext-guided STVG (CG-STVG), which mines discriminative instance context for\nobject in videos and applies it as a supplementary guidance for target\nlocalization. The key of CG-STVG lies in two specially designed modules,\nincluding instance context generation (ICG), which focuses on discovering\nvisual context information (in both appearance and motion) of the instance, and\ninstance context refinement (ICR), which aims to improve the instance context\nfrom ICG by eliminating irrelevant or even harmful information from the\ncontext. During grounding, ICG, together with ICR, are deployed at each\ndecoding stage of a Transformer architecture for instance context learning.", + "During grounding, ICG, together with ICR, are deployed at each\ndecoding stage of a Transformer architecture for instance context learning.\nParticularly, instance context learned from one decoding stage is fed to the\nnext stage, and leveraged as a guidance containing rich and discriminative\nobject feature to enhance the target-awareness in decoding feature, which\nconversely benefits generating better new instance context for improving\nlocalization finally. Compared to existing methods, CG-STVG enjoys object\ninformation in text query and guidance from mined instance visual context for\nmore accurate target localization. In our experiments on three benchmarks,\nincluding HCSTVG-v1/-v2 and VidSTG, CG-STVG sets new state-of-the-arts in\nm_tIoU and m_vIoU on all of them, showing its efficacy. The code will be\nreleased at https://github.com/HengLan/CGSTVG.", + "The many variations of Implicit Neural Representations (INRs), where a neural\nnetwork is trained as a continuous representation of a signal, have tremendous\npractical utility for downstream tasks including novel view synthesis, video\ncompression, and image superresolution. Unfortunately, the inner workings of\nthese networks are seriously under-studied. Our work, eXplaining the Implicit\nNeural Canvas (XINC), is a unified framework for explaining properties of INRs\nby examining the strength of each neuron's contribution to each output pixel.\nWe call the aggregate of these contribution maps the Implicit Neural Canvas and\nwe use this concept to demonstrate that the INRs which we study learn to\n''see'' the frames they represent in surprising ways. For example, INRs tend to\nhave highly distributed representations. While lacking high-level object\nsemantics, they have a significant bias for color and edges, and are almost\nentirely space-agnostic. We arrive at our conclusions by examining how objects\nare represented across time in video INRs, using clustering to visualize\nsimilar neurons across layers and architectures, and show that this is\ndominated by motion. These insights demonstrate the general usefulness of our\nanalysis framework.", + "We arrive at our conclusions by examining how objects\nare represented across time in video INRs, using clustering to visualize\nsimilar neurons across layers and architectures, and show that this is\ndominated by motion. These insights demonstrate the general usefulness of our\nanalysis framework. Our project page is available at\nhttps://namithap10.github.io/xinc.", + "While real-world anime super-resolution (SR) has gained increasing attention\nin the SR community, existing methods still adopt techniques from the\nphotorealistic domain. 
In this paper, we analyze the anime production workflow\nand rethink how to use characteristics of it for the sake of the real-world\nanime SR. First, we argue that video networks and datasets are not necessary\nfor anime SR due to the repetition use of hand-drawing frames. Instead, we\npropose an anime image collection pipeline by choosing the least compressed and\nthe most informative frames from the video sources. Based on this pipeline, we\nintroduce the Anime Production-oriented Image (API) dataset. In addition, we\nidentify two anime-specific challenges of distorted and faint hand-drawn lines\nand unwanted color artifacts. We address the first issue by introducing a\nprediction-oriented compression module in the image degradation model and a\npseudo-ground truth preparation with enhanced hand-drawn lines. In addition, we\nintroduce the balanced twin perceptual loss combining both anime and\nphotorealistic high-level features to mitigate unwanted color artifacts and\nincrease visual clarity. We evaluate our method through extensive experiments\non the public benchmark, showing our method outperforms state-of-the-art anime\ndataset-trained approaches.", + "Trajectory prediction plays an important role in various applications,\nincluding autonomous driving, robotics, and scene understanding. Existing\napproaches mainly focus on developing compact neural networks to increase\nprediction precision on public datasets, typically employing a standardized\ninput duration. However, a notable issue arises when these models are evaluated\nwith varying observation lengths, leading to a significant performance drop, a\nphenomenon we term the Observation Length Shift. To address this issue, we\nintroduce a general and effective framework, the FlexiLength Network (FLN), to\nenhance the robustness of existing trajectory prediction techniques against\nvarying observation periods. Specifically, FLN integrates trajectory data with\ndiverse observation lengths, incorporates FlexiLength Calibration (FLC) to\nacquire temporal invariant representations, and employs FlexiLength Adaptation\n(FLA) to further refine these representations for more accurate future\ntrajectory predictions. Comprehensive experiments on multiple datasets, ie,\nETH/UCY, nuScenes, and Argoverse 1, demonstrate the effectiveness and\nflexibility of our proposed FLN framework.", + "Three-dimensional (3D) reconstruction from a single image is an ill-posed\nproblem with inherent ambiguities, i.e. scale. Predicting a 3D scene from text\ndescription(s) is similarly ill-posed, i.e. spatial arrangements of objects\ndescribed. We investigate the question of whether two inherently ambiguous\nmodalities can be used in conjunction to produce metric-scaled reconstructions.\nTo test this, we focus on monocular depth estimation, the problem of predicting\na dense depth map from a single image, but with an additional text caption\ndescribing the scene. To this end, we begin by encoding the text caption as a\nmean and standard deviation; using a variational framework, we learn the\ndistribution of the plausible metric reconstructions of 3D scenes corresponding\nto the text captions as a prior. 
To \"select\" a specific reconstruction or depth\nmap, we encode the given image through a conditional sampler that samples from\nthe latent space of the variational text encoder, which is then decoded to the\noutput depth map.", + "To \"select\" a specific reconstruction or depth\nmap, we encode the given image through a conditional sampler that samples from\nthe latent space of the variational text encoder, which is then decoded to the\noutput depth map. Our approach is trained alternatingly between the text and\nimage branches: in one optimization step, we predict the mean and standard\ndeviation from the text description and sample from a standard Gaussian, and in\nthe other, we sample using a (image) conditional sampler. Once trained, we\ndirectly predict depth from the encoded text using the conditional sampler. We\ndemonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where\nwe show that language can consistently improve performance in both.", + "Imaging through scattering media is a fundamental and pervasive challenge in\nfields ranging from medical diagnostics to astronomy. A promising strategy to\novercome this challenge is wavefront modulation, which induces measurement\ndiversity during image acquisition. Despite its importance, designing optimal\nwavefront modulations to image through scattering remains under-explored. This\npaper introduces a novel learning-based framework to address the gap. Our\napproach jointly optimizes wavefront modulations and a computationally\nlightweight feedforward \"proxy\" reconstruction network. This network is trained\nto recover scenes obscured by scattering, using measurements that are modified\nby these modulations. The learned modulations produced by our framework\ngeneralize effectively to unseen scattering scenarios and exhibit remarkable\nversatility. During deployment, the learned modulations can be decoupled from\nthe proxy network to augment other more computationally expensive restoration\nalgorithms. Through extensive experiments, we demonstrate our approach\nsignificantly advances the state of the art in imaging through scattering\nmedia. Our project webpage is at https://wavemo-2024.github.io/.", + "Humans constantly interact with their surrounding environments. Current\nhuman-centric generative models mainly focus on synthesizing humans plausibly\ninteracting with static scenes and objects, while the dynamic human\naction-reaction synthesis for ubiquitous causal human-human interactions is\nless explored. Human-human interactions can be regarded as asymmetric with\nactors and reactors in atomic interaction periods. In this paper, we\ncomprehensively analyze the asymmetric, dynamic, synchronous, and detailed\nnature of human-human interactions and propose the first multi-setting human\naction-reaction synthesis benchmark to generate human reactions conditioned on\ngiven human actions. To begin with, we propose to annotate the actor-reactor\norder of the interaction sequences for the NTU120, InterHuman, and Chi3D\ndatasets. Based on them, a diffusion-based generative model with a Transformer\ndecoder architecture called ReGenNet together with an explicit distance-based\ninteraction loss is proposed to predict human reactions in an online manner,\nwhere the future states of actors are unavailable to reactors. 
Quantitative and\nqualitative results show that our method can generate instant and plausible\nhuman reactions compared to the baselines, and can generalize to unseen actor\nmotions and viewpoint changes.", + "3D hand pose estimation has found broad application in areas such as gesture\nrecognition and human-machine interaction tasks. As performance improves, the\ncomplexity of the systems also increases, which can limit the comparative\nanalysis and practical implementation of these methods. In this paper, we\npropose a simple yet effective baseline that not only surpasses\nstate-of-the-art (SOTA) methods but also demonstrates computational efficiency.\nTo establish this baseline, we abstract existing work into two components: a\ntoken generator and a mesh regressor, and then examine their core structures. A\ncore structure, in this context, is one that fulfills intrinsic functions,\nbrings about significant improvements, and achieves excellent performance\nwithout unnecessary complexities. Our proposed approach is decoupled from any\nmodifications to the backbone, making it adaptable to any modern models. Our\nmethod outperforms existing solutions, achieving state-of-the-art (SOTA)\nresults across multiple datasets. On the FreiHAND dataset, our approach\nproduced a PA-MPJPE of 5.7mm and a PA-MPVPE of 6.0mm.", + "Our\nmethod outperforms existing solutions, achieving state-of-the-art (SOTA)\nresults across multiple datasets. On the FreiHAND dataset, our approach\nproduced a PA-MPJPE of 5.7mm and a PA-MPVPE of 6.0mm. Similarly, on the Dexycb\ndataset, we observed a PA-MPJPE of 5.5mm and a PA-MPVPE of 5.0mm. As for\nperformance speed, our method reached up to 33 frames per second (fps) when\nusing HRNet and up to 70 fps when employing FastViT-MA36", + "In the realm of computer vision and graphics, accurately establishing\ncorrespondences between geometric 3D shapes is pivotal for applications like\nobject tracking, registration, texture transfer, and statistical shape\nanalysis. Moving beyond traditional hand-crafted and data-driven feature\nlearning methods, we incorporate spectral methods with deep learning, focusing\non functional maps (FMs) and optimal transport (OT). Traditional OT-based\napproaches, often reliant on entropy regularization OT in learning-based\nframework, face computational challenges due to their quadratic cost. Our key\ncontribution is to employ the sliced Wasserstein distance (SWD) for OT, which\nis a valid fast optimal transport metric in an unsupervised shape matching\nframework. This unsupervised framework integrates functional map regularizers\nwith a novel OT-based loss derived from SWD, enhancing feature alignment\nbetween shapes treated as discrete probability measures. We also introduce an\nadaptive refinement process utilizing entropy regularized OT, further refining\nfeature alignments for accurate point-to-point correspondences. Our method\ndemonstrates superior performance in non-rigid shape matching, including\nnear-isometric and non-isometric scenarios, and excels in downstream tasks like\nsegmentation transfer.", + "Our method\ndemonstrates superior performance in non-rigid shape matching, including\nnear-isometric and non-isometric scenarios, and excels in downstream tasks like\nsegmentation transfer. 
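For the shape-matching entry above, a minimal sliced Wasserstein estimate between two per-vertex feature sets treated as uniform discrete measures looks as follows; equal cardinality is assumed for simplicity.

```python
import torch

def sliced_wasserstein(x, y, n_proj=128):
    """SW-1 estimate between two equally sized feature sets (n, d): project
    onto random unit directions, sort, and average the 1-D transport costs."""
    d = x.shape[1]
    theta = torch.randn(n_proj, d, device=x.device)
    theta = theta / theta.norm(dim=1, keepdim=True)
    x_proj = (x @ theta.t()).sort(dim=0).values
    y_proj = (y @ theta.t()).sort(dim=0).values
    return (x_proj - y_proj).abs().mean()

a, b = torch.randn(1000, 32), torch.randn(1000, 32) + 0.5
print(sliced_wasserstein(a, b))
```

Because each projection reduces optimal transport to sorting, the cost is O(n log n) per direction, which is the efficiency argument for replacing entropy-regularized OT.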
The empirical results on diverse datasets highlight our\nframework's effectiveness and generalization capabilities, setting new\nstandards in non-rigid shape matching with efficient OT metrics and an adaptive\nrefinement module.", + "Recent advances in text-to-image generation have made remarkable progress in\nsynthesizing realistic human photos conditioned on given text prompts. However,\nexisting personalized generation methods cannot simultaneously satisfy the\nrequirements of high efficiency, promising identity (ID) fidelity, and flexible\ntext controllability. In this work, we introduce PhotoMaker, an efficient\npersonalized text-to-image generation method, which mainly encodes an arbitrary\nnumber of input ID images into a stack ID embedding for preserving ID\ninformation. Such an embedding, serving as a unified ID representation, can not\nonly encapsulate the characteristics of the same input ID comprehensively, but\nalso accommodate the characteristics of different IDs for subsequent\nintegration. This paves the way for more intriguing and practically valuable\napplications. Besides, to drive the training of our PhotoMaker, we propose an\nID-oriented data construction pipeline to assemble the training data. Under the\nnourishment of the dataset constructed through the proposed pipeline, our\nPhotoMaker demonstrates better ID preservation ability than test-time\nfine-tuning based methods, yet provides significant speed improvements,\nhigh-quality generation results, strong generalization capabilities, and a wide\nrange of applications. Our project page is available at\nhttps://photo-maker.github.io/", + "We present Score-Guided Human Mesh Recovery (ScoreHMR), an approach for\nsolving inverse problems for 3D human pose and shape reconstruction. These\ninverse problems involve fitting a human body model to image observations,\ntraditionally solved through optimization techniques. ScoreHMR mimics model\nfitting approaches, but alignment with the image observation is achieved\nthrough score guidance in the latent space of a diffusion model. The diffusion\nmodel is trained to capture the conditional distribution of the human model\nparameters given an input image. By guiding its denoising process with a\ntask-specific score, ScoreHMR effectively solves inverse problems for various\napplications without the need for retraining the task-agnostic diffusion model.\nWe evaluate our approach on three settings/applications. These are: (i)\nsingle-frame model fitting; (ii) reconstruction from multiple uncalibrated\nviews; (iii) reconstructing humans in video sequences. ScoreHMR consistently\noutperforms all optimization baselines on popular benchmarks across all\nsettings. We make our code and models available at the\nhttps://statho.github.io/ScoreHMR.", + "Diffusion models have recently achieved remarkable progress in generating\nrealistic images. However, challenges remain in accurately understanding and\nsynthesizing the layout requirements in the textual prompts. To align the\ngenerated image with layout instructions, we present a training-free layout\ncalibration system SimM that intervenes in the generative process on the fly\nduring inference time. 
Specifically, following a \"check-locate-rectify\"\npipeline, the system first analyses the prompt to generate the target layout\nand compares it with the intermediate outputs to automatically detect errors.\nThen, by moving the located activations and making intra- and inter-map\nadjustments, the rectification process can be performed with negligible\ncomputational overhead. To evaluate SimM over a range of layout requirements,\nwe present a benchmark SimMBench that compensates for the lack of superlative\nspatial relations in existing datasets. Both quantitative and qualitative\nresults demonstrate the effectiveness of the proposed SimM in calibrating the\nlayout inconsistencies. Our project page is at https://simm-t2i.github.io/SimM.", + "Unpaired image dehazing (UID) holds significant research importance due to\nthe challenges in acquiring haze/clear image pairs with identical backgrounds.\nThis paper proposes a novel method for UID named Orthogonal Decoupling\nContrastive Regularization (ODCR). Our method is grounded in the assumption\nthat an image consists of both haze-related features, which influence the\ndegree of haze, and haze-unrelated features, such as texture and semantic\ninformation. ODCR aims to ensure that the haze-related features of the dehazing\nresult closely resemble those of the clear image, while the haze-unrelated\nfeatures align with the input hazy image. To this end,\nOrthogonal MLPs optimized geometrically on the Stiefel manifold are proposed,\nwhich can project image features into an orthogonal space, thereby reducing the\nrelevance between different features. Furthermore, a task-driven Depth-wise\nFeature Classifier (DWFC) is proposed, which assigns weights to the orthogonal\nfeatures based on the contribution of each channel's feature in predicting\nwhether the feature source is hazy or clear in a self-supervised fashion.", + "Furthermore, a task-driven Depth-wise\nFeature Classifier (DWFC) is proposed, which assigns weights to the orthogonal\nfeatures based on the contribution of each channel's feature in predicting\nwhether the feature source is hazy or clear in a self-supervised fashion.\nFinally, a Weighted PatchNCE (WPNCE) loss is introduced to achieve the pulling\nof haze-related features in the output image toward those of clear images,\nwhile bringing haze-unrelated features close to those of the hazy input.\nExtensive experiments demonstrate the superior performance of our ODCR method\non UID.", + "Towards a holistic understanding of 3D scenes, a general 3D segmentation method\nis needed that can segment diverse objects without restrictions on object\nquantity or categories, while also reflecting the inherent hierarchical\nstructure. To achieve this, we propose OmniSeg3D, an omniversal segmentation\nmethod that aims to segment anything in 3D all at once. The key insight is to\nlift multi-view inconsistent 2D segmentations into a consistent 3D feature\nfield through a hierarchical contrastive learning framework, which is\naccomplished in two steps. Firstly, we design a novel hierarchical\nrepresentation based on category-agnostic 2D segmentations to model the\nmulti-level relationship among pixels. Secondly, image features rendered from\nthe 3D feature field are clustered at different levels, which can be further\ndrawn closer or pushed apart according to the hierarchical relationship between\ndifferent levels. 
In tackling the challenges posed by inconsistent 2D\nsegmentations, this framework yields a globally consistent 3D feature field,\nwhich further enables hierarchical segmentation, multi-object selection, and\nglobal discretization.", + "In tackling the challenges posed by inconsistent 2D\nsegmentations, this framework yields a globally consistent 3D feature field,\nwhich further enables hierarchical segmentation, multi-object selection, and\nglobal discretization. Extensive experiments demonstrate the effectiveness of\nour method on high-quality 3D segmentation and accurate hierarchical structure\nunderstanding. A graphical user interface further facilitates flexible\ninteraction for omniversal 3D segmentation.", + "Many problems in computer vision can be formulated as geometric estimation\nproblems, i.e. given a collection of measurements (e.g. point correspondences)\nwe wish to fit a model (e.g. an essential matrix) that agrees with our\nobservations. This necessitates some measure of how much an observation\n``agrees\" with a given model. A natural choice is to consider the smallest\nperturbation that makes the observation exactly satisfy the constraints.\nHowever, for many problems, this metric is expensive or otherwise intractable\nto compute. The so-called Sampson error approximates this geometric error\nthrough a linearization scheme. For epipolar geometry, the Sampson error is a\npopular choice and in practice known to yield very tight approximations of the\ncorresponding geometric residual (the reprojection error).\n In this paper we revisit the Sampson approximation and provide new\ntheoretical insights as to why and when this approximation works, as well as\nprovide explicit bounds on the tightness under some mild assumptions. Our\ntheoretical results are validated in several experiments on real data and in\nthe context of different geometric estimation tasks.", + "We introduce the Fixed Point Diffusion Model (FPDM), a novel approach to\nimage generation that integrates the concept of fixed point solving into the\nframework of diffusion-based generative modeling. Our approach embeds an\nimplicit fixed point solving layer into the denoising network of a diffusion\nmodel, transforming the diffusion process into a sequence of closely-related\nfixed point problems. Combined with a new stochastic training method, this\napproach significantly reduces model size, reduces memory usage, and\naccelerates training. Moreover, it enables the development of two new\ntechniques to improve sampling efficiency: reallocating computation across\ntimesteps and reusing fixed point solutions between timesteps. We conduct\nextensive experiments with state-of-the-art models on ImageNet, FFHQ,\nCelebA-HQ, and LSUN-Church, demonstrating substantial improvements in\nperformance and efficiency. Compared to the state-of-the-art DiT model, FPDM\ncontains 87% fewer parameters, consumes 60% less memory during training, and\nimproves image generation quality in situations where sampling computation or\ntime is limited. Our code and pretrained models are available at\nhttps://lukemelas.github.io/fixed-point-diffusion-models.", + "Learning from a limited amount of data, namely Few-Shot Learning, stands out\nas a challenging computer vision task. Several works exploit semantics and\ndesign complicated semantic fusion mechanisms to compensate for rare\nrepresentative features within restricted data. 
However, relying on naive\nsemantics such as class names introduces biases due to their brevity, while\nacquiring extensive semantics from external knowledge requires considerable time and\neffort. This limitation severely constrains the potential of semantics in\nFew-Shot Learning. In this paper, we design an automatic way called Semantic\nEvolution to generate high-quality semantics. The incorporation of high-quality\nsemantics alleviates the need for complex network structures and learning\nalgorithms used in previous works. Hence, we employ a simple two-layer network\ntermed Semantic Alignment Network to transform semantics and visual features\ninto robust class prototypes with rich discriminative features for few-shot\nclassification. The experimental results show that our framework outperforms all\nprevious methods on six benchmarks, demonstrating that a simple network with\nhigh-quality semantics can beat intricate multi-modal modules on few-shot\nclassification tasks. Code is available at\nhttps://github.com/zhangdoudou123/SemFew.", + "Defocus blur is a persistent problem in microscope imaging that harms\npathology interpretation and medical intervention in cell microscopy and\nmicroscope surgery. To address this problem, a unified framework including the\nmulti-pyramid transformer (MPT) and extended frequency contrastive\nregularization (EFCR) is proposed to tackle two outstanding challenges in\nmicroscopy deblurring: longer attention span and data deficiency. The MPT employs\nan explicit pyramid structure at each network stage that integrates the\ncross-scale window attention (CSWA), the intra-scale channel attention (ISCA),\nand the feature-enhancing feed-forward network (FEFN) to capture long-range\ncross-scale spatial interaction and global channel context. The EFCR addresses\nthe data deficiency problem by exploring latent deblur signals from different\nfrequency bands. It also enables deblur knowledge transfer to learn\ncross-domain information from extra data, improving deblur performance for\nlabeled and unlabeled data. Extensive experiments and downstream task\nvalidation show the framework achieves state-of-the-art performance across\nmultiple datasets. Project page: https://github.com/PieceZhang/MPT-CataBlur.", + "CLIP showcases exceptional cross-modal matching capabilities due to its\ntraining on image-text contrastive learning tasks. However, without specific\noptimization for unimodal scenarios, its performance in single-modality feature\nextraction might be suboptimal. Despite this, some studies have directly used\nCLIP's image encoder for tasks like few-shot classification, introducing a\nmisalignment between its pre-training objectives and feature extraction\nmethods. This inconsistency can diminish the quality of the image's feature\nrepresentation, adversely affecting CLIP's effectiveness in target tasks. In\nthis paper, we view text features as precise neighbors of image features in\nCLIP's space and present a novel CrOss-moDal nEighbor Representation (CODER)\nbased on the distance structure between images and their neighbor texts. This\nfeature extraction method aligns better with CLIP's pre-training objectives,\nthereby fully leveraging CLIP's robust cross-modal capabilities. The key to\nconstruct a high-quality CODER lies in how to create a vast amount of\nhigh-quality and diverse texts to match with images. 
We introduce the Auto Text\nGenerator(ATG) to automatically generate the required texts in a data-free and\ntraining-free manner.", + "The key to\nconstruct a high-quality CODER lies in how to create a vast amount of\nhigh-quality and diverse texts to match with images. We introduce the Auto Text\nGenerator(ATG) to automatically generate the required texts in a data-free and\ntraining-free manner. We apply CODER to CLIP's zero-shot and few-shot image\nclassification tasks. Experiment results across various datasets and models\nconfirm CODER's effectiveness. Code is available\nat:https://github.com/YCaigogogo/CVPR24-CODER.", + "Existing object recognition models have been shown to lack robustness in\ndiverse geographical scenarios due to domain shifts in design and context.\nClass representations need to be adapted to more accurately reflect an object\nconcept under these shifts. In the absence of training data from target\ngeographies, we hypothesize that geographically diverse descriptive knowledge\nof categories can enhance robustness. For this purpose, we explore the\nfeasibility of probing a large language model for geography-based object\nknowledge, and we examine the effects of integrating knowledge into zero-shot\nand learnable soft prompting with CLIP. Within this exploration, we propose\ngeography knowledge regularization to ensure that soft prompts trained on a\nsource set of geographies generalize to an unseen target set. Accuracy gains\nover prompting baselines on DollarStreet while training only on Europe data are\nup to +2.8/1.2/1.6 on target data from Africa/Asia/Americas, and +4.6 overall\non the hardest classes. Competitive performance is shown vs. few-shot target\ntraining, and analysis is provided to direct future study of geographical\nrobustness.", + "Deep neural networks are vulnerable to adversarial attacks, often leading to\nerroneous outputs. Adversarial training has been recognized as one of the most\neffective methods to counter such attacks. However, existing adversarial\ntraining techniques have predominantly been tested on balanced datasets,\nwhereas real-world data often exhibit a long-tailed distribution, casting doubt\non the efficacy of these methods in practical scenarios.\n In this paper, we delve into adversarial training under long-tailed\ndistributions. Through an analysis of the previous work \"RoBal\", we discover\nthat utilizing Balanced Softmax Loss alone can achieve performance comparable\nto the complete RoBal approach while significantly reducing training overheads.\nAdditionally, we reveal that, similar to uniform distributions, adversarial\ntraining under long-tailed distributions also suffers from robust overfitting.\nTo address this, we explore data augmentation as a solution and unexpectedly\ndiscover that, unlike results obtained with balanced data, data augmentation\nnot only effectively alleviates robust overfitting but also significantly\nimproves robustness. We further investigate the reasons behind the improvement\nof robustness through data augmentation and identify that it is attributable to\nthe increased diversity of examples.", + "We further investigate the reasons behind the improvement\nof robustness through data augmentation and identify that it is attributable to\nthe increased diversity of examples. Extensive experiments further corroborate\nthat data augmentation alone can significantly improve robustness. 
Finally,\nbuilding on these findings, we demonstrate that compared to RoBal, the\ncombination of BSL and data augmentation leads to a +6.66% improvement in model\nrobustness under AutoAttack on CIFAR-10-LT. Our code is available at\nhttps://github.com/NISPLab/AT-BSL .", + "This paper presents a new approach for the detection of fake videos, based on\nthe analysis of style latent vectors and their abnormal behavior in temporal\nchanges in the generated videos. We discovered that the generated facial videos\nsuffer from the temporal distinctiveness in the temporal changes of style\nlatent vectors, which are inevitable during the generation of temporally stable\nvideos with various facial expressions and geometric transformations. Our\nframework utilizes the StyleGRU module, trained by contrastive learning, to\nrepresent the dynamic properties of style latent vectors. Additionally, we\nintroduce a style attention module that integrates StyleGRU-generated features\nwith content-based features, enabling the detection of visual and temporal\nartifacts. We demonstrate our approach across various benchmark scenarios in\ndeepfake detection, showing its superiority in cross-dataset and\ncross-manipulation scenarios. Through further analysis, we also validate the\nimportance of using temporal changes of style latent vectors to improve the\ngenerality of deepfake video detection.", + "Vision-Language Models (VLMs), such as Flamingo and GPT-4V, have shown\nimmense potential by integrating large language models with vision systems.\nNevertheless, these models face challenges in the fundamental computer vision\ntask of object localisation, due to their training on multimodal data\ncontaining mostly captions without explicit spatial grounding. While it is\npossible to construct custom, supervised training pipelines with bounding box\nannotations that integrate with VLMs, these result in specialized and\nhard-to-scale models. In this paper, we aim to explore the limits of\ncaption-based VLMs and instead propose to tackle the challenge in a simpler\nmanner by i) keeping the weights of a caption-based VLM frozen and ii) not\nusing any supervised detection data. To this end, we introduce an\ninput-agnostic Positional Insert (PIN), a learnable spatial prompt, containing\na minimal set of parameters that are slid inside the frozen VLM, unlocking\nobject localisation capabilities. Our PIN module is trained with a simple\nnext-token prediction task on synthetic data without requiring the introduction\nof new output heads.", + "Our PIN module is trained with a simple\nnext-token prediction task on synthetic data without requiring the introduction\nof new output heads. Our experiments demonstrate strong zero-shot localisation\nperformances on a variety of images, including Pascal VOC, COCO, LVIS, and\ndiverse images like paintings or cartoons.", + "Garment manipulation (e.g., unfolding, folding and hanging clothes) is\nessential for future robots to accomplish home-assistant tasks, while highly\nchallenging due to the diversity of garment configurations, geometries and\ndeformations. Although able to manipulate similar shaped garments in a certain\ntask, previous works mostly have to design different policies for different\ntasks, could not generalize to garments with diverse geometries, and often rely\nheavily on human-annotated data. 
In this paper, we leverage the property that\ngarments in a certain category have similar structures, and then learn the\ntopological dense (point-level) visual correspondence among garments at the\ncategory level with different deformations in a self-supervised manner. The\ntopological correspondence can be easily adapted to the functional\ncorrespondence to guide the manipulation policies for various downstream tasks,\nwith only one- or few-shot demonstrations. Experiments over garments in 3\ndifferent categories on 3 representative tasks in diverse scenarios, using one\nor two arms, taking one or more steps, inputting flat or messy garments,\ndemonstrate the effectiveness of our proposed method. Project page:\nhttps://warshallrho.github.io/unigarmentmanip.", + "Recent weakly supervised semantic segmentation (WSSS) methods strive to\nincorporate contextual knowledge to improve the completeness of class\nactivation maps (CAM). In this work, we argue that the knowledge bias between\ninstances and contexts affects the capability of the prototype to sufficiently\nunderstand instance semantics. Inspired by prototype learning theory, we\npropose leveraging prototype awareness to capture diverse and fine-grained\nfeature attributes of instances. The hypothesis is that contextual prototypes\nmight erroneously activate similar and frequently co-occurring object\ncategories due to this knowledge bias. Therefore, we propose to enhance the\nprototype representation ability by mitigating the bias to better capture\nspatial coverage in semantic object regions. With this goal, we present a\nContext Prototype-Aware Learning (CPAL) strategy, which leverages semantic\ncontext to enrich instance comprehension. The core of this method is to\naccurately capture intra-class variations in object features through\ncontext-aware prototypes, facilitating the adaptation to the semantic\nattributes of various instances. We design feature distribution alignment to\noptimize prototype awareness, aligning instance feature distributions with\ndense features.", + "The core of this method is to\naccurately capture intra-class variations in object features through\ncontext-aware prototypes, facilitating the adaptation to the semantic\nattributes of various instances. We design feature distribution alignment to\noptimize prototype awareness, aligning instance feature distributions with\ndense features. In addition, a unified training framework is proposed to\ncombine label-guided classification supervision and prototypes-guided\nself-supervision. Experimental results on PASCAL VOC 2012 and MS COCO 2014 show\nthat CPAL significantly improves off-the-shelf methods and achieves\nstate-of-the-art performance. The project is available at\nhttps://github.com/Barrett-python/CPAL.", + "In this paper, we explore the potential of the Snapshot Compressive Imaging (SCI)\ntechnique for recovering the underlying 3D scene representation from a single\ntemporally compressed image. SCI is a cost-effective method that enables the\nrecording of high-dimensional data, such as hyperspectral or temporal\ninformation, into a single image using low-cost 2D imaging sensors. 
To achieve\nthis, a series of specially designed 2D masks are usually employed, which not\nonly reduces storage requirements but also offers potential privacy protection.\nInspired by this, to take one step further, our approach builds upon the\npowerful 3D scene representation capabilities of neural radiance fields (NeRF).\nSpecifically, we formulate the physical imaging process of SCI as part of the\ntraining of NeRF, allowing us to exploit its impressive performance in\ncapturing complex scene structures. To assess the effectiveness of our method,\nwe conduct extensive evaluations using both synthetic data and real data\ncaptured by our SCI system. Extensive experimental results demonstrate that our\nproposed approach surpasses the state-of-the-art methods in terms of image\nreconstruction and novel view image synthesis.", + "To assess the effectiveness of our method,\nwe conduct extensive evaluations using both synthetic data and real data\ncaptured by our SCI system. Extensive experimental results demonstrate that our\nproposed approach surpasses the state-of-the-art methods in terms of image\nreconstruction and novel view image synthesis. Moreover, our method also\nexhibits the ability to restore high frame-rate multi-view consistent images by\nleveraging SCI and the rendering capabilities of NeRF. The code is available at\nhttps://github.com/WU-CVGL/SCINeRF.", + "Vision-and-language models trained to match images with text can be combined\nwith visual explanation methods to point to the locations of specific objects\nin an image. Our work shows that the localization --\"grounding\"-- abilities of\nthese models can be further improved by finetuning for self-consistent visual\nexplanations. We propose a strategy for augmenting existing text-image datasets\nwith paraphrases using a large language model, and SelfEQ, a weakly-supervised\nstrategy on visual explanation maps for paraphrases that encourages\nself-consistency. Specifically, for an input textual phrase, we attempt to\ngenerate a paraphrase and finetune the model so that the phrase and paraphrase\nmap to the same region in the image. We posit that this both expands the\nvocabulary that the model is able to handle, and improves the quality of the\nobject locations highlighted by gradient-based visual explanation methods (e.g.\nGradCAM). We demonstrate that SelfEQ improves performance on Flickr30k,\nReferIt, and RefCOCO+ over a strong baseline method and several prior works.", + "GradCAM). We demonstrate that SelfEQ improves performance on Flickr30k,\nReferIt, and RefCOCO+ over a strong baseline method and several prior works.\nParticularly, comparing to other methods that do not use any type of box\nannotations, we obtain 84.07% on Flickr30k (an absolute improvement of 4.69%),\n67.40% on ReferIt (an absolute improvement of 7.68%), and 75.10%, 55.49% on\nRefCOCO+ test sets A and B respectively (an absolute improvement of 3.74% on\naverage).", + "Large Multimodal Models (LMMs) have shown promise in vision-language tasks\nbut struggle with high-resolution input and detailed scene understanding.\nAddressing these challenges, we introduce Monkey to enhance LMM capabilities.\nFirstly, Monkey processes input images by dividing them into uniform patches,\neach matching the size (e.g., 448x448) used in the original training of the\nwell-trained vision encoder. Equipped with individual adapter for each patch,\nMonkey can handle higher resolutions up to 1344x896 pixels, enabling the\ndetailed capture of complex visual information. 
Secondly, it employs a\nmulti-level description generation method, enriching the context for\nscene-object associations. This two-part strategy ensures more effective\nlearning from generated data: the higher resolution allows for a more detailed\ncapture of visuals, which in turn enhances the effectiveness of comprehensive\ndescriptions. Extensive ablative results validate the effectiveness of our\ndesigns. Additionally, experiments on 18 datasets further demonstrate that\nMonkey surpasses existing LMMs in many tasks like Image Captioning and various\nVisual Question Answering formats.", + "Extensive ablative results validate the effectiveness of our\ndesigns. Additionally, experiments on 18 datasets further demonstrate that\nMonkey surpasses existing LMMs in many tasks like Image Captioning and various\nVisual Question Answering formats. Specially, in qualitative tests focused on\ndense text question answering, Monkey has exhibited encouraging results\ncompared with GPT4V. Code is available at\nhttps://github.com/Yuliang-Liu/Monkey.", + "We propose FlashAvatar, a novel and lightweight 3D animatable avatar\nrepresentation that could reconstruct a digital avatar from a short monocular\nvideo sequence in minutes and render high-fidelity photo-realistic images at\n300FPS on a consumer-grade GPU. To achieve this, we maintain a uniform 3D\nGaussian field embedded in the surface of a parametric face model and learn\nextra spatial offset to model non-surface regions and subtle facial details.\nWhile full use of geometric priors can capture high-frequency facial details\nand preserve exaggerated expressions, proper initialization can help reduce the\nnumber of Gaussians, thus enabling super-fast rendering speed. Extensive\nexperimental results demonstrate that FlashAvatar outperforms existing works\nregarding visual quality and personalized details and is almost an order of\nmagnitude faster in rendering speed. Project page:\nhttps://ustc3dv.github.io/FlashAvatar/", + "In recent years, there has been significant progress in the development of\ntext-to-image generative models. Evaluating the quality of the generative\nmodels is one essential step in the development process. Unfortunately, the\nevaluation process could consume a significant amount of computational\nresources, making the required periodic evaluation of model performance (e.g.,\nmonitoring training progress) impractical. Therefore, we seek to improve the\nevaluation efficiency by selecting the representative subset of the text-image\ndataset. We systematically investigate the design choices, including the\nselection criteria (textural features or image-based metrics) and the selection\ngranularity (prompt-level or set-level). We find that the insights from prior\nwork on subset selection for training data do not generalize to this problem,\nand we propose FlashEval, an iterative search algorithm tailored to evaluation\ndata selection. We demonstrate the effectiveness of FlashEval on ranking\ndiffusion models with various configurations, including architectures,\nquantization levels, and sampler schedules on COCO and DiffusionDB datasets.\nOur searched 50-item subset could achieve comparable evaluation quality to the\nrandomly sampled 500-item subset for COCO annotations on unseen models,\nachieving a 10x evaluation speedup.", + "Our searched 50-item subset could achieve comparable evaluation quality to the\nrandomly sampled 500-item subset for COCO annotations on unseen models,\nachieving a 10x evaluation speedup. 
We release the condensed subset of these\ncommonly used datasets to help facilitate diffusion algorithm design and\nevaluation, and open-source FlashEval as a tool for condensing future datasets,\naccessible at https://github.com/thu-nics/FlashEval.", + "The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to\npredict human joint coordinates in 3D space. Despite recent advancements in\ndeep learning-based methods, they mostly ignore the capability of coupling\naccessible texts and naturally feasible knowledge of humans, missing out on\nvaluable implicit supervision to guide the 3D HPE task. Moreover, previous\nefforts often study this task from the perspective of the whole human body,\nneglecting fine-grained guidance hidden in different body parts. To this end,\nwe present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model\nfor 3D HPE, named \\textbf{FinePOSE}. It consists of three core blocks enhancing\nthe reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt\nlearning (FPP) block constructs fine-grained part-aware prompts via coupling\naccessible texts and naturally feasible knowledge of body parts with learnable\nprompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality.", + "(2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp\nStylization (PTS) block integrates learned prompt embedding and temporal\ninformation related to the noise level to enable adaptive adjustment at each\ndenoising step. Extensive experiments on public single-human pose estimation\ndatasets show that FinePOSE outperforms state-of-the-art methods. We further\nextend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE\non the EgoHumans dataset demonstrates the potential of FinePOSE to deal with\ncomplex multi-human scenarios. Code is available at\nhttps://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.", + "Data mixing methods play a crucial role in semi-supervised learning (SSL),\nbut their application is unexplored in long-tailed semi-supervised learning\n(LTSSL). The primary reason is that the in-batch mixing manner fails to address\nclass imbalance. Furthermore, existing LTSSL methods mainly focus on\nre-balancing data quantity but ignore class-wise uncertainty, which is also\nvital for class balance. For instance, some classes with sufficient samples\nmight still exhibit high uncertainty due to indistinguishable features. To this\nend, this paper introduces the Balanced and Entropy-based Mix (BEM), a\npioneering mixing approach to re-balance the class distribution of both data\nquantity and uncertainty. Specifically, we first propose a class balanced mix\nbank to store data of each class for mixing. This bank samples data based on\nthe estimated quantity distribution, thus re-balancing data quantity. Then, we\npresent an entropy-based learning approach to re-balance class-wise\nuncertainty, including entropy-based sampling strategy, entropy-based selection\nmodule, and entropy-based class balanced loss.", + "This bank samples data based on\nthe estimated quantity distribution, thus re-balancing data quantity. 
Then, we\npresent an entropy-based learning approach to re-balance class-wise\nuncertainty, including entropy-based sampling strategy, entropy-based selection\nmodule, and entropy-based class balanced loss. Our BEM first leverages data\nmixing for improving LTSSL, and it can also serve as a complement to the\nexisting re-balancing methods. Experimental results show that BEM significantly\nenhances various LTSSL frameworks and achieves state-of-the-art performances\nacross multiple benchmarks.", + "Holistic understanding of urban scenes based on RGB images is a challenging\nyet important problem. It encompasses understanding both the geometry and\nappearance to enable novel view synthesis, parsing semantic labels, and\ntracking moving objects. Despite considerable progress, existing approaches\noften focus on specific aspects of this task and require additional inputs such\nas LiDAR scans or manually annotated 3D bounding boxes. In this paper, we\nintroduce a novel pipeline that utilizes 3D Gaussian Splatting for holistic\nurban scene understanding. Our main idea involves the joint optimization of\ngeometry, appearance, semantics, and motion using a combination of static and\ndynamic 3D Gaussians, where moving object poses are regularized via physical\nconstraints. Our approach offers the ability to render new viewpoints in\nreal-time, yielding 2D and 3D semantic information with high accuracy, and\nreconstruct dynamic scenes, even in scenarios where 3D bounding box detection\nare highly noisy. Experimental results on KITTI, KITTI-360, and Virtual KITTI 2\ndemonstrate the effectiveness of our approach.", + "Recent methods such as Score Distillation Sampling (SDS) and Variational\nScore Distillation (VSD) using 2D diffusion models for text-to-3D generation\nhave demonstrated impressive generation quality. However, the long generation\ntime of such algorithms significantly degrades the user experience. To tackle\nthis problem, we propose DreamPropeller, a drop-in acceleration algorithm that\ncan be wrapped around any existing text-to-3D generation pipeline based on\nscore distillation. Our framework generalizes Picard iterations, a classical\nalgorithm for parallel sampling an ODE path, and can account for non-ODE paths\nsuch as momentum-based gradient updates and changes in dimensions during the\noptimization process as in many cases of 3D generation. We show that our\nalgorithm trades parallel compute for wallclock time and empirically achieves\nup to 4.7x speedup with a negligible drop in generation quality for all tested\nframeworks.", + "Diffusion models have recently gained unprecedented attention in the field of\nimage synthesis due to their remarkable generative capabilities.\nNotwithstanding their prowess, these models often incur substantial\ncomputational costs, primarily attributed to the sequential denoising process\nand cumbersome model size. Traditional methods for compressing diffusion models\ntypically involve extensive retraining, presenting cost and feasibility\nchallenges. In this paper, we introduce DeepCache, a novel training-free\nparadigm that accelerates diffusion models from the perspective of model\narchitecture. DeepCache capitalizes on the inherent temporal redundancy\nobserved in the sequential denoising steps of diffusion models, which caches\nand retrieves features across adjacent denoising stages, thereby curtailing\nredundant computations. 
Utilizing the property of the U-Net, we reuse the\nhigh-level features while updating the low-level features in a very cheap way.", + "Utilizing the property of the U-Net, we reuse the\nhigh-level features while updating the low-level features in a very cheap way.\nThis innovative strategy, in turn, enables a speedup factor of 2.3$\\times$ for\nStable Diffusion v1.5 with only a 0.05 decline in CLIP Score, and 4.1$\\times$\nfor LDM-4-G with a slight decrease of 0.22 in FID on ImageNet. Our experiments\nalso demonstrate DeepCache's superiority over existing pruning and distillation\nmethods that necessitate retraining and its compatibility with current sampling\ntechniques. Furthermore, we find that under the same throughput, DeepCache\neffectively achieves comparable or even marginally improved results with DDIM\nor PLMS. The code is available at https://github.com/horseee/DeepCache", + "Point clouds captured by different sensors such as RGB-D cameras and LiDAR\npossess non-negligible domain gaps. Most existing methods design different\nnetwork architectures and train separately on point clouds from various\nsensors. Typically, point-based methods achieve outstanding performances on\neven-distributed dense point clouds from RGB-D cameras, while voxel-based\nmethods are more efficient for large-range sparse LiDAR point clouds. In this\npaper, we propose geometry-to-voxel auxiliary learning to enable voxel\nrepresentations to access point-level geometric information, which supports\nbetter generalisation of the voxel-based backbone with additional\ninterpretations of multi-sensor point clouds. Specifically, we construct\nhierarchical geometry pools generated by a voxel-guided dynamic point network,\nwhich efficiently provide auxiliary fine-grained geometric information adapted\nto different stages of voxel features. We conduct experiments on joint\nmulti-sensor datasets to demonstrate the effectiveness of GeoAuxNet. Enjoying\nelaborate geometric information, our method outperforms other models\ncollectively trained on multi-sensor datasets, and achieve competitive results\nwith the-state-of-art experts on each single dataset.", + "Humans possess a remarkable ability to integrate auditory and visual\ninformation, enabling a deeper understanding of the surrounding environment.\nThis early fusion of audio and visual cues, demonstrated through cognitive\npsychology and neuroscience research, offers promising potential for developing\nmultimodal perception models. However, training early fusion architectures\nposes significant challenges, as the increased model expressivity requires\nrobust learning frameworks to harness their enhanced capabilities. In this\npaper, we address this challenge by leveraging the masked reconstruction\nframework, previously successful in unimodal settings, to train audio-visual\nencoders with early fusion. Additionally, we propose an attention-based fusion\nmodule that captures interactions between local audio and visual\nrepresentations, enhancing the model's ability to capture fine-grained\ninteractions. While effective, this procedure can become computationally\nintractable, as the number of local representations increases. Thus, to address\nthe computational complexity, we propose an alternative procedure that\nfactorizes the local representations before representing audio-visual\ninteractions. 
Extensive evaluations on a variety of datasets demonstrate the\nsuperiority of our approach in audio-event classification, visual sound\nlocalization, sound separation, and audio-visual segmentation.", + "Extensive evaluations on a variety of datasets demonstrate the\nsuperiority of our approach in audio-event classification, visual sound\nlocalization, sound separation, and audio-visual segmentation. These\ncontributions enable the efficient training of deeply integrated audio-visual\nmodels and significantly advance the usefulness of early fusion architectures.", + "We introduce a new attention mechanism, dubbed structural self-attention\n(StructSA), that leverages rich correlation patterns naturally emerging in\nkey-query interactions of attention. StructSA generates attention maps by\nrecognizing space-time structures of key-query correlations via convolution and\nuses them to dynamically aggregate local contexts of value features. This\neffectively leverages rich structural patterns in images and videos such as\nscene layouts, object motion, and inter-object relations. Using StructSA as a\nmain building block, we develop the structural vision transformer (StructViT)\nand evaluate its effectiveness on both image and video classification tasks,\nachieving state-of-the-art results on ImageNet-1K, Kinetics-400,\nSomething-Something V1 & V2, Diving-48, and FineGym.", + "Understanding the anatomy of renal pathology is crucial for advancing disease\ndiagnostics, treatment evaluation, and clinical research. The complex kidney\nsystem comprises various components across multiple levels, including regions\n(cortex, medulla), functional units (glomeruli, tubules), and cells (podocytes,\nmesangial cells in glomerulus). Prior studies have predominantly overlooked the\nintricate spatial interrelations among objects from clinical knowledge. In this\nresearch, we introduce a novel universal proposition learning approach, called\npanoramic renal pathology segmentation (PrPSeg), designed to segment\ncomprehensively panoramic structures within kidney by integrating extensive\nknowledge of kidney anatomy.\n In this paper, we propose (1) the design of a comprehensive universal\nproposition matrix for renal pathology, facilitating the incorporation of\nclassification and spatial relationships into the segmentation process; (2) a\ntoken-based dynamic head single network architecture, with the improvement of\nthe partial label image segmentation and capability for future data\nenlargement; and (3) an anatomy loss function, quantifying the inter-object\nrelationships across the kidney.", + "Characters are an important aspect of any storyline and identifying and\nincluding them in descriptions is necessary for story understanding. While\nprevious work has largely ignored identity and generated captions with someone\n(anonymized names), recent work formulates id-aware captioning as a\nfill-in-the-blanks (FITB) task, where, given a caption with blanks, the goal is\nto predict person id labels. However, to predict captions with ids, a two-stage\napproach is required: first predict captions with someone, then fill in\nidentities. In this work, we present a new single stage approach that can\nseamlessly switch between id-aware caption generation or FITB when given a\ncaption with blanks. 
Our model, Movie-Identity Captioner (MICap), uses a shared\nauto-regressive decoder that benefits from training with FITB and full-caption\ngeneration objectives, while the encoder can benefit from or disregard captions\nwith blanks as input. Another challenge with id-aware captioning is the lack of\na metric to capture subtle differences between person ids. To this end, we\nintroduce iSPICE, a caption evaluation metric that focuses on identity tuples\ncreated through intermediate scene graphs.", + "Another challenge with id-aware captioning is the lack of\na metric to capture subtle differences between person ids. To this end, we\nintroduce iSPICE, a caption evaluation metric that focuses on identity tuples\ncreated through intermediate scene graphs. We evaluate MICap on Large-Scale\nMovie Description Challenge (LSMDC), where we show a 4.2% improvement in FITB\naccuracy, and a 1-2% bump in classic captioning metrics.", + "We present GLEE in this work, an object-level foundation model for locating\nand identifying objects in images and videos. Through a unified framework, GLEE\naccomplishes detection, segmentation, tracking, grounding, and identification\nof arbitrary objects in the open world scenario for various object perception\ntasks. Adopting a cohesive learning strategy, GLEE acquires knowledge from\ndiverse data sources with varying supervision levels to formulate general\nobject representations, excelling in zero-shot transfer to new data and tasks.\nSpecifically, we employ an image encoder, text encoder, and visual prompter to\nhandle multi-modal inputs, enabling to simultaneously solve various\nobject-centric downstream tasks while maintaining state-of-the-art performance.\nDemonstrated through extensive training on over five million images from\ndiverse benchmarks, GLEE exhibits remarkable versatility and improved\ngeneralization performance, efficiently tackling downstream tasks without the\nneed for task-specific adaptation. By integrating large volumes of\nautomatically labeled data, we further enhance its zero-shot generalization\ncapabilities. Additionally, GLEE is capable of being integrated into Large\nLanguage Models, serving as a foundational model to provide universal\nobject-level information for multi-modal tasks.", + "By integrating large volumes of\nautomatically labeled data, we further enhance its zero-shot generalization\ncapabilities. Additionally, GLEE is capable of being integrated into Large\nLanguage Models, serving as a foundational model to provide universal\nobject-level information for multi-modal tasks. We hope that the versatility\nand universality of our method will mark a significant step in the development\nof efficient visual foundation models for AGI systems. The model and code will\nbe released at https://glee-vision.github.io .", + "Heterogeneous Federated Learning (HtFL) enables collaborative learning on\nmultiple clients with different model architectures while preserving privacy.\nDespite recent research progress, knowledge sharing in HtFL is still difficult\ndue to data and model heterogeneity. To tackle this issue, we leverage the\nknowledge stored in pre-trained generators and propose a new upload-efficient\nknowledge transfer scheme called Federated Knowledge-Transfer Loop (FedKTL).\nOur FedKTL can produce client-task-related prototypical image-vector pairs via\nthe generator's inference on the server. 
With these pairs, each client can\ntransfer pre-existing knowledge from the generator to its local model through\nan additional supervised local task. We conduct extensive experiments on four\ndatasets under two types of data heterogeneity with 14 kinds of models\nincluding CNNs and ViTs. Results show that our upload-efficient FedKTL\nsurpasses seven state-of-the-art methods by up to 7.31% in accuracy. Moreover,\nour knowledge transfer scheme is applicable in scenarios with only one edge\nclient. Code: https://github.com/TsingZ0/FedKTL", + "We introduce MeshGPT, a new approach for generating triangle meshes that\nreflects the compactness typical of artist-created meshes, in contrast to dense\ntriangle meshes extracted by iso-surfacing methods from neural fields. Inspired\nby recent advances in powerful large language models, we adopt a sequence-based\napproach to autoregressively generate triangle meshes as sequences of\ntriangles. We first learn a vocabulary of latent quantized embeddings, using\ngraph convolutions, which inform these embeddings of the local mesh geometry\nand topology. These embeddings are sequenced and decoded into triangles by a\ndecoder, ensuring that they can effectively reconstruct the mesh. A transformer\nis then trained on this learned vocabulary to predict the index of the next\nembedding given previous embeddings. Once trained, our model can be\nautoregressively sampled to generate new triangle meshes, directly generating\ncompact meshes with sharp edges, more closely imitating the efficient\ntriangulation patterns of human-crafted meshes. MeshGPT demonstrates a notable\nimprovement over state of the art mesh generation methods, with a 9% increase\nin shape coverage and a 30-point enhancement in FID scores across various\ncategories.", + "As a new embodied vision task, Instance ImageGoal Navigation (IIN) aims to\nnavigate to a specified object depicted by a goal image in an unexplored\nenvironment.\n The main challenge of this task lies in identifying the target object from\ndifferent viewpoints while rejecting similar distractors.\n Existing ImageGoal Navigation methods usually adopt the simple\nExploration-Exploitation framework and ignore the identification of specific\ninstance during navigation.\n In this work, we propose to imitate the human behaviour of ``getting closer\nto confirm\" when distinguishing objects from a distance.\n Specifically, we design a new modular navigation framework named\nInstance-aware Exploration-Verification-Exploitation (IEVE) for instance-level\nimage goal navigation.\n Our method allows for active switching among the exploration, verification,\nand exploitation actions, thereby facilitating the agent in making reasonable\ndecisions under different situations.\n On the challenging HabitatMatterport 3D semantic (HM3D-SEM) dataset, our\nmethod surpasses previous state-of-the-art work, with a classical segmentation\nmodel (0.684 vs. 0.561 success) or a robust model (0.702 vs. 0.561 success)", + "Training deep neural networks has become a common approach for addressing\nimage restoration problems. An alternative for training a \"task-specific\"\nnetwork for each observation model is to use pretrained deep denoisers for\nimposing only the signal's prior within iterative algorithms, without\nadditional training. Recently, a sampling-based variant of this approach has\nbecome popular with the rise of diffusion/score-based generative models. 
Using\ndenoisers for general purpose restoration requires guiding the iterations to\nensure agreement of the signal with the observations. In low-noise settings,\nguidance that is based on back-projection (BP) has been shown to be a promising\nstrategy (used recently also under the names \"pseudoinverse\" or\n\"range/null-space\" guidance). However, the presence of noise in the\nobservations hinders the gains from this approach. In this paper, we propose a\nnovel guidance technique, based on preconditioning that allows traversing from\nBP-based guidance to least squares based guidance along the restoration scheme.\nThe proposed approach is robust to noise while still having much simpler\nimplementation than alternative methods (e.g., it does not require SVD or a\nlarge number of iterations).", + "The proposed approach is robust to noise while still having much simpler\nimplementation than alternative methods (e.g., it does not require SVD or a\nlarge number of iterations). We use it within both an optimization scheme and a\nsampling-based scheme, and demonstrate its advantages over existing methods for\nimage deblurring and super-resolution.", + "We present Readout Guidance, a method for controlling text-to-image diffusion\nmodels with learned signals. Readout Guidance uses readout heads, lightweight\nnetworks trained to extract signals from the features of a pre-trained, frozen\ndiffusion model at every timestep. These readouts can encode single-image\nproperties, such as pose, depth, and edges; or higher-order properties that\nrelate multiple images, such as correspondence and appearance similarity.\nFurthermore, by comparing the readout estimates to a user-defined target, and\nback-propagating the gradient through the readout head, these estimates can be\nused to guide the sampling process. Compared to prior methods for conditional\ngeneration, Readout Guidance requires significantly fewer added parameters and\ntraining samples, and offers a convenient and simple recipe for reproducing\ndifferent forms of conditional control under a single framework, with a single\narchitecture and sampling procedure. We showcase these benefits in the\napplications of drag-based manipulation, identity-consistent generation, and\nspatially aligned control. Project page: https://readout-guidance.github.io.", + "We present GaussianAvatar, an efficient approach to creating realistic human\navatars with dynamic 3D appearances from a single video. We start by\nintroducing animatable 3D Gaussians to explicitly represent humans in various\nposes and clothing styles. Such an explicit and animatable representation can\nfuse 3D appearances more efficiently and consistently from 2D observations. Our\nrepresentation is further augmented with dynamic properties to support\npose-dependent appearance modeling, where a dynamic appearance network along\nwith an optimizable feature tensor is designed to learn the\nmotion-to-appearance mapping. Moreover, by leveraging the differentiable motion\ncondition, our method enables a joint optimization of motions and appearances\nduring avatar modeling, which helps to tackle the long-standing issue of\ninaccurate motion estimation in monocular settings. 
The efficacy of\nGaussianAvatar is validated on both the public dataset and our collected\ndataset, demonstrating its superior performances in terms of appearance quality\nand rendering efficiency.", + "Multi-target multi-camera tracking is a crucial task that involves\nidentifying and tracking individuals over time using video streams from\nmultiple cameras. This task has practical applications in various fields, such\nas visual surveillance, crowd behavior analysis, and anomaly detection.\nHowever, due to the difficulty and cost of collecting and labeling data,\nexisting datasets for this task are either synthetically generated or\nartificially constructed within a controlled camera network setting, which\nlimits their ability to model real-world dynamics and generalize to diverse\ncamera configurations. To address this issue, we present MTMMC, a real-world,\nlarge-scale dataset that includes long video sequences captured by 16\nmulti-modal cameras in two different environments - campus and factory - across\nvarious time, weather, and season conditions. This dataset provides a\nchallenging test-bed for studying multi-camera tracking under diverse\nreal-world complexities and includes an additional input modality of spatially\naligned and temporally synchronized RGB and thermal cameras, which enhances the\naccuracy of multi-camera tracking. MTMMC is a super-set of existing datasets,\nbenefiting independent fields such as person detection, re-identification, and\nmultiple object tracking.", + "MTMMC is a super-set of existing datasets,\nbenefiting independent fields such as person detection, re-identification, and\nmultiple object tracking. We provide baselines and new learning setups on this\ndataset and set the reference scores for future studies. The datasets, models,\nand test server will be made publicly available.", + "Patch-based adversarial attacks were proven to compromise the robustness and\nreliability of computer vision systems. However, their conspicuous and easily\ndetectable nature challenge their practicality in real-world setting. To\naddress this, recent work has proposed using Generative Adversarial Networks\n(GANs) to generate naturalistic patches that may not attract human attention.\nHowever, such approaches suffer from a limited latent space making it\nchallenging to produce a patch that is efficient, stealthy, and robust to\nmultiple real-world transformations. This paper introduces a novel approach\nthat produces a Dynamic Adversarial Patch (DAP) designed to overcome these\nlimitations. DAP maintains a naturalistic appearance while optimizing attack\nefficiency and robustness to real-world transformations. The approach involves\nredefining the optimization problem and introducing a novel objective function\nthat incorporates a similarity metric to guide the patch's creation. Unlike\nGAN-based techniques, the DAP directly modifies pixel values within the patch,\nproviding increased flexibility and adaptability to multiple transformations.\nFurthermore, most clothing-based physical attacks assume static objects and\nignore the possible transformations caused by non-rigid deformation due to\nchanges in a person's pose.", + "Unlike\nGAN-based techniques, the DAP directly modifies pixel values within the patch,\nproviding increased flexibility and adaptability to multiple transformations.\nFurthermore, most clothing-based physical attacks assume static objects and\nignore the possible transformations caused by non-rigid deformation due to\nchanges in a person's pose. 
To address this limitation, a 'Creases Transformation' (CT) block is
introduced, enhancing the patch's resilience to a variety of real-world
distortions. Experimental results demonstrate that the proposed approach
outperforms state-of-the-art attacks, achieving a success rate of up to 82.28%
in the digital world when targeting the YOLOv7 detector and 65% in the physical
world when targeting the YOLOv3tiny detector deployed in edge-based smart
cameras.", + "Current diffusion or flow-based generative models for 3D shapes divide into
two categories: distilling pre-trained 2D image diffusion models, and training
directly on 3D shapes. When training diffusion or flow models on 3D shapes, a
crucial design choice is the shape representation. An effective shape
representation needs to adhere to three design principles: it should allow an
efficient conversion of large 3D datasets to the representation form; it should
provide a good tradeoff of approximation power versus number of parameters; and
it should have a simple tensorial form that is compatible with existing
powerful neural architectures. While standard 3D shape representations such as
volumetric grids and point clouds do not adhere to all these principles
simultaneously, we advocate in this paper a new representation that does. We
introduce Mosaic-SDF (M-SDF): a simple 3D shape representation that
approximates the Signed Distance Function (SDF) of a given shape by using a set
of local grids spread near the shape's boundary.", + "We introduce Mosaic-SDF (M-SDF): a simple 3D shape representation that
approximates the Signed Distance Function (SDF) of a given shape by using a set
of local grids spread near the shape's boundary. The M-SDF representation is
fast to compute for each shape individually, making it readily parallelizable;
it is parameter efficient as it only covers the space around the shape's
boundary; and it has a simple matrix form, compatible with Transformer-based
architectures. We demonstrate the efficacy of the M-SDF representation by using
it to train a 3D generative flow model, including class-conditioned generation
with the 3D Warehouse dataset and text-to-3D generation using a dataset of
about 600k caption-shape pairs.", + "Diffusion Handles is a novel approach to enabling 3D object edits on
diffusion images. We accomplish these edits using existing pre-trained
diffusion models and 2D image depth estimation, without any fine-tuning or 3D
object retrieval. The edited results remain plausible, photo-real, and preserve
object identity. Diffusion Handles addresses a critically missing facet of
generative image-based creative design and significantly advances the
state-of-the-art in generative image editing. Our key insight is to lift
diffusion activations for an object to 3D using a proxy depth, 3D-transform the
depth and associated activations, and project them back to image space. The
diffusion process, applied to the manipulated activations with identity
control, produces plausible edited images showing complex 3D occlusion and
lighting effects. We evaluate Diffusion Handles quantitatively on a large
synthetic data benchmark and qualitatively through a user study, showing our
output to be more plausible and better than prior art at both 3D editing and
identity control.
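The Mosaic-SDF entry above represents a shape as a set of small local grids placed near its surface. A minimal sketch of querying such a representation follows, assuming per-grid centers and scales; it uses a nearest-voxel lookup in the nearest local grid, whereas the paper interpolates, so this only illustrates the data layout.

```python
import torch

def msdf_query(points, centers, scales, grids):
    # points: (N, 3), centers: (K, 3), scales: (K,), grids: (K, R, R, R); illustrative shapes.
    k = torch.cdist(points, centers).argmin(dim=1)            # nearest local grid per point
    local = (points - centers[k]) / scales[k, None]           # map into the grid's [-1, 1]^3 box
    R = grids.shape[-1]
    idx = ((local.clamp(-1, 1) + 1) * 0.5 * (R - 1)).round().long()
    return grids[k, idx[:, 0], idx[:, 1], idx[:, 2]]          # approximate SDF values
```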
Project Webpage: https://diffusionhandles.github.io/", + "Sharpness-Aware Minimization (SAM) has been instrumental in improving deep\nneural network training by minimizing both training loss and loss sharpness.\nDespite the practical success, the mechanisms behind SAM's generalization\nenhancements remain elusive, limiting its progress in deep learning\noptimization. In this work, we investigate SAM's core components for\ngeneralization improvement and introduce \"Friendly-SAM\" (F-SAM) to further\nenhance SAM's generalization. Our investigation reveals the key role of\nbatch-specific stochastic gradient noise within the adversarial perturbation,\ni.e., the current minibatch gradient, which significantly influences SAM's\ngeneralization performance. By decomposing the adversarial perturbation in SAM\ninto full gradient and stochastic gradient noise components, we discover that\nrelying solely on the full gradient component degrades generalization while\nexcluding it leads to improved performance. The possible reason lies in the\nfull gradient component's increase in sharpness loss for the entire dataset,\ncreating inconsistencies with the subsequent sharpness minimization step solely\non the current minibatch data. Inspired by these insights, F-SAM aims to\nmitigate the negative effects of the full gradient component.", + "Inspired by these insights, F-SAM aims to\nmitigate the negative effects of the full gradient component. It removes the\nfull gradient estimated by an exponentially moving average (EMA) of historical\nstochastic gradients, and then leverages stochastic gradient noise for improved\ngeneralization. Moreover, we provide theoretical validation for the EMA\napproximation and prove the convergence of F-SAM on non-convex problems.\nExtensive experiments demonstrate the superior generalization performance and\nrobustness of F-SAM over vanilla SAM. Code is available at\nhttps://github.com/nblt/F-SAM.", + "Diffusion models have made tremendous progress in text-driven image and video\ngeneration. Now text-to-image foundation models are widely applied to various\ndownstream image synthesis tasks, such as controllable image generation and\nimage editing, while downstream video synthesis tasks are less explored for\nseveral reasons. First, it requires huge memory and computation overhead to\ntrain a video generation foundation model. Even with video foundation models,\nadditional costly training is still required for downstream video synthesis\ntasks. Second, although some works extend image diffusion models into videos in\na training-free manner, temporal consistency cannot be well preserved. Finally,\nthese adaption methods are specifically designed for one task and fail to\ngeneralize to different tasks. To mitigate these issues, we propose a\ntraining-free general-purpose video synthesis framework, coined as {\\bf\nBIVDiff}, via bridging specific image diffusion models and general\ntext-to-video foundation diffusion models.", + "To mitigate these issues, we propose a\ntraining-free general-purpose video synthesis framework, coined as {\\bf\nBIVDiff}, via bridging specific image diffusion models and general\ntext-to-video foundation diffusion models. Specifically, we first use a\nspecific image diffusion model (e.g., ControlNet and Instruct Pix2Pix) for\nframe-wise video generation, then perform Mixed Inversion on the generated\nvideo, and finally input the inverted latents into the video diffusion models\n(e.g., VidRD and ZeroScope) for temporal smoothing. 
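The F-SAM abstract above removes an EMA estimate of the full gradient from SAM's adversarial perturbation and keeps only the stochastic-noise component. Below is a minimal sketch of that ascent step with illustrative names and hyper-parameters; it is not the released implementation.

```python
import torch

@torch.no_grad()
def fsam_ascent_step(params, ema_grads, rho=0.05, beta=0.9, eps=1e-12):
    # Hedged sketch: `ema_grads` is a running EMA of past minibatch gradients,
    # used as a proxy for the full gradient; the perturbation uses only g - ema.
    noise = []
    for p, m in zip(params, ema_grads):
        g = torch.zeros_like(p) if p.grad is None else p.grad
        m.mul_(beta).add_(g, alpha=1 - beta)    # update the EMA estimate
        noise.append(g - m)                     # stochastic-noise component
    norm = torch.sqrt(sum((n ** 2).sum() for n in noise) + eps)
    e_ws = []
    for p, n in zip(params, noise):
        e_w = rho * n / norm                    # scale to an L2 ball of radius rho
        p.add_(e_w)                             # ascend to the perturbed point
        e_ws.append(e_w)
    return e_ws  # after the second forward/backward pass, subtract these again
```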
This decoupled framework\nenables flexible image model selection for different purposes with strong task\ngeneralization and high efficiency. To validate the effectiveness and general\nuse of BIVDiff, we perform a wide range of video synthesis tasks, including\ncontrollable video generation, video editing, video inpainting, and\noutpainting.", + "The complex dynamicity of open-world objects presents non-negligible\nchallenges for multi-object tracking (MOT), often manifested as severe\ndeformations, fast motion, and occlusions. Most methods that solely depend on\ncoarse-grained object cues, such as boxes and the overall appearance of the\nobject, are susceptible to degradation due to distorted internal relationships\nof dynamic objects. To address this problem, this work proposes NetTrack, an\nefficient, generic, and affordable tracking framework to introduce fine-grained\nlearning that is robust to dynamicity. Specifically, NetTrack constructs a\ndynamicity-aware association with a fine-grained Net, leveraging point-level\nvisual cues. Correspondingly, a fine-grained sampler and matching method have\nbeen incorporated. Furthermore, NetTrack learns object-text correspondence for\nfine-grained localization. To evaluate MOT in extremely dynamic open-world\nscenarios, a bird flock tracking (BFT) dataset is constructed, which exhibits\nhigh dynamicity with diverse species and open-world scenarios.", + "Furthermore, NetTrack learns object-text correspondence for\nfine-grained localization. To evaluate MOT in extremely dynamic open-world\nscenarios, a bird flock tracking (BFT) dataset is constructed, which exhibits\nhigh dynamicity with diverse species and open-world scenarios. Comprehensive\nevaluation on BFT validates the effectiveness of fine-grained learning on\nobject dynamicity, and thorough transfer experiments on challenging open-world\nbenchmarks, i.e., TAO, TAO-OW, AnimalTrack, and GMOT-40, validate the strong\ngeneralization ability of NetTrack even without finetuning. Project page:\nhttps://george-zhuang.github.io/nettrack/.", + "Existing approaches to video understanding, mainly designed for short videos\nfrom a third-person perspective, are limited in their applicability in certain\nfields, such as robotics. In this paper, we delve into open-ended\nquestion-answering (QA) in long, egocentric videos, which allows individuals or\nrobots to inquire about their own past visual experiences. This task presents\nunique challenges, including the complexity of temporally grounding queries\nwithin extensive video content, the high resource demands for precise data\nannotation, and the inherent difficulty of evaluating open-ended answers due to\ntheir ambiguous nature. Our proposed approach tackles these challenges by (i)\nintegrating query grounding and answering within a unified model to reduce\nerror propagation; (ii) employing large language models for efficient and\nscalable data synthesis; and (iii) introducing a close-ended QA task for\nevaluation, to manage answer ambiguity. Extensive experiments demonstrate the\neffectiveness of our method, which also achieves state-of-the-art performance\non the QaEgo4D and Ego4D-NLQ benchmarks. Code, data, and models are available\nat https://github.com/Becomebright/GroundVQA.", + "Predicting the trajectories of road agents is essential for autonomous\ndriving systems. 
The recent mainstream methods follow a static paradigm, which\npredicts the future trajectory by using a fixed duration of historical frames.\nThese methods make the predictions independently even at adjacent time steps,\nwhich leads to potential instability and temporal inconsistency. As successive\ntime steps have largely overlapping historical frames, their forecasting should\nhave intrinsic correlation, such as overlapping predicted trajectories should\nbe consistent, or be different but share the same motion goal depending on the\nroad situation. Motivated by this, in this work, we introduce HPNet, a novel\ndynamic trajectory forecasting method. Aiming for stable and accurate\ntrajectory forecasting, our method leverages not only historical frames\nincluding maps and agent states, but also historical predictions. Specifically,\nwe newly design a Historical Prediction Attention module to automatically\nencode the dynamic relationship between successive predictions. Besides, it\nalso extends the attention range beyond the currently visible window\nbenefitting from the use of historical predictions.", + "Specifically,\nwe newly design a Historical Prediction Attention module to automatically\nencode the dynamic relationship between successive predictions. Besides, it\nalso extends the attention range beyond the currently visible window\nbenefitting from the use of historical predictions. The proposed Historical\nPrediction Attention together with the Agent Attention and Mode Attention is\nfurther formulated as the Triple Factorized Attention module, serving as the\ncore design of HPNet.Experiments on the Argoverse and INTERACTION datasets show\nthat HPNet achieves state-of-the-art performance, and generates accurate and\nstable future trajectories. Our code are available at\nhttps://github.com/XiaolongTang23/HPNet.", + "Generating realistic hand motion sequences in interaction with objects has\ngained increasing attention with the growing interest in digital humans. Prior\nwork has illustrated the effectiveness of employing occupancy-based or\ndistance-based virtual sensors to extract hand-object interaction features.\nNonetheless, these methods show limited generalizability across object\ncategories, shapes and sizes. We hypothesize that this is due to two reasons:\n1) the limited expressiveness of employed virtual sensors, and 2) scarcity of\navailable training data. To tackle this challenge, we introduce a novel\njoint-centered sensor designed to reason about local object geometry near\npotential interaction regions. The sensor queries for object surface points in\nthe neighbourhood of each hand joint. As an important step towards mitigating\nthe learning complexity, we transform the points from global frame to hand\ntemplate frame and use a shared module to process sensor features of each\nindividual joint. This is followed by a spatio-temporal transformer network\naimed at capturing correlation among the joints in different dimensions.\nMoreover, we devise simple heuristic rules to augment the limited training\nsequences with vast static hand grasping samples.", + "This is followed by a spatio-temporal transformer network\naimed at capturing correlation among the joints in different dimensions.\nMoreover, we devise simple heuristic rules to augment the limited training\nsequences with vast static hand grasping samples. This leads to a broader\nspectrum of grasping types observed during training, in turn enhancing our\nmodel's generalization capability. 
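The hand-object abstract above describes a joint-centered sensor that queries object surface points near each hand joint and expresses them in the hand template frame. A minimal sketch of that query, with illustrative shapes and names:

```python
import torch

def joint_centered_sensor(joints, obj_points, R_global_to_template, k=16):
    # joints: (J, 3), obj_points: (P, 3), R_global_to_template: (3, 3); illustrative shapes.
    dists = torch.cdist(joints, obj_points)             # (J, P) pairwise distances
    idx = dists.topk(k, dim=1, largest=False).indices   # k nearest surface points per joint
    offsets = obj_points[idx] - joints[:, None, :]      # joint-centered offsets, (J, k, 3)
    # Offsets are direction vectors, so mapping them into the hand template
    # frame only needs the rotation part of the transform.
    return offsets @ R_global_to_template.T
```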
We evaluate on two public datasets, GRAB and\nInterCap, where our method shows superiority over baselines both quantitatively\nand perceptually.", + "We study the underexplored but fundamental vision problem of machine\nunderstanding of abstract freehand scene sketches. We introduce a sketch\nencoder that results in semantically-aware feature space, which we evaluate by\ntesting its performance on a semantic sketch segmentation task. To train our\nmodel we rely only on the availability of bitmap sketches with their brief\ncaptions and do not require any pixel-level annotations. To obtain\ngeneralization to a large set of sketches and categories, we build on a vision\ntransformer encoder pretrained with the CLIP model. We freeze the text encoder\nand perform visual-prompt tuning of the visual encoder branch while introducing\na set of critical modifications. Firstly, we augment the classical key-query\n(k-q) self-attention blocks with value-value (v-v) self-attention blocks.\nCentral to our model is a two-level hierarchical network design that enables\nefficient semantic disentanglement: The first level ensures holistic scene\nsketch encoding, and the second level focuses on individual categories. We,\nthen, in the second level of the hierarchy, introduce a cross-attention between\ntextual and visual branches.", + "We,\nthen, in the second level of the hierarchy, introduce a cross-attention between\ntextual and visual branches. Our method outperforms zero-shot CLIP pixel\naccuracy of segmentation results by 37 points, reaching an accuracy of $85.5\\%$\non the FS-COCO sketch dataset. Finally, we conduct a user study that allows us\nto identify further improvements needed over our method to reconcile machine\nand human understanding of scene sketches.", + "We present IntrinsicAvatar, a novel approach to recovering the intrinsic\nproperties of clothed human avatars including geometry, albedo, material, and\nenvironment lighting from only monocular videos. Recent advancements in\nhuman-based neural rendering have enabled high-quality geometry and appearance\nreconstruction of clothed humans from just monocular videos. However, these\nmethods bake intrinsic properties such as albedo, material, and environment\nlighting into a single entangled neural representation. On the other hand, only\na handful of works tackle the problem of estimating geometry and disentangled\nappearance properties of clothed humans from monocular videos. They usually\nachieve limited quality and disentanglement due to approximations of secondary\nshading effects via learned MLPs. In this work, we propose to model secondary\nshading effects explicitly via Monte-Carlo ray tracing. We model the rendering\nprocess of clothed humans as a volumetric scattering process, and combine ray\ntracing with body articulation. Our approach can recover high-quality geometry,\nalbedo, material, and lighting properties of clothed humans from a single\nmonocular video, without requiring supervised pre-training using ground truth\nmaterials.", + "Our approach can recover high-quality geometry,\nalbedo, material, and lighting properties of clothed humans from a single\nmonocular video, without requiring supervised pre-training using ground truth\nmaterials. 
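The scene-sketch encoder abstract above augments classical key-query attention with value-value (v-v) self-attention blocks. A minimal single-head sketch of such a block is given below; a real block would add biases, multiple heads, and an output projection.

```python
import torch
import torch.nn.functional as F

def vv_self_attention(x, w_v):
    # x: (B, N, D) token features, w_v: (D, D) value projection; names are illustrative.
    v = x @ w_v
    attn = F.softmax(v @ v.transpose(1, 2) / v.shape[-1] ** 0.5, dim=-1)
    return attn @ v  # attention weights come from the values themselves, not keys/queries
```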
Furthermore, since we explicitly model the volumetric scattering\nprocess and ray tracing, our model naturally generalizes to novel poses,\nenabling animation of the reconstructed avatar in novel lighting conditions.", + "In this work, we present Vlogger, a generic AI system for generating a\nminute-level video blog (i.e., vlog) of user descriptions. Different from short\nvideos with a few seconds, vlog often contains a complex storyline with\ndiversified scenes, which is challenging for most existing video generation\napproaches. To break through this bottleneck, our Vlogger smartly leverages\nLarge Language Model (LLM) as Director and decomposes a long video generation\ntask of vlog into four key stages, where we invoke various foundation models to\nplay the critical roles of vlog professionals, including (1) Script, (2) Actor,\n(3) ShowMaker, and (4) Voicer. With such a design of mimicking human beings,\nour Vlogger can generate vlogs through explainable cooperation of top-down\nplanning and bottom-up shooting. Moreover, we introduce a novel video diffusion\nmodel, ShowMaker, which serves as a videographer in our Vlogger for generating\nthe video snippet of each shooting scene. By incorporating Script and Actor\nattentively as textual and visual prompts, it can effectively enhance\nspatial-temporal coherence in the snippet.", + "By incorporating Script and Actor\nattentively as textual and visual prompts, it can effectively enhance\nspatial-temporal coherence in the snippet. Besides, we design a concise mixed\ntraining paradigm for ShowMaker, boosting its capacity for both T2V generation\nand prediction. Finally, the extensive experiments show that our method\nachieves state-of-the-art performance on zero-shot T2V generation and\nprediction tasks. More importantly, Vlogger can generate over 5-minute vlogs\nfrom open-world descriptions, without loss of video coherence on script and\nactor. The code and model is all available at\nhttps://github.com/zhuangshaobin/Vlogger.", + "Label noise, commonly found in real-world datasets, has a detrimental impact\non a model's generalization. To effectively detect incorrectly labeled\ninstances, previous works have mostly relied on distinguishable training\nsignals, such as training loss, as indicators to differentiate between clean\nand noisy labels. However, they have limitations in that the training signals\nincompletely reveal the model's behavior and are not effectively generalized to\nvarious noise types, resulting in limited detection accuracy. In this paper, we\npropose DynaCor framework that distinguishes incorrectly labeled instances from\ncorrectly labeled ones based on the dynamics of the training signals. To cope\nwith the absence of supervision for clean and noisy labels, DynaCor first\nintroduces a label corruption strategy that augments the original dataset with\nintentionally corrupted labels, enabling indirect simulation of the model's\nbehavior on noisy labels. Then, DynaCor learns to identify clean and noisy\ninstances by inducing two clearly distinguishable clusters from the latent\nrepresentations of training dynamics. Our comprehensive experiments show that\nDynaCor outperforms the state-of-the-art competitors and shows strong\nrobustness to various noise types and noise rates.", + "We present Neural 3D Strokes, a novel technique to generate stylized images\nof a 3D scene at arbitrary novel views from multi-view 2D images. 
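The DynaCor abstract above identifies noisy labels by clustering training dynamics, using deliberately corrupted labels as a reference. A minimal sketch of that decision rule, assuming per-sample loss curves have already been recorded; the paper clusters a learned latent representation of the dynamics rather than raw losses.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_noisy_labels(loss_dynamics, corrupted_mask):
    # loss_dynamics: (N, T) loss of each sample at each epoch (illustrative input).
    # corrupted_mask: (N,) bool, True for samples whose labels were intentionally corrupted.
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(loss_dynamics)
    # The cluster that absorbs most intentionally-corrupted samples is treated
    # as the "noisy" cluster; original samples assigned to it are flagged.
    noisy_cluster = np.bincount(clusters[corrupted_mask]).argmax()
    return (clusters == noisy_cluster) & ~corrupted_mask
```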
Different\nfrom existing methods which apply stylization to trained neural radiance fields\nat the voxel level, our approach draws inspiration from image-to-painting\nmethods, simulating the progressive painting process of human artwork with\nvector strokes. We develop a palette of stylized 3D strokes from basic\nprimitives and splines, and consider the 3D scene stylization task as a\nmulti-view reconstruction process based on these 3D stroke primitives. Instead\nof directly searching for the parameters of these 3D strokes, which would be\ntoo costly, we introduce a differentiable renderer that allows optimizing\nstroke parameters using gradient descent, and propose a training scheme to\nalleviate the vanishing gradient issue. The extensive evaluation demonstrates\nthat our approach effectively synthesizes 3D scenes with significant geometric\nand aesthetic stylization while maintaining a consistent appearance across\ndifferent views. Our method can be further integrated with style loss and\nimage-text contrastive models to extend its applications, including color\ntransfer and text-driven 3D scene drawing.", + "Our method can be further integrated with style loss and\nimage-text contrastive models to extend its applications, including color\ntransfer and text-driven 3D scene drawing. Results and code are available at\nhttp://buaavrcg.github.io/Neural3DStrokes.", + "We present a 3D-aware one-shot head reenactment method based on a fully\nvolumetric neural disentanglement framework for source appearance and driver\nexpressions. Our method is real-time and produces high-fidelity and\nview-consistent output, suitable for 3D teleconferencing systems based on\nholographic displays. Existing cutting-edge 3D-aware reenactment methods often\nuse neural radiance fields or 3D meshes to produce view-consistent appearance\nencoding, but, at the same time, they rely on linear face models, such as 3DMM,\nto achieve its disentanglement with facial expressions. As a result, their\nreenactment results often exhibit identity leakage from the driver or have\nunnatural expressions. To address these problems, we propose a neural\nself-supervised disentanglement approach that lifts both the source image and\ndriver video frame into a shared 3D volumetric representation based on\ntri-planes. This representation can then be freely manipulated with expression\ntri-planes extracted from the driving images and rendered from an arbitrary\nview using neural radiance fields.", + "This representation can then be freely manipulated with expression\ntri-planes extracted from the driving images and rendered from an arbitrary\nview using neural radiance fields. We achieve this disentanglement via\nself-supervised learning on a large in-the-wild video dataset. We further\nintroduce a highly effective fine-tuning approach to improve the\ngeneralizability of the 3D lifting using the same real-world data. We\ndemonstrate state-of-the-art performance on a wide range of datasets, and also\nshowcase high-quality 3D-aware head reenactment on highly challenging and\ndiverse subjects, including non-frontal head poses and complex expressions for\nboth source and driver.", + "Existing automatic captioning methods for visual content face challenges such\nas lack of detail, content hallucination, and poor instruction following. In\nthis work, we propose VisualFactChecker (VFC), a flexible training-free\npipeline that generates high-fidelity and detailed captions for both 2D images\nand 3D objects. 
VFC consists of three steps: 1) proposal, where image-to-text\ncaptioning models propose multiple initial captions; 2) verification, where a\nlarge language model (LLM) utilizes tools such as object detection and VQA\nmodels to fact-check proposed captions; 3) captioning, where an LLM generates\nthe final caption by summarizing caption proposals and the fact check\nverification results. In this step, VFC can flexibly generate captions in\nvarious styles following complex instructions. We conduct comprehensive\ncaptioning evaluations using four metrics: 1) CLIP-Score for image-text\nsimilarity; 2) CLIP-Image-Score for measuring the image-image similarity\nbetween the original and the reconstructed image generated by a text-to-image\nmodel using the caption.", + "We conduct comprehensive\ncaptioning evaluations using four metrics: 1) CLIP-Score for image-text\nsimilarity; 2) CLIP-Image-Score for measuring the image-image similarity\nbetween the original and the reconstructed image generated by a text-to-image\nmodel using the caption. 3) human study on Amazon Mechanical Turk; 4) GPT-4V\nfor fine-grained evaluation. Evaluation results show that VFC outperforms\nstate-of-the-art open-sourced captioning methods for 2D images on the COCO\ndataset and 3D assets on the Objaverse dataset. Our study demonstrates that by\ncombining open-source models into a pipeline, we can attain captioning\ncapability comparable to proprietary models such as GPT-4V, despite being over\n10x smaller in model size.", + "Collaborative perception empowers each agent to improve its perceptual\nability through the exchange of perceptual messages with other agents. It\ninherently results in a fundamental trade-off between perception ability and\ncommunication cost. To address this bottleneck issue, our core idea is to\noptimize the collaborative messages from two key aspects: representation and\nselection. The proposed codebook-based message representation enables the\ntransmission of integer codes, rather than high-dimensional feature maps. The\nproposed information-filling-driven message selection optimizes local messages\nto collectively fill each agent's information demand, preventing information\noverflow among multiple agents. By integrating these two designs, we propose\nCodeFilling, a novel communication-efficient collaborative perception system,\nwhich significantly advances the perception-communication trade-off and is\ninclusive to both homogeneous and heterogeneous collaboration settings. We\nevaluate CodeFilling in both a real-world dataset, DAIR-V2X, and a new\nsimulation dataset, OPV2VH+. Results show that CodeFilling outperforms previous\nSOTA Where2comm on DAIR-V2X/OPV2VH+ with 1,333/1,206 times lower communication\nvolume.", + "Results show that CodeFilling outperforms previous\nSOTA Where2comm on DAIR-V2X/OPV2VH+ with 1,333/1,206 times lower communication\nvolume. Our code is available at https://github.com/PhyllisH/CodeFilling.", + "Federated learning (FL) has emerged as a powerful paradigm for learning from\ndecentralized data, and federated domain generalization further considers the\ntest dataset (target domain) is absent from the decentralized training data\n(source domains). 
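The collaborative-perception abstract above replaces transmitted feature maps with integer codes from a shared codebook. A minimal sketch of that codebook-based message representation follows; the information-filling message selection is omitted and the shapes are illustrative.

```python
import torch

def quantize_messages(features, codebook):
    # features: (H, W, D) BEV feature map of one agent; codebook: (K, D) shared entries.
    flat = features.reshape(-1, features.shape[-1])          # (H*W, D)
    codes = torch.cdist(flat, codebook).argmin(dim=1)        # index of nearest codebook entry
    decoded = codebook[codes].reshape(features.shape)        # what the receiver reconstructs
    return codes.reshape(features.shape[:2]), decoded        # integer codes to transmit
```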
However, most existing FL methods assume that domain labels\nare provided during training, and their evaluation imposes explicit constraints\non the number of domains, which must strictly match the number of clients.\nBecause of the underutilization of numerous edge devices and additional\ncross-client domain annotations in the real world, such restrictions may be\nimpractical and involve potential privacy leaks. In this paper, we propose an\nefficient and novel approach, called Disentangled Prompt Tuning (DiPrompT), a\nmethod that tackles the above restrictions by learning adaptive prompts for\ndomain generalization in a distributed manner. Specifically, we first design\ntwo types of prompts, i.e., global prompt to capture general knowledge across\nall clients and domain prompts to capture domain-specific knowledge. They\neliminate the restriction on the one-to-one mapping between source domains and\nlocal clients.", + "Specifically, we first design\ntwo types of prompts, i.e., global prompt to capture general knowledge across\nall clients and domain prompts to capture domain-specific knowledge. They\neliminate the restriction on the one-to-one mapping between source domains and\nlocal clients. Furthermore, a dynamic query metric is introduced to\nautomatically search the suitable domain label for each sample, which includes\ntwo-substep text-image alignments based on prompt tuning without\nlabor-intensive annotation. Extensive experiments on multiple datasets\ndemonstrate that our DiPrompT achieves superior domain generalization\nperformance over state-of-the-art FL methods when domain labels are not\nprovided, and even outperforms many centralized learning methods using domain\nlabels.", + "Low-light scenes are prevalent in real-world applications (e.g. autonomous\ndriving and surveillance at night). Recently, multi-object tracking in various\npractical use cases have received much attention, but multi-object tracking in\ndark scenes is rarely considered. In this paper, we focus on multi-object\ntracking in dark scenes. To address the lack of datasets, we first build a\nLow-light Multi-Object Tracking (LMOT) dataset. LMOT provides well-aligned\nlow-light video pairs captured by our dual-camera system, and high-quality\nmulti-object tracking annotations for all videos. Then, we propose a low-light\nmulti-object tracking method, termed as LTrack. We introduce the adaptive\nlow-pass downsample module to enhance low-frequency components of images\noutside the sensor noises. The degradation suppression learning strategy\nenables the model to learn invariant information under noise disturbance and\nimage quality degradation. These components improve the robustness of\nmulti-object tracking in dark scenes. We conducted a comprehensive analysis of\nour LMOT dataset and proposed LTrack. Experimental results demonstrate the\nsuperiority of the proposed method and its competitiveness in real night\nlow-light scenes. Dataset and Code: https: //github.com/ying-fu/LMOT", + "Human image editing includes tasks like changing a person's pose, their\nclothing, or editing the image according to a text prompt. However, prior work\noften tackles these tasks separately, overlooking the benefit of mutual\nreinforcement from learning them jointly. In this paper, we propose UniHuman, a\nunified model that addresses multiple facets of human image editing in\nreal-world settings. 
To enhance the model's generation quality and\ngeneralization capacity, we leverage guidance from human visual encoders and\nintroduce a lightweight pose-warping module that can exploit different pose\nrepresentations, accommodating unseen textures and patterns. Furthermore, to\nbridge the disparity between existing human editing benchmarks with real-world\ndata, we curated 400K high-quality human image-text pairs for training and\ncollected 2K human images for out-of-domain testing, both encompassing diverse\nclothing styles, backgrounds, and age groups. Experiments on both in-domain and\nout-of-domain test sets demonstrate that UniHuman outperforms task-specific\nmodels by a significant margin. In user studies, UniHuman is preferred by the\nusers in an average of 77% of cases.", + "Experiments on both in-domain and\nout-of-domain test sets demonstrate that UniHuman outperforms task-specific\nmodels by a significant margin. In user studies, UniHuman is preferred by the\nusers in an average of 77% of cases. Our project is available at\nhttps://github.com/NannanLi999/UniHuman.", + "Text-to-image (T2I) generative models have attracted significant attention\nand found extensive applications within and beyond academic research. For\nexample, the Civitai community, a platform for T2I innovation, currently hosts\nan impressive array of 74,492 distinct models. However, this diversity presents\na formidable challenge in selecting the most appropriate model and parameters,\na process that typically requires numerous trials. Drawing inspiration from the\ntool usage research of large language models (LLMs), we introduce DiffAgent, an\nLLM agent designed to screen the accurate selection in seconds via API calls.\nDiffAgent leverages a novel two-stage training framework, SFTA, enabling it to\naccurately align T2I API responses with user input in accordance with human\npreferences. To train and evaluate DiffAgent's capabilities, we present\nDABench, a comprehensive dataset encompassing an extensive range of T2I APIs\nfrom the community. Our evaluations reveal that DiffAgent not only excels in\nidentifying the appropriate T2I API but also underscores the effectiveness of\nthe SFTA training framework. Codes are available at\nhttps://github.com/OpenGVLab/DiffAgent.", + "Neural field is an emerging paradigm in data representation that trains a\nneural network to approximate the given signal. A key obstacle that prevents\nits widespread adoption is the encoding speed-generating neural fields requires\nan overfitting of a neural network, which can take a significant number of SGD\nsteps to reach the desired fidelity level. In this paper, we delve into the\nimpacts of data transformations on the speed of neural field training,\nspecifically focusing on how permuting pixel locations affect the convergence\nspeed of SGD. Counterintuitively, we find that randomly permuting the pixel\nlocations can considerably accelerate the training. To explain this phenomenon,\nwe examine the neural field training through the lens of PSNR curves, loss\nlandscapes, and error patterns. Our analyses suggest that the random pixel\npermutations remove the easy-to-fit patterns, which facilitate easy\noptimization in the early stage but hinder capturing fine details of the\nsignal.", + "We present a method for reconstructing 3D shape of arbitrary Lambertian\nobjects based on measurements by miniature, energy-efficient, low-cost\nsingle-photon cameras. 
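The neural-field abstract above studies how randomly permuting pixel locations changes the convergence of field fitting. A minimal sketch of that experiment with a small ReLU MLP is shown below; the image here is a random stand-in and should be replaced with real pixel values to reproduce the reported effect.

```python
import torch
import torch.nn as nn

def fit_field(coords, targets, steps=1000, lr=1e-3):
    # Fit a small MLP neural field mapping 2D coordinates to RGB values.
    net = nn.Sequential(nn.Linear(2, 256), nn.ReLU(),
                        nn.Linear(256, 256), nn.ReLU(),
                        nn.Linear(256, 3))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((net(coords) - targets) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

H = W = 64
coords = torch.stack(torch.meshgrid(torch.linspace(-1, 1, H),
                                    torch.linspace(-1, 1, W),
                                    indexing="ij"), dim=-1).reshape(-1, 2)
image = torch.rand(H * W, 3)            # stand-in; load a real image for the actual comparison
perm = torch.randperm(H * W)            # randomly permute which value sits at which location
print("original :", fit_field(coords, image))
print("permuted :", fit_field(coords, image[perm]))
```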
These cameras, operating as time resolved image sensors,\nilluminate the scene with a very fast pulse of diffuse light and record the\nshape of that pulse as it returns back from the scene at a high temporal\nresolution. We propose to model this image formation process, account for its\nnon-idealities, and adapt neural rendering to reconstruct 3D geometry from a\nset of spatially distributed sensors with known poses. We show that our\napproach can successfully recover complex 3D shapes from simulated data. We\nfurther demonstrate 3D object reconstruction from real-world captures,\nutilizing measurements from a commodity proximity sensor. Our work draws a\nconnection between image-based modeling and active range scanning and is a step\ntowards 3D vision with single-photon cameras.", + "We introduce WonderJourney, a modularized framework for perpetual 3D scene\ngeneration. Unlike prior work on view generation that focuses on a single type\nof scenes, we start at any user-provided location (by a text description or an\nimage) and generate a journey through a long sequence of diverse yet coherently\nconnected 3D scenes. We leverage an LLM to generate textual descriptions of the\nscenes in this journey, a text-driven point cloud generation pipeline to make a\ncompelling and coherent sequence of 3D scenes, and a large VLM to verify the\ngenerated scenes. We show compelling, diverse visual results across various\nscene types and styles, forming imaginary \"wonderjourneys\". Project website:\nhttps://kovenyu.com/WonderJourney/", + "We present the training recipe and results of scaling up PaLI-X, a\nmultilingual vision and language model, both in terms of size of the components\nand the breadth of its training task mixture. Our model achieves new levels of\nperformance on a wide-range of varied and complex tasks, including multiple\nimage-based captioning and question-answering tasks, image-based document\nunderstanding and few-shot (in-context) learning, as well as object detection,\nvideo question answering, and video captioning. PaLI-X advances the\nstate-of-the-art on most vision-and-language benchmarks considered (25+ of\nthem). Finally, we observe emerging capabilities, such as complex counting and\nmultilingual object detection, tasks that are not explicitly in the training\nmix.", + "In this paper, we introduce the problem of zero-shot text-guided exploration\nof the solutions to open-domain image super-resolution. Our goal is to allow\nusers to explore diverse, semantically accurate reconstructions that preserve\ndata consistency with the low-resolution inputs for different large\ndownsampling factors without explicitly training for these specific\ndegradations. We propose two approaches for zero-shot text-guided\nsuper-resolution - i) modifying the generative process of text-to-image\n\\textit{T2I} diffusion models to promote consistency with low-resolution\ninputs, and ii) incorporating language guidance into zero-shot diffusion-based\nrestoration methods. We show that the proposed approaches result in diverse\nsolutions that match the semantic meaning provided by the text prompt while\npreserving data consistency with the degraded inputs. We evaluate the proposed\nbaselines for the task of extreme super-resolution and demonstrate advantages\nin terms of restoration quality, diversity, and explorability of solutions.", + "Recent approaches such as ControlNet offer users fine-grained spatial control\nover text-to-image (T2I) diffusion models. 
However, auxiliary modules have to\nbe trained for each type of spatial condition, model architecture, and\ncheckpoint, putting them at odds with the diverse intents and preferences a\nhuman designer would like to convey to the AI models during the content\ncreation process. In this work, we present FreeControl, a training-free\napproach for controllable T2I generation that supports multiple conditions,\narchitectures, and checkpoints simultaneously. FreeControl designs structure\nguidance to facilitate the structure alignment with a guidance image, and\nappearance guidance to enable the appearance sharing between images generated\nusing the same seed. Extensive qualitative and quantitative experiments\ndemonstrate the superior performance of FreeControl across a variety of\npre-trained T2I models. In particular, FreeControl facilitates convenient\ntraining-free control over many different architectures and checkpoints, allows\nthe challenging input conditions on which most of the existing training-free\nmethods fail, and achieves competitive synthesis quality with training-based\napproaches.", + "Text-to-video diffusion models have advanced video generation significantly.\nHowever, customizing these models to generate videos with tailored motions\npresents a substantial challenge. In specific, they encounter hurdles in (a)\naccurately reproducing motion from a target video, and (b) creating diverse\nvisual variations. For example, straightforward extensions of static image\ncustomization methods to video often lead to intricate entanglements of\nappearance and motion data. To tackle this, here we present the Video Motion\nCustomization (VMC) framework, a novel one-shot tuning approach crafted to\nadapt temporal attention layers within video diffusion models. Our approach\nintroduces a novel motion distillation objective using residual vectors between\nconsecutive frames as a motion reference. The diffusion process then preserves\nlow-frequency motion trajectories while mitigating high-frequency\nmotion-unrelated noise in image space. We validate our method against\nstate-of-the-art video generative models across diverse real-world motions and\ncontexts. Our codes, data and the project demo can be found at\nhttps://video-motion-customization.github.io", + "3D simulated environments play a critical role in Embodied AI, but their\ncreation requires expertise and extensive manual effort, restricting their\ndiversity and scope. To mitigate this limitation, we present Holodeck, a system\nthat generates 3D environments to match a user-supplied prompt fully\nautomatedly. Holodeck can generate diverse scenes, e.g., arcades, spas, and\nmuseums, adjust the designs for styles, and can capture the semantics of\ncomplex queries such as \"apartment for a researcher with a cat\" and \"office of\na professor who is a fan of Star Wars\". Holodeck leverages a large language\nmodel (i.e., GPT-4) for common sense knowledge about what the scene might look\nlike and uses a large collection of 3D assets from Objaverse to populate the\nscene with diverse objects. To address the challenge of positioning objects\ncorrectly, we prompt GPT-4 to generate spatial relational constraints between\nobjects and then optimize the layout to satisfy those constraints.", + "To address the challenge of positioning objects\ncorrectly, we prompt GPT-4 to generate spatial relational constraints between\nobjects and then optimize the layout to satisfy those constraints. 
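The Video Motion Customization abstract above distills motion through residual vectors between consecutive frames. A minimal sketch of such a residual-based objective using cosine similarity is given below; the paper applies it to diffusion latents during temporal-attention tuning, which is omitted here, and the exact loss may differ.

```python
import torch
import torch.nn.functional as F

def motion_distillation_loss(pred_frames, target_frames):
    # pred_frames, target_frames: (T, C, H, W); illustrative shapes.
    pred_res = pred_frames[1:] - pred_frames[:-1]     # per-step motion residuals
    tgt_res = target_frames[1:] - target_frames[:-1]
    # Match the direction of the residuals so motion is preserved while
    # appearance-dependent magnitude differences are down-weighted.
    cos = F.cosine_similarity(pred_res.flatten(1), tgt_res.flatten(1), dim=1)
    return (1.0 - cos).mean()
```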
Our\nlarge-scale human evaluation shows that annotators prefer Holodeck over\nmanually designed procedural baselines in residential scenes and that Holodeck\ncan produce high-quality outputs for diverse scene types. We also demonstrate\nan exciting application of Holodeck in Embodied AI, training agents to navigate\nin novel scenes like music rooms and daycares without human-constructed data,\nwhich is a significant step forward in developing general-purpose embodied\nagents.", + "The proliferation of large-scale AI models trained on extensive datasets has\nrevolutionized machine learning. With these models taking on increasingly\ncentral roles in various applications, the need to understand their behavior\nand enhance interpretability has become paramount. To investigate the impact of\nchanges in training data on a pre-trained model, a common approach is\nleave-one-out retraining. This entails systematically altering the training\ndataset by removing specific samples to observe resulting changes within the\nmodel. However, retraining the model for each altered dataset presents a\nsignificant computational challenge, given the need to perform this operation\nfor every dataset variation. In this paper, we introduce an efficient framework\nfor assessing data impact, comprising offline training and online evaluation\nstages. During the offline training phase, we approximate the influence of\ntraining data on the target model through a distilled synset, formulated as a\nreversed gradient matching problem. For online evaluation, we expedite the\nleave-one-out process using the synset, which is then utilized to compute the\nattribution matrix based on the evaluation objective.", + "For online evaluation, we expedite the\nleave-one-out process using the synset, which is then utilized to compute the\nattribution matrix based on the evaluation objective. Experimental evaluations,\nincluding training data attribution and assessments of data quality,\ndemonstrate that our proposed method achieves comparable model behavior\nevaluation while significantly speeding up the process compared to the direct\nretraining method.", + "Diffusion models have achieved great success in synthesizing high-quality\nimages. However, generating high-resolution images with diffusion models is\nstill challenging due to the enormous computational costs, resulting in a\nprohibitive latency for interactive applications. In this paper, we propose\nDistriFusion to tackle this problem by leveraging parallelism across multiple\nGPUs. Our method splits the model input into multiple patches and assigns each\npatch to a GPU. However, naively implementing such an algorithm breaks the\ninteraction between patches and loses fidelity, while incorporating such an\ninteraction will incur tremendous communication overhead. To overcome this\ndilemma, we observe the high similarity between the input from adjacent\ndiffusion steps and propose displaced patch parallelism, which takes advantage\nof the sequential nature of the diffusion process by reusing the pre-computed\nfeature maps from the previous timestep to provide context for the current\nstep. Therefore, our method supports asynchronous communication, which can be\npipelined by computation.", + "Therefore, our method supports asynchronous communication, which can be\npipelined by computation. 
Extensive experiments show that our method can be\napplied to recent Stable Diffusion XL with no quality degradation and achieve\nup to a 6.1$\\times$ speedup on eight NVIDIA A100s compared to one. Our code is\npublicly available at https://github.com/mit-han-lab/distrifuser.", + "The success of large language models has inspired the computer vision\ncommunity to explore image segmentation foundation model that is able to\nzero/few-shot generalize through prompt engineering. Segment-Anything(SAM),\namong others, is the state-of-the-art image segmentation foundation model\ndemonstrating strong zero/few-shot generalization. Despite the success, recent\nstudies reveal the weakness of SAM under strong distribution shift. In\nparticular, SAM performs awkwardly on corrupted natural images, camouflaged\nimages, medical images, etc. Motivated by the observations, we aim to develop a\nself-training based strategy to adapt SAM to target distribution. Given the\nunique challenges of large source dataset, high computation cost and incorrect\npseudo label, we propose a weakly supervised self-training architecture with\nanchor regularization and low-rank finetuning to improve the robustness and\ncomputation efficiency of adaptation. We validate the effectiveness on 5 types\nof downstream segmentation tasks including natural clean/corrupted images,\nmedical images, camouflaged images and robotic images. Our proposed method is\ntask-agnostic in nature and outperforms pre-trained SAM and state-of-the-art\ndomain adaptation methods on almost all downstream tasks with the same testing\nprompt inputs.", + "Recent self-training techniques have shown notable improvements in\nunsupervised domain adaptation for 3D object detection (3D UDA). These\ntechniques typically select pseudo labels, i.e., 3D boxes, to supervise models\nfor the target domain. However, this selection process inevitably introduces\nunreliable 3D boxes, in which 3D points cannot be definitively assigned as\nforeground or background. Previous techniques mitigate this by reweighting\nthese boxes as pseudo labels, but these boxes can still poison the training\nprocess. To resolve this problem, in this paper, we propose a novel pseudo\nlabel refinery framework. Specifically, in the selection process, to improve\nthe reliability of pseudo boxes, we propose a complementary augmentation\nstrategy. This strategy involves either removing all points within an\nunreliable box or replacing it with a high-confidence box. Moreover, the point\nnumbers of instances in high-beam datasets are considerably higher than those\nin low-beam datasets, also degrading the quality of pseudo labels during the\ntraining process. We alleviate this issue by generating additional proposals\nand aligning RoI features across different domains.", + "Moreover, the point\nnumbers of instances in high-beam datasets are considerably higher than those\nin low-beam datasets, also degrading the quality of pseudo labels during the\ntraining process. We alleviate this issue by generating additional proposals\nand aligning RoI features across different domains. Experimental results\ndemonstrate that our method effectively enhances the quality of pseudo labels\nand consistently surpasses the state-of-the-art methods on six autonomous\ndriving benchmarks. 
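The SAM-adaptation abstract above relies on low-rank finetuning to keep adaptation cheap. Below is a generic LoRA-style adapter in that spirit, not the authors' exact module: the frozen base weight is augmented with a trainable low-rank update.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A (generic sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                # keep the backbone frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no-op at start
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```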
Code will be available at\nhttps://github.com/Zhanwei-Z/PERE.", + "We present an approach that can reconstruct hands in 3D from monocular input.\nOur approach for Hand Mesh Recovery, HaMeR, follows a fully transformer-based\narchitecture and can analyze hands with significantly increased accuracy and\nrobustness compared to previous work. The key to HaMeR's success lies in\nscaling up both the data used for training and the capacity of the deep network\nfor hand reconstruction. For training data, we combine multiple datasets that\ncontain 2D or 3D hand annotations. For the deep model, we use a large scale\nVision Transformer architecture. Our final model consistently outperforms the\nprevious baselines on popular 3D hand pose benchmarks. To further evaluate the\neffect of our design in non-controlled settings, we annotate existing\nin-the-wild datasets with 2D hand keypoint annotations. On this newly collected\ndataset of annotations, HInt, we demonstrate significant improvements over\nexisting baselines. We make our code, data and models available on the project\nwebsite: https://geopavlakos.github.io/hamer/.", + "Contrastive Vision-Language Pre-training, known as CLIP, has shown promising\neffectiveness in addressing downstream image recognition tasks. However, recent\nworks revealed that the CLIP model can be implanted with a downstream-oriented\nbackdoor. On downstream tasks, one victim model performs well on clean samples\nbut predicts a specific target class whenever a specific trigger is present.\nFor injecting a backdoor, existing attacks depend on a large amount of\nadditional data to maliciously fine-tune the entire pre-trained CLIP model,\nwhich makes them inapplicable to data-limited scenarios. In this work,\nmotivated by the recent success of learnable prompts, we address this problem\nby injecting a backdoor into the CLIP model in the prompt learning stage. Our\nmethod named BadCLIP is built on a novel and effective mechanism in backdoor\nattacks on CLIP, i.e., influencing both the image and text encoders with the\ntrigger. It consists of a learnable trigger applied to images and a\ntrigger-aware context generator, such that the trigger can change text features\nvia trigger-aware prompts, resulting in a powerful and generalizable attack.", + "It consists of a learnable trigger applied to images and a\ntrigger-aware context generator, such that the trigger can change text features\nvia trigger-aware prompts, resulting in a powerful and generalizable attack.\nExtensive experiments conducted on 11 datasets verify that the clean accuracy\nof BadCLIP is similar to those of advanced prompt learning methods and the\nattack success rate is higher than 99% in most cases. BadCLIP is also\ngeneralizable to unseen classes, and shows a strong generalization capability\nunder cross-dataset and cross-domain settings.", + "In real-world scenarios, image recognition tasks, such as semantic\nsegmentation and object detection, often pose greater challenges due to the\nlack of information available within low-resolution (LR) content. Image\nsuper-resolution (SR) is one of the promising solutions for addressing the\nchallenges. However, due to the ill-posed property of SR, it is challenging for\ntypical SR methods to restore task-relevant high-frequency contents, which may\ndilute the advantage of utilizing the SR method. 
Therefore, in this paper, we\npropose Super-Resolution for Image Recognition (SR4IR) that effectively guides\nthe generation of SR images beneficial to achieving satisfactory image\nrecognition performance when processing LR images. The critical component of\nour SR4IR is the task-driven perceptual (TDP) loss that enables the SR network\nto acquire task-specific knowledge from a network tailored for a specific task.\nMoreover, we propose a cross-quality patch mix and an alternate training\nframework that significantly enhances the efficacy of the TDP loss by\naddressing potential problems when employing the TDP loss.", + "Moreover, we propose a cross-quality patch mix and an alternate training\nframework that significantly enhances the efficacy of the TDP loss by\naddressing potential problems when employing the TDP loss. Through extensive\nexperiments, we demonstrate that our SR4IR achieves outstanding task\nperformance by generating SR images useful for a specific image recognition\ntask, including semantic segmentation, object detection, and image\nclassification. The implementation code is available at\nhttps://github.com/JaehaKim97/SR4IR.", + "Applying a pre-trained large model to downstream tasks is prohibitive under\nresource-constrained conditions. Recent dominant approaches for addressing\nefficiency issues involve adding a few learnable parameters to the fixed\nbackbone model. This strategy, however, leads to more challenges in loading\nlarge models for downstream fine-tuning with limited resources. In this paper,\nwe propose a novel method for increasing the parameter efficiency of\npre-trained models by introducing an intermediate pre-training stage. To this\nend, we first employ low-rank approximation to compress the original large\nmodel and then devise a feature distillation module and a weight perturbation\nregularization module. These modules are specifically designed to enhance the\nlow-rank model. In particular, we update only the low-rank model while freezing\nthe backbone parameters during pre-training. This allows for direct and\nefficient utilization of the low-rank model for downstream fine-tuning tasks.\nThe proposed method achieves both efficiencies in terms of required parameters\nand computation time while maintaining comparable results with minimal\nmodifications to the backbone architecture.", + "This allows for direct and\nefficient utilization of the low-rank model for downstream fine-tuning tasks.\nThe proposed method achieves both efficiencies in terms of required parameters\nand computation time while maintaining comparable results with minimal\nmodifications to the backbone architecture. Specifically, when applied to three\nvision-only and one vision-language Transformer models, our approach often\ndemonstrates a merely $\\sim$0.6 point decrease in performance while reducing\nthe original parameter size by 1/3 to 2/3.", + "Conventional image sensors digitize high-resolution images at fast frame\nrates, producing a large amount of data that needs to be transmitted off the\nsensor for further processing. This is challenging for perception systems\noperating on edge devices, because communication is power inefficient and\ninduces latency. Fueled by innovations in stacked image sensor fabrication,\nemerging sensor-processors offer programmability and minimal processing\ncapabilities directly on the sensor. 
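The SR4IR abstract above introduces a task-driven perceptual (TDP) loss that matches SR and HR images in the feature space of the downstream task network. A minimal sketch follows, assuming the task network exposes ResNet-style `layer2`/`layer3` blocks; the layer names and the L1 distance are illustrative choices, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def task_driven_perceptual_loss(sr_img, hr_img, task_net, layers=("layer2", "layer3")):
    feats = {}
    hooks = [getattr(task_net, name).register_forward_hook(
                 lambda m, i, o, n=name: feats.__setitem__(n, o))
             for name in layers]
    with torch.no_grad():
        task_net(hr_img)                                  # features of the clean HR image
        hr_feats = {k: v.detach() for k, v in feats.items()}
    task_net(sr_img)                                      # gradients reach the SR network via sr_img
    loss = sum(F.l1_loss(feats[k], hr_feats[k]) for k in layers)
    for h in hooks:
        h.remove()
    return loss
```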
We exploit these capabilities by\ndeveloping an efficient recurrent neural network architecture, PixelRNN, that\nencodes spatio-temporal features on the sensor using purely binary operations.\nPixelRNN reduces the amount of data to be transmitted off the sensor by a\nfactor of 64x compared to conventional systems while offering competitive\naccuracy for hand gesture recognition and lip reading tasks. We experimentally\nvalidate PixelRNN using a prototype implementation on the SCAMP-5\nsensor-processor platform.", + "Both limited annotation and domain shift are prevalent challenges in medical\nimage segmentation. Traditional semi-supervised segmentation and unsupervised\ndomain adaptation methods address one of these issues separately. However, the\ncoexistence of limited annotation and domain shift is quite common, which\nmotivates us to introduce a novel and challenging scenario: Mixed Domain\nSemi-supervised medical image Segmentation (MiDSS). In this scenario, we handle\ndata from multiple medical centers, with limited annotations available for a\nsingle domain and a large amount of unlabeled data from multiple domains. We\nfound that the key to solving the problem lies in how to generate reliable\npseudo labels for the unlabeled data in the presence of domain shift with\nlabeled data. To tackle this issue, we employ Unified Copy-Paste (UCP) between\nimages to construct intermediate domains, facilitating the knowledge transfer\nfrom the domain of labeled data to the domains of unlabeled data. To fully\nutilize the information within the intermediate domain, we propose a symmetric\nGuidance training strategy (SymGD), which additionally offers direct guidance\nto unlabeled data by merging pseudo labels from intermediate samples.", + "To fully\nutilize the information within the intermediate domain, we propose a symmetric\nGuidance training strategy (SymGD), which additionally offers direct guidance\nto unlabeled data by merging pseudo labels from intermediate samples.\nSubsequently, we introduce a Training Process aware Random Amplitude MixUp\n(TP-RAM) to progressively incorporate style-transition components into\nintermediate samples. Compared with existing state-of-the-art approaches, our\nmethod achieves a notable 13.57% improvement in Dice score on Prostate dataset,\nas demonstrated on three public datasets. Our code is available at\nhttps://github.com/MQinghe/MiDSS .", + "Multi-view stereo reconstruction (MVS) in the wild requires to first estimate\nthe camera parameters e.g. intrinsic and extrinsic parameters. These are\nusually tedious and cumbersome to obtain, yet they are mandatory to triangulate\ncorresponding pixels in 3D space, which is the core of all best performing MVS\nalgorithms. In this work, we take an opposite stance and introduce DUSt3R, a\nradically novel paradigm for Dense and Unconstrained Stereo 3D Reconstruction\nof arbitrary image collections, i.e. operating without prior information about\ncamera calibration nor viewpoint poses. We cast the pairwise reconstruction\nproblem as a regression of pointmaps, relaxing the hard constraints of usual\nprojective camera models. We show that this formulation smoothly unifies the\nmonocular and binocular reconstruction cases. In the case where more than two\nimages are provided, we further propose a simple yet effective global alignment\nstrategy that expresses all pairwise pointmaps in a common reference frame. 
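The MiDSS abstract above builds intermediate domains via Unified Copy-Paste between labeled and unlabeled images. A minimal sketch of one such mixing step is shown below; the paper's bidirectional pasting and confidence handling are omitted, and the argument names are illustrative.

```python
import torch

def unified_copy_paste(labeled_img, labeled_mask, unlabeled_img, unlabeled_pseudo, box):
    # box = (y0, y1, x0, x1): rectangular region copied from the labeled sample
    # onto the unlabeled sample, yielding an intermediate-domain image and label.
    y0, y1, x0, x1 = box
    mixed_img = unlabeled_img.clone()
    mixed_lbl = unlabeled_pseudo.clone()
    mixed_img[..., y0:y1, x0:x1] = labeled_img[..., y0:y1, x0:x1]
    mixed_lbl[..., y0:y1, x0:x1] = labeled_mask[..., y0:y1, x0:x1]
    return mixed_img, mixed_lbl
```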
We\nbase our network architecture on standard Transformer encoders and decoders,\nallowing us to leverage powerful pretrained models.", + "In the case where more than two\nimages are provided, we further propose a simple yet effective global alignment\nstrategy that expresses all pairwise pointmaps in a common reference frame. We\nbase our network architecture on standard Transformer encoders and decoders,\nallowing us to leverage powerful pretrained models. Our formulation directly\nprovides a 3D model of the scene as well as depth information, but\ninterestingly, we can seamlessly recover from it pixel matches as well as\nrelative and absolute camera poses. Exhaustive experiments on all these tasks showcase that the\nproposed DUSt3R can unify various 3D vision tasks and set new SoTAs on\nmonocular/multi-view depth estimation as well as relative pose estimation. In\nsummary, DUSt3R makes many geometric 3D vision tasks easy.", + "Action understanding has attracted long-term attention. It can be formulated as\na mapping from the physical space to the semantic space. Typically,\nresearchers have built datasets according to idiosyncratic choices to define classes\nand push the envelope of their respective benchmarks. Datasets are incompatible\nwith each other, like \"Isolated Islands\", due to semantic gaps and varying class\ngranularities, e.g., do housework in dataset A and wash plate in dataset B. We\nargue that we need a more principled semantic space to concentrate the\ncommunity's efforts and use all datasets together to pursue generalizable action\nlearning. To this end, we design a structured action semantic space given a verb\ntaxonomy hierarchy and covering massive actions. By aligning the classes of\nprevious datasets to our semantic space, we gather (image/video/skeleton/MoCap)\ndatasets into a unified database with a unified label system, i.e., bridging\n\"isolated islands\" into a \"Pangea\". Accordingly, we propose a novel model\nmapping from the physical space to the semantic space to fully use Pangea. In\nextensive experiments, our new system shows significant superiority, especially\nin transfer learning.", + "Accordingly, we propose a novel model\nmapping from the physical space to the semantic space to fully use Pangea. In\nextensive experiments, our new system shows significant superiority, especially\nin transfer learning. Our code and data will be made public at\nhttps://mvig-rhos.com/pangea.", + "The perception of autonomous vehicles using radars has attracted increased\nresearch interest due to its ability to operate in fog and bad weather. However,\ntraining radar models is hindered by the cost and difficulty of annotating\nlarge-scale radar data. To overcome this bottleneck, we propose a\nself-supervised learning framework to leverage the large amount of unlabeled\nradar data to pre-train radar-only embeddings for self-driving perception\ntasks. The proposed method combines radar-to-radar and radar-to-vision\ncontrastive losses to learn a general representation from unlabeled radar\nheatmaps paired with their corresponding camera images. When used for\ndownstream object detection, we demonstrate that the proposed self-supervision\nframework can improve the accuracy of state-of-the-art supervised baselines by\n$5.8\\%$ in mAP. Code is available at \\url{https://github.com/yiduohao/Radical}.", + "We propose a new class of generative diffusion models, called functional\ndiffusion. In contrast to previous work, functional diffusion works on samples\nthat are represented by functions with a continuous domain.
Functional\ndiffusion can be seen as an extension of classical diffusion models to an\ninfinite-dimensional domain. Functional diffusion is very versatile as images,\nvideos, audio, 3D shapes, deformations, \\etc, can be handled by the same\nframework with minimal changes. In addition, functional diffusion is especially\nsuited for irregular data or data defined in non-standard domains. In our work,\nwe derive the necessary foundations for functional diffusion and propose a\nfirst implementation based on the transformer architecture. We show generative\nresults on complicated signed distance functions and deformation functions\ndefined on 3D surfaces.", + "Adversarial training (AT) is currently one of the most effective ways to\nobtain the robustness of deep neural networks against adversarial attacks.\nHowever, most AT methods suffer from robust overfitting, i.e., a significant\ngeneralization gap in adversarial robustness between the training and testing\ncurves. In this paper, we first identify a connection between robust\noverfitting and the excessive memorization of noisy labels in AT from a view of\ngradient norm. As such label noise is mainly caused by a distribution mismatch\nand improper label assignments, we are motivated to propose a label refinement\napproach for AT. Specifically, our Self-Guided Label Refinement first\nself-refines a more accurate and informative label distribution from\nover-confident hard labels, and then it calibrates the training by dynamically\nincorporating knowledge from self-distilled models into the current model and\nthus requiring no external teachers. Empirical results demonstrate that our\nmethod can simultaneously boost the standard accuracy and robust performance\nacross multiple benchmark datasets, attack types, and architectures.", + "Empirical results demonstrate that our\nmethod can simultaneously boost the standard accuracy and robust performance\nacross multiple benchmark datasets, attack types, and architectures. In\naddition, we also provide a set of analyses from the perspectives of\ninformation theory to dive into our method and suggest the importance of soft\nlabels for robust generalization.", + "Monocular 3D detection (M3D) aims for precise 3D object localization from a\nsingle-view image which usually involves labor-intensive annotation of 3D\ndetection boxes. Weakly supervised M3D has recently been studied to obviate the\n3D annotation process by leveraging many existing 2D annotations, but it often\nrequires extra training data such as LiDAR point clouds or multi-view images\nwhich greatly degrades its applicability and usability in various applications.\nWe propose SKD-WM3D, a weakly supervised monocular 3D detection framework that\nexploits depth information to achieve M3D with a single-view image exclusively\nwithout any 3D annotations or other training data. One key design in SKD-WM3D\nis a self-knowledge distillation framework, which transforms image features\ninto 3D-like representations by fusing depth information and effectively\nmitigates the inherent depth ambiguity in monocular scenarios with little\ncomputational overhead in inference. 
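As a rough illustration of how a self-knowledge distillation scheme of this kind can be wired, the sketch below distills features from a depth-fused branch of a network into its image-only branch, so that depth is only needed during training. The module layout, additive fusion, and smooth-L1 feature objective are assumptions chosen for brevity, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillDetector(nn.Module):
    """Two branches share an image backbone; only the 'teacher' branch sees depth."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU())
        self.depth_proj = nn.Conv2d(1, feat_dim, 3, padding=1)   # lifts depth into feature space
        self.head = nn.Conv2d(feat_dim, feat_dim, 1)             # shared 3D-aware head

    def forward(self, image, depth):
        f_img = self.backbone(image)
        f_fused = self.head(f_img + self.depth_proj(depth))      # depth-fused (teacher) features
        f_student = self.head(f_img)                              # image-only path used at inference
        # distillation pulls the image-only features toward the depth-fused ones;
        # the teacher side is detached so depth knowledge flows one way
        distill_loss = F.smooth_l1_loss(f_student, f_fused.detach())
        return f_student, distill_loss

model = SelfDistillDetector()
img, dep = torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
feats, loss = model(img, dep)
```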
In addition, we design an\nuncertainty-aware distillation loss and a gradient-targeted transfer modulation\nstrategy which facilitate knowledge acquisition and knowledge transfer,\nrespectively.", + "In addition, we design an\nuncertainty-aware distillation loss and a gradient-targeted transfer modulation\nstrategy which facilitate knowledge acquisition and knowledge transfer,\nrespectively. Extensive experiments show that SKD-WM3D surpasses the\nstate-of-the-art clearly and is even on par with many fully supervised methods.", + "Unsupervised landmarks discovery (ULD) for an object category is a\nchallenging computer vision problem. In pursuit of developing a robust ULD\nframework, we explore the potential of a recent paradigm of self-supervised\nlearning algorithms, known as diffusion models. Some recent works have shown\nthat these models implicitly contain important correspondence cues. Towards\nharnessing the potential of diffusion models for the ULD task, we make the\nfollowing core contributions. First, we propose a ZeroShot ULD baseline based\non simple clustering of random pixel locations with nearest neighbour matching.\nIt delivers better results than existing ULD methods. Second, motivated by the\nZeroShot performance, we develop a ULD algorithm based on diffusion features\nusing self-training and clustering which also outperforms prior methods by\nnotable margins. Third, we introduce a new proxy task based on generating\nlatent pose codes and also propose a two-stage clustering mechanism to\nfacilitate effective pseudo-labeling, resulting in a significant performance\nimprovement. Overall, our approach consistently outperforms state-of-the-art\nmethods on four challenging benchmarks AFLW, MAFL, CatHeads and LS3D by\nsignificant margins.", + "The study of complex human interactions and group activities has become a\nfocal point in human-centric computer vision. However, progress in related\ntasks is often hindered by the challenges of obtaining large-scale labeled\ndatasets from real-world scenarios. To address the limitation, we introduce\nM3Act, a synthetic data generator for multi-view multi-group multi-person human\natomic actions and group activities. Powered by Unity Engine, M3Act features\nmultiple semantic groups, highly diverse and photorealistic images, and a\ncomprehensive set of annotations, which facilitates the learning of\nhuman-centered tasks across single-person, multi-person, and multi-group\nconditions. We demonstrate the advantages of M3Act across three core\nexperiments. The results suggest our synthetic dataset can significantly\nimprove the performance of several downstream methods and replace real-world\ndatasets to reduce cost. Notably, M3Act improves the state-of-the-art MOTRv2 on\nDanceTrack dataset, leading to a hop on the leaderboard from 10th to 2nd place.\nMoreover, M3Act opens new research for controllable 3D group activity\ngeneration. We define multiple metrics and propose a competitive baseline for\nthe novel task.", + "Moreover, M3Act opens new research for controllable 3D group activity\ngeneration. We define multiple metrics and propose a competitive baseline for\nthe novel task. Our code and data are available at our project page:\nhttp://cjerry1243.github.io/M3Act.", + "Significant progress has been made in scene text detection models since the\nrise of deep learning, but scene text layout analysis, which aims to group\ndetected text instances as paragraphs, has not kept pace. 
Previous works either\ntreated text detection and grouping using separate models, or train a model\nfrom scratch while using a unified one. All of them have not yet made full use\nof the already well-trained text detectors and easily obtainable detection\ndatasets. In this paper, we present Text Grouping Adapter (TGA), a module that\ncan enable the utilization of various pre-trained text detectors to learn\nlayout analysis, allowing us to adopt a well-trained text detector right off\nthe shelf or just fine-tune it efficiently. Designed to be compatible with\nvarious text detector architectures, TGA takes detected text regions and image\nfeatures as universal inputs to assemble text instance features. To capture\nbroader contextual information for layout analysis, we propose to predict text\ngroup masks from text instance features by one-to-many assignment.", + "Designed to be compatible with\nvarious text detector architectures, TGA takes detected text regions and image\nfeatures as universal inputs to assemble text instance features. To capture\nbroader contextual information for layout analysis, we propose to predict text\ngroup masks from text instance features by one-to-many assignment. Our\ncomprehensive experiments demonstrate that, even with frozen pre-trained\nmodels, incorporating our TGA into various pre-trained text detectors and text\nspotters can achieve superior layout analysis performance, simultaneously\ninheriting generalized text detection ability from pre-training. In the case of\nfull parameter fine-tuning, we can further improve layout analysis performance.", + "Whole Slide Image (WSI) classification is often formulated as a Multiple\nInstance Learning (MIL) problem. Recently, Vision-Language Models (VLMs) have\ndemonstrated remarkable performance in WSI classification. However, existing\nmethods leverage coarse-grained pathogenetic descriptions for visual\nrepresentation supervision, which are insufficient to capture the complex\nvisual appearance of pathogenetic images, hindering the generalizability of\nmodels on diverse downstream tasks. Additionally, processing high-resolution\nWSIs can be computationally expensive. In this paper, we propose a novel\n\"Fine-grained Visual-Semantic Interaction\" (FiVE) framework for WSI\nclassification. It is designed to enhance the model's generalizability by\nleveraging the interaction between localized visual patterns and fine-grained\npathological semantics. Specifically, with meticulously designed queries, we\nstart by utilizing a large language model to extract fine-grained pathological\ndescriptions from various non-standardized raw reports. The output descriptions\nare then reconstructed into fine-grained labels used for training.", + "Specifically, with meticulously designed queries, we\nstart by utilizing a large language model to extract fine-grained pathological\ndescriptions from various non-standardized raw reports. The output descriptions\nare then reconstructed into fine-grained labels used for training. By\nintroducing a Task-specific Fine-grained Semantics (TFS) module, we enable\nprompts to capture crucial visual information in WSIs, which enhances\nrepresentation learning and augments generalization capabilities significantly.\nFurthermore, given that pathological visual patterns are redundantly\ndistributed across tissue slices, we sample a subset of visual instances during\ntraining. 
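The instance-sampling step admits a very small sketch: during training, only a random subset of patch features from each slide-level bag is retained, which is one plausible way to exploit the redundancy just mentioned. The uniform sampling strategy and the ratio used below are assumptions.

```python
import torch

def sample_instances(bag_feats, ratio=0.25, training=True):
    """bag_feats: (n_patches, d) features of one whole-slide image.
    Keeps a random subset during training; uses every instance at test time."""
    if not training:
        return bag_feats
    n = bag_feats.shape[0]
    k = max(1, int(n * ratio))
    idx = torch.randperm(n)[:k]          # uniform sampling over the redundant patches
    return bag_feats[idx]

bag = torch.randn(1200, 512)             # e.g. 1200 patch embeddings from one WSI
print(sample_instances(bag).shape)       # -> torch.Size([300, 512])
```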
Our method demonstrates robust generalizability and strong\ntransferability, clearly outperforming its counterparts on the TCGA Lung\nCancer dataset with at least 9.19% higher accuracy in few-shot experiments. The\ncode is available at: https://github.com/ls1rius/WSI_FiVE.", + "Mitigating hallucinations in large vision-language models (LVLMs) remains an\nopen problem. Recent benchmarks do not address hallucinations in open-ended\nfree-form responses, which we term \"Type I hallucinations\". Instead, they focus\non hallucinations responding to very specific question formats -- typically a\nmultiple-choice response regarding a particular object or attribute -- which we\nterm \"Type II hallucinations\". Additionally, such benchmarks often require\nexternal API calls to models which are subject to change. In practice, we\nobserve that a reduction in Type II hallucinations does not lead to a reduction\nin Type I hallucinations but rather that the two forms of hallucinations are\noften anti-correlated. To address this, we propose THRONE, a novel object-based\nautomatic framework for quantitatively evaluating Type I hallucinations in LVLM\nfree-form outputs. We use public language models (LMs) to identify\nhallucinations in LVLM responses and compute informative metrics.", + "To address this, we propose THRONE, a novel object-based\nautomatic framework for quantitatively evaluating Type I hallucinations in LVLM\nfree-form outputs. We use public language models (LMs) to identify\nhallucinations in LVLM responses and compute informative metrics. By evaluating\na large selection of recent LVLMs using public datasets, we show that an\nimprovement in existing metrics does not lead to a reduction in Type I\nhallucinations, and that established benchmarks for measuring Type I\nhallucinations are incomplete. Finally, we provide a simple and effective data\naugmentation method to reduce Type I and Type II hallucinations as a strong\nbaseline.", + "Creating multi-view wire art (MVWA), a static 3D sculpture with diverse\ninterpretations from different viewpoints, is a complex task even for skilled\nartists. In response, we present DreamWire, an AI system enabling everyone to\ncraft MVWA easily. Users express their vision through text prompts or\nscribbles, freeing them from intricate 3D wire organisation. Our approach\nsynergises 3D B\\'ezier curves, Prim's algorithm, and knowledge distillation\nfrom diffusion models or their variants (e.g., ControlNet). This blend enables\nthe system to represent 3D wire art, ensuring spatial continuity and overcoming\ndata scarcity. Extensive evaluation and analysis are conducted to shed light\non the inner workings of the proposed system, including the trade-off between\nconnectivity and visual aesthetics.", + "Lithic Use-Wear Analysis (LUWA) using microscopic images is an underexplored\nvision-for-science research area. It seeks to distinguish the worked material,\nwhich is critical for understanding archaeological artifacts, material\ninteractions, tool functionalities, and dental records. However, this\nchallenging task goes beyond the well-studied image classification problem for\ncommon objects. It is affected by many confounders owing to the complex wear\nmechanism and microscopic imaging, which makes it difficult even for human\nexperts to identify the worked material successfully.
In this paper, we\ninvestigate the following three questions on this unique vision task for the\nfirst time:(i) How well can state-of-the-art pre-trained models (like DINOv2)\ngeneralize to the rarely seen domain? (ii) How can few-shot learning be\nexploited for scarce microscopic images? (iii) How do the ambiguous\nmagnification and sensing modality influence the classification accuracy? To\nstudy these, we collaborated with archaeologists and built the first\nopen-source and the largest LUWA dataset containing 23,130 microscopic images\nwith different magnifications and sensing modalities.", + "(iii) How do the ambiguous\nmagnification and sensing modality influence the classification accuracy? To\nstudy these, we collaborated with archaeologists and built the first\nopen-source and the largest LUWA dataset containing 23,130 microscopic images\nwith different magnifications and sensing modalities. Extensive experiments\nshow that existing pre-trained models notably outperform human experts but\nstill leave a large gap for improvements. Most importantly, the LUWA dataset\nprovides an underexplored opportunity for vision and learning communities and\ncomplements existing image classification problems on common objects.", + "In recent years, the thriving development of research related to egocentric\nvideos has provided a unique perspective for the study of conversational\ninteractions, where both visual and audio signals play a crucial role. While\nmost prior work focus on learning about behaviors that directly involve the\ncamera wearer, we introduce the Ego-Exocentric Conversational Graph Prediction\nproblem, marking the first attempt to infer exocentric conversational\ninteractions from egocentric videos. We propose a unified multi-modal framework\n-- Audio-Visual Conversational Attention (AV-CONV), for the joint prediction of\nconversation behaviors -- speaking and listening -- for both the camera wearer\nas well as all other social partners present in the egocentric video.\nSpecifically, we adopt the self-attention mechanism to model the\nrepresentations across-time, across-subjects, and across-modalities. To\nvalidate our method, we conduct experiments on a challenging egocentric video\ndataset that includes multi-speaker and multi-conversation scenarios. Our\nresults demonstrate the superior performance of our method compared to a series\nof baselines. We also present detailed ablation studies to assess the\ncontribution of each component in our model.", + "Our\nresults demonstrate the superior performance of our method compared to a series\nof baselines. We also present detailed ablation studies to assess the\ncontribution of each component in our model. Check our project page at\nhttps://vjwq.github.io/AV-CONV/.", + "The recent wave of AI-generated content has witnessed the great development\nand success of Text-to-Image (T2I) technologies. By contrast, Text-to-Video\n(T2V) still falls short of expectations though attracting increasing interests.\nExisting works either train from scratch or adapt large T2I model to videos,\nboth of which are computation and resource expensive. In this work, we propose\na Simple Diffusion Adapter (SimDA) that fine-tunes only 24M out of 1.1B\nparameters of a strong T2I model, adapting it to video generation in a\nparameter-efficient way. 
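A common way to realize this kind of parameter-efficient adaptation is a small bottleneck adapter attached to frozen T2I blocks; the sketch below shows that generic pattern. The dimensions, GELU bottleneck, and zero-initialized output projection are illustrative choices, not necessarily SimDA's exact adapter.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: the only trainable part next to a frozen block."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)        # start as identity so the frozen model is unchanged
        nn.init.zeros_(self.up.bias)

    def forward(self, x):                      # x: (batch, tokens, dim)
        return x + self.up(self.act(self.down(x)))

frozen_block = nn.Linear(320, 320).requires_grad_(False)   # stands in for a frozen T2I layer
adapter = Adapter(320)
tokens = torch.randn(2, 77, 320)
out = adapter(frozen_block(tokens))
trainable = sum(p.numel() for p in adapter.parameters())   # only the adapter's weights train
```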
In particular, we adapt the T2I model to T2V by\ndesigning light-weight spatial and temporal adapters for transfer learning.\nMoreover, we replace the original spatial attention with the proposed Latent-Shift\nAttention (LSA) for temporal consistency. With a similar model architecture, we\nfurther train a video super-resolution model to generate high-definition\n(1024x1024) videos. In addition to T2V generation in the wild, SimDA could also\nbe utilized in one-shot video editing with only 2 minutes of tuning.", + "With a similar model architecture, we\nfurther train a video super-resolution model to generate high-definition\n(1024x1024) videos. In addition to T2V generation in the wild, SimDA could also\nbe utilized in one-shot video editing with only 2 minutes of tuning. In doing so, our\nmethod minimizes the training effort, requiring extremely few tunable parameters\nfor model adaptation.", + "Dichotomous Image Segmentation (DIS) has recently emerged, aiming at\nhigh-precision object segmentation from high-resolution natural images.\n When designing an effective DIS model, the main challenge is how to balance\nthe semantic dispersion of high-resolution targets in the small receptive field\nand the loss of high-precision details in the large receptive field. Existing\nmethods rely on multiple tedious encoder-decoder streams and stages to\ngradually complete global localization and local refinement.\n The human visual system captures regions of interest by observing them from\nmultiple views. Inspired by this, we model DIS as a multi-view object perception\nproblem and propose a parsimonious multi-view aggregation network (MVANet),\nwhich unifies the feature fusion of the distant view and close-up view into a\nsingle stream with one encoder-decoder structure. With the help of the proposed\nmulti-view complementary localization and refinement modules, our approach\nestablishes long-range, profound visual interactions across multiple views,\nallowing the features of the detailed close-up view to focus on highly slender\nstructures. Experiments on the popular DIS-5K dataset show that our MVANet\nsignificantly outperforms state-of-the-art methods in both accuracy and speed.", + "The source code and datasets will be publicly available at\n\\href{https://github.com/qianyu-dlut/MVANet}{MVANet}.", + "Diffusion-based text-to-video generation has witnessed impressive progress in\nthe past year yet still falls behind text-to-image generation. One of the key\nreasons is the limited scale of publicly available data (e.g., 10M video-text\npairs in WebVid10M vs. 5B image-text pairs in LAION), considering the high cost\nof video captioning. Instead, it could be far easier to collect unlabeled clips\nfrom video platforms like YouTube. Motivated by this, we come up with a novel\ntext-to-video generation framework, termed TF-T2V, which can directly learn\nwith text-free videos. The rationale behind this is to separate the process of text\ndecoding from that of temporal modeling. To this end, we employ a content\nbranch and a motion branch, which are jointly optimized with weights shared.", + "The rationale behind this is to separate the process of text\ndecoding from that of temporal modeling.
To this end, we employ a content\nbranch and a motion branch, which are jointly optimized with weights shared.\nFollowing such a pipeline, we study the effect of doubling the scale of\ntraining set (i.e., video-only WebVid10M) with some randomly collected\ntext-free videos and are encouraged to observe the performance improvement (FID\nfrom 9.67 to 8.19 and FVD from 484 to 441), demonstrating the scalability of\nour approach. We also find that our model could enjoy sustainable performance\ngain (FID from 8.19 to 7.64 and FVD from 441 to 366) after reintroducing some\ntext labels for training. Finally, we validate the effectiveness and\ngeneralizability of our ideology on both native text-to-video generation and\ncompositional video synthesis paradigms. Code and models will be publicly\navailable at https://tf-t2v.github.io/.", + "Object detection in radar imagery with neural networks shows great potential\nfor improving autonomous driving. However, obtaining annotated datasets from\nreal radar images, crucial for training these networks, is challenging,\nespecially in scenarios with long-range detection and adverse weather and\nlighting conditions where radar performance excels. To address this challenge,\nwe present RadSimReal, an innovative physical radar simulation capable of\ngenerating synthetic radar images with accompanying annotations for various\nradar types and environmental conditions, all without the need for real data\ncollection. Remarkably, our findings demonstrate that training object detection\nmodels on RadSimReal data and subsequently evaluating them on real-world data\nproduce performance levels comparable to models trained and tested on real data\nfrom the same dataset, and even achieves better performance when testing across\ndifferent real datasets. RadSimReal offers advantages over other physical radar\nsimulations that it does not necessitate knowledge of the radar design details,\nwhich are often not disclosed by radar suppliers, and has faster run-time. This\ninnovative tool has the potential to advance the development of computer vision\nalgorithms for radar-based autonomous driving applications.", + "We propose residual denoising diffusion models (RDDM), a novel dual diffusion\nprocess that decouples the traditional single denoising diffusion process into\nresidual diffusion and noise diffusion. This dual diffusion framework expands\nthe denoising-based diffusion models, initially uninterpretable for image\nrestoration, into a unified and interpretable model for both image generation\nand restoration by introducing residuals. Specifically, our residual diffusion\nrepresents directional diffusion from the target image to the degraded input\nimage and explicitly guides the reverse generation process for image\nrestoration, while noise diffusion represents random perturbations in the\ndiffusion process. 
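In code, such a dual forward process can be written as one sampling equation that injects the residual (degraded input minus target) under one schedule and Gaussian noise under another. The toy schedules and names below are assumptions for illustration, not the paper's exact coefficients.

```python
import torch

def forward_sample(x0, x_in, t, T=1000):
    """x0: clean target image, x_in: degraded input, t: integer timestep in [0, T).
    Residual diffusion moves x0 toward x_in; noise diffusion adds Gaussian perturbations."""
    residual = x_in - x0                               # directional, 'certain' component
    alpha_bar = (t + 1) / T                            # toy linear schedule for the residual
    beta_bar = torch.sqrt(torch.tensor((t + 1) / T))   # toy schedule for the noise level
    noise = torch.randn_like(x0)                       # random, 'diverse' component
    x_t = x0 + alpha_bar * residual + beta_bar * noise
    return x_t, residual, noise                        # a network can be trained to predict the last two

x0, x_in = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
x_t, res, eps = forward_sample(x0, x_in, t=250)
```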
The residual prioritizes certainty, while the noise\nemphasizes diversity, enabling RDDM to effectively unify tasks with varying\ncertainty or diversity requirements, such as image generation and restoration.\nWe demonstrate that our sampling process is consistent with that of DDPM and\nDDIM through coefficient transformation, and propose a partially\npath-independent generation process to better understand the reverse process.\nNotably, our RDDM enables a generic UNet, trained with only an L1 loss and a\nbatch size of 1, to compete with state-of-the-art image restoration methods.", + "Notably, our RDDM enables a generic UNet, trained with only an L1 loss and a\nbatch size of 1, to compete with state-of-the-art image restoration methods. We\nprovide code and pre-trained models to encourage further exploration,\napplication, and development of our innovative framework\n(https://github.com/nachifur/RDDM).", + "To defend deep neural networks from adversarial attacks, adversarial training\nhas been drawing increasing attention for its effectiveness. However, the\naccuracy and robustness resulting from the adversarial training are limited by\nthe architecture, because adversarial training improves accuracy and robustness\nby adjusting the weight connection affiliated to the architecture. In this\nwork, we propose ARNAS to search for accurate and robust architectures for\nadversarial training. First we design an accurate and robust search space, in\nwhich the placement of the cells and the proportional relationship of the\nfilter numbers are carefully determined. With the design, the architectures can\nobtain both accuracy and robustness by deploying accurate and robust structures\nto their sensitive positions, respectively. Then we propose a differentiable\nmulti-objective search strategy, performing gradient descent towards directions\nthat are beneficial for both natural loss and adversarial loss, thus the\naccuracy and robustness can be guaranteed at the same time. We conduct\ncomprehensive experiments in terms of white-box attacks, black-box attacks, and\ntransferability.", + "We conduct\ncomprehensive experiments in terms of white-box attacks, black-box attacks, and\ntransferability. Experimental results show that the searched architecture has\nthe strongest robustness with the competitive accuracy, and breaks the\ntraditional idea that NAS-based architectures cannot transfer well to complex\ntasks in robustness scenarios. By analyzing outstanding architectures searched,\nwe also conclude that accurate and robust neural architectures tend to deploy\ndifferent structures near the input and output, which has great practical\nsignificance on both hand-crafting and automatically designing of accurate and\nrobust architectures.", + "Existing multi-person human reconstruction approaches mainly focus on\nrecovering accurate poses or avoiding penetration, but overlook the modeling of\nclose interactions. In this work, we tackle the task of reconstructing closely\ninteractive humans from a monocular video. The main challenge of this task\ncomes from insufficient visual information caused by depth ambiguity and severe\ninter-person occlusion. In view of this, we propose to leverage knowledge from\nproxemic behavior and physics to compensate the lack of visual information.\nThis is based on the observation that human interaction has specific patterns\nfollowing the social proxemics. 
Specifically, we first design a latent\nrepresentation based on Vector Quantised-Variational AutoEncoder (VQ-VAE) to\nmodel human interaction. A proxemics and physics guided diffusion model is then\nintroduced to denoise the initial distribution. We design the diffusion model\nas dual branch with each branch representing one individual such that the\ninteraction can be modeled via cross attention. With the learned priors of\nVQ-VAE and physical constraint as the additional information, our proposed\napproach is capable of estimating accurate poses that are also proxemics and\nphysics plausible.", + "With the learned priors of\nVQ-VAE and physical constraint as the additional information, our proposed\napproach is capable of estimating accurate poses that are also proxemics and\nphysics plausible. Experimental results on Hi4D, 3DPW, and CHI3D demonstrate\nthat our method outperforms existing approaches. The code is available at\n\\url{https://github.com/boycehbz/HumanInteraction}.", + "The ability to detect unfamiliar or unexpected images is essential for safe\ndeployment of computer vision systems. In the context of classification, the\ntask of detecting images outside of a model's training domain is known as\nout-of-distribution (OOD) detection. While there has been a growing research\ninterest in developing post-hoc OOD detection methods, there has been\ncomparably little discussion around how these methods perform when the\nunderlying classifier is not trained on a clean, carefully curated dataset. In\nthis work, we take a closer look at 20 state-of-the-art OOD detection methods\nin the (more realistic) scenario where the labels used to train the underlying\nclassifier are unreliable (e.g. crowd-sourced or web-scraped labels). Extensive\nexperiments across different datasets, noise types & levels, architectures and\ncheckpointing strategies provide insights into the effect of class label noise\non OOD detection, and show that poor separation between incorrectly classified\nID samples vs. OOD samples is an overlooked yet important limitation of\nexisting methods. Code: https://github.com/glhr/ood-labelnoise", + "Recently, the advancement of self-supervised learning techniques, like masked\nautoencoders (MAE), has greatly influenced visual representation learning for\nimages and videos. Nevertheless, it is worth noting that the predominant\napproaches in existing masked image / video modeling rely excessively on\nresource-intensive vision transformers (ViTs) as the feature encoder. In this\npaper, we propose a new approach termed as \\textbf{VideoMAC}, which combines\nvideo masked autoencoders with resource-friendly ConvNets. Specifically,\nVideoMAC employs symmetric masking on randomly sampled pairs of video frames.\nTo prevent the issue of mask pattern dissipation, we utilize ConvNets which are\nimplemented with sparse convolutional operators as encoders. Simultaneously, we\npresent a simple yet effective masked video modeling (MVM) approach, a dual\nencoder architecture comprising an online encoder and an exponential moving\naverage target encoder, aimed to facilitate inter-frame reconstruction\nconsistency in videos.", + "Simultaneously, we\npresent a simple yet effective masked video modeling (MVM) approach, a dual\nencoder architecture comprising an online encoder and an exponential moving\naverage target encoder, aimed to facilitate inter-frame reconstruction\nconsistency in videos. 
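A toy version of this dual-encoder recipe is easy to write down: one shared mask hides the same regions in both frames, the online encoder processes one frame, and an exponential-moving-average target encoder processes the other. The pixel-level masking, plain convolutional encoder, and MSE consistency term below are simplifying assumptions standing in for the sparse-convolution design described here.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedVideoModel(nn.Module):
    """Online encoder plus EMA target encoder trained for inter-frame consistency."""
    def __init__(self, encoder, momentum=0.996):
        super().__init__()
        self.online = encoder
        self.target = copy.deepcopy(encoder)            # exponential-moving-average copy
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.m = momentum

    @torch.no_grad()
    def update_target(self):
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.m).add_(po, alpha=1 - self.m)  # EMA update after each step

    def forward(self, frame_a, frame_b, mask):
        # the same mask hides identical regions in both frames ("symmetric" masking)
        fa = self.online(frame_a * mask)
        with torch.no_grad():
            fb = self.target(frame_b * mask)
        return F.mse_loss(fa, fb)                       # pull the two frame encodings together

enc = nn.Sequential(nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
                    nn.Conv2d(64, 128, 3, stride=2, padding=1))
model = MaskedVideoModel(enc)
a, b = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.75).float()        # keep roughly 25% of pixels in each frame
loss = model(a, b, mask)
loss.backward(); model.update_target()
```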
Additionally, we demonstrate that VideoMAC, empowering\nclassical (ResNet) / modern (ConvNeXt) convolutional encoders to harness the\nbenefits of MVM, outperforms ViT-based approaches on downstream tasks,\nincluding video object segmentation (+\\textbf{5.2\\%} / \\textbf{6.4\\%}\n$\\mathcal{J}\\&\\mathcal{F}$), body part propagation (+\\textbf{6.3\\%} /\n\\textbf{3.1\\%} mIoU), and human pose tracking (+\\textbf{10.2\\%} /\n\\textbf{11.1\\%} PCK@0.1).", + "Learning 3D scene flow from LiDAR point clouds presents significant\ndifficulties, including poor generalization from synthetic datasets to real\nscenes, scarcity of real-world 3D labels, and poor performance on real sparse\nLiDAR point clouds. We present a novel approach from the perspective of\nauto-labelling, aiming to generate a large number of 3D scene flow pseudo\nlabels for real-world LiDAR point clouds. Specifically, we employ the\nassumption of rigid body motion to simulate potential object-level rigid\nmovements in autonomous driving scenarios. By updating different motion\nattributes for multiple anchor boxes, the rigid motion decomposition is\nobtained for the whole scene. Furthermore, we developed a novel 3D scene flow\ndata augmentation method for global and local motion. By perfectly synthesizing\ntarget point clouds based on augmented motion parameters, we easily obtain lots\nof 3D scene flow labels in point clouds highly consistent with real scenarios.\nOn multiple real-world datasets including LiDAR KITTI, nuScenes, and Argoverse,\nour method outperforms all previous supervised and unsupervised methods without\nrequiring manual labelling.", + "On multiple real-world datasets including LiDAR KITTI, nuScenes, and Argoverse,\nour method outperforms all previous supervised and unsupervised methods without\nrequiring manual labelling. Impressively, our method achieves a tenfold\nreduction in EPE3D metric on the LiDAR KITTI dataset, reducing it from $0.190m$\nto a mere $0.008m$ error.", + "Neural implicit representation of geometric shapes has witnessed considerable\nadvancements in recent years. However, common distance field based implicit\nrepresentations, specifically signed distance field (SDF) for watertight shapes\nor unsigned distance field (UDF) for arbitrary shapes, routinely suffer from\ndegradation of reconstruction accuracy when converting to explicit surface\npoints and meshes. In this paper, we introduce a novel neural implicit\nrepresentation based on unsigned orthogonal distance fields (UODFs). In UODFs,\nthe minimal unsigned distance from any spatial point to the shape surface is\ndefined solely in one orthogonal direction, contrasting with the\nmulti-directional determination made by SDF and UDF. Consequently, every point\nin the 3D UODFs can directly access its closest surface points along three\northogonal directions. This distinctive feature leverages the accurate\nreconstruction of surface points without interpolation errors. We verify the\neffectiveness of UODFs through a range of reconstruction examples, extending\nfrom simple watertight or non-watertight shapes to complex shapes that include\nhollows, internal or assembling structures.", + "Blind video quality assessment (BVQA) plays a pivotal role in evaluating and\nimproving the viewing experience of end-users across a wide range of\nvideo-based platforms and services. 
Contemporary deep learning-based models\nprimarily analyze video content in its aggressively subsampled format, while\nbeing blind to the impact of the actual spatial resolution and frame rate on\nvideo quality. In this paper, we propose a modular BVQA model and a method of\ntraining it to improve its modularity. Our model comprises a base quality\npredictor, a spatial rectifier, and a temporal rectifier, responding to the\nvisual content and distortion, spatial resolution, and frame rate changes on\nvideo quality, respectively. During training, spatial and temporal rectifiers\nare dropped out with some probabilities to render the base quality predictor a\nstandalone BVQA model, which should work better with the rectifiers. Extensive\nexperiments on both professionally-generated content and user-generated content\nvideo databases show that our quality model achieves superior or comparable\nperformance to current methods. Additionally, the modularity of our model\noffers an opportunity to analyze existing video quality databases in terms of\ntheir spatial and temporal complexity.", + "Vision-Language (VL) models have gained significant research focus, enabling\nremarkable advances in multimodal reasoning. These architectures typically\ncomprise a vision encoder, a Large Language Model (LLM), and a projection\nmodule that aligns visual features with the LLM's representation space. Despite\ntheir success, a critical limitation persists: the vision encoding process\nremains decoupled from user queries, often in the form of image-related\nquestions. Consequently, the resulting visual features may not be optimally\nattuned to the query-specific elements of the image. To address this, we\nintroduce QA-ViT, a Question Aware Vision Transformer approach for multimodal\nreasoning, which embeds question awareness directly within the vision encoder.\nThis integration results in dynamic visual features focusing on relevant image\naspects to the posed question. QA-ViT is model-agnostic and can be incorporated\nefficiently into any VL architecture. Extensive experiments demonstrate the\neffectiveness of applying our method to various multimodal architectures,\nleading to consistent improvement across diverse tasks and showcasing its\npotential for enhancing visual and scene-text understanding.", + "Due to the resource-intensive nature of training vision-language models on\nexpansive video data, a majority of studies have centered on adapting\npre-trained image-language models to the video domain. Dominant pipelines\npropose to tackle the visual discrepancies with additional temporal learners\nwhile overlooking the substantial discrepancy for web-scaled descriptive\nnarratives and concise action category names, leading to less distinct semantic\nspace and potential performance limitations. In this work, we prioritize the\nrefinement of text knowledge to facilitate generalizable video recognition. To\naddress the limitations of the less distinct semantic space of category names,\nwe prompt a large language model (LLM) to augment action class names into\nSpatio-Temporal Descriptors thus bridging the textual discrepancy and serving\nas a knowledge base for general recognition. Moreover, to assign the best\ndescriptors with different video instances, we propose Optimal Descriptor\nSolver, forming the video recognition problem as solving the optimal matching\nflow across frame-level representations and descriptors. 
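A standard way to solve such a matching-flow problem is entropy-regularized optimal transport computed with Sinkhorn iterations; the sketch below matches frame-level features to descriptor embeddings in that spirit. The cosine cost, uniform marginals, and iteration count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sinkhorn_match(frame_feats, desc_feats, eps=0.05, iters=50):
    """frame_feats: (T, d) per-frame embeddings; desc_feats: (K, d) descriptor embeddings.
    Returns a (T, K) transport plan assigning descriptors to frames."""
    cost = 1 - F.normalize(frame_feats, dim=-1) @ F.normalize(desc_feats, dim=-1).T
    K = torch.exp(-cost / eps)                               # Gibbs kernel
    a = torch.full((cost.shape[0],), 1.0 / cost.shape[0])    # uniform frame marginal
    b = torch.full((cost.shape[1],), 1.0 / cost.shape[1])    # uniform descriptor marginal
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(iters):                                   # alternate scaling (Sinkhorn-Knopp)
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]
    score = (plan * (1 - cost)).sum()                        # matching-flow similarity for one class
    return plan, score

frames = torch.randn(8, 512)                                 # 8 frame embeddings
descs = torch.randn(6, 512)                                  # 6 spatio-temporal descriptors for one class
plan, score = sinkhorn_match(frames, descs)
```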
Comprehensive\nevaluations in zero-shot, few-shot, and fully supervised video recognition\nhighlight the effectiveness of our approach. Our best model achieves a\nstate-of-the-art zero-shot accuracy of 75.1% on Kinetics-600.", + "We contribute the Habitat Synthetic Scene Dataset, a dataset of 211\nhigh-quality 3D scenes, and use it to test navigation agent generalization to\nrealistic 3D environments. Our dataset represents real interiors and contains a\ndiverse set of 18,656 models of real-world objects. We investigate the impact\nof synthetic 3D scene dataset scale and realism on the task of training\nembodied agents to find and navigate to objects (ObjectGoal navigation). By\ncomparing to synthetic 3D scene datasets from prior work, we find that scale\nhelps in generalization, but the benefits quickly saturate, making visual\nfidelity and correlation to real-world scenes more important. Our experiments\nshow that agents trained on our smaller-scale dataset can match or outperform\nagents trained on much larger datasets. Surprisingly, we observe that agents\ntrained on just 122 scenes from our dataset outperform agents trained on 10,000\nscenes from the ProcTHOR-10K dataset in terms of zero-shot generalization in\nreal-world scanned environments.", + "Comprehensively capturing human motion requires both accurate capture of\ncomplex poses and precise localization of the human within scenes. Most of the\nHPE datasets and methods primarily rely on RGB, LiDAR, or IMU data. However,\nsolely using these modalities or a combination of them may not be adequate for\nHPE, particularly for complex and fast movements. For holistic human motion\nunderstanding, we present RELI11D, a high-quality multimodal human motion\ndataset involving LiDAR, an IMU system, an RGB camera, and an Event camera. It records\nthe motions of 10 actors performing 5 sports in 7 scenes, including 3.32 hours\nof synchronized LiDAR point clouds, IMU measurement data, RGB videos, and Event\nstreams. Through extensive experiments, we demonstrate that RELI11D presents\nconsiderable challenges and opportunities as it contains many rapid and complex\nmotions that require precise localization. To address the challenge of integrating\ndifferent modalities, we propose LEIR, a multimodal baseline that effectively\nutilizes LiDAR Point Cloud, Event stream, and RGB through our cross-attention\nfusion strategy.", + "To address the challenge of integrating\ndifferent modalities, we propose LEIR, a multimodal baseline that effectively\nutilizes LiDAR Point Cloud, Event stream, and RGB through our cross-attention\nfusion strategy. We show that LEIR exhibits promising results for rapid motions\nand daily motions and that utilizing the characteristics of multiple modalities\ncan indeed improve HPE performance. Both the dataset and source code will be\nreleased publicly to the research community, fostering collaboration and\nenabling further exploration in this field.", + "We present an approach to modeling an image-space prior on scene motion. Our\nprior is learned from a collection of motion trajectories extracted from real\nvideo sequences depicting natural, oscillatory dynamics such as trees, flowers,\ncandles, and clothes swaying in the wind. We model this dense, long-term motion\nprior in the Fourier domain: given a single image, our trained model uses a\nfrequency-coordinated diffusion sampling process to predict a spectral volume,\nwhich can be converted into a motion texture that spans an entire video.
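To make the spectral-volume idea concrete, the snippet below turns per-pixel complex Fourier coefficients for a few low frequencies into a dense displacement trajectory over T frames via an inverse FFT. The tensor layout and scaling convention are assumptions for illustration.

```python
import torch

def spectral_volume_to_motion(spectral, n_frames):
    """spectral: complex tensor (K, H, W, 2) holding K low-frequency Fourier
    coefficients of the x/y displacement of every pixel. Returns a motion
    texture (n_frames, H, W, 2): a full trajectory for each pixel."""
    K, H, W, _ = spectral.shape
    full = torch.zeros(n_frames, H, W, 2, dtype=torch.complex64)
    full[:K] = spectral                            # place predicted terms, zero-pad the rest
    # inverse FFT along the frequency axis turns coefficients into a time series
    motion = torch.fft.ifft(full, dim=0).real * n_frames
    return motion

spec = torch.randn(16, 32, 32, 2, dtype=torch.complex64)    # 16 frequency bands
traj = spectral_volume_to_motion(spec, n_frames=120)         # 120-frame motion texture
print(traj.shape)                                             # torch.Size([120, 32, 32, 2])
```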
Along\nwith an image-based rendering module, these trajectories can be used for a\nnumber of downstream applications, such as turning still images into seamlessly\nlooping videos, or allowing users to realistically interact with objects in\nreal pictures by interpreting the spectral volumes as image-space modal bases,\nwhich approximate object dynamics.", + "The development of large vision-language models, notably CLIP, has catalyzed\nresearch into effective adaptation techniques, with a particular focus on soft\nprompt tuning. Conjointly, test-time augmentation, which utilizes multiple\naugmented views of a single image to enhance zero-shot generalization, is\nemerging as a significant area of interest. This has predominantly directed\nresearch efforts toward test-time prompt tuning. In contrast, we introduce a\nrobust MeanShift for Test-time Augmentation (MTA), which surpasses prompt-based\nmethods without requiring this intensive training procedure. This positions MTA\nas an ideal solution for both standalone and API-based applications.\nAdditionally, our method does not rely on ad hoc rules (e.g., confidence\nthreshold) used in some previous test-time augmentation techniques to filter\nthe augmented views. Instead, MTA incorporates a quality assessment variable\nfor each view directly into its optimization process, termed as the inlierness\nscore. This score is jointly optimized with a density mode seeking process,\nleading to an efficient training- and hyperparameter-free approach. We\nextensively benchmark our method on 15 datasets and demonstrate MTA's\nsuperiority and computational efficiency.", + "This score is jointly optimized with a density mode seeking process,\nleading to an efficient training- and hyperparameter-free approach. We\nextensively benchmark our method on 15 datasets and demonstrate MTA's\nsuperiority and computational efficiency. Deployed easily as plug-and-play\nmodule on top of zero-shot models and state-of-the-art few-shot methods, MTA\nshows systematic and consistent improvements.", + "Large-scale text-to-image (T2I) diffusion models have showcased incredible\ncapabilities in generating coherent images based on textual descriptions,\nenabling vast applications in content generation. While recent advancements\nhave introduced control over factors such as object localization, posture, and\nimage contours, a crucial gap remains in our ability to control the\ninteractions between objects in the generated content. Well-controlling\ninteractions in generated images could yield meaningful applications, such as\ncreating realistic scenes with interacting characters. In this work, we study\nthe problems of conditioning T2I diffusion models with Human-Object Interaction\n(HOI) information, consisting of a triplet label (person, action, object) and\ncorresponding bounding boxes. We propose a pluggable interaction control model,\ncalled InteractDiffusion that extends existing pre-trained T2I diffusion models\nto enable them being better conditioned on interactions. Specifically, we\ntokenize the HOI information and learn their relationships via interaction\nembeddings. A conditioning self-attention layer is trained to map HOI tokens to\nvisual tokens, thereby conditioning the visual tokens better in existing T2I\ndiffusion models.", + "Specifically, we\ntokenize the HOI information and learn their relationships via interaction\nembeddings. 
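A minimal form of such a tokenizer can embed the (person, action, object) labels together with sinusoidally encoded boxes and emit one token per element of the triplet. The embedding sizes, box encoding, and the enclosing-box stand-in for the action region below are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def fourier_box(box, n_freq=8):
    """box: (4,) normalized xyxy -> (4 * 2 * n_freq,) sinusoidal encoding."""
    freqs = 2.0 ** torch.arange(n_freq) * torch.pi
    ang = box[:, None] * freqs[None, :]
    return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten()

class HOITokenizer(nn.Module):
    def __init__(self, n_subj, n_act, n_obj, dim=256, n_freq=8):
        super().__init__()
        self.subj = nn.Embedding(n_subj, dim)
        self.act = nn.Embedding(n_act, dim)
        self.obj = nn.Embedding(n_obj, dim)
        self.box_proj = nn.Linear(4 * 2 * n_freq, dim)

    def forward(self, subj_id, act_id, obj_id, subj_box, obj_box):
        union = torch.cat([torch.minimum(subj_box[:2], obj_box[:2]),
                           torch.maximum(subj_box[2:], obj_box[2:])])   # box enclosing both
        tokens = torch.stack([
            self.subj(subj_id) + self.box_proj(fourier_box(subj_box)),
            self.act(act_id) + self.box_proj(fourier_box(union)),
            self.obj(obj_id) + self.box_proj(fourier_box(obj_box)),
        ])
        return tokens                                # (3, dim) interaction tokens

tok = HOITokenizer(n_subj=1, n_act=117, n_obj=80)
t = tok(torch.tensor(0), torch.tensor(5), torch.tensor(17),
        torch.tensor([0.1, 0.2, 0.4, 0.9]), torch.tensor([0.3, 0.5, 0.7, 0.8]))
```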
A conditioning self-attention layer is trained to map HOI tokens to\nvisual tokens, thereby conditioning the visual tokens better in existing T2I\ndiffusion models. Our model attains the ability to control the interaction and\nlocation on existing T2I diffusion models, which outperforms existing baselines\nby a large margin in HOI detection score, as well as fidelity in FID and KID.\nProject page: https://jiuntian.github.io/interactdiffusion.", + "We propose NViST, a transformer-based model for efficient and generalizable\nnovel-view synthesis from a single image for real-world scenes. In contrast to\nmany methods that are trained on synthetic data, object-centred scenarios, or\nin a category-specific manner, NViST is trained on MVImgNet, a large-scale\ndataset of casually-captured real-world videos of hundreds of object categories\nwith diverse backgrounds. NViST transforms image inputs directly into a\nradiance field, conditioned on camera parameters via adaptive layer\nnormalisation. In practice, NViST exploits fine-tuned masked autoencoder (MAE)\nfeatures and translates them to 3D output tokens via cross-attention, while\naddressing occlusions with self-attention. To move away from object-centred\ndatasets and enable full scene synthesis, NViST adopts a 6-DOF camera pose\nmodel and only requires relative pose, dropping the need for canonicalization\nof the training data, which removes a substantial barrier to it being used on\ncasually captured datasets. We show results on unseen objects and categories\nfrom MVImgNet and even generalization to casual phone captures.", + "We show results on unseen objects and categories\nfrom MVImgNet and even generalization to casual phone captures. We conduct\nqualitative and quantitative evaluations on MVImgNet and ShapeNet to show that\nour model represents a step forward towards enabling true in-the-wild\ngeneralizable novel-view synthesis from a single image. Project webpage:\nhttps://wbjang.github.io/nvist_webpage.", + "In this work, we investigate the potential of a large language model (LLM) to\ndirectly comprehend visual signals without the necessity of fine-tuning on\nmulti-modal datasets. The foundational concept of our method views an image as\na linguistic entity, and translates it to a set of discrete words derived from\nthe LLM's vocabulary. To achieve this, we present the Vision-to-Language\nTokenizer, abbreviated as V2T Tokenizer, which transforms an image into a\n``foreign language'' with the combined aid of an encoder-decoder, the LLM\nvocabulary, and a CLIP model. With this innovative image encoding, the LLM\ngains the ability not only for visual comprehension but also for image\ndenoising and restoration in an auto-regressive fashion-crucially, without any\nfine-tuning. We undertake rigorous experiments to validate our method,\nencompassing understanding tasks like image recognition, image captioning, and\nvisual question answering, as well as image denoising tasks like inpainting,\noutpainting, deblurring, and shift restoration. Code and models are available\nat https://github.com/zh460045050/V2L-Tokenizer.", + "Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that\ncombines computer vision and natural language processing, delineating specific\nregions in aerial images as described by textual queries. Traditional Referring\nImage Segmentation (RIS) approaches have been impeded by the complex spatial\nscales and orientations found in aerial imagery, leading to suboptimal\nsegmentation results. 
To address these challenges, we introduce the Rotated\nMulti-Scale Interaction Network (RMSIN), an innovative approach designed for\nthe unique demands of RRSIS. RMSIN incorporates an Intra-scale Interaction\nModule (IIM) to effectively address the fine-grained detail required at\nmultiple scales and a Cross-scale Interaction Module (CIM) for integrating\nthese details coherently across the network. Furthermore, RMSIN employs an\nAdaptive Rotated Convolution (ARC) to account for the diverse orientations of\nobjects, a novel contribution that significantly enhances segmentation\naccuracy. To assess the efficacy of RMSIN, we have curated an expansive dataset\ncomprising 17,402 image-caption-mask triplets, which is unparalleled in terms\nof scale and variety.", + "To assess the efficacy of RMSIN, we have curated an expansive dataset\ncomprising 17,402 image-caption-mask triplets, which is unparalleled in terms\nof scale and variety. This dataset not only presents the model with a wide\nrange of spatial and rotational scenarios but also establishes a stringent\nbenchmark for the RRSIS task, ensuring a rigorous evaluation of performance.\nOur experimental evaluations demonstrate the exceptional performance of RMSIN,\nsurpassing existing state-of-the-art models by a significant margin. All\ndatasets and code are made available at https://github.com/Lsan2401/RMSIN.", + "From image-text pairs, large-scale vision-language models (VLMs) learn to\nimplicitly associate image regions with words, which prove effective for tasks\nlike visual question answering. However, leveraging the learned association for\nopen-vocabulary semantic segmentation remains a challenge. In this paper, we\npropose a simple, yet extremely effective, training-free technique,\nPlug-and-Play Open-Vocabulary Semantic Segmentation (PnP-OVSS) for this task.\nPnP-OVSS leverages a VLM with direct text-to-image cross-attention and an\nimage-text matching loss. To balance between over-segmentation and\nunder-segmentation, we introduce Salience Dropout; by iteratively dropping\npatches that the model is most attentive to, we are able to better resolve the\nentire extent of the segmentation mask. PnP-OVSS does not require any neural\nnetwork training and performs hyperparameter tuning without the need for any\nsegmentation annotations, even for a validation set.", + "PnP-OVSS does not require any neural\nnetwork training and performs hyperparameter tuning without the need for any\nsegmentation annotations, even for a validation set. PnP-OVSS demonstrates\nsubstantial improvements over comparable baselines (+26.2% mIoU on Pascal VOC,\n+20.5% mIoU on MS COCO, +3.1% mIoU on COCO Stuff and +3.0% mIoU on ADE20K). Our\ncodebase is at https://github.com/letitiabanana/PnP-OVSS.", + "The task of online mapping is to predict a local map using current sensor\nobservations, e.g. from lidar and camera, without relying on a pre-built map.\nState-of-the-art methods are based on supervised learning and are trained\npredominantly using two datasets: nuScenes and Argoverse 2. However, these\ndatasets revisit the same geographic locations across training, validation, and\ntest sets. Specifically, over $80$% of nuScenes and $40$% of Argoverse 2\nvalidation and test samples are less than $5$ m from a training sample. At test\ntime, the methods are thus evaluated more on how well they localize within a\nmemorized implicit map built from the training data than on extrapolating to\nunseen locations. 
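Leakage of this kind is straightforward to quantify and to avoid: measure each evaluation sample's distance to its nearest training sample, and, if needed, assign whole geographic tiles to either train or validation. The snippet below sketches both steps under a simple planar-coordinate assumption with made-up array names.

```python
import numpy as np

def nearest_train_distance(val_xy, train_xy):
    """val_xy, train_xy: (N, 2) / (M, 2) planar ego positions in meters.
    Returns, for each validation sample, the distance to the closest training sample."""
    d = np.linalg.norm(val_xy[:, None, :] - train_xy[None, :, :], axis=-1)
    return d.min(axis=1)

def geographic_split(sample_xy, cell_size=2000.0, val_fraction=0.2, seed=0):
    """Assign whole cell_size x cell_size metre tiles to train or val,
    so validation samples cannot sit next to training ones."""
    cells = np.floor(sample_xy / cell_size).astype(int)
    keys = np.unique(cells, axis=0)
    rng = np.random.default_rng(seed)
    val_cells = keys[rng.permutation(len(keys))[: int(len(keys) * val_fraction)]]
    is_val = (cells[:, None, :] == val_cells[None, :, :]).all(-1).any(-1)
    return ~is_val, is_val                 # boolean masks over samples

train_xy = np.random.rand(5000, 2) * 10000
val_xy = np.random.rand(500, 2) * 10000
leaky_fraction = (nearest_train_distance(val_xy, train_xy) < 5.0).mean()   # share within 5 m
train_mask, val_mask = geographic_split(np.vstack([train_xy, val_xy]))
```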
Naturally, this data leakage causes inflated performance\nnumbers and we propose geographically disjoint data splits to reveal the true\nperformance in unseen environments. Experimental results show that methods\nperform considerably worse, some dropping more than $45$ mAP, when trained and\nevaluated on proper data splits. Additionally, a reassessment of prior design\nchoices reveals diverging conclusions from those based on the original split.", + "Experimental results show that methods\nperform considerably worse, some dropping more than $45$ mAP, when trained and\nevaluated on proper data splits. Additionally, a reassessment of prior design\nchoices reveals diverging conclusions from those based on the original split.\nNotably, the impact of lifting methods and the support from auxiliary tasks\n(e.g., depth supervision) on performance appears less substantial or follows a\ndifferent trajectory than previously perceived. Splits can be found at\nhttps://github.com/LiljaAdam/geographical-splits", + "We propose a method to control material attributes of objects like roughness,\nmetallic, albedo, and transparency in real images. Our method capitalizes on\nthe generative prior of text-to-image models known for photorealism, employing\na scalar value and instructions to alter low-level material properties.\nAddressing the lack of datasets with controlled material attributes, we\ngenerated an object-centric synthetic dataset with physically-based materials.\nFine-tuning a modified pre-trained text-to-image model on this synthetic\ndataset enables us to edit material properties in real-world images while\npreserving all other attributes. We show the potential application of our model\nto material edited NeRFs.", + "Comparing a user video to a reference how-to video is a key requirement for\nAR/VR technology delivering personalized assistance tailored to the user's\nprogress. However, current approaches for language-based assistance can only\nanswer questions about a single video. We propose an approach that first\nautomatically generates large amounts of visual instruction tuning data\ninvolving pairs of videos from HowTo100M by leveraging existing step\nannotations and accompanying narrations, and then trains a video-conditioned\nlanguage model to jointly reason across multiple raw videos. Our model achieves\nstate-of-the-art performance at identifying differences between video pairs and\nranking videos based on the severity of these differences, and shows promising\nability to perform general reasoning over multiple videos.", + "This work presents Depth Anything, a highly practical solution for robust\nmonocular depth estimation. Without pursuing novel technical modules, we aim to\nbuild a simple yet powerful foundation model dealing with any images under any\ncircumstances. To this end, we scale up the dataset by designing a data engine\nto collect and automatically annotate large-scale unlabeled data (~62M), which\nsignificantly enlarges the data coverage and thus is able to reduce the\ngeneralization error. We investigate two simple yet effective strategies that\nmake data scaling-up promising. First, a more challenging optimization target\nis created by leveraging data augmentation tools. It compels the model to\nactively seek extra visual knowledge and acquire robust representations.\nSecond, an auxiliary supervision is developed to enforce the model to inherit\nrich semantic priors from pre-trained encoders. 
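One common form for such an auxiliary objective is to align the depth network's intermediate features with those of a frozen pre-trained encoder using a cosine distance; the sketch below follows that generic recipe. The projection head, the cosine loss, and the choice of frozen encoder are assumptions rather than the paper's specific formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAlignLoss(nn.Module):
    """Encourages student features to inherit semantics from a frozen encoder."""
    def __init__(self, student_dim, teacher_dim):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)   # maps student features into teacher space

    def forward(self, student_feat, teacher_feat):
        # student_feat: (B, N, Ds) tokens from the depth network's encoder
        # teacher_feat: (B, N, Dt) tokens from a frozen semantic encoder (no gradients)
        s = F.normalize(self.proj(student_feat), dim=-1)
        t = F.normalize(teacher_feat.detach(), dim=-1)
        return (1 - (s * t).sum(-1)).mean()               # mean cosine distance

aux = SemanticAlignLoss(student_dim=384, teacher_dim=768)
aux_loss = aux(torch.randn(2, 196, 384), torch.randn(2, 196, 768))
# total loss would then combine the depth objective with a weighted aux_loss
```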
We evaluate its zero-shot\ncapabilities extensively, including six public datasets and randomly captured\nphotos. It demonstrates impressive generalization ability. Further, through\nfine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs\nare set. Our better depth model also results in a better depth-conditioned\nControlNet.", + "It demonstrates impressive generalization ability. Further, through\nfine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs\nare set. Our better depth model also results in a better depth-conditioned\nControlNet. Our models are released at\nhttps://github.com/LiheYoung/Depth-Anything.", + "We present a new self-supervised approach, SelfPose3d, for estimating 3d\nposes of multiple persons from multiple camera views. Unlike current\nstate-of-the-art fully-supervised methods, our approach does not require any 2d\nor 3d ground-truth poses and uses only the multi-view input images from a\ncalibrated camera setup and 2d pseudo poses generated from an off-the-shelf 2d\nhuman pose estimator. We propose two self-supervised learning objectives:\nself-supervised person localization in 3d space and self-supervised 3d pose\nestimation. We achieve self-supervised 3d person localization by training the\nmodel on synthetically generated 3d points, serving as 3d person root\npositions, and on the projected root-heatmaps in all the views. We then model\nthe 3d poses of all the localized persons with a bottleneck representation, map\nthem onto all views obtaining 2d joints, and render them using 2d Gaussian\nheatmaps in an end-to-end differentiable manner.", + "We then model\nthe 3d poses of all the localized persons with a bottleneck representation, map\nthem onto all views obtaining 2d joints, and render them using 2d Gaussian\nheatmaps in an end-to-end differentiable manner. Afterwards, we use the\ncorresponding 2d joints and heatmaps from the pseudo 2d poses for learning. To\nalleviate the intrinsic inaccuracy of the pseudo labels, we propose an adaptive\nsupervision attention mechanism to guide the self-supervision. Our experiments\nand analysis on three public benchmark datasets, including Panoptic, Shelf, and\nCampus, show the effectiveness of our approach, which is comparable to\nfully-supervised methods. Code: https://github.com/CAMMA-public/SelfPose3D.\nVideo demo: https://youtu.be/GAqhmUIr2E8.", + "The success of contrastive language-image pretraining (CLIP) relies on the\nsupervision from the pairing between images and captions, which tends to be\nnoisy in web-crawled data. We present Mixture of Data Experts (MoDE) and learn\na system of CLIP data experts via clustering. Each data expert is trained on\none data cluster, being less sensitive to false negative noises in other\nclusters. At inference time, we ensemble their outputs by applying weights\ndetermined through the correlation between task metadata and cluster\nconditions. To estimate the correlation precisely, the samples in one cluster\nshould be semantically similar, but the number of data experts should still be\nreasonable for training and inference. As such, we consider the ontology in\nhuman language and propose to use fine-grained cluster centers to represent\neach data expert at a coarse-grained level. 
Experimental studies show that four\nCLIP data experts on ViT-B/16 outperform the ViT-L/14 by OpenAI CLIP and\nOpenCLIP on zero-shot image classification but with less ($<$35\\%) training\ncost.", + "Experimental studies show that four\nCLIP data experts on ViT-B/16 outperform the ViT-L/14 by OpenAI CLIP and\nOpenCLIP on zero-shot image classification but with less ($<$35\\%) training\ncost. Meanwhile, MoDE can train all data expert asynchronously and can flexibly\ninclude new data experts. The code is available at\nhttps://github.com/facebookresearch/MetaCLIP/tree/main/mode.", + "3D human generation is increasingly significant in various applications.\nHowever, the direct use of 2D generative methods in 3D generation often results\nin losing local details, while methods that reconstruct geometry from generated\nimages struggle with global view consistency. In this work, we introduce\nJoint2Human, a novel method that leverages 2D diffusion models to generate\ndetailed 3D human geometry directly, ensuring both global structure and local\ndetails. To achieve this, we employ the Fourier occupancy field (FOF)\nrepresentation, enabling the direct generation of 3D shapes as preliminary\nresults with 2D generative models. With the proposed high-frequency enhancer\nand the multi-view recarving strategy, our method can seamlessly integrate the\ndetails from different views into a uniform global shape. To better utilize the\n3D human prior and enhance control over the generated geometry, we introduce a\ncompact spherical embedding of 3D joints. This allows for an effective guidance\nof pose during the generation process. Additionally, our method can generate 3D\nhumans guided by textual inputs.", + "To better utilize the\n3D human prior and enhance control over the generated geometry, we introduce a\ncompact spherical embedding of 3D joints. This allows for an effective guidance\nof pose during the generation process. Additionally, our method can generate 3D\nhumans guided by textual inputs. Our experimental results demonstrate the\ncapability of our method to ensure global structure, local details, high\nresolution, and low computational cost simultaneously. More results and the\ncode can be found on our project page at\nhttp://cic.tju.edu.cn/faculty/likun/projects/Joint2Human.", + "Text-to-image (T2I) research has grown explosively in the past year, owing to\nthe large-scale pre-trained diffusion models and many emerging personalization\nand editing approaches. Yet, one pain point persists: the text prompt\nengineering, and searching high-quality text prompts for customized results is\nmore art than science. Moreover, as commonly argued: \"an image is worth a\nthousand words\" - the attempt to describe a desired image with texts often ends\nup being ambiguous and cannot comprehensively cover delicate visual details,\nhence necessitating more additional controls from the visual domain. 
In this\npaper, we take a bold step forward: taking \"Text\" out of a pre-trained T2I\ndiffusion model, to reduce the burdensome prompt engineering efforts for users.\nOur proposed framework, Prompt-Free Diffusion, relies on only visual inputs to\ngenerate new images: it takes a reference image as \"context\", an optional image\nstructural conditioning, and an initial noise, with absolutely no text prompt.\nThe core architecture behind the scene is Semantic Context Encoder (SeeCoder),\nsubstituting the commonly used CLIP-based or LLM-based text encoder.", + "The core architecture behind the scene is Semantic Context Encoder (SeeCoder),\nsubstituting the commonly used CLIP-based or LLM-based text encoder. The\nreusability of SeeCoder also makes it a convenient drop-in component: one can\nalso pre-train a SeeCoder in one T2I model and reuse it for another. Through\nextensive experiments, Prompt-Free Diffusion is experimentally found to (i)\noutperform prior exemplar-based image synthesis approaches; (ii) perform on par\nwith state-of-the-art T2I models using prompts following the best practice; and\n(iii) be naturally extensible to other downstream applications such as anime\nfigure generation and virtual try-on, with promising quality. Our code and\nmodels are open-sourced at https://github.com/SHI-Labs/Prompt-Free-Diffusion.", + "Human pose forecasting garners attention for its diverse applications.\nHowever, challenges in modeling the multi-modal nature of human motion and\nintricate interactions among agents persist, particularly with longer\ntimescales and more agents. In this paper, we propose an interaction-aware\ntrajectory-conditioned long-term multi-agent human pose forecasting model,\nutilizing a coarse-to-fine prediction approach: multi-modal global trajectories\nare initially forecasted, followed by respective local pose forecasts\nconditioned on each mode. In doing so, our Trajectory2Pose model introduces a\ngraph-based agent-wise interaction module for a reciprocal forecast of local\nmotion-conditioned global trajectory and trajectory-conditioned local pose. Our\nmodel effectively handles the multi-modality of human motion and the complexity\nof long-term multi-agent interactions, improving performance in complex\nenvironments. Furthermore, we address the lack of long-term (6s+) multi-agent\n(5+) datasets by constructing a new dataset from real-world images and 2D\nannotations, enabling a comprehensive evaluation of our proposed model.\nState-of-the-art prediction performance on both complex and simpler datasets\nconfirms the generalized effectiveness of our method. The code is available at\nhttps://github.com/Jaewoo97/T2P.", + "Even the best current algorithms for estimating body 3D shape and pose yield\nresults that include body self-intersections. In this paper, we present CLOAF,\nwhich exploits the diffeomorphic nature of Ordinary Differential Equations to\neliminate such self-intersections while still imposing body shape constraints.\nWe show that, unlike earlier approaches to addressing this issue, ours\ncompletely eliminates the self-intersections without compromising the accuracy\nof the reconstructions. Being differentiable, CLOAF can be used to fine-tune\npose and shape estimation baselines to improve their overall performance and\neliminate self-intersections in their predictions. Furthermore, we demonstrate\nhow our CLOAF strategy can be applied to practically any motion field induced\nby the user. 
CLOAF also makes it possible to edit motion to interact with the\nenvironment without worrying about potential collision or loss of body-shape\nprior.", + "Non-isometric shape correspondence remains a fundamental challenge in\ncomputer vision. Traditional methods using Laplace-Beltrami operator (LBO)\neigenmodes face limitations in characterizing high-frequency extrinsic shape\nchanges like bending and creases. We propose a novel approach of combining the\nnon-orthogonal extrinsic basis of eigenfunctions of the elastic thin-shell\nhessian with the intrinsic ones of the LBO, creating a hybrid spectral space in\nwhich we construct functional maps. To this end, we present a theoretical\nframework to effectively integrate non-orthogonal basis functions into\ndescriptor- and learning-based functional map methods. Our approach can be\nincorporated easily into existing functional map pipelines across varying\napplications and is able to handle complex deformations beyond isometries. We\nshow extensive evaluations across various supervised and unsupervised settings\nand demonstrate significant improvements. Notably, our approach achieves up to\n15% better mean geodesic error for non-isometric correspondence settings and up\nto 45% improvement in scenarios with topological noise.", + "Softassign is a pivotal method in graph matching and other learning tasks.\nMany softassign-based algorithms exhibit performance sensitivity to a parameter\nin the softassign. However, tuning the parameter is challenging and almost done\nempirically. This paper proposes an adaptive softassign method for graph\nmatching by analyzing the relationship between the objective score and the\nparameter. This method can automatically tune the parameter based on a given\nerror bound to guarantee accuracy. The Hadamard-Equipped Sinkhorn formulas\nintroduced in this study significantly enhance the efficiency and stability of\nthe adaptive softassign. Moreover, these formulas can also be used in optimal\ntransport problems. The resulting adaptive softassign graph matching algorithm\nenjoys significantly higher accuracy than previous state-of-the-art large graph\nmatching algorithms while maintaining comparable efficiency.", + "Diffusion models have revolutionized image generation in recent years, yet\nthey are still limited to a few sizes and aspect ratios. We propose\nElasticDiffusion, a novel training-free decoding method that enables pretrained\ntext-to-image diffusion models to generate images with various sizes.\nElasticDiffusion attempts to decouple the generation trajectory of a pretrained\nmodel into local and global signals. The local signal controls low-level pixel\ninformation and can be estimated on local patches, while the global signal is\nused to maintain overall structural consistency and is estimated with a\nreference image. We test our method on CelebA-HQ (faces) and LAION-COCO\n(objects/indoor/outdoor scenes). Our experiments and qualitative results show\nsuperior image coherence quality across aspect ratios compared to\nMultiDiffusion and the standard decoding strategy of Stable Diffusion. 
Project\npage: https://elasticdiffusion.github.io/", + "We present the Locally Adaptive Morphable Model (LAMM), a highly flexible\nAuto-Encoder (AE) framework for learning to generate and manipulate 3D meshes.\nWe train our architecture following a simple self-supervised training scheme in\nwhich input displacements over a set of sparse control vertices are used to\noverwrite the encoded geometry in order to transform one training sample into\nanother. During inference, our model produces a dense output that adheres\nlocally to the specified sparse geometry while maintaining the overall\nappearance of the encoded object. This approach results in state-of-the-art\nperformance in both disentangling manipulated geometry and 3D mesh\nreconstruction. To the best of our knowledge LAMM is the first end-to-end\nframework that enables direct local control of 3D vertex geometry in a single\nforward pass. A very efficient computational graph allows our network to train\nwith only a fraction of the memory required by previous methods and run faster\nduring inference, generating 12k vertex meshes at $>$60fps on a single CPU\nthread.", + "A very efficient computational graph allows our network to train\nwith only a fraction of the memory required by previous methods and run faster\nduring inference, generating 12k vertex meshes at $>$60fps on a single CPU\nthread. We further leverage local geometry control as a primitive for higher\nlevel editing operations and present a set of derivative capabilities such as\nswapping and sampling object parts. Code and pretrained models can be found at\nhttps://github.com/michaeltrs/LAMM.", + "Neural Radiance Fields (NeRF) exhibit remarkable performance for Novel View\nSynthesis (NVS) given a set of 2D images. However, NeRF training requires\naccurate camera pose for each input view, typically obtained by\nStructure-from-Motion (SfM) pipelines. Recent works have attempted to relax\nthis constraint, but they still often rely on decent initial poses which they\ncan refine. Here we aim at removing the requirement for pose initialization. We\npresent Incremental CONfidence (ICON), an optimization procedure for training\nNeRFs from 2D video frames. ICON only assumes smooth camera motion to estimate\ninitial guess for poses. Further, ICON introduces ``confidence\": an adaptive\nmeasure of model quality used to dynamically reweight gradients. ICON relies on\nhigh-confidence poses to learn NeRF, and high-confidence 3D structure (as\nencoded by NeRF) to learn poses. We show that ICON, without prior pose\ninitialization, achieves superior performance in both CO3D and HO3D versus\nmethods which use SfM pose.", + "Panoramic videos have the advantage of providing an immersive and interactive\nviewing experience. Nevertheless, their spherical nature gives rise to various\nand uncertain user viewing behaviors, which poses significant challenges for\npanoramic video quality assessment (PVQA). In this work, we propose an\nend-to-end optimized, blind PVQA method with explicit modeling of user viewing\npatterns through visual scanpaths. Our method consists of two modules: a\nscanpath generator and a quality assessor. The scanpath generator is initially\ntrained to predict future scanpaths by minimizing their expected code length\nand then jointly optimized with the quality assessor for quality prediction.\nOur blind PVQA method enables direct quality assessment of panoramic images by\ntreating them as videos composed of identical frames. 
Experiments on three\npublic panoramic image and video quality datasets, encompassing both synthetic\nand authentic distortions, validate the superiority of our blind PVQA model\nover existing methods.", + "Open-vocabulary object detection (OvOD) has transformed detection into a\nlanguage-guided task, empowering users to freely define their class\nvocabularies of interest during inference. However, our initial investigation\nindicates that existing OvOD detectors exhibit significant variability when\ndealing with vocabularies across various semantic granularities, posing a\nconcern for real-world deployment. To this end, we introduce Semantic Hierarchy\nNexus (SHiNe), a novel classifier that uses semantic knowledge from class\nhierarchies. It runs offline in three steps: i) it retrieves relevant\nsuper-/sub-categories from a hierarchy for each target class; ii) it integrates\nthese categories into hierarchy-aware sentences; iii) it fuses these sentence\nembeddings to generate the nexus classifier vector. Our evaluation on various\ndetection benchmarks demonstrates that SHiNe enhances robustness across diverse\nvocabulary granularities, achieving up to +31.9% mAP50 with ground truth\nhierarchies, while retaining improvements using hierarchies generated by large\nlanguage models.", + "Our evaluation on various\ndetection benchmarks demonstrates that SHiNe enhances robustness across diverse\nvocabulary granularities, achieving up to +31.9% mAP50 with ground truth\nhierarchies, while retaining improvements using hierarchies generated by large\nlanguage models. Moreover, when applied to open-vocabulary classification on\nImageNet-1k, SHiNe improves the CLIP zero-shot baseline by +2.8% accuracy.\nSHiNe is training-free and can be seamlessly integrated with any off-the-shelf\nOvOD detector, without incurring additional computational overhead during\ninference. The code is open source.", + "This paper focuses on open-ended video question answering, which aims to find\nthe correct answers from a large answer set in response to a video-related\nquestion. This is essentially a multi-label classification task, since a\nquestion may have multiple answers. However, due to annotation costs, the\nlabels in existing benchmarks are always extremely insufficient, typically one\nanswer per question. As a result, existing works tend to directly treat all the\nunlabeled answers as negative labels, leading to limited ability for\ngeneralization. In this work, we introduce a simple yet effective ranking\ndistillation framework (RADI) to mitigate this problem without additional\nmanual annotation. RADI employs a teacher model trained with incomplete labels\nto generate rankings for potential answers, which contain rich knowledge about\nlabel priority as well as label-associated visual cues, thereby enriching the\ninsufficient labeling information. To avoid overconfidence in the imperfect\nteacher model, we further present two robust and parameter-free ranking\ndistillation approaches: a pairwise approach which introduces adaptive soft\nmargins to dynamically refine the optimization constraints on various pairwise\nrankings, and a listwise approach which adopts sampling-based partial listwise\nlearning to resist the bias in teacher ranking.", + "Extensive experiments on five\npopular benchmarks consistently show that both our pairwise and listwise RADIs\noutperform state-of-the-art methods. 
Further analysis demonstrates the\neffectiveness of our methods on the insufficient labeling problem.", + "Grouping is inherently ambiguous due to the multiple levels of granularity in\nwhich one can decompose a scene -- should the wheels of an excavator be\nconsidered separate or part of the whole? We present Group Anything with\nRadiance Fields (GARField), an approach for decomposing 3D scenes into a\nhierarchy of semantically meaningful groups from posed image inputs. To do this\nwe embrace group ambiguity through physical scale: by optimizing a\nscale-conditioned 3D affinity feature field, a point in the world can belong to\ndifferent groups of different sizes. We optimize this field from a set of 2D\nmasks provided by Segment Anything (SAM) in a way that respects coarse-to-fine\nhierarchy, using scale to consistently fuse conflicting masks from different\nviewpoints. From this field we can derive a hierarchy of possible groupings via\nautomatic tree construction or user interaction. We evaluate GARField on a\nvariety of in-the-wild scenes and find it effectively extracts groups at many\nlevels: clusters of objects, objects, and various subparts. GARField inherently\nrepresents multi-view consistent groupings and produces higher fidelity groups\nthan the input SAM masks.", + "We evaluate GARField on a\nvariety of in-the-wild scenes and find it effectively extracts groups at many\nlevels: clusters of objects, objects, and various subparts. GARField inherently\nrepresents multi-view consistent groupings and produces higher fidelity groups\nthan the input SAM masks. GARField's hierarchical grouping could have exciting\ndownstream applications such as 3D asset extraction or dynamic scene\nunderstanding. See the project website at https://www.garfield.studio/", + "Online continual learning suffers from an underfitted solution due to\ninsufficient training for prompt model update (e.g., single-epoch training). To\naddress the challenge, we propose an efficient online continual learning method\nusing the neural collapse phenomenon. In particular, we induce neural collapse\nto form a simplex equiangular tight frame (ETF) structure in the representation\nspace so that the continuously learned model with a single epoch can better fit\nto the streamed data by proposing preparatory data training and residual\ncorrection in the representation space. With an extensive set of empirical\nvalidations using CIFAR-10/100, TinyImageNet, ImageNet-200, and ImageNet-1K, we\nshow that our proposed method outperforms state-of-the-art methods by a\nnoticeable margin in various online continual learning scenarios such as\ndisjoint and Gaussian scheduled continuous (i.e., boundary-free) data setups.", + "4D medical images, which represent 3D images with temporal information, are\ncrucial in clinical practice for capturing dynamic changes and monitoring\nlong-term disease progression. However, acquiring 4D medical images poses\nchallenges due to factors such as radiation exposure and imaging duration,\nnecessitating a balance between achieving high temporal resolution and\nminimizing adverse effects. Given these circumstances, not only is data\nacquisition challenging, but increasing the frame rate for each dataset also\nproves difficult. To address this challenge, this paper proposes a simple yet\neffective Unsupervised Volumetric Interpolation framework, UVI-Net. 
This\nframework facilitates temporal interpolation without the need for any\nintermediate frames, distinguishing it from the majority of other existing\nunsupervised methods. Experiments on benchmark datasets demonstrate significant\nimprovements across diverse evaluation metrics compared to unsupervised and\nsupervised baselines. Remarkably, our approach achieves this superior\nperformance even when trained with a dataset as small as one, highlighting its\nexceptional robustness and efficiency in scenarios with sparse supervision.", + "Experiments on benchmark datasets demonstrate significant\nimprovements across diverse evaluation metrics compared to unsupervised and\nsupervised baselines. Remarkably, our approach achieves this superior\nperformance even when trained with a dataset as small as one, highlighting its\nexceptional robustness and efficiency in scenarios with sparse supervision.\nThis positions UVI-Net as a compelling alternative for 4D medical imaging,\nparticularly in settings where data availability is limited. The source code is\navailable at https://github.com/jungeun122333/UVI-Net.", + "Learning 3D models of all animals on the Earth requires massively scaling up\nexisting solutions. With this ultimate goal in mind, we develop 3D-Fauna, an\napproach that learns a pan-category deformable 3D animal model for more than\n100 animal species jointly. One crucial bottleneck of modeling animals is the\nlimited availability of training data, which we overcome by simply learning\nfrom 2D Internet images. We show that prior category-specific attempts fail to\ngeneralize to rare species with limited training images. We address this\nchallenge by introducing the Semantic Bank of Skinned Models (SBSM), which\nautomatically discovers a small set of base animal shapes by combining\ngeometric inductive priors with semantic knowledge implicitly captured by an\noff-the-shelf self-supervised feature extractor. To train such a model, we also\ncontribute a new large-scale dataset of diverse animal species. At inference\ntime, given a single image of any quadruped animal, our model reconstructs an\narticulated 3D mesh in a feed-forward fashion within seconds.", + "The main function of depth completion is to compensate for an insufficient\nand unpredictable number of sparse depth measurements of hardware sensors.\nHowever, existing research on depth completion assumes that the sparsity -- the\nnumber of points or LiDAR lines -- is fixed for training and testing. Hence,\nthe completion performance drops severely when the number of sparse depths\nchanges significantly. To address this issue, we propose the sparsity-adaptive\ndepth refinement (SDR) framework, which refines monocular depth estimates using\nsparse depth points. For SDR, we propose the masked spatial propagation network\n(MSPN) to perform SDR with a varying number of sparse depths effectively by\ngradually propagating sparse depth information throughout the entire depth map.\nExperimental results demonstrate that MSPN achieves state-of-the-art\nperformance on both SDR and conventional depth completion scenarios.", + "Although perception systems have made remarkable advancements in recent\nyears, they still rely on explicit human instruction or pre-defined categories\nto identify the target objects before executing visual recognition tasks. Such\nsystems cannot actively reason and comprehend implicit user intention. In this\nwork, we propose a new segmentation task -- reasoning segmentation. 
The task is\ndesigned to output a segmentation mask given a complex and implicit query text.\nFurthermore, we establish a benchmark comprising over one thousand\nimage-instruction-mask data samples, incorporating intricate reasoning and\nworld knowledge for evaluation purposes. Finally, we present LISA: large\nLanguage Instructed Segmentation Assistant, which inherits the language\ngeneration capabilities of multimodal Large Language Models (LLMs) while also\npossessing the ability to produce segmentation masks. We expand the original\nvocabulary with a <SEG> token and propose the embedding-as-mask paradigm to\nunlock the segmentation capability. Remarkably, LISA can handle cases involving\ncomplex reasoning and world knowledge. Also, it demonstrates robust zero-shot\ncapability when trained exclusively on reasoning-free datasets. In addition,\nfine-tuning the model with merely 239 reasoning segmentation data samples\nresults in further performance enhancement.", + "Remarkably, LISA can handle cases involving\ncomplex reasoning and world knowledge. Also, it demonstrates robust zero-shot\ncapability when trained exclusively on reasoning-free datasets. In addition,\nfine-tuning the model with merely 239 reasoning segmentation data samples\nresults in further performance enhancement. Both quantitative and qualitative\nexperiments show our method effectively unlocks new reasoning segmentation\ncapabilities for multimodal LLMs. Code, models, and data are available at\nhttps://github.com/dvlab-research/LISA.", + "Portrait harmonization aims to composite a subject into a new background,\nadjusting its lighting and color to ensure harmony with the background scene.\nExisting harmonization techniques often only focus on adjusting the global\ncolor and brightness of the foreground and ignore crucial illumination cues\nfrom the background such as apparent lighting direction, leading to unrealistic\ncompositions. We introduce Relightful Harmonization, a lighting-aware diffusion\nmodel designed to seamlessly harmonize sophisticated lighting effect for the\nforeground portrait using any background image. Our approach unfolds in three\nstages. First, we introduce a lighting representation module that allows our\ndiffusion model to encode lighting information from target image background.\nSecond, we introduce an alignment network that aligns lighting features learned\nfrom image background with lighting features learned from panorama environment\nmaps, which is a complete representation for scene illumination. Last, to\nfurther boost the photorealism of the proposed method, we introduce a novel\ndata simulation pipeline that generates synthetic training pairs from a diverse\nrange of natural images, which are used to refine the model. Our method\noutperforms existing benchmarks in visual fidelity and lighting coherence,\nshowing superior generalization in real-world testing scenarios, highlighting\nits versatility and practicality.", + "Video Moment Retrieval (MR) and Highlight Detection (HD) have attracted\nsignificant attention due to the growing demand for video analysis. Recent\napproaches treat MR and HD as similar video grounding problems and address them\ntogether with transformer-based architecture. 
However, we observe that the\nemphasis of MR and HD differs, with one necessitating the perception of local\nrelationships and the other prioritizing the understanding of global contexts.\nConsequently, the lack of task-specific design will inevitably lead to\nlimitations in associating the intrinsic specialty of two tasks. To tackle the\nissue, we propose a Unified Video COMprehension framework (UVCOM) to bridge the\ngap and jointly solve MR and HD effectively. By performing progressive\nintegration on intra and inter-modality across multi-granularity, UVCOM\nachieves the comprehensive understanding in processing a video. Moreover, we\npresent multi-aspect contrastive learning to consolidate the local relation\nmodeling and global knowledge accumulation via well aligned multi-modal space.\nExtensive experiments on QVHighlights, Charades-STA, TACoS , YouTube Highlights\nand TVSum datasets demonstrate the effectiveness and rationality of UVCOM which\noutperforms the state-of-the-art methods by a remarkable margin.", + "Music recommendation for videos attracts growing interest in multi-modal\nresearch. However, existing systems focus primarily on content compatibility,\noften ignoring the users' preferences. Their inability to interact with users\nfor further refinements or to provide explanations leads to a less satisfying\nexperience. We address these issues with MuseChat, a first-of-its-kind\ndialogue-based recommendation system that personalizes music suggestions for\nvideos. Our system consists of two key functionalities with associated modules:\nrecommendation and reasoning. The recommendation module takes a video along\nwith optional information including previous suggested music and user's\npreference as inputs and retrieves an appropriate music matching the context.\nThe reasoning module, equipped with the power of Large Language Model\n(Vicuna-7B) and extended to multi-modal inputs, is able to provide reasonable\nexplanation for the recommended music. To evaluate the effectiveness of\nMuseChat, we build a large-scale dataset, conversational music recommendation\nfor videos, that simulates a two-turn interaction between a user and a\nrecommender based on accurate music track information. Experiment results show\nthat MuseChat achieves significant improvements over existing video-based music\nretrieval methods as well as offers strong interpretability and\ninteractability.", + "Neural Radiance Fields (NeRFs) have shown great potential in novel view\nsynthesis. However, they struggle to render sharp images when the data used for\ntraining is affected by motion blur. On the other hand, event cameras excel in\ndynamic scenes as they measure brightness changes with microsecond resolution\nand are thus only marginally affected by blur. Recent methods attempt to\nenhance NeRF reconstructions under camera motion by fusing frames and events.\nHowever, they face challenges in recovering accurate color content or constrain\nthe NeRF to a set of predefined camera poses, harming reconstruction quality in\nchallenging conditions. This paper proposes a novel formulation addressing\nthese issues by leveraging both model- and learning-based modules. We\nexplicitly model the blur formation process, exploiting the event double\nintegral as an additional model-based prior. 
Additionally, we model the\nevent-pixel response using an end-to-end learnable response function, allowing\nour method to adapt to non-idealities in the real event-camera sensor.", + "We\nexplicitly model the blur formation process, exploiting the event double\nintegral as an additional model-based prior. Additionally, we model the\nevent-pixel response using an end-to-end learnable response function, allowing\nour method to adapt to non-idealities in the real event-camera sensor. We show,\non synthetic and real data, that the proposed approach outperforms existing\ndeblur NeRFs that use only frames as well as those that combine frames and\nevents by +6.13dB and +2.48dB, respectively.", + "We present Compound Conditioned ControlNet, C3Net, a novel generative neural\narchitecture taking conditions from multiple modalities and synthesizing\nmultimodal contents simultaneously (e.g., image, text, audio). C3Net adapts the\nControlNet architecture to jointly train and make inferences on a\nproduction-ready diffusion model and its trainable copies. Specifically, C3Net\nfirst aligns the conditions from multi-modalities to the same semantic latent\nspace using modality-specific encoders based on contrastive training. Then, it\ngenerates multimodal outputs based on the aligned latent space, whose semantic\ninformation is combined using a ControlNet-like architecture called Control\nC3-UNet. Correspondingly, with this system design, our model offers an improved\nsolution for joint-modality generation through learning and explaining\nmultimodal conditions instead of simply taking linear interpolations on the\nlatent space. Meanwhile, as we align conditions to a unified latent space,\nC3Net only requires one trainable Control C3-UNet to work on multimodal\nsemantic information.", + "Meanwhile, as we align conditions to a unified latent space,\nC3Net only requires one trainable Control C3-UNet to work on multimodal\nsemantic information. Furthermore, our model employs unimodal pretraining on\nthe condition alignment stage, outperforming the non-pretrained alignment even\non relatively scarce training data and thus demonstrating high-quality compound\ncondition generation. We contribute the first high-quality tri-modal validation\nset to validate quantitatively that C3Net outperforms or is on par with first\nand contemporary state-of-the-art multimodal generation. Our codes and\ntri-modal dataset will be released.", + "Few-shot segmentation performance declines substantially when facing images\nfrom a domain different than the training domain, effectively limiting\nreal-world use cases. To alleviate this, recently cross-domain few-shot\nsegmentation (CD-FSS) has emerged. Works that address this task mainly\nattempted to learn segmentation on a source domain in a manner that generalizes\nacross domains. Surprisingly, we can outperform these approaches while\neliminating the training stage and removing their main segmentation network. We\nshow test-time task-adaption is the key for successful CD-FSS instead.\nTask-adaption is achieved by appending small networks to the feature pyramid of\na conventionally classification-pretrained backbone. To avoid overfitting to\nthe few labeled samples in supervised fine-tuning, consistency across augmented\nviews of input images serves as guidance while learning the parameters of the\nattached layers. 
Despite our self-restriction not to use any images other than\nthe few labeled samples at test time, we achieve new state-of-the-art\nperformance in CD-FSS, evidencing the need to rethink approaches for the task.", + "We address the problem of regressing 3D human pose and shape from a single\nimage, with a focus on 3D accuracy. The current best methods leverage large\ndatasets of 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust\nperformance. With such methods, we observe a paradoxical decline in 3D pose\naccuracy with increasing 2D accuracy. This is caused by biases in the p-GT and\nthe use of an approximate camera projection model. We quantify the error\ninduced by current camera models and show that fitting 2D keypoints and p-GT\naccurately causes incorrect 3D poses. Our analysis defines the invalid\ndistances within which minimizing 2D and p-GT losses is detrimental. We use\nthis to formulate a new loss Threshold-Adaptive Loss Scaling (TALS) that\npenalizes gross 2D and p-GT losses but not smaller ones. With such a loss,\nthere are many 3D poses that could equally explain the 2D evidence. To reduce\nthis ambiguity we need a prior over valid human poses but such priors can\nintroduce unwanted bias.", + "With such a loss,\nthere are many 3D poses that could equally explain the 2D evidence. To reduce\nthis ambiguity we need a prior over valid human poses but such priors can\nintroduce unwanted bias. To address this, we exploit a tokenized representation\nof human pose and reformulate the problem as token prediction. This restricts\nthe estimated poses to the space of valid poses, effectively providing a\nuniform prior. Extensive experiments on the EMDB and 3DPW datasets show that\nour reformulated keypoint loss and tokenization allows us to train on\nin-the-wild data while improving 3D accuracy over the state-of-the-art. Our\nmodels and code are available for research at https://tokenhmr.is.tue.mpg.de.", + "This paper addresses the task of video question answering (videoQA) via a\ndecomposed multi-stage, modular reasoning framework. Previous modular methods\nhave shown promise with a single planning stage ungrounded in visual content.\nHowever, through a simple and effective baseline, we find that such systems can\nlead to brittle behavior in practice for challenging videoQA settings. Thus,\nunlike traditional single-stage planning methods, we propose a multi-stage\nsystem consisting of an event parser, a grounding stage, and a final reasoning\nstage in conjunction with an external memory. All stages are training-free, and\nperformed using few-shot prompting of large models, creating interpretable\nintermediate outputs at each stage. By decomposing the underlying planning and\ntask complexity, our method, MoReVQA, improves over prior work on standard\nvideoQA benchmarks (NExT-QA, iVQA, EgoSchema, ActivityNet-QA) with\nstate-of-the-art results, and extensions to related tasks (grounded videoQA,\nparagraph captioning).", + "We present a lightweight solution for estimating spatially-coherent indoor\nlighting from a single RGB image. Previous methods for estimating illumination\nusing volumetric representations have overlooked the sparse distribution of\nlight sources in space, necessitating substantial memory and computational\nresources for achieving high-quality results. We introduce a unified, voxel\noctree-based illumination estimation framework to produce 3D spatially-coherent\nlighting. 
Additionally, a differentiable voxel octree cone tracing rendering\nlayer is proposed to eliminate regular volumetric representation throughout the\nentire process and ensure the retention of features across different frequency\ndomains. This reduction significantly decreases spatial usage and required\nfloating-point operations without substantially compromising precision.\nExperimental results demonstrate that our approach achieves high-quality\ncoherent estimation with minimal cost compared to previous methods.", + "The recent progress in language-based open-vocabulary object detection can be\nlargely attributed to finding better ways of leveraging large-scale data with\nfree-form text annotations. Training such models with a discriminative\nobjective function has proven successful, but requires good positive and\nnegative samples. However, the free-form nature and the open vocabulary of\nobject descriptions make the space of negatives extremely large. Prior works\nrandomly sample negatives or use rule-based techniques to build them. In\ncontrast, we propose to leverage the vast knowledge built into modern\ngenerative models to automatically build negatives that are more relevant to\nthe original data. Specifically, we use large-language-models to generate\nnegative text descriptions, and text-to-image diffusion models to also generate\ncorresponding negative images. Our experimental analysis confirms the relevance\nof the generated negative data, and its use in language-based detectors\nimproves performance on two complex benchmarks. Code is available at\n\\url{https://github.com/xiaofeng94/Gen-Enhanced-Negs}.", + "The goal of multimodal alignment is to learn a single latent space that is\nshared between multimodal inputs. The most powerful models in this space have\nbeen trained using massive datasets of paired inputs and large-scale\ncomputational resources, making them prohibitively expensive to train in many\npractical scenarios. We surmise that existing unimodal encoders pre-trained on\nlarge amounts of unimodal data should provide an effective bootstrap to create\nmultimodal models from unimodal ones at much lower costs. We therefore propose\nFuseMix, a multimodal augmentation scheme that operates on the latent spaces of\narbitrary pre-trained unimodal encoders. Using FuseMix for multimodal\nalignment, we achieve competitive performance -- and in certain cases\noutperform state-of-the art methods -- in both image-text and audio-text\nretrieval, with orders of magnitude less compute and data: for example, we\noutperform CLIP on the Flickr30K text-to-image retrieval task with $\\sim \\!\n600\\times$ fewer GPU days and $\\sim \\! 80\\times$ fewer image-text pairs.", + "600\\times$ fewer GPU days and $\\sim \\! 80\\times$ fewer image-text pairs.\nAdditionally, we show how our method can be applied to convert pre-trained\ntext-to-image generative models into audio-to-image ones. Code is available at:\nhttps://github.com/layer6ai-labs/fusemix.", + "Standard federated learning approaches suffer when client data distributions\nhave sufficient heterogeneity. Recent methods addressed the client data\nheterogeneity issue via personalized federated learning (PFL) - a class of FL\nalgorithms aiming to personalize learned global knowledge to better suit the\nclients' local data distributions. Existing PFL methods usually decouple global\nupdates in deep neural networks by performing personalization on particular\nlayers (i.e. 
classifier heads) and global aggregation for the rest of the\nnetwork. However, preselecting network layers for personalization may result in\nsuboptimal storage of global knowledge. In this work, we propose FedSelect, a\nnovel PFL algorithm inspired by the iterative subnetwork discovery procedure\nused for the Lottery Ticket Hypothesis. FedSelect incrementally expands\nsubnetworks to personalize client parameters, concurrently conducting global\naggregations on the remaining parameters. This approach enables the\npersonalization of both client parameters and subnetwork structure during the\ntraining process. Finally, we show that FedSelect outperforms recent\nstate-of-the-art PFL algorithms under challenging client data heterogeneity\nsettings and demonstrates robustness to various real-world distributional\nshifts.", + "This approach enables the\npersonalization of both client parameters and subnetwork structure during the\ntraining process. Finally, we show that FedSelect outperforms recent\nstate-of-the-art PFL algorithms under challenging client data heterogeneity\nsettings and demonstrates robustness to various real-world distributional\nshifts. Our code is available at https://github.com/lapisrocks/fedselect.", + "3D facial landmark localization has proven to be of particular use for\napplications, such as face tracking, 3D face modeling, and image-based 3D face\nreconstruction. In the supervised learning case, such methods usually rely on\n3D landmark datasets derived from 3DMM-based registration that often lack\nspatial definition alignment, as compared with that chosen by hand-labeled\nhuman consensus, e.g., how are eyebrow landmarks defined? This creates a gap\nbetween landmark datasets generated via high-quality 2D human labels and 3DMMs,\nand it ultimately limits their effectiveness. To address this issue, we\nintroduce a novel semi-supervised learning approach that learns 3D landmarks by\ndirectly lifting (visible) hand-labeled 2D landmarks and ensures better\ndefinition alignment, without the need for 3D landmark datasets. To lift 2D\nlandmarks to 3D, we leverage 3D-aware GANs for better multi-view consistency\nlearning and in-the-wild multi-frame videos for robust cross-generalization.", + "To lift 2D\nlandmarks to 3D, we leverage 3D-aware GANs for better multi-view consistency\nlearning and in-the-wild multi-frame videos for robust cross-generalization.\nEmpirical experiments demonstrate that our method not only achieves better\ndefinition alignment between 2D-3D landmarks but also outperforms other\nsupervised learning 3D landmark localization methods on both 3DMM labeled and\nphotogrammetric ground truth evaluation datasets. Project Page:\nhttps://davidcferman.github.io/FaceLift", + "How to effectively explore multi-scale representations of rain streaks is\nimportant for image deraining. In contrast to existing Transformer-based\nmethods that depend mostly on single-scale rain appearance, we develop an\nend-to-end multi-scale Transformer that leverages the potentially useful\nfeatures in various scales to facilitate high-quality image reconstruction. To\nbetter explore the common degradation representations from spatially-varying\nrain streaks, we incorporate intra-scale implicit neural representations based\non pixel coordinates with the degraded inputs in a closed-loop design, enabling\nthe learned features to facilitate rain removal and improve the robustness of\nthe model in complex scenarios. 
To ensure richer collaborative representation\nfrom different scales, we embed a simple yet effective inter-scale\nbidirectional feedback operation into our multi-scale Transformer by performing\ncoarse-to-fine and fine-to-coarse information communication. Extensive\nexperiments demonstrate that our approach, named as NeRD-Rain, performs\nfavorably against the state-of-the-art ones on both synthetic and real-world\nbenchmark datasets. The source code and trained models are available at\nhttps://github.com/cschenxiang/NeRD-Rain.", + "In this work we focus on learning facial representations that can be adapted\nto train effective face recognition models, particularly in the absence of\nlabels. Firstly, compared with existing labelled face datasets, a vastly larger\nmagnitude of unlabeled faces exists in the real world. We explore the learning\nstrategy of these unlabeled facial images through self-supervised pretraining\nto transfer generalized face recognition performance. Moreover, motivated by\none recent finding, that is, the face saliency area is critical for face\nrecognition, in contrast to utilizing random cropped blocks of images for\nconstructing augmentations in pretraining, we utilize patches localized by\nextracted facial landmarks. This enables our method - namely LAndmark-based\nFacial Self-supervised learning LAFS), to learn key representation that is more\ncritical for face recognition. We also incorporate two landmark-specific\naugmentations which introduce more diversity of landmark information to further\nregularize the learning. With learned landmark-based facial representations, we\nfurther adapt the representation for face recognition with regularization\nmitigating variations in landmark positions. Our method achieves significant\nimprovement over the state-of-the-art on multiple face recognition benchmarks,\nespecially on more challenging few-shot scenarios.", + "Open-vocabulary semantic segmentation strives to distinguish pixels into\ndifferent semantic groups from an open set of categories. Most existing methods\nexplore utilizing pre-trained vision-language models, in which the key is to\nadopt the image-level model for pixel-level segmentation task. In this paper,\nwe propose a simple encoder-decoder, named SED, for open-vocabulary semantic\nsegmentation, which comprises a hierarchical encoder-based cost map generation\nand a gradual fusion decoder with category early rejection. The hierarchical\nencoder-based cost map generation employs hierarchical backbone, instead of\nplain transformer, to predict pixel-level image-text cost map. Compared to\nplain transformer, hierarchical backbone better captures local spatial\ninformation and has linear computational complexity with respect to input size.\nOur gradual fusion decoder employs a top-down structure to combine cost map and\nthe feature maps of different backbone levels for segmentation. To accelerate\ninference speed, we introduce a category early rejection scheme in the decoder\nthat rejects many no-existing categories at the early layer of decoder,\nresulting in at most 4.7 times acceleration without accuracy degradation.\nExperiments are performed on multiple open-vocabulary semantic segmentation\ndatasets, which demonstrates the efficacy of our SED method.", + "Experiments are performed on multiple open-vocabulary semantic segmentation\ndatasets, which demonstrates the efficacy of our SED method. 
When using\nConvNeXt-B, our SED method achieves mIoU score of 31.6\\% on ADE20K with 150\ncategories at 82 millisecond ($ms$) per image on a single A6000. We will\nrelease it at \\url{https://github.com/xb534/SED.git}.", + "Existing quality enhancement methods for compressed images focus on aligning\nthe enhancement domain with the raw domain to yield realistic images. However,\nthese methods exhibit a pervasive enhancement bias towards the compression\ndomain, inadvertently regarding it as more realistic than the raw domain. This\nbias makes enhanced images closely resemble their compressed counterparts, thus\ndegrading their perceptual quality. In this paper, we propose a simple yet\neffective method to mitigate this bias and enhance the quality of compressed\nimages. Our method employs a conditional discriminator with the compressed\nimage as a key condition, and then incorporates a domain-divergence\nregularization to actively distance the enhancement domain from the compression\ndomain. Through this dual strategy, our method enables the discrimination\nagainst the compression domain, and brings the enhancement domain closer to the\nraw domain. Comprehensive quality evaluations confirm the superiority of our\nmethod over other state-of-the-art methods without incurring inference\noverheads.", + "Humans live in a 3D world and commonly use natural language to interact with\na 3D scene. Modeling a 3D language field to support open-ended language queries\nin 3D has gained increasing attention recently. This paper introduces\nLangSplat, which constructs a 3D language field that enables precise and\nefficient open-vocabulary querying within 3D spaces. Unlike existing methods\nthat ground CLIP language embeddings in a NeRF model, LangSplat advances the\nfield by utilizing a collection of 3D Gaussians, each encoding language\nfeatures distilled from CLIP, to represent the language field. By employing a\ntile-based splatting technique for rendering language features, we circumvent\nthe costly rendering process inherent in NeRF. Instead of directly learning\nCLIP embeddings, LangSplat first trains a scene-wise language autoencoder and\nthen learns language features on the scene-specific latent space, thereby\nalleviating substantial memory demands imposed by explicit modeling. Existing\nmethods struggle with imprecise and vague 3D language fields, which fail to\ndiscern clear boundaries between objects.", + "Existing\nmethods struggle with imprecise and vague 3D language fields, which fail to\ndiscern clear boundaries between objects. We delve into this issue and propose\nto learn hierarchical semantics using SAM, thereby eliminating the need for\nextensively querying the language field across various scales and the\nregularization of DINO features. Extensive experimental results show that\nLangSplat significantly outperforms the previous state-of-the-art method LERF\nby a large margin. Notably, LangSplat is extremely efficient, achieving a 199\n$\\times$ speedup compared to LERF at the resolution of 1440 $\\times$ 1080. We\nstrongly recommend readers to check out our video results at\nhttps://langsplat.github.io/", + "Many existing motion prediction approaches rely on symbolic perception\noutputs to generate agent trajectories, such as bounding boxes, road graph\ninformation and traffic lights. 
This symbolic representation is a high-level\nabstraction of the real world, which may render the motion prediction model\nvulnerable to perception errors (e.g., failures in detecting open-vocabulary\nobstacles) while missing salient information from the scene context (e.g., poor\nroad conditions). An alternative paradigm is end-to-end learning from raw\nsensors. However, this approach suffers from the lack of interpretability and\nrequires significantly more training resources. In this work, we propose\ntokenizing the visual world into a compact set of scene elements and then\nleveraging pre-trained image foundation models and LiDAR neural networks to\nencode all the scene elements in an open-vocabulary manner. The image\nfoundation model enables our scene tokens to encode the general knowledge of\nthe open world while the LiDAR neural network encodes geometry information. Our\nproposed representation can efficiently encode the multi-frame multi-modality\nobservations with a few hundred tokens and is compatible with most\ntransformer-based architectures.", + "Our\nproposed representation can efficiently encode the multi-frame multi-modality\nobservations with a few hundred tokens and is compatible with most\ntransformer-based architectures. To evaluate our method, we have augmented\nWaymo Open Motion Dataset with camera embeddings. Experiments over Waymo Open\nMotion Dataset show that our approach leads to significant performance\nimprovements over the state-of-the-art.", + "Planet-scale image geolocalization remains a challenging problem due to the\ndiversity of images originating from anywhere in the world. Although approaches\nbased on vision transformers have made significant progress in geolocalization\naccuracy, success in prior literature is constrained to narrow distributions of\nimages of landmarks, and performance has not generalized to unseen places. We\npresent a new geolocalization system that combines semantic geocell creation,\nmulti-task contrastive pretraining, and a novel loss function. Additionally,\nour work is the first to perform retrieval over location clusters for guess\nrefinements. We train two models for evaluations on street-level data and\ngeneral-purpose image geolocalization; the first model, PIGEON, is trained on\ndata from the game of Geoguessr and is capable of placing over 40% of its\nguesses within 25 kilometers of the target location globally. We also develop a\nbot and deploy PIGEON in a blind experiment against humans, ranking in the top\n0.01% of players.", + "We also develop a\nbot and deploy PIGEON in a blind experiment against humans, ranking in the top\n0.01% of players. We further challenge one of the world's foremost professional\nGeoguessr players to a series of six matches with millions of viewers, winning\nall six games. Our second model, PIGEOTTO, differs in that it is trained on a\ndataset of images from Flickr and Wikipedia, achieving state-of-the-art results\non a wide range of image geolocalization benchmarks, outperforming the previous\nSOTA by up to 7.7 percentage points on the city accuracy level and up to 38.8\npercentage points on the country level. Our findings suggest that PIGEOTTO is\nthe first image geolocalization model that effectively generalizes to unseen\nplaces and that our approach can pave the way for highly accurate, planet-scale\nimage geolocalization systems. Our code is available on GitHub.", + "Text-to-image generation has witnessed significant progress with the advent\nof diffusion models. 
Despite the ability to generate photorealistic images,\ncurrent text-to-image diffusion models still often struggle to accurately\ninterpret and follow complex input text prompts. In contrast to existing models\nthat aim to generate images only with their best effort, we introduce\nSelf-correcting LLM-controlled Diffusion (SLD). SLD is a framework that\ngenerates an image from the input prompt, assesses its alignment with the\nprompt, and performs self-corrections on the inaccuracies in the generated\nimage. Steered by an LLM controller, SLD turns text-to-image generation into an\niterative closed-loop process, ensuring correctness in the resulting image. SLD\nis not only training-free but can also be seamlessly integrated with diffusion\nmodels behind API access, such as DALL-E 3, to further boost the performance of\nstate-of-the-art diffusion models. Experimental results show that our approach\ncan rectify a majority of incorrect generations, particularly in generative\nnumeracy, attribute binding, and spatial relationships.", + "Experimental results show that our approach\ncan rectify a majority of incorrect generations, particularly in generative\nnumeracy, attribute binding, and spatial relationships. Furthermore, by simply\nadjusting the instructions to the LLM, SLD can perform image editing tasks,\nbridging the gap between text-to-image generation and image editing pipelines.\nWe will make our code available for future research and applications.", + "Radiance fields have demonstrated impressive performance in synthesizing\nnovel views from sparse input views, yet prevailing methods suffer from high\ntraining costs and slow inference speed. This paper introduces DNGaussian, a\ndepth-regularized framework based on 3D Gaussian radiance fields, offering\nreal-time and high-quality few-shot novel view synthesis at low costs. Our\nmotivation stems from the highly efficient representation and surprising\nquality of the recent 3D Gaussian Splatting, even though it encounters\ngeometry degradation when input views decrease. In the Gaussian radiance\nfields, we find that this degradation in scene geometry is primarily linked to the\npositioning of Gaussian primitives and can be mitigated by a depth constraint.\nConsequently, we propose a Hard and Soft Depth Regularization to restore\naccurate scene geometry under coarse monocular depth supervision while\nmaintaining a fine-grained color appearance. To further refine detailed\ngeometry reshaping, we introduce Global-Local Depth Normalization, enhancing\nthe focus on small local depth changes.", + "To further refine detailed\ngeometry reshaping, we introduce Global-Local Depth Normalization, enhancing\nthe focus on small local depth changes. Extensive experiments on LLFF, DTU, and\nBlender datasets demonstrate that DNGaussian outperforms state-of-the-art\nmethods, achieving comparable or better results with significantly reduced\nmemory cost, a $25 \\times$ reduction in training time, and over $3000 \\times$\nfaster rendering speed.", + "Counterfactual reasoning, a fundamental aspect of human cognition, involves\ncontemplating alternatives to established facts or past events, significantly\nenhancing our abilities in planning and decision-making. In light of the\nadvancements in current multi-modal large language models, we explore their\neffectiveness in counterfactual reasoning.
To facilitate this investigation, we\nintroduce a novel dataset, C-VQA, specifically designed to test the\ncounterfactual reasoning capabilities of modern multi-modal large language\nmodels. This dataset is constructed by infusing original questions with\ncounterfactual presuppositions, spanning various types such as numerical and\nboolean queries. It encompasses a mix of real and synthetic data, representing\na wide range of difficulty levels. Our thorough evaluations of contemporary\nvision-language models using this dataset have revealed substantial performance\ndrops, with some models showing up to a 40% decrease, highlighting a\nsignificant gap between current models and human-like vision reasoning\ncapabilities. We hope our dataset will serve as a vital benchmark for\nevaluating the counterfactual reasoning capabilities of models. Code and\ndataset are publicly available at https://bzhao.me/C-VQA/.", + "Driver's eye gaze holds a wealth of cognitive and intentional cues crucial\nfor intelligent vehicles. Despite its significance, research on in-vehicle gaze\nestimation remains limited due to the scarcity of comprehensive and\nwell-annotated datasets in real driving scenarios. In this paper, we present\nthree novel elements to advance in-vehicle gaze research. First, we introduce\nIVGaze, a pioneering dataset capturing in-vehicle gaze, collected from 125\nsubjects and covering a large range of gaze and head poses within vehicles.\nConventional gaze collection systems are inadequate for in-vehicle use. In this\ndataset, we propose a new vision-based solution for in-vehicle gaze collection,\nintroducing a refined gaze target calibration method to tackle annotation\nchallenges. Second, our research focuses on in-vehicle gaze estimation\nleveraging IVGaze. In-vehicle face images often suffer from low resolution,\nprompting our introduction of a gaze pyramid transformer that leverages\ntransformer-based multilevel feature integration. Expanding upon this, we\nintroduce the dual-stream gaze pyramid transformer (GazeDPTR).", + "In-vehicle face images often suffer from low resolution,\nprompting our introduction of a gaze pyramid transformer that leverages\ntransformer-based multilevel feature integration. Expanding upon this, we\nintroduce the dual-stream gaze pyramid transformer (GazeDPTR). Employing\nperspective transformation, we rotate virtual cameras to normalize images,\nutilizing camera pose to merge normalized and original images for accurate gaze\nestimation. GazeDPTR shows state-of-the-art performance on the IVGaze dataset.\nThird, we explore a novel strategy for gaze zone classification by extending\nGazeDPTR. A foundational tri-plane is newly defined, and gaze is projected onto\nthese planes. Leveraging both positional features from the projection points\nand visual attributes from images, we achieve superior performance compared to\nrelying solely on visual features, substantiating the advantage of gaze\nestimation. Our project is available at https://yihua.zone/work/ivgaze.", + "Adapting driving behavior to new environments, customs, and laws is a\nlong-standing problem in autonomous driving, precluding the widespread\ndeployment of autonomous vehicles (AVs). In this paper, we present LLaDA, a\nsimple yet powerful tool that enables human drivers and autonomous vehicles\nalike to drive everywhere by adapting their tasks and motion plans to traffic\nrules in new locations.
LLaDA achieves this by leveraging the impressive\nzero-shot generalizability of large language models (LLMs) in interpreting the\ntraffic rules in the local driver handbook. Through an extensive user study, we\nshow that LLaDA's instructions are useful in disambiguating in-the-wild\nunexpected situations. We also demonstrate LLaDA's ability to adapt AV motion\nplanning policies in real-world datasets; LLaDA outperforms baseline planning\napproaches on all our metrics. Please check our website for more details:\nhttps://boyiliee.github.io/llada.", + "Generalizable neural implicit surface reconstruction aims to obtain an\naccurate underlying geometry given a limited number of multi-view images from\nunseen scenes. However, existing methods select only informative and relevant\nviews using predefined scores for training and testing phases. This constraint\nrenders the model impractical in real-world scenarios, where the availability\nof favorable combinations cannot always be ensured. We introduce and validate a\nview-combination score to indicate the effectiveness of the input view\ncombination. We observe that previous methods output degenerate solutions under\narbitrary and unfavorable sets. Building upon this finding, we propose\nUFORecon, a robust view-combination generalizable surface reconstruction\nframework. To achieve this, we apply cross-view matching transformers to model\ninteractions between source images and build correlation frustums to capture\nglobal correlations. Additionally, we explicitly encode pairwise feature\nsimilarities as view-consistent priors. Our proposed framework significantly\noutperforms previous methods in terms of view-combination generalizability and\nalso in the conventional generalizable protocol trained with favorable\nview-combinations. The code is available at\nhttps://github.com/Youngju-Na/UFORecon.", + "Estimating relative camera poses between images has been a central problem in\ncomputer vision. Methods that find correspondences and solve for the\nfundamental matrix offer high precision in most cases. Conversely, methods\npredicting pose directly using neural networks are more robust to limited\noverlap and can infer absolute translation scale, but at the expense of reduced\nprecision. We show how to combine the best of both methods; our approach yields\nresults that are both precise and robust, while also accurately inferring\ntranslation scales. At the heart of our model lies a Transformer that (1)\nlearns to balance between solved and learned pose estimations, and (2) provides\na prior to guide a solver. A comprehensive analysis supports our design choices\nand demonstrates that our method adapts flexibly to various feature extractors\nand correspondence estimators, showing state-of-the-art performance in 6DoF\npose estimation on Matterport3D, InteriorNet, StreetLearn, and Map-free\nRelocalization.", + "Event cameras, with their high temporal and dynamic range and minimal memory\nusage, have found applications in various fields. However, their potential in\nstatic traffic monitoring remains largely unexplored. To facilitate this\nexploration, we present eTraM - a first-of-its-kind, fully event-based traffic\nmonitoring dataset. eTraM offers 10 hr of data from different traffic scenarios\nin various lighting and weather conditions, providing a comprehensive overview\nof real-world situations. 
Providing 2M bounding box annotations, it covers\neight distinct classes of traffic participants, ranging from vehicles to\npedestrians and micro-mobility. eTraM's utility has been assessed using\nstate-of-the-art methods for traffic participant detection, including RVT, RED,\nand YOLOv8. We quantitatively evaluate the ability of event-based models to\ngeneralize on nighttime and unseen scenes. Our findings substantiate the\ncompelling potential of leveraging event cameras for traffic monitoring,\nopening new avenues for research and application. eTraM is available at\nhttps://eventbasedvision.github.io/eTraM", + "Long video question answering is a challenging task that involves recognizing\nshort-term activities and reasoning about their fine-grained relationships.\nState-of-the-art video Large Language Models (vLLMs) hold promise as a viable\nsolution due to their demonstrated emergent capabilities on new tasks. However,\ndespite being trained on millions of short seconds-long videos, vLLMs are\nunable to understand minutes-long videos and accurately answer questions about\nthem. To address this limitation, we propose a lightweight and self-supervised\napproach, Key frame-conditioned long video-LLM (Koala), that introduces\nlearnable spatiotemporal queries to adapt pretrained vLLMs for generalizing to\nlonger videos. Our approach introduces two new tokenizers that condition on\nvisual tokens computed from sparse video key frames for understanding short and\nlong video moments. We train our proposed approach on HowTo100M and demonstrate\nits effectiveness on zero-shot long video understanding benchmarks, where it\noutperforms state-of-the-art large models by 3 - 6% in absolute accuracy across\nall tasks.", + "We train our proposed approach on HowTo100M and demonstrate\nits effectiveness on zero-shot long video understanding benchmarks, where it\noutperforms state-of-the-art large models by 3 - 6% in absolute accuracy across\nall tasks. Surprisingly, we also empirically show that our approach not only\nhelps a pretrained vLLM to understand long videos but also improves its\naccuracy on short-term action recognition.", + "Registration of point clouds collected from a pair of distant vehicles\nprovides a comprehensive and accurate 3D view of the driving scenario, which is\nvital for driving safety related applications, yet existing literature suffers\nfrom the expensive pose label acquisition and the deficiency to generalize to\nnew data distributions. In this paper, we propose EYOC, an unsupervised distant\npoint cloud registration method that adapts to new point cloud distributions on\nthe fly, requiring no global pose labels. The core idea of EYOC is to train a\nfeature extractor in a progressive fashion, where in each round, the feature\nextractor, trained with near point cloud pairs, can label slightly farther\npoint cloud pairs, enabling self-supervision on such far point cloud pairs.\nThis process continues until the derived extractor can be used to register\ndistant point clouds. 
In particular, to enable high-fidelity correspondence\nlabel generation, we devise an effective spatial filtering scheme to select the\nmost representative correspondences to register a point cloud pair, and then\nutilize the aligned point clouds to discover more correct correspondences.\nExperiments show that EYOC can achieve performance comparable to\nstate-of-the-art supervised methods at a lower training cost.", + "Experiments show that EYOC can achieve performance comparable to\nstate-of-the-art supervised methods at a lower training cost. Moreover, it\noutperforms supervised methods in generalization performance on new data\ndistributions.", + "We introduce HallusionBench, a comprehensive benchmark designed for the\nevaluation of image-context reasoning. This benchmark presents significant\nchallenges to advanced large visual-language models (LVLMs), such as\nGPT-4V(Vision), Gemini Pro Vision, Claude 3, and LLaVA-1.5, by emphasizing\nnuanced understanding and interpretation of visual data. The benchmark\ncomprises 346 images paired with 1129 questions, all meticulously crafted by\nhuman experts. We introduce a novel structure for these visual questions\ndesigned to establish control groups. This structure enables us to conduct a\nquantitative analysis of the models' response tendencies, logical consistency,\nand various failure modes. In our evaluation on HallusionBench, we benchmarked\n15 different models, highlighting a 31.42% question-pair accuracy achieved by\nthe state-of-the-art GPT-4V. Notably, all other evaluated models achieve\naccuracy below 16%. Moreover, our analysis not only highlights the observed\nfailure modes, including language hallucination and visual illusion, but also\ndeepens an understanding of these pitfalls.", + "Notably, all other evaluated models achieve\naccuracy below 16%. Moreover, our analysis not only highlights the observed\nfailure modes, including language hallucination and visual illusion, but also\ndeepens an understanding of these pitfalls. Our comprehensive case studies\nwithin HallusionBench shed light on the challenges of hallucination and\nillusion in LVLMs. Based on these insights, we suggest potential pathways for\ntheir future improvement. The benchmark and codebase can be accessed at\nhttps://github.com/tianyi-lab/HallusionBench.", + "Out-of-distribution (OOD) detection methods often exploit auxiliary outliers\nto train models that identify OOD samples, especially by discovering challenging\noutliers from an auxiliary outlier dataset to improve OOD detection. However,\nthey may still face limitations in effectively distinguishing between the most\nchallenging OOD samples that closely resemble in-distribution (ID) data, i.e.,\n\\idlike samples. To this end, we propose a novel OOD detection framework that\ndiscovers \\idlike outliers using CLIP \\cite{DBLP:conf/icml/RadfordKHRGASAM21}\nfrom the vicinity space of the ID samples, thus helping to identify these most\nchallenging OOD samples. Then a prompt learning framework is proposed that\nutilizes the identified \\idlike outliers to further leverage the capabilities\nof CLIP for OOD detection. Benefiting from the powerful CLIP, we only need a\nsmall number of ID samples to learn the prompts of the model without exposing\nother auxiliary outlier datasets.", + "Benefiting from the powerful CLIP, we only need a\nsmall number of ID samples to learn the prompts of the model without exposing\nother auxiliary outlier datasets.
By focusing on the most challenging \\idlike\nOOD samples and elegantly exploiting the capabilities of CLIP, our method\nachieves superior few-shot learning performance on various real-world image\ndatasets (e.g., in 4-shot OOD detection on the ImageNet-1k dataset, our method\nreduces the average FPR95 by 12.16\\% and improves the average AUROC by 2.76\\%,\ncompared to state-of-the-art methods). Code is available at\nhttps://github.com/ycfate/ID-like.", + "A sketch is one of the most intuitive and versatile tools humans use to\nconvey their ideas visually. An animated sketch opens another dimension to the\nexpression of ideas and is widely used by designers for a variety of purposes.\nAnimating sketches is a laborious process, requiring extensive experience and\nprofessional design skills. In this work, we present a method that\nautomatically adds motion to a single-subject sketch (hence, \"breathing life\ninto it\"), merely by providing a text prompt indicating the desired motion. The\noutput is a short animation provided in vector representation, which can be\neasily edited. Our method does not require extensive training, but instead\nleverages the motion prior of a large pretrained text-to-video diffusion model\nusing a score-distillation loss to guide the placement of strokes. To promote\nnatural and smooth motion and to better preserve the sketch's appearance, we\nmodel the learned motion through two components. The first governs small local\ndeformations and the second controls global affine transformations.\nSurprisingly, we find that even models that struggle to generate sketch videos\non their own can still serve as a useful backbone for animating abstract\nrepresentations.", + "The innovative application of precise geospatial vegetation forecasting holds\nimmense potential across diverse sectors, including agriculture, forestry,\nhumanitarian aid, and carbon accounting. To leverage the vast availability of\nsatellite imagery for this task, various works have applied deep neural\nnetworks for predicting multispectral images in photorealistic quality.\nHowever, the important area of vegetation dynamics has not been thoroughly\nexplored. Our study breaks new ground by introducing GreenEarthNet, the first\ndataset specifically designed for high-resolution vegetation forecasting, and\nContextformer, a novel deep learning approach for predicting vegetation\ngreenness from Sentinel 2 satellite images with fine resolution across Europe.\nOur multi-modal transformer model Contextformer leverages spatial context\nthrough a vision backbone and predicts the temporal dynamics on local context\npatches incorporating meteorological time series in a parameter-efficient\nmanner. The GreenEarthNet dataset features a learned cloud mask and an\nappropriate evaluation scheme for vegetation modeling. It also maintains\ncompatibility with the existing satellite imagery forecasting dataset\nEarthNet2021, enabling cross-dataset model comparisons. Our extensive\nqualitative and quantitative analyses reveal that our methods outperform a\nbroad range of baseline techniques.", + "It also maintains\ncompatibility with the existing satellite imagery forecasting dataset\nEarthNet2021, enabling cross-dataset model comparisons. Our extensive\nqualitative and quantitative analyses reveal that our methods outperform a\nbroad range of baseline techniques. This includes surpassing previous\nstate-of-the-art models on EarthNet2021, as well as adapted models from time\nseries forecasting and video prediction. 
To the best of our knowledge, this\nwork presents the first models for continental-scale vegetation modeling at\nfine resolution that are able to capture anomalies beyond the seasonal cycle, thereby\npaving the way for predicting vegetation health and behaviour in response to\nclimate variability and extremes.", + "A single RGB camera or LiDAR is the mainstream sensor for the challenging scene flow task,\nwhich relies heavily on visual features to match motion features. Compared with\nusing a single modality, existing methods adopt a fusion strategy to directly fuse the\ncross-modal complementary knowledge in motion space. However, these direct\nfusion methods may suffer from the modality gap due to the intrinsically\nheterogeneous visual nature of RGB and LiDAR, thus deteriorating motion features.\nWe discover that the event modality is homogeneous with RGB and LiDAR in both\nvisual and motion spaces. In this work, we use events as a bridge between\nRGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework\nfor scene flow, which explores a homogeneous space to fuse the cross-modal\ncomplementary knowledge for physical interpretation. In visual fusion, we\ndiscover that events have a complementarity (relative vs. absolute) in luminance\nspace with RGB for high dynamic imaging, and a complementarity (local\nboundary vs. global shape) in scene structure space with LiDAR for structural\nintegrity.", + "In visual fusion, we\ndiscover that events have a complementarity (relative vs. absolute) in luminance\nspace with RGB for high dynamic imaging, and a complementarity (local\nboundary vs. global shape) in scene structure space with LiDAR for structural\nintegrity. In motion fusion, we find that RGB, event and LiDAR are\ncomplementary (spatial-dense, temporal-dense vs. spatiotemporal-sparse) to\neach other in correlation space, which motivates us to fuse their motion\ncorrelations for motion continuity. The proposed hierarchical fusion can\nexplicitly fuse the multimodal knowledge to progressively improve scene flow\nfrom visual space to motion space. Extensive experiments have been performed to\nverify the superiority of the proposed method.", + "Generalizable NeRF can directly synthesize novel views across new scenes,\neliminating the need for scene-specific retraining in vanilla NeRF. A critical\nenabling factor in these approaches is the extraction of a generalizable 3D\nrepresentation by aggregating source-view features. In this paper, we propose\nan Entangled View-Epipolar Information Aggregation method dubbed EVE-NeRF.\nDifferent from existing methods that consider cross-view and along-epipolar\ninformation independently, EVE-NeRF conducts the view-epipolar feature\naggregation in an entangled manner by injecting the scene-invariant appearance\ncontinuity and geometry consistency priors into the aggregation process. Our\napproach effectively mitigates the potential lack of inherent geometric and\nappearance constraints resulting from one-dimensional interactions, thus further\nboosting the 3D representation generalizability. EVE-NeRF attains\nstate-of-the-art performance across various evaluation scenarios. Extensive\nexperiments demonstrate that, compared to prevailing single-dimensional\naggregation, the entangled network excels in the accuracy of 3D scene geometry\nand appearance reconstruction.
Our code is publicly available at\nhttps://github.com/tatakai1/EVENeRF.", + "The ability of large language models (LLMs) to process visual inputs has\ngiven rise to general-purpose vision systems, unifying various vision-language\n(VL) tasks by instruction tuning. However, due to the enormous diversity in\ninput-output formats in the vision domain, existing general-purpose models fail\nto successfully integrate segmentation and multi-image inputs with coarse-level\ntasks into a single framework. In this work, we introduce VistaLLM, a powerful\nvisual system that addresses coarse- and fine-grained VL tasks over single and\nmultiple input images using a unified framework. VistaLLM utilizes an\ninstruction-guided image tokenizer that filters global embeddings using task\ndescriptions to extract compressed and refined features from numerous images.\nMoreover, VistaLLM employs a gradient-aware adaptive sampling technique to\nrepresent binary segmentation masks as sequences, significantly improving over\npreviously used uniform sampling. To bolster the desired capability of\nVistaLLM, we curate CoinIt, a comprehensive coarse-to-fine instruction tuning\ndataset with 6.8M samples.", + "To bolster the desired capability of\nVistaLLM, we curate CoinIt, a comprehensive coarse-to-fine instruction tuning\ndataset with 6.8M samples. We also address the lack of multi-image grounding\ndatasets by introducing a novel task, AttCoSeg (Attribute-level\nCo-Segmentation), which boosts the model's reasoning and grounding capability\nover multiple input images. Extensive experiments on a wide range of V- and VL\ntasks demonstrate the effectiveness of VistaLLM by achieving consistent\nstate-of-the-art performance over strong baselines across all downstream tasks.\nOur project page can be found at https://shramanpramanick.github.io/VistaLLM/.", + "Foot contact is an important cue for human motion capture, understanding, and\ngeneration. Existing datasets tend to annotate dense foot contact using visual\nmatching with thresholding or incorporating pressure signals. However, these\napproaches either suffer from low accuracy or are only designed for small-range\nand slow motion. There is still a lack of a vision-pressure multimodal dataset\nwith large-range and fast human motion, as well as accurate and dense\nfoot-contact annotation. To fill this gap, we propose a Multimodal MoCap\nDataset with Vision and Pressure sensors, named MMVP. MMVP provides accurate\nand dense plantar pressure signals synchronized with RGBD observations, which\nis especially useful for both plausible shape estimation, robust pose fitting\nwithout foot drifting, and accurate global translation tracking. To validate\nthe dataset, we propose an RGBD-P SMPL fitting method and also a\nmonocular-video-based baseline framework, VP-MoCap, for human motion capture.\nExperiments demonstrate that our RGBD-P SMPL Fitting results significantly\noutperform pure visual motion capture. Moreover, VP-MoCap outperforms SOTA\nmethods in foot-contact and global translation estimation accuracy.", + "Experiments demonstrate that our RGBD-P SMPL Fitting results significantly\noutperform pure visual motion capture. Moreover, VP-MoCap outperforms SOTA\nmethods in foot-contact and global translation estimation accuracy. We believe\nthe configuration of the dataset and the baseline frameworks will stimulate the\nresearch in this direction and also provide a good reference for MoCap\napplications in various domains. 
Project page:\nhttps://metaverse-ai-lab-thu.github.io/MMVP-Dataset/.", + "Out-of-distribution (OOD) detection has attracted a large amount of attention\nfrom the machine learning research community in recent years due to its\nimportance in deployed systems. Most of the previous studies focused on the\ndetection of OOD samples in the multi-class classification task. However, OOD\ndetection in the multi-label classification task, a more common real-world use\ncase, remains an underexplored domain. In this research, we propose YolOOD - a\nmethod that utilizes concepts from the object detection domain to perform OOD\ndetection in the multi-label classification task. Object detection models have\nan inherent ability to distinguish between objects of interest\n(in-distribution) and irrelevant objects (e.g., OOD objects) in images that\ncontain multiple objects belonging to different class categories. These\nabilities allow us to convert a regular object detection model into an image\nclassifier with inherent OOD detection capabilities with just minor changes. We\ncompare our approach to state-of-the-art OOD detection methods and demonstrate\nYolOOD's ability to outperform these methods on a comprehensive suite of\nin-distribution and OOD benchmark datasets.", + "Accuracy and computational efficiency are the most important metrics for a\nVisual Inertial Navigation System (VINS). Existing VINS algorithms offer\neither high accuracy or low computational complexity, but struggle to provide\nhigh-precision localization on resource-constrained devices. To this end,\nwe propose a novel filter-based VINS framework named SchurVINS, which guarantees\nboth high accuracy, by building a complete residual model, and low\ncomputational complexity, via the Schur complement. Technically, we first formulate\nthe full residual model, where the gradient, Hessian and observation covariance are\nexplicitly modeled. The Schur complement is then employed to decompose the full\nmodel into an ego-motion residual model and a landmark residual model. Finally, an\nExtended Kalman Filter (EKF) update is implemented in these two models with\nhigh efficiency. Experiments on EuRoC and TUM-VI datasets show that our method\nnotably outperforms state-of-the-art (SOTA) methods in both accuracy and\ncomputational complexity. The experimental code of SchurVINS is available at\nhttps://github.com/bytedance/SchurVINS.", + "Domain Generalized Semantic Segmentation (DGSS) deals with training a model\non a labeled source domain with the aim of generalizing to unseen domains\nduring inference. Existing DGSS methods typically effectuate robust features by\nmeans of Domain Randomization (DR). Such an approach is often limited as it can\nonly account for style diversification and not content. In this work, we take\nan orthogonal approach to DGSS and propose to use an assembly of CoLlaborative\nFOUndation models for Domain Generalized Semantic Segmentation (CLOUDS). In\ndetail, CLOUDS is a framework that integrates FMs of various kinds: (i) a CLIP\nbackbone for its robust feature representation, (ii) generative models to\ndiversify the content, thereby covering various modes of the possible target\ndistribution, and (iii) Segment Anything Model (SAM) for iteratively refining\nthe predictions of the segmentation model.
Extensive experiments show that our\nCLOUDS excels in adapting from synthetic to real DGSS benchmarks and under\nvarying weather conditions, notably outperforming prior methods by 5.6% and\n6.7% in averaged mIoU, respectively.", + "Extensive experiments show that our\nCLOUDS excels in adapting from synthetic to real DGSS benchmarks and under\nvarying weather conditions, notably outperforming prior methods by 5.6% and\n6.7% in averaged mIoU, respectively. The code is available at:\nhttps://github.com/yasserben/CLOUDS", + "This paper addresses the problem of generating lifelike holistic co-speech\nmotions for 3D avatars, focusing on two key aspects: variability and\ncoordination. Variability allows the avatar to exhibit a wide range of motions\neven with similar speech content, while coordination ensures a harmonious\nalignment among facial expressions, hand gestures, and body poses. We aim to\nachieve both with ProbTalk, a unified probabilistic framework designed to\njointly model facial, hand, and body movements in speech. ProbTalk builds on\nthe variational autoencoder (VAE) architecture and incorporates three core\ndesigns. First, we introduce product quantization (PQ) to the VAE, which\nenriches the representation of complex holistic motion. Second, we devise a\nnovel non-autoregressive model that embeds 2D positional encoding into the\nproduct-quantized representation, thereby preserving essential structure\ninformation of the PQ codes. Last, we employ a secondary stage to refine the\npreliminary prediction, further sharpening the high-frequency details.", + "Last, we employ a secondary stage to refine the\npreliminary prediction, further sharpening the high-frequency details. Coupling\nthese three designs enables ProbTalk to generate natural and diverse holistic\nco-speech motions, outperforming several state-of-the-art methods in\nqualitative and quantitative evaluations, particularly in terms of realism. Our\ncode and model will be released for research purposes at\nhttps://feifeifeiliu.github.io/probtalk/.", + "Semi-supervised semantic segmentation (SSSS) has been proposed to alleviate\nthe burden of time-consuming pixel-level manual labeling by leveraging\nlimited labeled data along with larger amounts of unlabeled data. Current\nstate-of-the-art methods train on labeled data with ground truths and on\nunlabeled data with pseudo labels. However, the two training flows are\nseparate, which allows labeled data to dominate the training process, resulting\nin low-quality pseudo labels and, consequently, sub-optimal results. To\nalleviate this issue, we present AllSpark, which revives the labeled features\nfrom unlabeled ones with a channel-wise cross-attention mechanism. We further\nintroduce a Semantic Memory along with a Channel Semantic Grouping strategy to\nensure that unlabeled features adequately represent labeled features. AllSpark\nsheds new light on architecture-level designs for SSSS rather than\nframework-level ones, which avoids increasingly complicated training pipeline\ndesigns. It can also be regarded as a flexible bottleneck module that can be\nseamlessly integrated into a general transformer-based segmentation model.", + "AllSpark\nsheds new light on architecture-level designs for SSSS rather than\nframework-level ones, which avoids increasingly complicated training pipeline\ndesigns. It can also be regarded as a flexible bottleneck module that can be\nseamlessly integrated into a general transformer-based segmentation model.
The\nproposed AllSpark outperforms existing methods across all evaluation protocols\non Pascal, Cityscapes and COCO benchmarks without bells-and-whistles. Code and\nmodel weights are available at: https://github.com/xmed-lab/AllSpark.", + "Advances in image diffusion models have recently led to notable improvements\nin the generation of high-quality images. In combination with Neural Radiance\nFields (NeRFs), they enabled new opportunities in 3D generation. However, most\ngenerative 3D approaches are object-centric and applying them to editing\nexisting photorealistic scenes is not trivial. We propose SIGNeRF, a novel\napproach for fast and controllable NeRF scene editing and scene-integrated\nobject generation. A new generative update strategy ensures 3D consistency\nacross the edited images, without requiring iterative optimization. We find\nthat depth-conditioned diffusion models inherently possess the capability to\ngenerate 3D consistent views by requesting a grid of images instead of single\nviews. Based on these insights, we introduce a multi-view reference sheet of\nmodified images. Our method updates an image collection consistently based on\nthe reference sheet and refines the original NeRF with the newly generated\nimage set in one go. By exploiting the depth conditioning mechanism of the\nimage diffusion model, we gain fine control over the spatial location of the\nedit and enforce shape guidance by a selected region or an external mesh.", + "Data privacy is of great concern in cloud machine-learning service platforms,\nwhen sensitive data are exposed to service providers. While private computing\nenvironments (e.g., secure enclaves), and cryptographic approaches (e.g.,\nhomomorphic encryption) provide strong privacy protection, their computing\nperformance still falls short compared to cloud GPUs. To achieve privacy\nprotection with high computing performance, we propose Delta, a new private\ntraining and inference framework, with comparable model performance as\nnon-private centralized training. Delta features two asymmetric data flows: the\nmain information-sensitive flow and the residual flow. The main part flows into\na small model while the residuals are offloaded to a large model. Specifically,\nDelta embeds the information-sensitive representations into a low-dimensional\nspace while pushing the information-insensitive part into high-dimension\nresiduals. To ensure privacy protection, the low-dimensional\ninformation-sensitive part is secured and fed to a small model in a private\nenvironment. On the other hand, the residual part is sent to fast cloud GPUs,\nand processed by a large model.", + "To ensure privacy protection, the low-dimensional\ninformation-sensitive part is secured and fed to a small model in a private\nenvironment. On the other hand, the residual part is sent to fast cloud GPUs,\nand processed by a large model. To further enhance privacy and reduce the\ncommunication cost, Delta applies a random binary quantization technique along\nwith a DP-based technique to the residuals before sharing them with the public\nplatform. We theoretically show that Delta guarantees differential privacy in\nthe public environment and greatly reduces the complexity in the private\nenvironment. 
We conduct empirical analyses on CIFAR-10, CIFAR-100 and ImageNet\ndatasets and ResNet-18 and ResNet-34, showing that Delta achieves strong\nprivacy protection, fast training, and inference without significantly\ncompromising the model utility.", + "We introduce the new task of generating Illustrated Instructions, i.e.,\nvisual instructions customized to a user's needs. We identify desiderata unique\nto this task, and formalize it through a suite of automatic and human\nevaluation metrics, designed to measure the validity, consistency, and efficacy\nof the generations. We combine the power of large language models (LLMs)\ntogether with strong text-to-image generation diffusion models to propose a\nsimple approach called StackedDiffusion, which generates such illustrated\ninstructions given text as input. The resulting model strongly outperforms\nbaseline approaches and state-of-the-art multimodal LLMs; and in 30% of cases,\nusers even prefer it to human-generated articles. Most notably, it enables\nvarious new and exciting applications far beyond what static articles on the\nweb can provide, such as personalized instructions complete with intermediate\nsteps and pictures in response to a user's individual situation.", + "Reconstructing 3D hand mesh robustly from a single image is very challenging,\ndue to the lack of diversity in existing real-world datasets. While data\nsynthesis helps relieve the issue, the syn-to-real gap still hinders its usage.\nIn this work, we present HandBooster, a new approach to uplift the data\ndiversity and boost the 3D hand-mesh reconstruction performance by training a\nconditional generative space on hand-object interactions and purposely sampling\nthe space to synthesize effective data samples. First, we construct versatile\ncontent-aware conditions to guide a diffusion model to produce realistic images\nwith diverse hand appearances, poses, views, and backgrounds; favorably,\naccurate 3D annotations are obtained for free. Then, we design a novel\ncondition creator based on our similarity-aware distribution sampling\nstrategies to deliberately find novel and realistic interaction poses that are\ndistinctive from the training set. Equipped with our method, several baselines\ncan be significantly improved beyond the SOTA on the HO3D and DexYCB\nbenchmarks. Our code will be released on\nhttps://github.com/hxwork/HandBooster_Pytorch.", + "Matching cost aggregation plays a fundamental role in learning-based\nmulti-view stereo networks. However, directly aggregating adjacent costs can\nlead to suboptimal results due to local geometric inconsistency. Related\nmethods either seek selective aggregation or improve aggregated depth in the 2D\nspace, both are unable to handle geometric inconsistency in the cost volume\neffectively. In this paper, we propose GoMVS to aggregate geometrically\nconsistent costs, yielding better utilization of adjacent geometries. More\nspecifically, we correspond and propagate adjacent costs to the reference pixel\nby leveraging the local geometric smoothness in conjunction with surface\nnormals. We achieve this by the geometric consistent propagation (GCP) module.\nIt computes the correspondence from the adjacent depth hypothesis space to the\nreference depth space using surface normals, then uses the correspondence to\npropagate adjacent costs to the reference geometry, followed by a convolution\nfor aggregation. Our method achieves new state-of-the-art performance on DTU,\nTanks & Temple, and ETH3D datasets. 
Notably, our method ranks 1st on the Tanks\n& Temple Advanced benchmark.", + "Super-resolution (SR) is an ill-posed inverse problem, where the size of the\nset of feasible solutions that are consistent with a given low-resolution image\nis very large. Many algorithms have been proposed to find a \"good\" solution\namong the feasible solutions that strike a balance between fidelity and\nperceptual quality. Unfortunately, all known methods generate artifacts and\nhallucinations while trying to reconstruct high-frequency (HF) image details. A\nfundamental question is: Can a model learn to distinguish genuine image details\nfrom artifacts? Although some recent works focused on the differentiation of\ndetails and artifacts, this is a very challenging problem and a satisfactory\nsolution is yet to be found. This paper shows that the characterization of\ngenuine HF details versus artifacts can be better learned by training GAN-based\nSR models using wavelet-domain loss functions compared to RGB-domain or\nFourier-space losses. Although wavelet-domain losses have been used in the\nliterature before, they have not been used in the context of the SR task.", + "Although wavelet-domain losses have been used in the\nliterature before, they have not been used in the context of the SR task. More\nspecifically, we train the discriminator only on the HF wavelet sub-bands\ninstead of on RGB images and the generator is trained by a fidelity loss over\nwavelet subbands to make it sensitive to the scale and orientation of\nstructures. Extensive experimental results demonstrate that our model achieves\nbetter perception-distortion trade-off according to multiple objective measures\nand visual evaluations.", + "In film gender studies, the concept of 'male gaze' refers to the way the\ncharacters are portrayed on-screen as objects of desire rather than subjects.\nIn this article, we introduce a novel video-interpretation task, to detect\ncharacter objectification in films. The purpose is to reveal and quantify the\nusage of complex temporal patterns operated in cinema to produce the cognitive\nperception of objectification. We introduce the ObyGaze12 dataset, made of 1914\nmovie clips densely annotated by experts for objectification concepts\nidentified in film studies and psychology. We evaluate recent vision models,\nshow the feasibility of the task and where the challenges remain with concept\nbottleneck models. Our new dataset and code are made available to the\ncommunity.", + "Creating personalized hand avatars is important to offer a realistic\nexperience to users on AR / VR platforms. While most prior studies focused on\nreconstructing 3D hand shapes, some recent work has tackled the reconstruction\nof hand textures on top of shapes. However, these methods are often limited to\ncapturing pixels on the visible side of a hand, requiring diverse views of the\nhand in a video or multiple images as input. 
In this paper, we propose a novel\nmethod, BiTT (Bi-directional Texture reconstruction of Two hands), which is the\nfirst end-to-end trainable method for relightable, pose-free texture\nreconstruction of two interacting hands from only a single RGB image, via\nthree novel components: 1) bi-directional (left $\\leftrightarrow$ right)\ntexture reconstruction using the texture symmetry of left / right hands, 2)\nutilizing a texture parametric model for hand texture recovery, and 3) the\noverall coarse-to-fine stage pipeline for reconstructing personalized texture\nof two interacting hands.", + "BiTT first estimates the scene light condition and\nalbedo image from an input image, then reconstructs the texture of both hands\nthrough the texture parametric model and bi-directional texture reconstructor.\nIn experiments using InterHand2.6M and RGB2Hands datasets, our method\nsignificantly outperforms state-of-the-art hand texture reconstruction methods\nquantitatively and qualitatively. The code is available at\nhttps://github.com/yunminjin2/BiTT", + "Existing open-vocabulary object detectors typically require a predefined set\nof categories from users, significantly confining their application scenarios.\nIn this paper, we introduce DetCLIPv3, a high-performing detector that excels\nnot only at open-vocabulary object detection, but also at generating\nhierarchical labels for detected objects. DetCLIPv3 is characterized by three\ncore designs: 1. Versatile model architecture: we derive a robust open-set\ndetection framework which is further empowered with generation ability via the\nintegration of a caption head. 2. High information density data: we develop an\nauto-annotation pipeline leveraging a visual large language model to refine\ncaptions for large-scale image-text pairs, providing rich, multi-granular\nobject labels to enhance the training. 3. Efficient training strategy: we\nemploy a pre-training stage with low-resolution inputs that enables the object\ncaptioner to efficiently learn a broad spectrum of visual concepts from\nextensive image-text paired data. This is followed by a fine-tuning stage that\nleverages a small number of high-resolution samples to further enhance\ndetection performance.", + "This is followed by a fine-tuning stage that\nleverages a small number of high-resolution samples to further enhance\ndetection performance. With these effective designs, DetCLIPv3 demonstrates\nsuperior open-vocabulary detection performance, \\eg, our Swin-T backbone model\nachieves a notable 47.0 zero-shot fixed AP on the LVIS minival benchmark,\noutperforming GLIPv2, GroundingDINO, and DetCLIPv2 by 18.0/19.6/6.6 AP,\nrespectively. DetCLIPv3 also achieves a state-of-the-art 19.7 AP on the dense\ncaptioning task on the VG dataset, showcasing its strong generative capability.", + "Learning-based underwater image enhancement (UIE) methods have made great\nprogress. However, the lack of large-scale and high-quality paired training\nsamples has become the main bottleneck hindering the development of UIE. The\ninter-frame information in underwater videos can accelerate or optimize the UIE\nprocess. Thus, we constructed the first large-scale high-resolution underwater\nvideo enhancement benchmark (UVEB) to promote the development of underwater\nvision. It contains 1,308 pairs of video sequences and more than 453,000\nhigh-resolution frame pairs, 38\\% of which are Ultra-High-Definition (UHD) 4K.
UVEB\ncomes from multiple countries, containing various scenes and video degradation\ntypes to adapt to diverse and complex underwater environments. We also propose\nthe first supervised underwater video enhancement method, UVE-Net. UVE-Net\nconverts the current frame information into convolutional kernels and passes\nthem to adjacent frames for efficient inter-frame information exchange. By\nfully utilizing the redundant degraded information of underwater videos,\nUVE-Net completes video enhancement better. Experiments show the effective\nnetwork design and good performance of UVE-Net.", + "Integration of Large Language Models (LLMs) into visual domain tasks,\nresulting in visual-LLMs (V-LLMs), has enabled exceptional performance in\nvision-language tasks, particularly for visual question answering (VQA).\nHowever, existing V-LLMs (e.g. BLIP-2, LLaVA) demonstrate weak spatial\nreasoning and localization awareness. Despite generating highly descriptive and\nelaborate textual answers, these models fail at simple tasks like\ndistinguishing a left vs right location. In this work, we explore how\nimage-space coordinate based instruction fine-tuning objectives could inject\nspatial awareness into V-LLMs. We discover optimal coordinate representations,\ndata-efficient instruction fine-tuning objectives, and pseudo-data generation\nstrategies that lead to improved spatial awareness in V-LLMs. Additionally, our\nresulting model improves VQA across image and video domains, reduces undesired\nhallucination, and generates better contextual object descriptions. Experiments\nacross 5 vision-language tasks involving 14 different datasets establish the\nclear performance improvements achieved by our proposed framework.", + "Recent 3D face reconstruction methods have made remarkable advancements, yet\nthere remain huge challenges in monocular high-quality facial reflectance\nreconstruction. Existing methods rely on a large amount of light-stage captured\ndata to learn facial reflectance models. However, the lack of subject diversity\nposes challenges in achieving good generalization and widespread applicability.\nIn this paper, we learn the reflectance prior in image space rather than UV\nspace and present a framework named ID2Reflectance. Our framework can directly\nestimate the reflectance maps of a single image while using limited reflectance\ndata for training. Our key insight is that reflectance data shares facial\nstructures with RGB faces, which enables obtaining expressive facial prior from\ninexpensive RGB data thus reducing the dependency on reflectance data. We first\nlearn a high-quality prior for facial reflectance. Specifically, we pretrain\nmulti-domain facial feature codebooks and design a codebook fusion method to\nalign the reflectance and RGB domains. Then, we propose an identity-conditioned\nswapping module that injects facial identity from the target image into the\npre-trained autoencoder to modify the identity of the source reflectance image.", + "Then, we propose an identity-conditioned\nswapping module that injects facial identity from the target image into the\npre-trained autoencoder to modify the identity of the source reflectance image.\nFinally, we stitch multi-view swapped reflectance images to obtain renderable\nassets. Extensive experiments demonstrate that our method exhibits excellent\ngeneralization capability and achieves state-of-the-art facial reflectance\nreconstruction results for in-the-wild faces. 
Our project page is\nhttps://xingyuren.github.io/id2reflectance/.", + "Most neural compression models are trained on large datasets of images or\nvideos in order to generalize to unseen data. Such generalization typically\nrequires large and expressive architectures with a high decoding complexity.\nHere we introduce C3, a neural compression method with strong rate-distortion\n(RD) performance that instead overfits a small model to each image or video\nseparately. The resulting decoding complexity of C3 can be an order of\nmagnitude lower than neural baselines with similar RD performance. C3 builds on\nCOOL-CHIC (Ladune et al.) and makes several simple and effective improvements\nfor images. We further develop new methodology to apply C3 to videos. On the\nCLIC2020 image benchmark, we match the RD performance of VTM, the reference\nimplementation of the H.266 codec, with less than 3k MACs/pixel for decoding.\nOn the UVG video benchmark, we match the RD performance of the Video\nCompression Transformer (Mentzer et al.), a well-established neural video\ncodec, with less than 5k MACs/pixel for decoding.", + "We propose an efficient abnormal event detection model based on a lightweight\nmasked auto-encoder (AE) applied at the video frame level. The novelty of the\nproposed model is threefold. First, we introduce an approach to weight tokens\nbased on motion gradients, thus shifting the focus from the static background\nscene to the foreground objects. Second, we integrate a teacher decoder and a\nstudent decoder into our architecture, leveraging the discrepancy between the\noutputs given by the two decoders to improve anomaly detection. Third, we\ngenerate synthetic abnormal events to augment the training videos, and task the\nmasked AE model to jointly reconstruct the original frames (without anomalies)\nand the corresponding pixel-level anomaly maps. Our design leads to an\nefficient and effective model, as demonstrated by the extensive experiments\ncarried out on four benchmarks: Avenue, ShanghaiTech, UBnormal and UCSD Ped2.\nThe empirical results show that our model achieves an excellent trade-off\nbetween speed and accuracy, obtaining competitive AUC scores, while processing\n1655 FPS. Hence, our model is between 8 and 70 times faster than competing\nmethods. We also conduct an ablation study to justify our design.", + "Hence, our model is between 8 and 70 times faster than competing\nmethods. We also conduct an ablation study to justify our design. Our code is\nfreely available at: https://github.com/ristea/aed-mae.", + "The recent advance in vision-language models is largely attributed to the\nabundance of image-text data. We aim to replicate this success for\nvideo-language models, but there simply is not enough human-curated video-text\ndata available. We thus resort to fine-tuning a video-language model from a\nstrong image-language baseline with synthesized instructional data. The\nresulting video model by video-instruction-tuning (VIIT) is then used to\nauto-label millions of videos to generate high-quality captions. We show the\nadapted video-language model performs well on a wide range of video-language\nbenchmarks. For instance, it surpasses the best prior result on open-ended\nNExT-QA by 2.8%. Besides, our model generates detailed descriptions for\npreviously unseen videos, which provide better textual supervision than\nexisting methods. 
Experiments show that a video-language dual-encoder model\ncontrastively trained on these auto-generated captions is 3.8% better than the\nstrongest baseline that also leverages vision-language models. Our best model\noutperforms state-of-the-art methods on MSR-VTT zero-shot text-to-video\nretrieval by 6%.", + "Our best model\noutperforms state-of-the-art methods on MSR-VTT zero-shot text-to-video\nretrieval by 6%. As a side product, we generate the largest video caption\ndataset to date.", + "We present SimXR, a method for controlling a simulated avatar from\ninformation (headset pose and cameras) obtained from AR / VR headsets. Due to\nthe challenging viewpoint of head-mounted cameras, the human body is often\nclipped out of view, making traditional image-based egocentric pose estimation\nchallenging. On the other hand, headset poses provide valuable information\nabout overall body motion, but lack fine-grained details about the hands and\nfeet. To synergize headset poses with cameras, we control a humanoid to track\nheadset movement while analyzing input images to decide body movement. When\nbody parts are seen, the movements of hands and feet will be guided by the\nimages; when unseen, the laws of physics guide the controller to generate\nplausible motion. We design an end-to-end method that does not rely on any\nintermediate representations and learns to directly map from images and headset\nposes to humanoid control signals. To train our method, we also propose a\nlarge-scale synthetic dataset created using camera configurations compatible\nwith a commercially available VR headset (Quest 2) and show promising results\non real-world captures.", + "To train our method, we also propose a\nlarge-scale synthetic dataset created using camera configurations compatible\nwith a commercially available VR headset (Quest 2) and show promising results\non real-world captures. To demonstrate the applicability of our framework, we\nalso test it on an AR headset with a forward-facing camera.", + "In this paper, we introduce the first large-scale video prediction model in\nthe autonomous driving discipline. To eliminate the restriction of high-cost\ndata collection and empower the generalization ability of our model, we acquire\nmassive data from the web and pair it with diverse and high-quality text\ndescriptions. The resultant dataset accumulates over 2000 hours of driving\nvideos, spanning areas all over the world with diverse weather conditions and\ntraffic scenarios. Inheriting the merits from recent latent diffusion models,\nour model, dubbed GenAD, handles the challenging dynamics in driving scenes\nwith novel temporal reasoning blocks. We showcase that it can generalize to\nvarious unseen driving datasets in a zero-shot manner, surpassing general or\ndriving-specific video prediction counterparts. Furthermore, GenAD can be\nadapted into an action-conditioned prediction model or a motion planner,\nholding great potential for real-world driving applications.", + "Zero-Shot Temporal Action Localization (ZS-TAL) seeks to identify and locate\nactions in untrimmed videos unseen during training. Existing ZS-TAL methods\ninvolve fine-tuning a model on a large amount of annotated training data. 
While\neffective, training-based ZS-TAL approaches assume the availability of labeled\ndata for supervised learning, which can be impractical in some applications.\nFurthermore, the training process naturally induces a domain bias into the\nlearned model, which may adversely affect the model's generalization ability to\narbitrary videos. These considerations prompt us to approach the ZS-TAL problem\nfrom a radically novel perspective, relaxing the requirement for training data.\nTo this aim, we introduce a novel method that performs Test-Time adaptation for\nTemporal Action Localization (T3AL). In a nutshell, T3AL adapts a pre-trained\nVision and Language Model (VLM). T3AL operates in three steps. First, a\nvideo-level pseudo-label of the action category is computed by aggregating\ninformation from the entire video. Then, action localization is performed\nadopting a novel procedure inspired by self-supervised learning.", + "T3AL operates in three steps. First, a\nvideo-level pseudo-label of the action category is computed by aggregating\ninformation from the entire video. Then, action localization is performed\nadopting a novel procedure inspired by self-supervised learning. Finally,\nframe-level textual descriptions extracted with a state-of-the-art captioning\nmodel are employed for refining the action region proposals. We validate the\neffectiveness of T3AL by conducting experiments on the THUMOS14 and the\nActivityNet-v1.3 datasets. Our results demonstrate that T3AL significantly\noutperforms zero-shot baselines based on state-of-the-art VLMs, confirming the\nbenefit of a test-time adaptation approach.", + "Open-vocabulary 3D instance segmentation is cutting-edge for its ability to\nsegment 3D instances without predefined categories. However, progress in 3D\nlags behind its 2D counterpart due to limited annotated 3D data. To address\nthis, recent works first generate 2D open-vocabulary masks through 2D models\nand then merge them into 3D instances based on metrics calculated between two\nneighboring frames. In contrast to these local metrics, we propose a novel\nmetric, view consensus rate, to enhance the utilization of multi-view\nobservations. The key insight is that two 2D masks should be deemed part of the\nsame 3D instance if a significant number of other 2D masks from different views\ncontain both these two masks. Using this metric as edge weight, we construct a\nglobal mask graph where each mask is a node. Through iterative clustering of\nmasks showing high view consensus, we generate a series of clusters, each\nrepresenting a distinct 3D instance. Notably, our model is training-free.", + "Using this metric as edge weight, we construct a\nglobal mask graph where each mask is a node. Through iterative clustering of\nmasks showing high view consensus, we generate a series of clusters, each\nrepresenting a distinct 3D instance. Notably, our model is training-free.\nThrough extensive experiments on publicly available datasets, including\nScanNet++, ScanNet200 and MatterPort3D, we demonstrate that our method achieves\nstate-of-the-art performance in open-vocabulary 3D instance segmentation. Our\nproject page is at https://pku-epic.github.io/MaskClustering.", + "Conditional human motion generation is an important topic with many\napplications in virtual reality, gaming, and robotics. While prior works have\nfocused on generating motion guided by text, music, or scenes, these typically\nresult in isolated motions confined to short durations. 
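To make the view-consensus idea from the open-vocabulary 3D instance segmentation abstract above concrete, here is a rough sketch: each 2D mask is represented by the set of 3D point IDs it observes, two masks get a high consensus rate when many masks from other views contain both of them, and high-consensus pairs are merged. The containment threshold, the normalization of the rate, and the union-find merging below are simplifications of this note, not the paper's implementation.

```python
from itertools import combinations

def contains(big, small, thresh=0.6):
    """A mask 'contains' another if it covers a sufficient fraction of its 3D points."""
    return len(big & small) >= thresh * max(len(small), 1)

def view_consensus_rate(mi, mj, all_masks):
    """Among other masks that see mi or mj, the fraction that contain both."""
    others = [m for m in all_masks if m is not mi and m is not mj]
    related = [m for m in others if contains(m, mi) or contains(m, mj)]
    if not related:
        return 0.0
    both = sum(1 for m in related if contains(m, mi) and contains(m, mj))
    return both / len(related)

def cluster_masks(masks, edge_thresh=0.5):
    """Union-find clustering of mask pairs whose view consensus is high."""
    parent = list(range(len(masks)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in combinations(range(len(masks)), 2):
        if view_consensus_rate(masks[i], masks[j], masks) >= edge_thresh:
            parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(masks)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Toy example: masks are sets of 3D point IDs lifted from different views;
# the first three masks observe one object, the last three another.
masks = [{1, 2, 3}, {1, 2, 3, 4}, {2, 3, 4}, {10, 11}, {10, 11, 12}, {9, 10, 11, 12}]
print(cluster_masks(masks))  # [[0, 1, 2], [3, 4, 5]]
```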
Instead, we address the\ngeneration of long, continuous sequences guided by a series of varying textual\ndescriptions. In this context, we introduce FlowMDM, the first diffusion-based\nmodel that generates seamless Human Motion Compositions (HMC) without any\npostprocessing or redundant denoising steps. For this, we introduce the Blended\nPositional Encodings, a technique that leverages both absolute and relative\npositional encodings in the denoising chain. More specifically, global motion\ncoherence is recovered at the absolute stage, whereas smooth and realistic\ntransitions are built at the relative stage. As a result, we achieve\nstate-of-the-art results in terms of accuracy, realism, and smoothness on the\nBabel and HumanML3D datasets.", + "As a result, we achieve\nstate-of-the-art results in terms of accuracy, realism, and smoothness on the\nBabel and HumanML3D datasets. FlowMDM excels when trained with only a single\ndescription per motion sequence thanks to its Pose-Centric Cross-ATtention,\nwhich makes it robust against varying text descriptions at inference time.\nFinally, to address the limitations of existing HMC metrics, we propose two new\nmetrics: the Peak Jerk and the Area Under the Jerk, to detect abrupt\ntransitions.", + "Adversarial robustness of the neural network is a significant concern when it\nis applied to security-critical domains. In this situation, adversarial\ndistillation is a promising option which aims to distill the robustness of the\nteacher network to improve the robustness of a small student network. Previous\nworks pretrain the teacher network to make it robust against the adversarial\nexamples aimed at itself. However, the adversarial examples are dependent on\nthe parameters of the target network. The fixed teacher network inevitably\ndegrades its robustness against the unseen transferred adversarial examples\nwhich target the parameters of the student network in the adversarial\ndistillation process. We propose PeerAiD to make a peer network learn the\nadversarial examples of the student network instead of adversarial examples\naimed at itself. PeerAiD is an adversarial distillation that trains the peer\nnetwork and the student network simultaneously in order to specialize the peer\nnetwork for defending the student network. We observe that such peer networks\nsurpass the robustness of the pretrained robust teacher model against\nadversarial examples aimed at the student network.", + "We observe that such peer networks\nsurpass the robustness of the pretrained robust teacher model against\nadversarial examples aimed at the student network. With this peer network and\nadversarial distillation, PeerAiD achieves significantly higher robustness of\nthe student network with AutoAttack (AA) accuracy by up to 1.66%p and improves\nthe natural accuracy of the student network by up to 4.72%p with ResNet-18 on\nTinyImageNet dataset. Code is available at\nhttps://github.com/jaewonalive/PeerAiD.", + "3D correspondence, i.e., a pair of 3D points, is a fundamental concept in\ncomputer vision. A set of 3D correspondences, when equipped with compatibility\nedges, forms a correspondence graph. This graph is a critical component in\nseveral state-of-the-art 3D point cloud registration approaches, e.g., the one\nbased on maximal cliques (MAC). However, its properties have not been well\nunderstood. So we present the first study that introduces graph signal\nprocessing into the domain of correspondence graph. 
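The FlowMDM abstract above proposes Peak Jerk and Area Under the Jerk as metrics for detecting abrupt transitions in generated motion. A minimal finite-difference version of such metrics is sketched below; treating jerk as the third discrete derivative of joint positions, averaging over joints, and approximating the area by a Riemann sum are assumptions here, and the paper's exact definitions may differ.

```python
import numpy as np

def jerk_metrics(motion, fps=30.0):
    """motion: (T, J, 3) array of joint positions over T frames.

    Returns (peak_jerk, area_under_jerk) from finite differences.
    """
    dt = 1.0 / fps
    jerk = np.diff(motion, n=3, axis=0) / dt**3           # (T-3, J, 3) third derivative
    jerk_mag = np.linalg.norm(jerk, axis=-1)              # (T-3, J) per-joint magnitude
    per_frame = jerk_mag.mean(axis=1)                     # average over joints
    peak_jerk = per_frame.max()                           # spikes at abrupt transitions
    area_under_jerk = per_frame.sum() * dt                # accumulated jerk over time
    return peak_jerk, area_under_jerk

# Toy check: a smooth sine trajectory vs. one with a hard jump mid-sequence.
t = np.linspace(0, 2 * np.pi, 120)
smooth = np.stack([np.sin(t), np.cos(t), t], axis=-1)[:, None, :]   # (120, 1, 3)
jumpy = smooth.copy()
jumpy[60:] += 0.5                                                   # abrupt transition
print(jerk_metrics(smooth)[0] < jerk_metrics(jumpy)[0])             # True
```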
We exploit the generalized\ndegree signal on the correspondence graph and pursue sampling strategies that\npreserve high-frequency components of this signal. To address time-consuming\nsingular value decomposition in deterministic sampling, we resort to a\nstochastic approximate sampling strategy. As such, the core of our method is\nthe stochastic spectral sampling of the correspondence graph. As an application, we\nbuild a complete 3D registration algorithm termed FastMAC, which reaches\nreal-time speed with little to no performance drop. Through\nextensive experiments, we validate that FastMAC works for both indoor and\noutdoor benchmarks.", + "As an application, we\nbuild a complete 3D registration algorithm termed FastMAC, which reaches\nreal-time speed with little to no performance drop. Through\nextensive experiments, we validate that FastMAC works for both indoor and\noutdoor benchmarks. For example, FastMAC can accelerate MAC by 80 times while\nmaintaining a high registration success rate on KITTI. Codes are publicly\navailable at https://github.com/Forrest-110/FastMAC.", + "Federated learning is a promising framework to train neural networks with\nwidely distributed data. However, performance degrades heavily with\nheterogeneously distributed data. Recent work has shown this is due to the\nfinal layer of the network being most prone to local bias, with some finding success\nby freezing the final layer as an orthogonal classifier. We investigate the\ntraining dynamics of the classifier by applying SVD to the weights, motivated by\nthe observation that freezing weights results in constant singular values. We\nfind that there are differences when training in IID and non-IID settings.\nBased on this finding, we introduce two regularization terms for local training\nto continuously emulate IID settings: (1) variance in the dimension-wise\nprobability distribution of the classifier and (2) hyperspherical uniformity of\nrepresentations of the encoder. These regularizations promote local models to\nact as if they were in an IID setting regardless of the local data distribution,\nthus offsetting proneness to bias while being flexible to the data. In\nextensive experiments in both label-shift and feature-shift settings, we verify\nthat our method achieves the highest performance by a large margin, especially in\nhighly non-IID cases, in addition to being scalable to larger models and\ndatasets.", + "Federated Learning (FL) aggregates locally trained models from individual\nclients to construct a global model. While FL enables learning a model with\ndata privacy, it often suffers from significant performance degradation when\nclients have heterogeneous data distributions. This data heterogeneity causes\nthe model to forget the global knowledge acquired from previously sampled\nclients after being trained on local datasets. Although the introduction of\nproximal objectives in local updates helps to preserve global knowledge, it can\nalso hinder local learning by interfering with local objectives. To address\nthis problem, we propose a novel method, Federated Stabilized Orthogonal\nLearning (FedSOL), which adopts an orthogonal learning strategy to balance the\ntwo conflicting objectives. FedSOL is designed to identify gradients of local\nobjectives that are inherently orthogonal to directions affecting the proximal\nobjective. Specifically, FedSOL targets parameter regions where learning on the\nlocal objective is minimally influenced by proximal weight perturbations. 
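The FastMAC abstract above samples correspondences so that high-frequency components of a degree signal on the compatibility graph are preserved. The sketch below is a rough interpretation: compatibility from preserved pairwise distances, the combinatorial Laplacian as a high-pass operator, and sampling proportional to the filtered response are all simplifying assumptions of this note, not the paper's actual pipeline.

```python
import numpy as np

def compatibility_graph(src, dst, sigma=0.1):
    """Soft compatibility between correspondences: rigid motions preserve
    pairwise distances, so |d_src(i,j) - d_dst(i,j)| should be small."""
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
    A = np.exp(-((d_src - d_dst) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)
    return A

def stochastic_high_freq_sampling(A, num_samples, rng=np.random.default_rng(0)):
    """Sample nodes with probability proportional to the high-frequency
    response of the degree signal, approximated here by |L @ d|."""
    d = A.sum(1)                        # degree signal on the correspondence graph
    L = np.diag(d) - A                  # combinatorial Laplacian (high-pass operator)
    h = np.abs(L @ d)                   # high-frequency content of the degree signal
    p = h / h.sum()
    return rng.choice(len(d), size=num_samples, replace=False, p=p)

# Toy usage: 200 putative correspondences (40 outliers), keep 50 of them.
rng = np.random.default_rng(0)
src = rng.random((200, 3))
R = np.linalg.qr(rng.standard_normal((3, 3)))[0]          # random orthogonal transform
dst = src @ R.T + 0.05
dst[:40] = rng.random((40, 3))                            # outlier correspondences
A = compatibility_graph(src, dst)
keep = stochastic_high_freq_sampling(A, num_samples=50)
print(keep.shape)  # (50,)
```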
Our\nexperiments demonstrate that FedSOL consistently achieves state-of-the-art\nperformance across various scenarios.", + "Gaussian splatting has emerged as a powerful 3D representation that harnesses\nthe advantages of both explicit (mesh) and implicit (NeRF) 3D representations.\nIn this paper, we seek to leverage Gaussian splatting to generate realistic\nanimatable avatars from textual descriptions, addressing the limitations (e.g.,\nflexibility and efficiency) imposed by mesh or NeRF-based representations.\nHowever, a naive application of Gaussian splatting cannot generate high-quality\nanimatable avatars and suffers from learning instability; it also cannot\ncapture fine avatar geometries and often leads to degenerate body parts. To\ntackle these problems, we first propose a primitive-based 3D Gaussian\nrepresentation where Gaussians are defined inside pose-driven primitives to\nfacilitate animation. Second, to stabilize and amortize the learning of\nmillions of Gaussians, we propose to use neural implicit fields to predict the\nGaussian attributes (e.g., colors). Finally, to capture fine avatar geometries\nand extract detailed meshes, we propose a novel SDF-based implicit mesh\nlearning approach for 3D Gaussians that regularizes the underlying geometries\nand extracts highly detailed textured meshes.", + "Finally, to capture fine avatar geometries\nand extract detailed meshes, we propose a novel SDF-based implicit mesh\nlearning approach for 3D Gaussians that regularizes the underlying geometries\nand extracts highly detailed textured meshes. Our proposed method, GAvatar,\nenables the large-scale generation of diverse animatable avatars using only\ntext prompts. GAvatar significantly surpasses existing methods in terms of both\nappearance and geometry quality, and achieves extremely fast rendering (100\nfps) at 1K resolution.", + "Understanding how attention varies across individuals has significant\nscientific and societal impacts. However, existing visual scanpath models treat\nattention uniformly, neglecting individual differences. To bridge this gap,\nthis paper focuses on individualized scanpath prediction (ISP), a new attention\nmodeling task that aims to accurately predict how different individuals shift\ntheir attention in diverse visual tasks. It proposes an ISP method featuring\nthree novel technical components: (1) an observer encoder to characterize and\nintegrate an observer's unique attention traits, (2) an observer-centric\nfeature integration approach that holistically combines visual features, task\nguidance, and observer-specific characteristics, and (3) an adaptive fixation\nprioritization mechanism that refines scanpath predictions by dynamically\nprioritizing semantic feature maps based on individual observers' attention\ntraits. These novel components allow scanpath models to effectively address the\nattention variations across different observers. Our method is generally\napplicable to different datasets, model architectures, and visual tasks,\noffering a comprehensive tool for transforming general scanpath models into\nindividualized ones. Comprehensive evaluations using value-based and\nranking-based metrics verify the method's effectiveness and generalizability.", + "Vision-language foundation models have shown remarkable performance in\nvarious zero-shot settings such as image retrieval, classification, or\ncaptioning. But so far, those models seem to fall behind when it comes to\nzero-shot localization of referential expressions and objects in images. 
As a\nresult, they need to be fine-tuned for this task. In this paper, we show that\npretrained vision-language (VL) models allow for zero-shot open-vocabulary\nobject localization without any fine-tuning. To leverage those capabilities, we\npropose a Grounding Everything Module (GEM) that generalizes the idea of\nvalue-value attention introduced by CLIPSurgery to a self-self attention path.\nWe show that the concept of self-self attention corresponds to clustering, thus\nenforcing groups of tokens arising from the same object to be similar while\npreserving the alignment with the language space. To further guide the group\nformation, we propose a set of regularizations that allows the model to finally\ngeneralize across datasets and backbones. We evaluate the proposed GEM\nframework on various benchmark tasks and datasets for semantic segmentation.", + "To further guide the group\nformation, we propose a set of regularizations that allows the model to finally\ngeneralize across datasets and backbones. We evaluate the proposed GEM\nframework on various benchmark tasks and datasets for semantic segmentation. The\nresults show that GEM not only outperforms other training-free open-vocabulary\nlocalization methods, but also achieves state-of-the-art results on the\nrecently proposed OpenImagesV7 large-scale segmentation benchmark.", + "We focus on a very challenging task: imaging dynamic scenes at nighttime.\nMost previous methods rely on the low-light enhancement of a conventional RGB\ncamera. However, they inevitably face a dilemma between the long exposure\ntime required at nighttime and the motion blur of dynamic scenes. Event cameras react to\ndynamic changes with higher temporal resolution (microsecond) and higher\ndynamic range (120dB), offering an alternative solution. In this work, we\npresent a novel nighttime dynamic imaging method with an event camera.\nSpecifically, we discover that events at nighttime exhibit temporal\ntrailing characteristics and a spatially non-stationary distribution. Consequently,\nwe propose a nighttime event reconstruction network (NER-Net) which mainly\nincludes a learnable event timestamps calibration module (LETC) to align the\ntemporal trailing events and a non-uniform illumination aware module (NIAM) to\nstabilize the spatiotemporal distribution of events. Moreover, we construct a\npaired real low-light event dataset (RLED) through a co-axial imaging system,\nincluding 64,200 spatially and temporally aligned image GTs and low-light\nevents.", + "Moreover, we construct a\npaired real low-light event dataset (RLED) through a co-axial imaging system,\nincluding 64,200 spatially and temporally aligned image GTs and low-light\nevents. Extensive experiments demonstrate that the proposed method outperforms\nstate-of-the-art methods in terms of visual quality and generalization ability\non real-world nighttime datasets. The project is available at:\nhttps://github.com/Liu-haoyue/NER-Net.", + "Humans effortlessly interpret images by parsing them into part-whole\nhierarchies; deep learning models excel at learning multi-level feature spaces, but\nthey often lack explicit coding of part-whole relations, a prominent property\nof medical imaging. 
To overcome this limitation, we introduce Adam-v2, a new\nself-supervised learning framework extending Adam [79] by explicitly\nincorporating part-whole hierarchies into its learning objectives through three\nkey branches: (1) Localizability, acquiring discriminative representations to\ndistinguish different anatomical patterns; (2) Composability, learning each\nanatomical structure in a parts-to-whole manner; and (3) Decomposability,\ncomprehending each anatomical structure in a whole-to-parts manner.\nExperimental results across 10 tasks, compared to 11 baselines in zero-shot,\nfew-shot transfer, and full fine-tuning settings, showcase Adam-v2's superior\nperformance over large-scale medical models and existing SSL methods across\ndiverse downstream tasks. The higher generality and robustness of Adam-v2's\nrepresentations originate from its explicit construction of hierarchies for\ndistinct anatomical structures from unlabeled medical images.", + "The higher generality and robustness of Adam-v2's\nrepresentations originate from its explicit construction of hierarchies for\ndistinct anatomical structures from unlabeled medical images. Adam-v2 preserves\na semantic balance of anatomical diversity and harmony in its embedding,\nyielding representations that are both generic and semantically meaningful, yet\noverlooked in existing SSL methods. All code and pretrained models are\navailable at https://github.com/JLiangLab/Eden.", + "Test-time adaptation with pre-trained vision-language models has attracted\nincreasing attention for tackling distribution shifts during the test time.\nThough prior studies have achieved very promising performance, they involve\nintensive computation which is severely unaligned with test-time adaptation. We\ndesign TDA, a training-free dynamic adapter that enables effective and\nefficient test-time adaptation with vision-language models. TDA works with a\nlightweight key-value cache that maintains a dynamic queue with few-shot pseudo\nlabels as values and the corresponding test-sample features as keys. Leveraging\nthe key-value cache, TDA allows adapting to test data gradually via progressive\npseudo label refinement which is super-efficient without incurring any\nbackpropagation. In addition, we introduce negative pseudo labeling that\nalleviates the adverse impact of pseudo label noises by assigning pseudo labels\nto certain negative classes when the model is uncertain about its pseudo label\npredictions. Extensive experiments over two benchmarks demonstrate TDA's\nsuperior effectiveness and efficiency as compared with the state-of-the-art.\nThe code has been released in \\url{https://kdiaaa.github.io/tda/}.", + "Is vision good enough for language? Recent advancements in multimodal models\nprimarily stem from the powerful reasoning abilities of large language models\n(LLMs). However, the visual component typically depends only on the\ninstance-level contrastive language-image pre-training (CLIP). Our research\nreveals that the visual capabilities in recent multimodal LLMs (MLLMs) still\nexhibit systematic shortcomings. To understand the roots of these errors, we\nexplore the gap between the visual embedding space of CLIP and vision-only\nself-supervised learning. We identify ''CLIP-blind pairs'' - images that CLIP\nperceives as similar despite their clear visual differences. With these pairs,\nwe construct the Multimodal Visual Patterns (MMVP) benchmark. 
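The training-free dynamic adapter (TDA) described above maintains a key-value cache of test-sample features and few-shot pseudo labels and refines predictions without any backpropagation. The sketch below illustrates that mechanism; the per-class capacity, the entropy-based replacement rule, and the way cache logits are blended with the CLIP logits are assumptions made for this illustration, not the released implementation.

```python
import numpy as np

class TinyCache:
    """Per-class queue of (entropy, feature) pairs; lowest-entropy samples are kept."""
    def __init__(self, num_classes, capacity=3):
        self.num_classes, self.capacity = num_classes, capacity
        self.items = {c: [] for c in range(num_classes)}

    def add(self, feat, probs):
        c = int(probs.argmax())                             # pseudo label
        ent = float(-(probs * np.log(probs + 1e-12)).sum())
        self.items[c].append((ent, feat))
        self.items[c] = sorted(self.items[c], key=lambda x: x[0])[: self.capacity]

    def logits(self, feat, beta=5.0):
        """Affinity-weighted votes from cached features (unit-normalized)."""
        out = np.zeros(self.num_classes)
        for c, entries in self.items.items():
            for _, k in entries:
                out[c] += np.exp(-beta * (1.0 - feat @ k))
        return out

def adapted_probs(clip_logits, cache, feat, alpha=2.0):
    logits = clip_logits + alpha * cache.logits(feat)       # blend CLIP and cache evidence
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Usage with unit-normalized image features and text-classifier logits:
rng = np.random.default_rng(0)
feat = rng.standard_normal(512); feat /= np.linalg.norm(feat)
clip_logits = rng.standard_normal(10)
cache = TinyCache(num_classes=10)
probs = adapted_probs(clip_logits, cache, feat)   # first sample: cache still empty
cache.add(feat, probs)                            # then update the cache progressively
print(probs.argmax(), round(probs.sum(), 6))
```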
MMVP exposes\nareas where state-of-the-art systems, including GPT-4V, struggle with\nstraightforward questions across nine basic visual patterns, often providing\nincorrect answers and hallucinated explanations. We further evaluate various\nCLIP-based vision-and-language models and found a notable correlation between\nvisual patterns that challenge CLIP models and those problematic for multimodal\nLLMs.", + "We further evaluate various\nCLIP-based vision-and-language models and found a notable correlation between\nvisual patterns that challenge CLIP models and those problematic for multimodal\nLLMs. As an initial effort to address these issues, we propose a Mixture of\nFeatures (MoF) approach, demonstrating that integrating vision self-supervised\nlearning features with MLLMs can significantly enhance their visual grounding\ncapabilities. Together, our research suggests visual representation learning\nremains an open challenge, and accurate visual grounding is crucial for future\nsuccessful multimodal systems.", + "Instance segmentation of neurons in volumetric light microscopy images of\nnervous systems enables groundbreaking research in neuroscience by facilitating\njoint functional and morphological analyses of neural circuits at cellular\nresolution. Yet said multi-neuron light microscopy data exhibits extremely\nchallenging properties for the task of instance segmentation: Individual\nneurons have long-ranging, thin filamentous and widely branching morphologies,\nmultiple neurons are tightly inter-weaved, and partial volume effects, uneven\nillumination and noise inherent to light microscopy severely impede local\ndisentangling as well as long-range tracing of individual neurons. These\nproperties reflect a current key challenge in machine learning research, namely\nto effectively capture long-range dependencies in the data. While respective\nmethodological research is buzzing, to date methods are typically benchmarked\non synthetic datasets. To address this gap, we release the FlyLight Instance\nSegmentation Benchmark (FISBe) dataset, the first publicly available\nmulti-neuron light microscopy dataset with pixel-wise annotations. In addition,\nwe define a set of instance segmentation metrics for benchmarking that we\ndesigned to be meaningful with regard to downstream analyses.", + "In addition,\nwe define a set of instance segmentation metrics for benchmarking that we\ndesigned to be meaningful with regard to downstream analyses. Lastly, we\nprovide three baselines to kick off a competition that we envision to both\nadvance the field of machine learning regarding methodology for capturing\nlong-range data dependencies, and facilitate scientific discovery in basic\nneuroscience.", + "Vision language models (VLMs) have experienced rapid advancements through the\nintegration of large language models (LLMs) with image-text pairs, yet they\nstruggle with detailed regional visual understanding due to limited spatial\nawareness of the vision encoder, and the use of coarse-grained training data\nthat lacks detailed, region-specific captions. To address this, we introduce\nRegionGPT (short as RGPT), a novel framework designed for complex region-level\ncaptioning and understanding. RGPT enhances the spatial awareness of regional\nrepresentation with simple yet effective modifications to existing visual\nencoders in VLMs. 
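The MMVP abstract above identifies "CLIP-blind pairs" by contrasting CLIP's embedding space with a vision-only self-supervised one. A rough sketch of that mining step follows; the cosine-similarity criterion, the fixed thresholds, and the choice of encoders are assumptions for illustration and may not match the paper's setup.

```python
import numpy as np

def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def find_clip_blind_pairs(clip_feats, ssl_feats, clip_thresh=0.95, ssl_thresh=0.6):
    """Image pairs that look nearly identical to CLIP but clearly different to a
    vision-only self-supervised encoder (e.g. DINO-style features)."""
    s_clip = cosine_sim(clip_feats, clip_feats)
    s_ssl = cosine_sim(ssl_feats, ssl_feats)
    n = len(clip_feats)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if s_clip[i, j] > clip_thresh and s_ssl[i, j] < ssl_thresh]

# Toy usage with random stand-ins; real CLIP / self-supervised features are the
# intended inputs, so few or no pairs are expected here.
rng = np.random.default_rng(0)
clip_feats = rng.standard_normal((100, 512))
ssl_feats = rng.standard_normal((100, 768))
print(len(find_clip_blind_pairs(clip_feats, ssl_feats)))
```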
We further improve performance on tasks requiring a specific\noutput scope by integrating task-guided instruction prompts during both\ntraining and inference phases, while maintaining the model's versatility for\ngeneral-purpose tasks. Additionally, we develop an automated region caption\ndata generation pipeline, enriching the training set with detailed region-level\ncaptions. We demonstrate that a universal RGPT model can be effectively applied,\nsignificantly enhancing performance across a range of region-level tasks,\nincluding but not limited to complex region descriptions, reasoning, object\nclassification, and referring expression comprehension.", + "Recent advances in Large Multimodal Models (LMMs) have enabled various\napplications in human-machine interaction. However, developing LMMs\nthat can comprehend, reason, and plan in complex and diverse 3D environments\nremains a challenging topic, especially considering the demand for\nunderstanding permutation-invariant point cloud 3D representations of the 3D\nscene. Existing works seek help from multi-view images, and project 2D features\nto 3D space as 3D scene representations. This, however, leads to huge\ncomputational overhead and performance degradation. In this paper, we present\nLL3DA, a Large Language 3D Assistant that takes point clouds as direct input and\nresponds to both textual instructions and visual prompts. This helps LMMs better\ncomprehend human interactions and further helps to remove ambiguities in\ncluttered 3D scenes. Experiments show that LL3DA achieves remarkable results,\nand surpasses various 3D vision-language models on both 3D Dense Captioning and\n3D Question Answering.", + "Representing and rendering dynamic scenes has been an important but\nchallenging task. In particular, high efficiency is usually hard to guarantee when\naccurately modeling complex motions. To achieve real-time dynamic scene\nrendering while also enjoying high training and storage efficiency, we propose\n4D Gaussian Splatting (4D-GS) as a holistic representation for dynamic scenes\nrather than applying 3D-GS for each individual frame. In 4D-GS, a novel\nexplicit representation containing both 3D Gaussians and 4D neural voxels is\nproposed. A decomposed neural voxel encoding algorithm inspired by HexPlane is\nproposed to efficiently build Gaussian features from 4D neural voxels, and then\na lightweight MLP is applied to predict Gaussian deformations at novel\ntimestamps. Our 4D-GS method achieves real-time rendering under high\nresolutions, 82 FPS at an 800$\\times$800 resolution on an RTX 3090 GPU, while\nmaintaining comparable or better quality than previous state-of-the-art\nmethods. More demos and code are available at\nhttps://guanjunwu.github.io/4dgs/.", + "Personalized Federated Learning (pFL) has emerged as a promising solution to\ntackle data heterogeneity across clients in FL. However, existing pFL methods\neither (1) introduce high communication and computation costs or (2) overfit to\nlocal data, which can be limited in scope, and are vulnerable to evolved test\nsamples with natural shifts. In this paper, we propose PerAda, a\nparameter-efficient pFL framework that reduces communication and computational\ncosts and exhibits superior generalization performance, especially under\ntest-time distribution shifts. PerAda reduces the costs by leveraging the power\nof pretrained models and only updates and communicates a small number of\nadditional parameters from adapters. 
PerAda has good generalization since it\nregularizes each client's personalized adapter with a global adapter, while the\nglobal adapter uses knowledge distillation to aggregate generalized information\nfrom all clients. Theoretically, we provide generalization bounds to explain\nwhy PerAda improves generalization, and we prove its convergence to stationary\npoints under non-convex settings.", + "Theoretically, we provide generalization bounds to explain\nwhy PerAda improves generalization, and we prove its convergence to stationary\npoints under non-convex settings. Empirically, PerAda demonstrates competitive\npersonalized performance (+4.85% on CheXpert) and enables better\nout-of-distribution generalization (+5.23% on CIFAR-10-C) on different datasets\nacross natural and medical domains compared with baselines, while only updating\n12.6% of parameters per model based on the adapter. Our code is available at\nhttps://github.com/NVlabs/PerAda.", + "In this paper, we propose a novel virtual try-on from unconstrained designs\n(ucVTON) task to enable photorealistic synthesis of personalized composite\nclothing on input human images. Unlike prior arts constrained by specific input\ntypes, our method allows flexible specification of style (text or image) and\ntexture (full garment, cropped sections, or texture patches) conditions. To\naddress the entanglement challenge when using full garment images as\nconditions, we develop a two-stage pipeline with explicit disentanglement of\nstyle and texture. In the first stage, we generate a human parsing map\nreflecting the desired style conditioned on the input. In the second stage, we\ncomposite textures onto the parsing map areas based on the texture input. To\nrepresent complex and non-stationary textures that have never been achieved in\nprevious fashion editing works, we first propose extracting hierarchical and\nbalanced CLIP features and applying position encoding in VTON. Experiments\ndemonstrate superior synthesis quality and personalization enabled by our\nmethod. The flexible control over style and texture mixing brings virtual\ntry-on to a new level of user experience for online shopping and fashion\ndesign.", + "Continual learning requires the model to learn multiple tasks sequentially.\nIn continual learning, the model should possess the ability to maintain its\nperformance on old tasks (stability) and the ability to adapt to new tasks\ncontinuously (plasticity). Recently, parameter-efficient fine-tuning (PEFT),\nwhich involves freezing a pre-trained model and injecting a small number of\nlearnable parameters to adapt to downstream tasks, has gained increasing\npopularity in continual learning. Although existing continual learning methods\nbased on PEFT have demonstrated superior performance compared to those not\nbased on PEFT, most of them do not consider how to eliminate the interference\nof the new task on the old tasks, which inhibits the model from making a good\ntrade-off between stability and plasticity. In this work, we propose a new PEFT\nmethod, called interference-free low-rank adaptation (InfLoRA), for continual\nlearning. 
InfLoRA injects a small number of parameters to reparameterize the\npre-trained weights and shows that fine-tuning these injected parameters is\nequivalent to fine-tuning the pre-trained weights within a subspace.", + "InfLoRA injects a small number of parameters to reparameterize the\npre-trained weights and shows that fine-tuning these injected parameters is\nequivalent to fine-tuning the pre-trained weights within a subspace.\nFurthermore, InfLoRA designs this subspace to eliminate the interference of the\nnew task on the old tasks, making a good trade-off between stability and\nplasticity. Experimental results show that InfLoRA outperforms existing\nstate-of-the-art continual learning methods on multiple datasets.", + "3D pose transfer, which aims to transfer a desired pose to a target mesh, is\none of the most challenging 3D generation tasks. Previous attempts rely on\nwell-defined parametric human models or skeletal joints as driving pose\nsources. However, to obtain those clean pose sources, cumbersome but necessary\npre-processing pipelines are inevitable, hindering real-time applications.\nThis work is driven by the intuition that the\nrobustness of the model can be enhanced by introducing adversarial samples into\nthe training, yielding a model that is more robust to noisy inputs and can\neven be extended to directly handle real-world data such as raw\npoint clouds/scans without intermediate processing. Furthermore, we propose a\nnovel 3D pose Masked Autoencoder (3D-PoseMAE), a customized MAE that\neffectively learns 3D extrinsic representations (i.e., pose). 3D-PoseMAE\nfacilitates learning from the aspect of extrinsic attributes by simultaneously\ngenerating adversarial samples that perturb the model and learning from\narbitrary raw noisy poses via a multi-scale masking strategy.", + "3D-PoseMAE\nfacilitates learning from the aspect of extrinsic attributes by simultaneously\ngenerating adversarial samples that perturb the model and learning from\narbitrary raw noisy poses via a multi-scale masking strategy. Both qualitative\nand quantitative studies show that the transferred meshes given by our network\nexhibit much better quality. Besides, we demonstrate the strong\ngeneralizability of our method on various poses, different domains, and even\nraw scans. Experimental results also provide the meaningful insight that the\nintermediate adversarial samples generated during training can successfully\nattack existing pose transfer models.", + "Semantic segmentation has innately relied on extensive pixel-level annotated\ndata, leading to the emergence of unsupervised methodologies. Among them,\nleveraging self-supervised Vision Transformers for unsupervised semantic\nsegmentation (USS) has been making steady progress with expressive deep\nfeatures. Yet, for semantically segmenting images with complex objects, a\npredominant challenge remains: the lack of explicit object-level semantic\nencoding in patch-level features. This technical limitation often leads to\ninadequate segmentation of complex objects with diverse structures. To address\nthis gap, we present a novel approach, EAGLE, which emphasizes object-centric\nrepresentation learning for unsupervised semantic segmentation. Specifically,\nwe introduce EiCue, a spectral technique providing semantic and structural cues\nthrough an eigenbasis derived from the semantic similarity matrix of deep image\nfeatures and color affinity from an image. 
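The InfLoRA abstract above states that fine-tuning the injected low-rank parameters is equivalent to fine-tuning the pre-trained weights within a subspace. The small numerical check below illustrates that claim for a generic LoRA-style reparameterization W = W0 + B A with B held fixed; the shapes, the rank, and the choice of which factor stays fixed are illustrative assumptions, not InfLoRA's specific design.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2

W0 = rng.standard_normal((d_out, d_in))           # frozen pre-trained weight
B = rng.standard_normal((d_out, r))               # fixed low-rank basis (defines the subspace)
A1 = rng.standard_normal((r, d_in))               # injected parameters before an update
A2 = A1 + 0.1 * rng.standard_normal((r, d_in))    # injected parameters after an update

delta_W = B @ (A2 - A1)                           # change of the effective weight W0 + B A

# The effective update is confined to the column space of B: projecting onto
# that subspace leaves it unchanged (up to numerical error), and its rank is at most r.
P = B @ np.linalg.pinv(B)                         # orthogonal projector onto span(B)
print(np.allclose(P @ delta_W, delta_W))          # True -> update lives in a rank-r subspace
print(np.linalg.matrix_rank(delta_W))             # 2
```

InfLoRA's contribution, as described above, is then in how that subspace is chosen so that updates for a new task interfere as little as possible with previously learned tasks.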
Further, by incorporating our\nobject-centric contrastive loss with EiCue, we guide our model to learn\nobject-level representations with intra- and inter-image object-feature\nconsistency, thereby enhancing semantic accuracy.", + "Further, by incorporating our\nobject-centric contrastive loss with EiCue, we guide our model to learn\nobject-level representations with intra- and inter-image object-feature\nconsistency, thereby enhancing semantic accuracy. Extensive experiments on\nCOCO-Stuff, Cityscapes, and Potsdam-3 datasets demonstrate the state-of-the-art\nUSS results of EAGLE with accurate and consistent semantic segmentation across\ncomplex scenes.", + "Recent advances in diffusion models have successfully enabled text-guided\nimage inpainting. While it seems straightforward to extend such editing\ncapability into the video domain, there have been fewer works regarding\ntext-guided video inpainting. Given a video, a masked region at its initial\nframe, and an editing prompt, it requires a model to do infilling at each frame\nfollowing the editing guidance while keeping the out-of-mask region intact.\nThere are three main challenges in text-guided video inpainting: ($i$) temporal\nconsistency of the edited video, ($ii$) supporting different inpainting types\nat different structural fidelity levels, and ($iii$) dealing with variable\nvideo length. To address these challenges, we introduce Any-Length Video\nInpainting with Diffusion Model, dubbed as AVID. At its core, our model is\nequipped with effective motion modules and adjustable structure guidance, for\nfixed-length video inpainting. Building on top of that, we propose a novel\nTemporal MultiDiffusion sampling pipeline with a middle-frame attention\nguidance mechanism, facilitating the generation of videos with any desired\nduration.", + "Building on top of that, we propose a novel\nTemporal MultiDiffusion sampling pipeline with a middle-frame attention\nguidance mechanism, facilitating the generation of videos with any desired\nduration. Our comprehensive experiments show our model can robustly deal with\nvarious inpainting types at different video duration ranges, with high quality.\nMore visualization results are made publicly available at\nhttps://zhang-zx.github.io/AVID/ .", + "Layout-aware text-to-image generation is a task to generate multi-object\nimages that reflect layout conditions in addition to text conditions. The\ncurrent layout-aware text-to-image diffusion models still have several issues,\nincluding mismatches between the text and layout conditions and quality\ndegradation of generated images. This paper proposes a novel layout-aware\ntext-to-image diffusion model called NoiseCollage to tackle these issues.\nDuring the denoising process, NoiseCollage independently estimates noises for\nindividual objects and then crops and merges them into a single noise. This\noperation helps avoid condition mismatches; in other words, it can put the\nright objects in the right places. Qualitative and quantitative evaluations\nshow that NoiseCollage outperforms several state-of-the-art models. These\nsuccessful results indicate that the crop-and-merge operation of noises is a\nreasonable strategy to control image generation. We also show that NoiseCollage\ncan be integrated with ControlNet to use edges, sketches, and pose skeletons as\nadditional conditions. Experimental results show that this integration boosts\nthe layout accuracy of ControlNet. 
The code is available at\nhttps://github.com/univ-esuty/noisecollage.", + "Neural implicit scene representations have recently shown encouraging results\nin dense visual SLAM. However, existing methods produce low-quality scene\nreconstruction and low-accuracy localization performance when scaling up to\nlarge indoor scenes and long sequences. These limitations are mainly due to\ntheir single, global radiance field with finite capacity, which does not adapt\nto large scenarios. Their end-to-end pose networks are also not robust enough\nwith the growth of cumulative errors in large scenes. To this end, we introduce\nPLGSLAM, a neural visual SLAM system capable of high-fidelity surface\nreconstruction and robust camera tracking in real-time. To handle large-scale\nindoor scenes, PLGSLAM proposes a progressive scene representation method which\ndynamically allocates new local scene representations trained with frames within\na local sliding window. This allows us to scale up to larger indoor scenes and\nimproves robustness (even under pose drift). In the local scene representation,\nPLGSLAM utilizes tri-planes for local high-frequency features together with multi-layer\nperceptron (MLP) networks for low-frequency features, achieving smoothness\nand scene completion in unobserved areas.", + "In the local scene representation,\nPLGSLAM utilizes tri-planes for local high-frequency features together with multi-layer\nperceptron (MLP) networks for low-frequency features, achieving smoothness\nand scene completion in unobserved areas. Moreover, we propose a local-to-global\nbundle adjustment method with a global keyframe database to address the\nincreased pose drift on long sequences. Experimental results demonstrate that\nPLGSLAM achieves state-of-the-art scene reconstruction results and tracking\nperformance across various datasets and scenarios (both in small and\nlarge-scale indoor environments).", + "Previous multi-task dense prediction methods based on the Mixture of Experts\n(MoE) have achieved strong performance, but they neglect the importance of\nexplicitly modeling the global relations among all tasks. In this paper, we\npresent a novel decoder-focused method for multi-task dense prediction, called\nMixture-of-Low-Rank-Experts (MLoRE). To model the global task relationships,\nMLoRE adds a generic convolution path to the original MoE structure, where each\ntask feature can go through this path for explicit parameter sharing.\nFurthermore, to control the parameters and computational cost brought by the\nincrease in the number of experts, we take inspiration from LoRA and propose to\nleverage the low-rank format of a vanilla convolution in the expert network.\nSince the low-rank experts have fewer parameters and can be dynamically\nparameterized into the generic convolution, the parameters and computational\ncost do not change much with the increase in the number of experts. Benefiting from this\ndesign, we increase the number of experts and their receptive fields to enlarge\nthe representation capacity, facilitating the learning of multiple dense prediction tasks in a\nunified network.", + "Benefiting from this\ndesign, we increase the number of experts and their receptive fields to enlarge\nthe representation capacity, facilitating the learning of multiple dense prediction tasks in a\nunified network. Extensive experiments on the PASCAL-Context and NYUD-v2\nbenchmarks show that our MLoRE achieves superior performance compared to\nprevious state-of-the-art methods on all metrics. 
Our code is available at\nhttps://github.com/YuqiYang213/MLoRE.", + "The ability to associate touch with other modalities has huge implications\nfor humans and computational systems. However, multimodal learning with touch\nremains challenging due to the expensive data collection process and\nnon-standardized sensor outputs. We introduce UniTouch, a unified tactile model\nfor vision-based touch sensors connected to multiple modalities, including\nvision, language, and sound. We achieve this by aligning our UniTouch\nembeddings to pretrained image embeddings already associated with a variety of\nother modalities. We further propose learnable sensor-specific tokens, allowing\nthe model to learn from a set of heterogeneous tactile sensors, all at the same\ntime. UniTouch is capable of conducting various touch sensing tasks in the\nzero-shot setting, from robot grasping prediction to touch image question\nanswering. To the best of our knowledge, UniTouch is the first to demonstrate\nsuch capabilities. Project page: https://cfeng16.github.io/UniTouch/", + "The increasing prevalence of video clips has sparked growing interest in\ntext-video retrieval. Recent advances focus on establishing a joint embedding\nspace for text and video, relying on consistent embedding representations to\ncompute similarity. However, the text content in existing datasets is generally\nshort and concise, making it hard to fully describe the redundant semantics of\na video. Correspondingly, a single text embedding may be less expressive to\ncapture the video embedding and empower the retrieval. In this study, we\npropose a new stochastic text modeling method T-MASS, i.e., text is modeled as\na stochastic embedding, to enrich text embedding with a flexible and resilient\nsemantic range, yielding a text mass. To be specific, we introduce a\nsimilarity-aware radius module to adapt the scale of the text mass upon the\ngiven text-video pairs. Plus, we design and develop a support text\nregularization to further control the text mass during the training. The\ninference pipeline is also tailored to fully exploit the text mass for accurate\nretrieval.", + "Plus, we design and develop a support text\nregularization to further control the text mass during the training. The\ninference pipeline is also tailored to fully exploit the text mass for accurate\nretrieval. Empirical evidence suggests that T-MASS not only effectively\nattracts relevant text-video pairs while distancing irrelevant ones, but also\nenables the determination of precise text embeddings for relevant pairs. Our\nexperimental results show a substantial improvement of T-MASS over baseline (3%\nto 6.3% by R@1). Also, T-MASS achieves state-of-the-art performance on five\nbenchmark datasets, including MSRVTT, LSMDC, DiDeMo, VATEX, and Charades.", + "Recovering the 3D scene geometry from a single view is a fundamental yet\nill-posed problem in computer vision. While classical depth estimation methods\ninfer only a 2.5D scene representation limited to the image plane, recent\napproaches based on radiance fields reconstruct a full 3D representation.\nHowever, these methods still struggle with occluded regions since inferring\ngeometry without visual observation requires (i) semantic knowledge of the\nsurroundings, and (ii) reasoning about spatial context. We propose KYN, a novel\nmethod for single-view scene reconstruction that reasons about semantic and\nspatial context to predict each point's density. 
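T-MASS, described above, models a caption as a stochastic "text mass" whose radius adapts to the given text-video pair. A toy sketch of that sampling step follows; the Gaussian form, the way the radius is derived from text-video similarity, and the single-sample draw are assumptions made for illustration, not the paper's actual similarity-aware radius module.

```python
import numpy as np

def sample_text_mass(t, v, rng, base_radius=0.1):
    """Sample a stochastic text embedding around t with a similarity-aware radius.

    t, v: (D,) L2-normalized text and video embeddings.
    The radius shrinks when text and video already agree, and grows otherwise.
    """
    sim = float(t @ v)                              # cosine similarity in [-1, 1]
    radius = base_radius * (1.0 - sim)              # similarity-aware scale
    eps = rng.standard_normal(t.shape)
    t_s = t + radius * eps                          # a point inside the "text mass"
    return t_s / np.linalg.norm(t_s)                # back to the unit sphere

rng = np.random.default_rng(0)
def unit(x): return x / np.linalg.norm(x)
t = unit(rng.standard_normal(512))
v = unit(t + 0.05 * rng.standard_normal(512))       # a relevant video embedding
samples = np.stack([sample_text_mass(t, v, rng) for _ in range(8)])
print(samples.shape, (samples @ v).mean())          # (8, 512) and the mean text-video similarity
```

At retrieval time, one could score a candidate video by the best (or average) similarity over such sampled text points, which is the kind of "exploit the text mass" inference the abstract alludes to.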
We introduce a vision-language\nmodulation module to enrich point features with fine-grained semantic\ninformation. We aggregate point representations across the scene through a\nlanguage-guided spatial attention mechanism to yield per-point density\npredictions aware of the 3D semantic context. We show that KYN improves 3D\nshape recovery compared to predicting density for each 3D point in isolation.\nWe achieve state-of-the-art results in scene and object reconstruction on\nKITTI-360, and show improved zero-shot generalization compared to prior work.\nProject page: https://ruili3.github.io/kyn.", + "Reliable hand mesh reconstruction (HMR) from commonly-used color and depth\nsensors is challenging, especially in scenarios with varied illumination and\nfast motion. Event cameras are a highly promising alternative thanks to their high\ndynamic range and dense temporal resolution, but they lack the key\ntexture appearance needed for hand mesh reconstruction. In this paper, we propose\nEvRGBHand -- the first approach for 3D hand mesh reconstruction with an event\ncamera and an RGB camera compensating for each other. By fusing two modalities\nof data across time, space, and information dimensions, EvRGBHand can tackle\noverexposure and motion blur issues in RGB-based HMR and foreground scarcity\nand background overflow issues in event-based HMR. We further propose\nEvRGBDegrader, which allows our model to generalize effectively in challenging\nscenes, even when trained solely on standard scenes, thus reducing data\nacquisition costs. Experiments on real-world data demonstrate that EvRGBHand\ncan effectively address the issues that arise when using either type of camera\nalone by retaining the merits of both, and shows the potential to generalize\nto outdoor scenes and another type of event camera.", + "Multi-modal large language models (MLLMs) have been shown to efficiently\nintegrate natural language with visual information to handle multi-modal tasks.\nHowever, MLLMs still face a fundamental limitation of hallucinations, where\nthey tend to generate erroneous or fabricated information. In this paper, we\naddress hallucinations in MLLMs from a novel perspective of representation\nlearning. We first analyze the representation distribution of textual and\nvisual tokens in MLLMs, revealing two important findings: 1) there is a\nsignificant gap between textual and visual representations, indicating\nunsatisfactory cross-modal representation alignment; 2) representations of\ntexts that contain and do not contain hallucinations are entangled, making it\nchallenging to distinguish them. These two observations inspire a\nsimple yet effective method to mitigate hallucinations. Specifically, we\nintroduce contrastive learning into MLLMs and use text with hallucinations as\nhard negative examples, naturally bringing representations of non-hallucinative\ntext and visual samples closer while pushing away representations of\nnon-hallucinative and hallucinative text.", + "Specifically, we\nintroduce contrastive learning into MLLMs and use text with hallucinations as\nhard negative examples, naturally bringing representations of non-hallucinative\ntext and visual samples closer while pushing away representations of\nnon-hallucinative and hallucinative text. We evaluate our method quantitatively\nand qualitatively, showing its effectiveness in reducing hallucination\noccurrences and improving performance across multiple benchmarks. 
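The hallucination-mitigation abstract above introduces contrastive learning with hallucinated text as hard negatives. Below is a minimal InfoNCE-style sketch of that loss; the batch construction, the single hard negative per sample, and the temperature are illustrative assumptions rather than the paper's exact objective.

```python
import numpy as np

def info_nce_with_hard_negatives(vis, pos_txt, neg_txt, tau=0.07):
    """Contrastive loss sketch: visual anchors should be closer to their
    non-hallucinative captions (positives) than to hallucinated captions
    (hard negatives) and to other samples' captions (in-batch negatives).

    vis, pos_txt, neg_txt: (N, D) L2-normalized features.
    """
    pos = (vis * pos_txt).sum(-1, keepdims=True) / tau          # (N, 1) true pairs
    in_batch = vis @ pos_txt.T / tau                            # (N, N) incl. the positives
    hard = (vis * neg_txt).sum(-1, keepdims=True) / tau         # (N, 1) hallucinated captions
    logits = np.concatenate([in_batch, hard], axis=1)           # (N, N+1)
    log_prob = pos - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_prob.mean())

rng = np.random.default_rng(0)
def unit(x): return x / np.linalg.norm(x, axis=-1, keepdims=True)
vis = unit(rng.standard_normal((4, 256)))
pos_txt = unit(vis + 0.1 * rng.standard_normal((4, 256)))       # roughly aligned captions
neg_txt = unit(rng.standard_normal((4, 256)))                   # stand-in hallucinated captions
print(info_nce_with_hard_negatives(vis, pos_txt, neg_txt))
```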
On the\nMMhal-Bench benchmark, our method obtains a 34.66%/29.5% improvement over the\nbaseline MiniGPT-4/LLaVA. Our code is available at\nhttps://github.com/X-PLUG/mPLUG-HalOwl/tree/main/hacl.", + "Although effective deepfake detection models have been developed in recent\nyears, recent studies have revealed that these models can result in unfair\nperformance disparities among demographic groups, such as race and gender. This\ncan lead to particular groups facing unfair targeting or exclusion from\ndetection, potentially allowing misclassified deepfakes to manipulate public\nopinion and undermine trust in the model. The existing method for addressing\nthis problem provides a fair loss function. It shows good fairness\nperformance for intra-domain evaluation but does not maintain fairness for\ncross-domain testing. This highlights the significance of fairness\ngeneralization in the fight against deepfakes. In this work, we propose the\nfirst method to address the fairness generalization problem in deepfake\ndetection by simultaneously considering features, loss, and optimization\naspects. Our method employs disentanglement learning to extract demographic and\ndomain-agnostic forgery features, fusing them to encourage fair learning across\na flattened loss landscape. Extensive experiments on prominent deepfake\ndatasets demonstrate our method's effectiveness, surpassing state-of-the-art\napproaches in preserving fairness during cross-domain deepfake detection. The\ncode is available at https://github.com/Purdue-M2/Fairness-Generalization.", + "With the advancement of generative models, the assessment of generated images\nis becoming more and more important. Previous methods measure distances between\nfeatures of reference and generated images from trained vision models. In this\npaper, we conduct an extensive investigation into the relationship between the\nrepresentation space and input space around generated images. We first propose\ntwo measures related to the presence of unnatural elements within images:\ncomplexity, which indicates how non-linear the representation space is, and\nvulnerability, which is related to how easily the extracted feature changes under\nadversarial input perturbations. Based on these, we introduce a new metric for\nevaluating image-generative models, called the anomaly score (AS). Moreover, we\npropose AS-i (anomaly score for individual images) that can effectively\nevaluate generated images individually. Experimental results demonstrate the\nvalidity of the proposed approach.", + "In this work, we propose a novel discriminative framework for dexterous grasp\ngeneration, named Dexterous Grasp TRansformer (DGTR), capable of predicting a\ndiverse set of feasible grasp poses by processing the object point cloud with\nonly one forward pass. We formulate dexterous grasp generation as a set\nprediction task and design a transformer-based grasping model for it. However,\nwe identify that this set prediction paradigm encounters several optimization\nchallenges in the field of dexterous grasping and results in restricted\nperformance. To address these issues, we propose progressive strategies for\nboth the training and testing phases. First, the dynamic-static matching\ntraining (DSMT) strategy is presented to enhance the optimization stability\nduring the training phase. Second, we introduce the adversarial-balanced\ntest-time adaptation (AB-TTA) with a pair of adversarial losses to improve\ngrasping quality during the testing phase. 
Experimental results on the\nDexGraspNet dataset demonstrate the capability of DGTR to predict dexterous\ngrasp poses with both high quality and diversity.", + "Experimental results on the\nDexGraspNet dataset demonstrate the capability of DGTR to predict dexterous\ngrasp poses with both high quality and diversity. Notably, while maintaining high\nquality, the diversity of the grasp poses predicted by DGTR significantly\noutperforms that of previous works on multiple metrics, without any data pre-processing.\nCodes are available at https://github.com/iSEE-Laboratory/DGTR.", + "Recently, an audio-visual segmentation (AVS) task has been introduced, aiming\nto group pixels with sounding objects within a given video. This task\nnecessitates a first-ever audio-driven pixel-level understanding of the scene,\nposing significant challenges. In this paper, we propose an innovative\naudio-visual transformer framework, termed COMBO, an acronym for COoperation of\nMulti-order Bilateral relatiOns. For the first time, our framework explores\nthree types of bilateral entanglements within AVS: pixel entanglement, modality\nentanglement, and temporal entanglement. Regarding pixel entanglement, we\nemploy a Siam-Encoder Module (SEM) that leverages prior knowledge to generate\nmore precise visual features from the foundational model. For modality\nentanglement, we design a Bilateral-Fusion Module (BFM), enabling COMBO to\nalign corresponding visual and auditory signals bi-directionally. As for\ntemporal entanglement, we introduce an innovative adaptive inter-frame\nconsistency loss according to the inherent temporal rules.", + "As for\ntemporal entanglement, we introduce an innovative adaptive inter-frame\nconsistency loss according to the inherent temporal rules. Comprehensive\nexperiments and ablation studies on the AVSBench-object (84.7 mIoU on S4, 59.2 mIoU\non MS3) and AVSBench-semantic (42.1 mIoU on AVSS) datasets demonstrate that\nCOMBO surpasses previous state-of-the-art methods. Code and more results will\nbe publicly available at https://yannqi.github.io/AVS-COMBO/.", + "Vision-language models (VLMs) have recently shown promising results in\ntraditional downstream tasks. Evaluation studies have emerged to assess their\nabilities, with the majority focusing on the third-person perspective, and only\na few addressing specific tasks from the first-person perspective. However, the\ncapability of VLMs to \"think\" from a first-person perspective, a crucial\nattribute for advancing autonomous agents and robotics, remains largely\nunexplored. To bridge this research gap, we introduce EgoThink, a novel visual\nquestion-answering benchmark that encompasses six core capabilities with twelve\ndetailed dimensions. The benchmark is constructed using selected clips from\negocentric videos, with manually annotated question-answer pairs containing\nfirst-person information. To comprehensively assess VLMs, we evaluate eighteen\npopular VLMs on EgoThink. 
Moreover, given the open-ended format of the answers,\nwe use GPT-4 as the automatic judge to compute single-answer grading.\nExperimental results indicate that although GPT-4V leads in numerous\ndimensions, all evaluated VLMs still possess considerable potential for\nimprovement in first-person perspective tasks.", + "Moreover, given the open-ended format of the answers,\nwe use GPT-4 as the automatic judge to compute single-answer grading.\nExperimental results indicate that although GPT-4V leads in numerous\ndimensions, all evaluated VLMs still possess considerable potential for\nimprovement in first-person perspective tasks. Meanwhile, enlarging the number\nof trainable parameters has the most significant impact on model performance on\nEgoThink. In conclusion, EgoThink serves as a valuable addition to existing\nevaluation benchmarks for VLMs, providing an indispensable resource for future\nresearch in the realm of embodied artificial intelligence and robotics.", + "Single image depth estimation is a foundational task in computer vision and\ngenerative modeling. However, prevailing depth estimation models grapple with\naccommodating the increasing resolutions commonplace in today's consumer\ncameras and devices. Existing high-resolution strategies show promise, but they\noften face limitations, ranging from error propagation to the loss of\nhigh-frequency details. We present PatchFusion, a novel tile-based framework\nwith three key components to improve the current state of the art: (1) A\npatch-wise fusion network that fuses a globally-consistent coarse prediction\nwith finer, inconsistent tiled predictions via high-level feature guidance, (2)\nA Global-to-Local (G2L) module that adds vital context to the fusion network,\ndiscarding the need for patch selection heuristics, and (3) A Consistency-Aware\nTraining (CAT) and Inference (CAI) approach, emphasizing patch overlap\nconsistency and thereby eradicating the necessity for post-processing.\nExperiments on UnrealStereo4K, MVS-Synth, and Middlebury 2014 demonstrate that\nour framework can generate high-resolution depth maps with intricate details.\nPatchFusion is independent of the base model for depth estimation.", + "Experiments on UnrealStereo4K, MVS-Synth, and Middlebury 2014 demonstrate that\nour framework can generate high-resolution depth maps with intricate details.\nPatchFusion is independent of the base model for depth estimation. Notably, our\nframework built on top of SOTA ZoeDepth brings improvements of\n17.3% and 29.4% in terms of the root mean squared error (RMSE) on\nUnrealStereo4K and MVS-Synth, respectively.", + "Recently, we have witnessed the explosive growth of various volumetric\nrepresentations in modeling animatable head avatars. However, due to the\ndiversity of frameworks, there is no practical method to support high-level\napplications like 3D head avatar editing across different representations. In\nthis paper, we propose a generic avatar editing approach that can be\nuniversally applied to various 3DMM-driven volumetric head avatars. To achieve\nthis goal, we design a novel expression-aware modification generative model,\nwhich enables lifting 2D editing from a single image to a consistent 3D\nmodification field. 
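The PatchFusion abstract above fuses a globally consistent coarse depth prediction with finer but mutually inconsistent tile predictions. As a point of reference for what that fusion has to solve, here is a simple hand-crafted baseline: align each tile to the coarse map with a least-squares scale and shift, then feather-blend overlaps. This is explicitly not the paper's learned fusion network or its consistency-aware training; it is a generic tile-merging sketch under those stated assumptions.

```python
import numpy as np

def align_scale_shift(tile, coarse_crop):
    """Least-squares scale/shift mapping a tile prediction onto the coarse one."""
    x, y = tile.reshape(-1), coarse_crop.reshape(-1)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s * tile + t

def merge_tiles(coarse, tiles, boxes):
    """Blend aligned tiles into a full map with feathered overlaps.

    coarse: (H, W) globally consistent prediction (already upsampled).
    tiles:  list of (h, w) fine predictions; boxes: matching (r0, r1, c0, c1) crops.
    """
    H, W = coarse.shape
    acc, wsum = np.zeros((H, W)), np.zeros((H, W))
    for tile, (r0, r1, c0, c1) in zip(tiles, boxes):
        aligned = align_scale_shift(tile, coarse[r0:r1, c0:c1])
        w = np.hanning(r1 - r0)[:, None] * np.hanning(c1 - c0)[None, :] + 1e-6
        acc[r0:r1, c0:c1] += w * aligned          # feather weights fade at tile borders
        wsum[r0:r1, c0:c1] += w
    out = coarse.copy()
    mask = wsum > 0
    out[mask] = acc[mask] / wsum[mask]
    return out

# Toy usage: two overlapping tiles with inconsistent scale/offset refine a 64x64 map.
rng = np.random.default_rng(0)
coarse = rng.random((64, 64))
boxes = [(0, 40, 0, 40), (24, 64, 24, 64)]
tiles = [2.0 * coarse[r0:r1, c0:c1] + 0.3 for r0, r1, c0, c1 in boxes]
fused = merge_tiles(coarse, tiles, boxes)
print(fused.shape, np.abs(fused - coarse).max() < 1e-6)   # scale/shift removed -> True
```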
To ensure the effectiveness of the generative modification\nprocess, we develop several techniques, including an expression-dependent\nmodification distillation scheme to draw knowledge from the large-scale head\navatar model and 2D facial texture editing tools, implicit latent space\nguidance to enhance model convergence, and a segmentation-based loss reweight\nstrategy for fine-grained texture inversion. Extensive experiments demonstrate\nthat our method delivers high-quality and consistent results across multiple\nexpression and viewpoints. Project page: https://zju3dv.github.io/geneavatar/", + "Given an image set without any labels, our goal is to train a model that maps\neach image to a point in a feature space such that, not only proximity\nindicates visual similarity, but where it is located directly encodes how\nprototypical the image is according to the dataset.\n Our key insight is to perform unsupervised feature learning in hyperbolic\ninstead of Euclidean space, where the distance between points still reflect\nimage similarity, and yet we gain additional capacity for representing\nprototypicality with the location of the point: The closer it is to the origin,\nthe more prototypical it is. The latter property is simply emergent from\noptimizing the usual metric learning objective: The image similar to many\ntraining instances is best placed at the center of corresponding points in\nEuclidean space, but closer to the origin in hyperbolic space.\n We propose an unsupervised feature learning algorithm in Hyperbolic space\nwith sphere pACKing. HACK first generates uniformly packed particles in the\nPoincar\\'e ball of hyperbolic space and then assigns each image uniquely to\neach particle. Images after congealing are regarded more typical of the dataset\nit belongs to.", + "HACK first generates uniformly packed particles in the\nPoincar\\'e ball of hyperbolic space and then assigns each image uniquely to\neach particle. Images after congealing are regarded more typical of the dataset\nit belongs to. With our feature mapper simply trained to spread out training\ninstances in hyperbolic space, we observe that images move closer to the origin\nwith congealing, validating our idea of unsupervised prototypicality discovery.\nWe demonstrate that our data-driven prototypicality provides an easy and\nsuperior unsupervised instance selection to reduce sample complexity, increase\nmodel generalization with atypical instances and robustness with typical ones.", + "Understanding human actions from videos of first-person view poses\nsignificant challenges. Most prior approaches explore representation learning\non egocentric videos only, while overlooking the potential benefit of\nexploiting existing large-scale third-person videos. In this paper, (1) we\ndevelop EgoInstructor, a retrieval-augmented multimodal captioning model that\nautomatically retrieves semantically relevant third-person instructional videos\nto enhance the video captioning of egocentric videos. (2) For training the\ncross-view retrieval module, we devise an automatic pipeline to discover\nego-exo video pairs from distinct large-scale egocentric and exocentric\ndatasets. (3) We train the cross-view retrieval module with a novel EgoExoNCE\nloss that pulls egocentric and exocentric video features closer by aligning\nthem to shared text features that describe similar actions. (4) Through\nextensive experiments, our cross-view retrieval module demonstrates superior\nperformance across seven benchmarks. 
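The prototypicality reading of hyperbolic space described above admits a simple closed form: in the Poincaré ball, the distance from the origin to a point z is d(0, z) = 2 artanh(||z||), so images whose embeddings lie closer to the origin are scored as more prototypical. The sketch below is not the HACK training code; the random features and the squashing step that places them inside the unit ball are illustrative assumptions, and only the scoring rule reflects the idea above.

```python
import torch

def poincare_distance_to_origin(z: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Hyperbolic distance from the origin of the Poincare ball: d(0, z) = 2 * artanh(||z||)."""
    norm = z.norm(dim=-1).clamp(max=1.0 - eps)   # points must stay strictly inside the unit ball
    return 2.0 * torch.atanh(norm)

# Toy example: rank embeddings by prototypicality (smaller distance to origin = more prototypical).
torch.manual_seed(0)
features = torch.randn(8, 16)
ball_points = 0.9 * features / (1.0 + features.norm(dim=-1, keepdim=True))  # squash into the unit ball
scores = poincare_distance_to_origin(ball_points)
prototypical_first = scores.argsort()            # ascending distance: most prototypical first
print(prototypical_first)
```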
Regarding egocentric video captioning,\nEgoInstructor exhibits significant improvements by leveraging third-person\nvideos as references.", + "Diffusion models have demonstrated strong potential for robotic trajectory\nplanning. However, generating coherent trajectories from high-level\ninstructions remains challenging, especially for long-range composition tasks\nrequiring multiple sequential skills. We propose SkillDiffuser, an end-to-end\nhierarchical planning framework integrating interpretable skill learning with\nconditional diffusion planning to address this problem. At the higher level,\nthe skill abstraction module learns discrete, human-understandable skill\nrepresentations from visual observations and language instructions. These\nlearned skill embeddings are then used to condition the diffusion model to\ngenerate customized latent trajectories aligned with the skills. This allows\ngenerating diverse state trajectories that adhere to the learnable skills. By\nintegrating skill learning with conditional trajectory generation,\nSkillDiffuser produces coherent behavior following abstract instructions across\ndiverse tasks. Experiments on multi-task robotic manipulation benchmarks like\nMeta-World and LOReL demonstrate state-of-the-art performance and\nhuman-interpretable skill representations from SkillDiffuser. More\nvisualization results and information can be found on our website.", + "Recent progress in the text-driven 3D stylization of a single object has been\nconsiderably promoted by CLIP-based methods. However, the stylization of\nmulti-object 3D scenes is still impeded in that the image-text pairs used for\npre-training CLIP mostly consist of a single object. Meanwhile, the local details of\nmultiple objects may be susceptible to omission due to the existing supervision\nmanner primarily relying on coarse-grained contrast of image-text pairs. To\novercome these challenges, we present a novel framework, dubbed TeMO, to parse\nmulti-object 3D scenes and edit their styles under the contrast supervision at\nmultiple levels. We first propose a Decoupled Graph Attention (DGA) module to\ndistinguishably reinforce the features of 3D surface points. Particularly, a\ncross-modal graph is constructed to accurately align the object points and the noun\nphrases decoupled from the 3D mesh and textual description. Then, we develop a\nCross-Grained Contrast (CGC) supervision system, where a fine-grained loss\nbetween the words in the textual description and the randomly rendered images\nis constructed to complement the coarse-grained loss.", + "Then, we develop a\nCross-Grained Contrast (CGC) supervision system, where a fine-grained loss\nbetween the words in the textual description and the randomly rendered images\nis constructed to complement the coarse-grained loss. Extensive experiments\nshow that our method can synthesize high-quality stylized content and\noutperform the existing methods over a wide range of multi-object 3D meshes.\nOur code and results will be made publicly available.", + "Utilizing multi-view inputs to synthesize novel-view images, Neural Radiance\nFields (NeRF) have emerged as a popular research topic in 3D vision. In this\nwork, we introduce a Generalizable Semantic Neural Radiance Field (GSNeRF),\nwhich uniquely takes image semantics into the synthesis process so that both\nnovel view images and the associated semantic maps can be produced for unseen\nscenes. Our GSNeRF is composed of two stages: Semantic Geo-Reasoning and\nDepth-Guided Visual Rendering. 
The former is able to observe multi-view image\ninputs to extract semantic and geometry features from a scene. Guided by the\nresulting image geometry information, the latter performs both image and\nsemantic rendering with improved performances. Our experiments not only confirm\nthat GSNeRF performs favorably against prior works on both novel-view image and\nsemantic segmentation synthesis but the effectiveness of our sampling strategy\nfor visual rendering is further verified.", + "Scale-ambiguity in 3D scene dimensions leads to magnitude-ambiguity of\nvolumetric densities in neural radiance fields, i.e., the densities double when\nscene size is halved, and vice versa. We call this property alpha invariance.\nFor NeRFs to better maintain alpha invariance, we recommend 1) parameterizing\nboth distance and volume densities in log space, and 2) a\ndiscretization-agnostic initialization strategy to guarantee high ray\ntransmittance. We revisit a few popular radiance field models and find that\nthese systems use various heuristics to deal with issues arising from scene\nscaling. We test their behaviors and show our recipe to be more robust.", + "We introduce TexTile, a novel differentiable metric to quantify the degree\nupon which a texture image can be concatenated with itself without introducing\nrepeating artifacts (i.e., the tileability). Existing methods for tileable\ntexture synthesis focus on general texture quality, but lack explicit analysis\nof the intrinsic repeatability properties of a texture. In contrast, our\nTexTile metric effectively evaluates the tileable properties of a texture,\nopening the door to more informed synthesis and analysis of tileable textures.\nUnder the hood, TexTile is formulated as a binary classifier carefully built\nfrom a large dataset of textures of different styles, semantics, regularities,\nand human annotations.Key to our method is a set of architectural modifications\nto baseline pre-train image classifiers to overcome their shortcomings at\nmeasuring tileability, along with a custom data augmentation and training\nregime aimed at increasing robustness and accuracy. We demonstrate that TexTile\ncan be plugged into different state-of-the-art texture synthesis methods,\nincluding diffusion-based strategies, and generate tileable textures while\nkeeping or even improving the overall texture quality.", + "We demonstrate that TexTile\ncan be plugged into different state-of-the-art texture synthesis methods,\nincluding diffusion-based strategies, and generate tileable textures while\nkeeping or even improving the overall texture quality. Furthermore, we show\nthat TexTile can objectively evaluate any tileable texture synthesis method,\nwhereas the current mix of existing metrics produces uncorrelated scores which\nheavily hinders progress in the field.", + "Domain adaptation for object detection typically entails transferring\nknowledge from one visible domain to another visible domain. However, there are\nlimited studies on adapting from the visible to the thermal domain, because the\ndomain gap between the visible and thermal domains is much larger than\nexpected, and traditional domain adaptation can not successfully facilitate\nlearning in this situation. To overcome this challenge, we propose a\nDistinctive Dual-Domain Teacher (D3T) framework that employs distinct training\nparadigms for each domain. 
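The alpha-invariance property described above can be checked directly: per-ray transmittance depends only on the products sigma_i * delta_i, so halving all distances while doubling all densities leaves it unchanged, and working in log space turns this rescaling into an additive shift. The snippet below is a minimal numeric check of that statement with arbitrary sample values; it is not the initialization recipe recommended in the paper.

```python
import torch

def transmittance(sigma: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Ray transmittance T = exp(-sum_i sigma_i * delta_i) for one ray."""
    return torch.exp(-(sigma * delta).sum())

torch.manual_seed(0)
sigma = torch.rand(64) * 5.0          # volume densities along a ray
delta = torch.rand(64) * 0.1          # segment lengths

s = 0.5                               # shrink the scene to half its size ...
T_original = transmittance(sigma, delta)
T_rescaled = transmittance(sigma / s, delta * s)   # ... so densities must double to keep alpha fixed
print(T_original.item(), T_rescaled.item())        # identical: alpha invariance

# Log-space parameterization turns the rescaling into an additive shift and stays well-conditioned:
log_sigma = sigma.log()
log_delta = delta.log()
T_log = torch.exp(-torch.exp(log_sigma + log_delta).sum())
assert torch.allclose(T_log, T_original)
```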
Specifically, we segregate the source and target\ntraining sets for building dual-teachers and successively update the individual\nteacher of each domain with an exponential moving average of the student model. The\nframework further incorporates a zigzag learning method between dual teachers,\nfacilitating a gradual transition from the visible to thermal domains during\ntraining. We validate the superiority of our method through newly designed\nexperimental protocols with well-known thermal datasets, i.e., FLIR and KAIST.\nSource code is available at https://github.com/EdwardDo69/D3T .", + "In this paper, we introduce a new perspective for improving image restoration\nby removing degradation in the textual representations of a given degraded\nimage. Intuitively, restoration is much easier in the text modality than in the\nimage modality. For example, it can be easily conducted by removing\ndegradation-related words while keeping the content-aware words. Hence, we\ncombine the advantage of images in detailed description with that of text in\ndegradation removal to perform restoration. To address the cross-modal assistance, we propose to map the\ndegraded images into textual representations for removing the degradations, and\nthen convert the restored textual representations into a guidance image for\nassisting image restoration. In particular, we ingeniously embed an\nimage-to-text mapper and text restoration module into CLIP-equipped\ntext-to-image models to generate the guidance. Then, we adopt a simple\ncoarse-to-fine approach to dynamically inject multi-scale information from the\nguidance to image restoration networks. Extensive experiments are conducted on\nvarious image restoration tasks, including deblurring, dehazing, deraining, and\ndenoising, and all-in-one image restoration.", + "Extensive experiments are conducted on\nvarious image restoration tasks, including deblurring, dehazing, deraining, and\ndenoising, and all-in-one image restoration. The results showcase that our\nmethod outperforms state-of-the-art ones across all these tasks. The codes and\nmodels are available at \\url{https://github.com/mrluin/TextualDegRemoval}.", + "Recent advances in vision-language models like Stable Diffusion have shown\nremarkable power in creative image synthesis and editing. However, most existing\ntext-to-image editing methods encounter two obstacles: First, the text prompt\nneeds to be carefully crafted to achieve good results, which is not intuitive\nor user-friendly. Second, they are insensitive to local edits and can\nirreversibly affect non-edited regions, leaving obvious editing traces. To\ntackle these problems, we propose a Zero-shot instructiON-guided local image\nEditing approach, termed ZONE. We first convert the editing intent from the\nuser-provided instruction (e.g., \"make his tie blue\") into specific image\nediting regions through InstructPix2Pix. We then propose a Region-IoU scheme\nfor precise image layer extraction from an off-the-shelf segmentation model. We\nfurther develop an edge smoother based on FFT for seamless blending between the\nlayer and the image. Our method allows for arbitrary manipulation of a specific\nregion with a single instruction while preserving the rest. Extensive\nexperiments demonstrate that our ZONE achieves remarkable local editing results\nand user-friendliness, outperforming state-of-the-art methods. 
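The dual-teacher scheme described above for D3T rests on two standard ingredients: each domain teacher is an exponential moving average (EMA) of the student, and the "zigzag" schedule alternates which teacher absorbs the student at a given step. A minimal sketch of just that update logic is shown below; the detector stub, momentum value, and alternation rule are assumptions, and the pseudo-labeling and loss terms of the full framework are omitted.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999) -> None:
    """teacher <- momentum * teacher + (1 - momentum) * student (parameter-wise)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

# Hypothetical detector stub; the real framework uses a full object detector here.
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 8, 3, padding=1))
teacher_visible = copy.deepcopy(student)
teacher_thermal = copy.deepcopy(student)

for step in range(10):
    # ... the student would be trained here on pseudo-labels from the corresponding teacher ...
    if step % 2 == 0:                      # "zigzag": alternate which domain teacher absorbs the student
        ema_update(teacher_visible, student)
    else:
        ema_update(teacher_thermal, student)
```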
Code is\navailable at https://github.com/lsl001006/ZONE.", + "This study targets a critical aspect of multi-modal LLMs' (LLMs&VLMs)\ninference: explicit controllable text generation. Multi-modal LLMs empower\nmulti-modality understanding with the capability of semantic generation yet\nbring less explainability and heavier reliance on prompt contents due to their\nautoregressive generative nature. While manipulating prompt formats could\nimprove outputs, designing specific and precise prompts per task can be\nchallenging and ineffective. To tackle this issue, we introduce a novel\ninference method, Prompt Highlighter, which enables users to highlight specific\nprompt spans to interactively control the focus during generation. Motivated by\nthe classifier-free diffusion guidance, we form regular and unconditional\ncontext pairs based on highlighted tokens, demonstrating that the\nautoregressive generation in models can be guided in a classifier-free way.\nNotably, we find that, during inference, guiding the models with highlighted\ntokens through the attention weights leads to more desired outputs. Our\napproach is compatible with current LLMs and VLMs, achieving impressive\ncustomized generation results without training. Experiments confirm its\neffectiveness in focusing on input contexts and generating reliable content.", + "Our\napproach is compatible with current LLMs and VLMs, achieving impressive\ncustomized generation results without training. Experiments confirm its\neffectiveness in focusing on input contexts and generating reliable content.\nWithout tuning on LLaVA-v1.5, our method secured 70.7 in the MMBench test and\n1552.5 in MME-perception. The code is available at:\nhttps://github.com/dvlab-research/Prompt-Highlighter/", + "Few-shot semantic segmentation (FSS) has achieved great success on segmenting\nobjects of novel classes, supported by only a few annotated samples. However,\nexisting FSS methods often underperform in the presence of domain shifts,\nespecially when encountering new domain styles that are unseen during training.\nIt is suboptimal to directly adapt or generalize the entire model to new\ndomains in the few-shot scenario. Instead, our key idea is to adapt a small\nadapter for rectifying diverse target domain styles to the source domain.\nConsequently, the rectified target domain features can fittingly benefit from\nthe well-optimized source domain segmentation model, which is intently trained\non sufficient source domain data. Training domain-rectifying adapter requires\nsufficiently diverse target domains. We thus propose a novel local-global style\nperturbation method to simulate diverse potential target domains by\nperturbating the feature channel statistics of the individual images and\ncollective statistics of the entire source domain, respectively. Additionally,\nwe propose a cyclic domain alignment module to facilitate the adapter\neffectively rectifying domains using a reverse domain rectification\nsupervision.", + "Additionally,\nwe propose a cyclic domain alignment module to facilitate the adapter\neffectively rectifying domains using a reverse domain rectification\nsupervision. The adapter is trained to rectify the image features from diverse\nsynthesized target domains to align with the source domain. During testing on\ntarget domains, we start by rectifying the image features and then conduct\nfew-shot segmentation on the domain-rectified features. 
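The classifier-free view of autoregressive decoding used by Prompt Highlighter reduces, at each step, to combining two next-token distributions: one computed on the full, highlight-emphasized context and one on a context in which the highlighted span is suppressed. The sketch below shows only that logit combination with dummy inputs; the attention reweighting of highlighted tokens and the construction of the two contexts are not reproduced, and the guidance scale is an assumed value.

```python
import torch

def guided_logits(logits_cond: torch.Tensor,
                  logits_uncond: torch.Tensor,
                  guidance_scale: float = 1.5) -> torch.Tensor:
    """Classifier-free-style guidance on next-token logits:
    l = l_uncond + w * (l_cond - l_uncond)."""
    return logits_uncond + guidance_scale * (logits_cond - logits_uncond)

# Toy next-token distributions from two forward passes of the same language model:
# one on the highlight-emphasized context, one on the context with highlights masked out.
torch.manual_seed(0)
vocab = 32
logits_cond = torch.randn(vocab)
logits_uncond = torch.randn(vocab)

next_token = guided_logits(logits_cond, logits_uncond).softmax(-1).argmax()
print(int(next_token))
```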
Extensive experiments\ndemonstrate the effectiveness of our method, achieving promising results on\ncross-domain few-shot semantic segmentation tasks. Our code is available at\nhttps://github.com/Matt-Su/DR-Adapter.", + "The problem of self-calibration of two cameras from a given fundamental\nmatrix is one of the basic problems in geometric computer vision. Under the\nassumption of known principal points and square pixels, the well-known Bougnoux\nformula offers a means to compute the two unknown focal lengths. However, in\nmany practical situations, the formula yields inaccurate results due to\ncommonly occurring singularities. Moreover, the estimates are sensitive to\nnoise in the computed fundamental matrix and to the assumed positions of the\nprincipal points. In this paper, we therefore propose an efficient and robust\niterative method to estimate the focal lengths along with the principal points\nof the cameras given a fundamental matrix and priors for the estimated camera\nparameters. In addition, we study a computationally efficient check of models\ngenerated within RANSAC that improves the accuracy of the estimated models\nwhile reducing the total computational time. Extensive experiments on real and\nsynthetic data show that our iterative method brings significant improvements\nin terms of the accuracy of the estimated focal lengths over the Bougnoux\nformula and other state-of-the-art methods, even when relying on inaccurate\npriors.", + "This paper proposes a cross-modal distillation framework, PartDistill, which\ntransfers 2D knowledge from vision-language models (VLMs) to facilitate 3D\nshape part segmentation. PartDistill addresses three major challenges in this\ntask: the lack of 3D segmentation in invisible or undetected regions in the 2D\nprojections, inconsistent 2D predictions by VLMs, and the lack of knowledge\naccumulation across different 3D shapes. PartDistill consists of a teacher\nnetwork that uses a VLM to make 2D predictions and a student network that\nlearns from the 2D predictions while extracting geometrical features from\nmultiple 3D shapes to carry out 3D part segmentation. A bi-directional\ndistillation, including forward and backward distillations, is carried out\nwithin the framework, where the former forward distills the 2D predictions to\nthe student network, and the latter improves the quality of the 2D predictions,\nwhich subsequently enhances the final 3D segmentation. Moreover, PartDistill\ncan exploit generative models that facilitate effortless 3D shape creation for\ngenerating knowledge sources to be distilled.", + "Moreover, PartDistill\ncan exploit generative models that facilitate effortless 3D shape creation for\ngenerating knowledge sources to be distilled. Through extensive experiments,\nPartDistill boosts the existing methods with substantial margins on widely used\nShapeNetPart and PartNetE datasets, by more than 15% and 12% higher mIoU\nscores, respectively. The code for this work is available at\nhttps://github.com/ardianumam/PartDistill.", + "In the era where AI-generated content (AIGC) models can produce stunning and\nlifelike images, the lingering shadow of unauthorized reproductions and\nmalicious tampering poses imminent threats to copyright integrity and\ninformation security. Current image watermarking methods, while widely accepted\nfor safeguarding visual content, can only protect copyright and ensure\ntraceability. 
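The style-perturbation idea used to train the domain-rectifying adapter above can be illustrated with a feature-statistics jitter: re-normalize a feature map with its own channel mean and standard deviation, then denormalize with randomly perturbed statistics to simulate an unseen style. The sketch below covers only the image-level ("local") variant; the "global" variant, which perturbs collective source-domain statistics, would follow the same pattern, and the noise scale is an assumption.

```python
import torch

def perturb_channel_statistics(feat: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
    """Simulate unseen domain styles by jittering per-image channel statistics (AdaIN-style).
    feat: (B, C, H, W) feature maps."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6
    normalized = (feat - mu) / sigma

    # Local perturbation: jitter each image's own statistics to synthesize a new "style".
    new_mu = mu * (1.0 + noise_std * torch.randn_like(mu))
    new_sigma = sigma * (1.0 + noise_std * torch.randn_like(sigma))
    return normalized * new_sigma + new_mu

torch.manual_seed(0)
source_features = torch.randn(4, 16, 32, 32)
stylized = perturb_channel_statistics(source_features)
print(stylized.shape)  # torch.Size([4, 16, 32, 32])
```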
They fall short in localizing increasingly realistic image\ntampering, potentially leading to trust crises, privacy violations, and legal\ndisputes. To solve this challenge, we propose an innovative proactive forensics\nframework EditGuard, to unify copyright protection and tamper-agnostic\nlocalization, especially for AIGC-based editing methods. It can offer a\nmeticulous embedding of imperceptible watermarks and precise decoding of\ntampered areas and copyright information. Leveraging our observed fragility and\nlocality of image-into-image steganography, the realization of EditGuard can be\nconverted into a united image-bit steganography issue, thus completely\ndecoupling the training process from the tampering types.", + "Leveraging our observed fragility and\nlocality of image-into-image steganography, the realization of EditGuard can be\nconverted into a united image-bit steganography issue, thus completely\ndecoupling the training process from the tampering types. Extensive experiments\ndemonstrate that our EditGuard balances the tamper localization accuracy,\ncopyright recovery precision, and generalizability to various AIGC-based\ntampering methods, especially for image forgery that is difficult for the naked\neye to detect. The project page is available at\nhttps://xuanyuzhang21.github.io/project/editguard/.", + "Constructing photo-realistic Free-Viewpoint Videos (FVVs) of dynamic scenes\nfrom multi-view videos remains a challenging endeavor. Despite the remarkable\nadvancements achieved by current neural rendering techniques, these methods\ngenerally require complete video sequences for offline training and are not\ncapable of real-time rendering. To address these constraints, we introduce\n3DGStream, a method designed for efficient FVV streaming of real-world dynamic\nscenes. Our method achieves fast on-the-fly per-frame reconstruction within 12\nseconds and real-time rendering at 200 FPS. Specifically, we utilize 3D\nGaussians (3DGs) to represent the scene. Instead of the na\\\"ive approach of\ndirectly optimizing 3DGs per-frame, we employ a compact Neural Transformation\nCache (NTC) to model the translations and rotations of 3DGs, markedly reducing\nthe training time and storage required for each FVV frame. Furthermore, we\npropose an adaptive 3DG addition strategy to handle emerging objects in dynamic\nscenes. Experiments demonstrate that 3DGStream achieves competitive performance\nin terms of rendering speed, image quality, training time, and model storage\nwhen compared with state-of-the-art methods.", + "Existing text-to-image generative models reflect or even amplify societal\nbiases ingrained in their training data. This is especially concerning for\nhuman image generation where models are biased against certain demographic\ngroups. Existing attempts to rectify this issue are hindered by the inherent\nlimitations of the pre-trained models and fail to substantially improve\ndemographic diversity. In this work, we introduce Fair Retrieval Augmented\nGeneration (FairRAG), a novel framework that conditions pre-trained generative\nmodels on reference images retrieved from an external image database to improve\nfairness in human generation. FairRAG enables conditioning through a\nlightweight linear module that projects reference images into the textual\nspace. To enhance fairness, FairRAG applies simple-yet-effective debiasing\nstrategies, providing images from diverse demographic groups during the\ngenerative process. 
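The Neural Transformation Cache in 3DGStream amounts to a small network that maps each Gaussian's position to a per-frame rigid update (a translation and a rotation quaternion) instead of re-optimizing all Gaussian parameters. The sketch below is a stand-in for that idea only: it uses a plain frequency encoding and an assumed MLP size where the paper uses a more compact encoding, and the adaptive Gaussian-addition strategy is not shown.

```python
import torch
import torch.nn as nn

class TransformationCache(nn.Module):
    """Tiny stand-in for a Neural Transformation Cache: maps a Gaussian's center to a
    per-frame translation and rotation quaternion (the frequency encoding and sizes
    here are illustrative placeholders)."""

    def __init__(self, num_freqs: int = 4, hidden: int = 64):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 7))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        freqs = 2.0 ** torch.arange(self.num_freqs, device=x.device) * torch.pi
        angles = x[..., None] * freqs                              # (N, 3, F)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

    def forward(self, centers: torch.Tensor):
        out = self.mlp(self.encode(centers))
        translation = out[:, :3]
        quaternion = nn.functional.normalize(out[:, 3:], dim=-1)   # unit quaternion
        return translation, quaternion

centers = torch.rand(1024, 3)                    # centers of 3D Gaussians from the previous frame
t, q = TransformationCache()(centers)
print(t.shape, q.shape)                          # (1024, 3), (1024, 4)
```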
Extensive experiments demonstrate that FairRAG outperforms\nexisting methods in terms of demographic diversity, image-text alignment, and\nimage fidelity while incurring minimal computational overhead during inference.", + "Accurate and controllable image editing is a challenging task that has\nattracted significant attention recently. Notably, DragGAN is an interactive\npoint-based image editing framework that achieves impressive editing results\nwith pixel-level precision. However, due to its reliance on generative\nadversarial networks (GANs), its generality is limited by the capacity of\npretrained GAN models. In this work, we extend this editing framework to\ndiffusion models and propose a novel approach DragDiffusion. By harnessing\nlarge-scale pretrained diffusion models, we greatly enhance the applicability\nof interactive point-based editing on both real and diffusion-generated images.\nOur approach involves optimizing the diffusion latents to achieve precise\nspatial control. The supervision signal of this optimization process is from\nthe diffusion model's UNet features, which are known to contain rich semantic\nand geometric information. Moreover, we introduce two additional techniques,\nnamely LoRA fine-tuning and latent-MasaCtrl, to further preserve the identity\nof the original image. Lastly, we present a challenging benchmark dataset\ncalled DragBench -- the first benchmark to evaluate the performance of\ninteractive point-based image editing methods.", + "Moreover, we introduce two additional techniques,\nnamely LoRA fine-tuning and latent-MasaCtrl, to further preserve the identity\nof the original image. Lastly, we present a challenging benchmark dataset\ncalled DragBench -- the first benchmark to evaluate the performance of\ninteractive point-based image editing methods. Experiments across a wide range\nof challenging cases (e.g., images with multiple objects, diverse object\ncategories, various styles, etc.) demonstrate the versatility and generality of\nDragDiffusion. Code: https://github.com/Yujun-Shi/DragDiffusion.", + "We introduce FaceTalk, a novel generative approach designed for synthesizing\nhigh-fidelity 3D motion sequences of talking human heads from input audio\nsignal. To capture the expressive, detailed nature of human heads, including\nhair, ears, and finer-scale eye movements, we propose to couple speech signal\nwith the latent space of neural parametric head models to create high-fidelity,\ntemporally coherent motion sequences. We propose a new latent diffusion model\nfor this task, operating in the expression space of neural parametric head\nmodels, to synthesize audio-driven realistic head sequences. In the absence of\na dataset with corresponding NPHM expressions to audio, we optimize for these\ncorrespondences to produce a dataset of temporally-optimized NPHM expressions\nfit to audio-video recordings of people talking. To the best of our knowledge,\nthis is the first work to propose a generative approach for realistic and\nhigh-quality motion synthesis of volumetric human heads, representing a\nsignificant advancement in the field of audio-driven 3D animation. Notably, our\napproach stands out in its ability to generate plausible motion sequences that\ncan produce high-fidelity head animation coupled with the NPHM shape space.", + "Notably, our\napproach stands out in its ability to generate plausible motion sequences that\ncan produce high-fidelity head animation coupled with the NPHM shape space. 
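FairRAG's conditioning mechanism, as described above, is a lightweight linear projection that maps retrieved reference-image embeddings into the generator's textual conditioning space so they can be appended to the prompt tokens. The sketch below shows only that projection-and-concatenation step; the embedding dimensions, the number of retrieved references, and the frozen generator are placeholders.

```python
import torch
import torch.nn as nn

class ReferenceProjector(nn.Module):
    """Lightweight linear module that maps retrieved reference-image embeddings into the
    text-conditioning space of a frozen generator (dimensions here are illustrative)."""

    def __init__(self, image_dim: int = 768, text_dim: int = 1024):
        super().__init__()
        self.proj = nn.Linear(image_dim, text_dim)

    def forward(self, prompt_tokens: torch.Tensor, reference_embeddings: torch.Tensor) -> torch.Tensor:
        # prompt_tokens: (B, L, text_dim); reference_embeddings: (B, K, image_dim)
        reference_tokens = self.proj(reference_embeddings)
        return torch.cat([prompt_tokens, reference_tokens], dim=1)   # extended conditioning sequence

prompt_tokens = torch.randn(2, 77, 1024)          # e.g. text-encoder output for a prompt
references = torch.randn(2, 3, 768)               # K=3 images retrieved from a demographically diverse pool
conditioning = ReferenceProjector()(prompt_tokens, references)
print(conditioning.shape)                         # torch.Size([2, 80, 1024])
```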
Our\nexperimental results substantiate the effectiveness of FaceTalk, consistently\nachieving superior and visually natural motion, encompassing diverse facial\nexpressions and styles, outperforming existing methods by 75% in perceptual\nuser study evaluation.", + "Reconstructing human-object interaction in 3D from a single RGB image is a\nchallenging task and existing data driven methods do not generalize beyond the\nobjects present in the carefully curated 3D interaction datasets. Capturing\nlarge-scale real data to learn strong interaction and 3D shape priors is very\nexpensive due to the combinatorial nature of human-object interactions. In this\npaper, we propose ProciGen (Procedural interaction Generation), a method to\nprocedurally generate datasets with both, plausible interaction and diverse\nobject variation. We generate 1M+ human-object interaction pairs in 3D and\nleverage this large-scale data to train our HDM (Hierarchical Diffusion Model),\na novel method to reconstruct interacting human and unseen objects, without any\ntemplates. Our HDM is an image-conditioned diffusion model that learns both\nrealistic interaction and highly accurate human and object shapes. Experiments\nshow that our HDM trained with ProciGen significantly outperforms prior methods\nthat requires template meshes and that our dataset allows training methods with\nstrong generalization ability to unseen object instances. Our code and data are\nreleased.", + "Video anomaly detection (VAD) with weak supervision has achieved remarkable\nperformance in utilizing video-level labels to discriminate whether a video\nframe is normal or abnormal. However, current approaches are inherently limited\nto a closed-set setting and may struggle in open-world applications where there\ncan be anomaly categories in the test data unseen during training. A few recent\nstudies attempt to tackle a more realistic setting, open-set VAD, which aims to\ndetect unseen anomalies given seen anomalies and normal videos. However, such a\nsetting focuses on predicting frame anomaly scores, having no ability to\nrecognize the specific categories of anomalies, despite the fact that this\nability is essential for building more informed video surveillance systems.\nThis paper takes a step further and explores open-vocabulary video anomaly\ndetection (OVVAD), in which we aim to leverage pre-trained large models to\ndetect and categorize seen and unseen anomalies. To this end, we propose a\nmodel that decouples OVVAD into two mutually complementary tasks --\nclass-agnostic detection and class-specific classification -- and jointly\noptimizes both tasks.", + "To this end, we propose a\nmodel that decouples OVVAD into two mutually complementary tasks --\nclass-agnostic detection and class-specific classification -- and jointly\noptimizes both tasks. Particularly, we devise a semantic knowledge injection\nmodule to introduce semantic knowledge from large language models for the\ndetection task, and design a novel anomaly synthesis module to generate pseudo\nunseen anomaly videos with the help of large vision generation models for the\nclassification task. These semantic knowledge and synthesis anomalies\nsubstantially extend our model's capability in detecting and categorizing a\nvariety of seen and unseen anomalies. 
Extensive experiments on three\nwidely-used benchmarks demonstrate our model achieves state-of-the-art\nperformance on OVVAD task.", + "In recent years, text-image joint pre-training techniques have shown\npromising results in various tasks. However, in Optical Character Recognition\n(OCR) tasks, aligning text instances with their corresponding text regions in\nimages poses a challenge, as it requires effective alignment between text and\nOCR-Text (referring to the text in images as OCR-Text to distinguish from the\ntext in natural language) rather than a holistic understanding of the overall\nimage content. In this paper, we propose a new pre-training method called\nOCR-Text Destylization Modeling (ODM) that transfers diverse styles of text\nfound in images to a uniform style based on the text prompt. With ODM, we\nachieve better alignment between text and OCR-Text and enable pre-trained\nmodels to adapt to the complex and diverse styles of scene text detection and\nspotting tasks. Additionally, we have designed a new labeling generation method\nspecifically for ODM and combined it with our proposed Text-Controller module\nto address the challenge of annotation costs in OCR tasks, allowing a larger\namount of unlabeled data to participate in pre-training.", + "Additionally, we have designed a new labeling generation method\nspecifically for ODM and combined it with our proposed Text-Controller module\nto address the challenge of annotation costs in OCR tasks, allowing a larger\namount of unlabeled data to participate in pre-training. Extensive experiments\non multiple public datasets demonstrate that our method significantly improves\nperformance and outperforms current pre-training methods in scene text\ndetection and spotting tasks. Code is available at\nhttps://github.com/PriNing/ODM.", + "Epistemic uncertainty quantification (UQ) identifies where models lack\nknowledge. Traditional UQ methods, often based on Bayesian neural networks, are\nnot suitable for pre-trained non-Bayesian models. Our study addresses\nquantifying epistemic uncertainty for any pre-trained model, which does not\nneed the original training data or model modifications and can ensure broad\napplicability regardless of network architectures or training techniques.\nSpecifically, we propose a gradient-based approach to assess epistemic\nuncertainty, analyzing the gradients of outputs relative to model parameters,\nand thereby indicating necessary model adjustments to accurately represent the\ninputs. We first explore theoretical guarantees of gradient-based methods for\nepistemic UQ, questioning the view that this uncertainty is only calculable\nthrough differences between multiple models. We further improve gradient-driven\nUQ by using class-specific weights for integrating gradients and emphasizing\ndistinct contributions from neural network layers. Additionally, we enhance UQ\naccuracy by combining gradient and perturbation methods to refine the\ngradients. We evaluate our approach on out-of-distribution detection,\nuncertainty calibration, and active learning, demonstrating its superiority\nover current state-of-the-art UQ methods for pre-trained models.", + "Image diffusion models have been utilized in various tasks, such as\ntext-to-image generation and controllable image synthesis. Recent research has\nintroduced tuning methods that make subtle adjustments to the original models,\nyielding promising results in specific adaptations of foundational generative\ndiffusion models. 
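The gradient-based uncertainty idea above can be reduced to a single score per input: take the model's own prediction as a pseudo-label, backpropagate the resulting loss, and use the norm of the parameter gradients as a proxy for how much the model would have to change to accommodate that input. The sketch below implements only this basic score on a toy classifier; the class-specific gradient weighting, layer-wise emphasis, and perturbation-based refinement from the paper are not included.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_uncertainty(model: nn.Module, x: torch.Tensor) -> float:
    """Epistemic-uncertainty proxy for one input: norm of the gradient of the
    self-labeled negative log-likelihood w.r.t. the model parameters."""
    model.zero_grad()
    logits = model(x)
    pseudo_label = logits.argmax(dim=-1)            # use the model's own prediction as the label
    loss = F.cross_entropy(logits, pseudo_label)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()

torch.manual_seed(0)
classifier = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))  # stands in for any pre-trained net
in_distribution = torch.randn(1, 32)
far_from_training = 10.0 * torch.randn(1, 32)
print(gradient_uncertainty(classifier, in_distribution),
      gradient_uncertainty(classifier, far_from_training))
```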
Rather than modifying the main backbone of the diffusion\nmodel, we delve into the role of skip connection in U-Net and reveal that\nhierarchical features aggregating long-distance information across encoder and\ndecoder make a significant impact on the content and quality of image\ngeneration. Based on the observation, we propose an efficient generative tuning\nframework, dubbed SCEdit, which integrates and edits Skip Connection using a\nlightweight tuning module named SC-Tuner. Furthermore, the proposed framework\nallows for straightforward extension to controllable image synthesis by\ninjecting different conditions with Controllable SC-Tuner, simplifying and\nunifying the network design for multi-condition inputs. Our SCEdit\nsubstantially reduces training parameters, memory usage, and computational\nexpense due to its lightweight tuners, with backward propagation only passing\nto the decoder blocks.", + "Our SCEdit\nsubstantially reduces training parameters, memory usage, and computational\nexpense due to its lightweight tuners, with backward propagation only passing\nto the decoder blocks. Extensive experiments conducted on text-to-image\ngeneration and controllable image synthesis tasks demonstrate the superiority\nof our method in terms of efficiency and performance. Project page:\n\\url{https://scedit.github.io/}", + "Monocular 3D object detection has attracted widespread attention due to its\npotential to accurately obtain object 3D localization from a single image at a\nlow cost. Depth estimation is an essential but challenging subtask of monocular\n3D object detection due to the ill-posedness of 2D to 3D mapping. Many methods\nexplore multiple local depth clues such as object heights and keypoints and\nthen formulate the object depth estimation as an ensemble of multiple depth\npredictions to mitigate the insufficiency of single-depth information. However,\nthe errors of existing multiple depths tend to have the same sign, which\nhinders them from neutralizing each other and limits the overall accuracy of\ncombined depth. To alleviate this problem, we propose to increase the\ncomplementarity of depths with two novel designs. First, we add a new depth\nprediction branch named complementary depth that utilizes global and efficient\ndepth clues from the entire image rather than the local clues to reduce the\ncorrelation of depth predictions. Second, we propose to fully exploit the\ngeometric relations between multiple depth clues to achieve complementarity in\nform. Benefiting from these designs, our method achieves higher\ncomplementarity.", + "Second, we propose to fully exploit the\ngeometric relations between multiple depth clues to achieve complementarity in\nform. Benefiting from these designs, our method achieves higher\ncomplementarity. Experiments on the KITTI benchmark demonstrate that our method\nachieves state-of-the-art performance without introducing extra data. In\naddition, complementary depth can also be a lightweight and plug-and-play\nmodule to boost multiple existing monocular 3d object detectors. Code is\navailable at https://github.com/elvintanhust/MonoCD.", + "Score distillation sampling (SDS) and its variants have greatly boosted the\ndevelopment of text-to-3D generation, but are vulnerable to geometry collapse\nand poor textures yet. 
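The SC-Tuner idea in SCEdit is easiest to picture as a small residual module inserted on a frozen U-Net skip connection, optionally fed with an extra condition map for controllable synthesis. The sketch below is a minimal low-rank version of such a tuner, zero-initialized so the edited model starts out identical to the original; the channel sizes, rank, and the way a condition would be injected are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class SCTuner(nn.Module):
    """Minimal skip-connection tuner: a low-rank residual edit of a frozen U-Net skip feature."""

    def __init__(self, channels: int, rank: int = 16):
        super().__init__()
        self.down = nn.Conv2d(channels, rank, kernel_size=1)
        self.up = nn.Conv2d(rank, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)           # start as identity so tuning begins from the original model
        nn.init.zeros_(self.up.bias)

    def forward(self, skip, condition=None):
        # A controllable variant would inject a condition feature map here.
        h = skip if condition is None else skip + condition
        return skip + self.up(torch.relu(self.down(h)))

skip_feature = torch.randn(1, 320, 32, 32)       # a skip tensor from a frozen diffusion U-Net encoder
tuned = SCTuner(320)(skip_feature)               # this is what gets passed to the decoder block
print(torch.allclose(tuned, skip_feature))       # True at initialization (zero-initialized residual)
```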
To solve this issue, we first deeply analyze the SDS and\nfind that its distillation sampling process indeed corresponds to the\ntrajectory sampling of a stochastic differential equation (SDE): SDS samples\nalong an SDE trajectory to yield a less noisy sample which then serves as a\nguidance to optimize a 3D model. However, the randomness in SDE sampling often\nleads to a diverse and unpredictable sample which is not always less noisy, and\nthus is not a consistently correct guidance, explaining the vulnerability of\nSDS. Since for any SDE, there always exists an ordinary differential equation\n(ODE) whose trajectory sampling can deterministically and consistently converge\nto the desired target point as the SDE, we propose a novel and effective\n\"Consistent3D\" method that explores the ODE deterministic sampling prior for\ntext-to-3D generation.", + "Specifically, at each training iteration, given a\nrendered image by a 3D model, we first estimate its desired 3D score function\nby a pre-trained 2D diffusion model, and build an ODE for trajectory sampling.\nNext, we design a consistency distillation sampling loss which samples along\nthe ODE trajectory to generate two adjacent samples and uses the less noisy\nsample to guide another more noisy one for distilling the deterministic prior\ninto the 3D model. Experimental results show the efficacy of our Consistent3D\nin generating high-fidelity and diverse 3D objects and large-scale scenes, as\nshown in Fig. 1. The codes are available at\nhttps://github.com/sail-sg/Consistent3D.", + "Robot manipulation relies on accurately predicting contact points and\nend-effector directions to ensure successful operation. However, learning-based\nrobot manipulation, trained on a limited category within a simulator, often\nstruggles to achieve generalizability, especially when confronted with\nextensive categories. Therefore, we introduce an innovative approach for robot\nmanipulation that leverages the robust reasoning capabilities of Multimodal\nLarge Language Models (MLLMs) to enhance the stability and generalization of\nmanipulation. By fine-tuning the injected adapters, we preserve the inherent\ncommon sense and reasoning ability of the MLLMs while equipping them with the\nability for manipulation. The fundamental insight lies in the introduced\nfine-tuning paradigm, encompassing object category understanding, affordance\nprior reasoning, and object-centric pose prediction to stimulate the reasoning\nability of MLLM in manipulation. During inference, our approach utilizes an RGB\nimage and text prompt to predict the end effector's pose in chain of thoughts.\nAfter the initial contact is established, an active impedance adaptation policy\nis introduced to plan the upcoming waypoints in a closed-loop manner.", + "During inference, our approach utilizes an RGB\nimage and text prompt to predict the end effector's pose in chain of thoughts.\nAfter the initial contact is established, an active impedance adaptation policy\nis introduced to plan the upcoming waypoints in a closed-loop manner. Moreover,\nin real world, we design a test-time adaptation (TTA) strategy for manipulation\nto enable the model better adapt to the current real-world scene configuration.\nExperiments in simulator and real-world show the promising performance of\nManipLLM. 
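The contrast Consistent3D draws between stochastic SDE sampling and deterministic ODE sampling can be made concrete with a DDIM-style update (eta = 0), which is the standard deterministic step along a diffusion model's trajectory. The sketch below only illustrates producing the "less noisy" sample from a noisier one with a placeholder noise prediction; the schematic L2 term at the end is not the paper's consistency distillation sampling loss, merely an indication of how such a sample could act as a guidance target.

```python
import torch

def ddim_step(x_t: torch.Tensor, eps_hat: torch.Tensor,
              alpha_bar_t: float, alpha_bar_prev: float) -> torch.Tensor:
    """One deterministic DDIM update x_t -> x_{t-1} (eta = 0), i.e. a discrete step along the
    ODE trajectory rather than a stochastic SDE sample."""
    x0_hat = (x_t - (1 - alpha_bar_t) ** 0.5 * eps_hat) / alpha_bar_t ** 0.5
    return alpha_bar_prev ** 0.5 * x0_hat + (1 - alpha_bar_prev) ** 0.5 * eps_hat

torch.manual_seed(0)
x_t = torch.randn(1, 4, 64, 64)                   # noisy latent of a rendered view
eps_hat = 0.1 * torch.randn_like(x_t)             # stand-in for a pre-trained 2D diffusion model's prediction
x_less_noisy = ddim_step(x_t, eps_hat, alpha_bar_t=0.5, alpha_bar_prev=0.7)

# Schematic guidance signal: the deterministic, less-noisy sample serves as a fixed target.
guidance_loss = (x_t - x_less_noisy.detach()).pow(2).mean()
print(float(guidance_loss))
```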
More details and demonstrations can be found at\nhttps://sites.google.com/view/manipllm.", + "Federated Class-Incremental Learning (FCIL) is an underexplored yet pivotal\nissue, involving the dynamic addition of new classes in the context of\nfederated learning. In this field, Data-Free Knowledge Transfer (DFKT) plays a\ncrucial role in addressing catastrophic forgetting and data privacy problems.\nHowever, prior approaches lack the crucial synergy between DFKT and the model\ntraining phases, causing DFKT to encounter difficulties in generating\nhigh-quality data from a non-anchored latent space of the old task model. In\nthis paper, we introduce LANDER (Label Text Centered Data-Free Knowledge\nTransfer) to address this issue by utilizing label text embeddings (LTE)\nproduced by pretrained language models. Specifically, during the model training\nphase, our approach treats LTE as anchor points and constrains the feature\nembeddings of corresponding training samples around them, enriching the\nsurrounding area with more meaningful information. In the DFKT phase, by using\nthese LTE anchors, LANDER can synthesize more meaningful samples, thereby\neffectively addressing the forgetting problem.", + "In the DFKT phase, by using\nthese LTE anchors, LANDER can synthesize more meaningful samples, thereby\neffectively addressing the forgetting problem. Additionally, instead of tightly\nconstraining embeddings toward the anchor, the Bounding Loss is introduced to\nencourage sample embeddings to remain flexible within a defined radius. This\napproach preserves the natural differences in sample embeddings and mitigates\nthe embedding overlap caused by heterogeneous federated settings. Extensive\nexperiments conducted on CIFAR100, Tiny-ImageNet, and ImageNet demonstrate that\nLANDER significantly outperforms previous methods and achieves state-of-the-art\nperformance in FCIL. The code is available at\nhttps://github.com/tmtuan1307/lander.", + "We present an approach for analyzing grouping information contained within a\nneural network's activations, permitting extraction of spatial layout and\nsemantic segmentation from the behavior of large pre-trained vision models.\nUnlike prior work, our method conducts a wholistic analysis of a network's\nactivation state, leveraging features from all layers and obviating the need to\nguess which part of the model contains relevant information. Motivated by\nclassic spectral clustering, we formulate this analysis in terms of an\noptimization objective involving a set of affinity matrices, each formed by\ncomparing features within a different layer. Solving this optimization problem\nusing gradient descent allows our technique to scale from single images to\ndataset-level analysis, including, in the latter, both intra- and inter-image\nrelationships. Analyzing a pre-trained generative transformer provides insight\ninto the computational strategy learned by such models. Equating affinity with\nkey-query similarity across attention layers yields eigenvectors encoding scene\nspatial layout, whereas defining affinity by value vector similarity yields\neigenvectors encoding object identity.", + "Analyzing a pre-trained generative transformer provides insight\ninto the computational strategy learned by such models. Equating affinity with\nkey-query similarity across attention layers yields eigenvectors encoding scene\nspatial layout, whereas defining affinity by value vector similarity yields\neigenvectors encoding object identity. 
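The "where"/"what" observation above corresponds to a classic spectral-clustering step: form an affinity matrix from token features and read off its leading eigenvectors. The sketch below does this for a single layer with random placeholder queries, keys, and values, using key-query similarity for layout-like modes and value similarity for identity-like modes; the paper instead optimizes a joint objective over affinities from all layers by gradient descent, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def leading_eigenvectors(affinity: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Top-k eigenvectors of a symmetric affinity matrix (classic spectral-clustering step)."""
    affinity = 0.5 * (affinity + affinity.T)           # symmetrize
    eigvals, eigvecs = torch.linalg.eigh(affinity)     # eigenvalues in ascending order
    return eigvecs[:, -k:]

torch.manual_seed(0)
tokens, dim = 196, 64                                  # e.g. a 14x14 grid of patch tokens
queries, keys, values = (torch.randn(tokens, dim) for _ in range(3))

layout_affinity = torch.softmax(queries @ keys.T / dim ** 0.5, dim=-1)          # key-query similarity
identity_affinity = F.cosine_similarity(values[:, None, :], values[None, :, :], dim=-1)  # value similarity

layout_modes = leading_eigenvectors(layout_affinity)       # ~ scene spatial layout ("where")
identity_modes = leading_eigenvectors(identity_affinity)   # ~ object identity ("what")
print(layout_modes.shape, identity_modes.shape)            # (196, 3) each
```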
This result suggests that key and query\nvectors coordinate attentional information flow according to spatial proximity\n(a `where' pathway), while value vectors refine a semantic category\nrepresentation (a `what' pathway).", + "Large Multimodal Models (LMMs) extend Large Language Models to the vision\ndomain. Initial LMMs used holistic images and text prompts to generate\nungrounded textual responses. Recently, region-level LMMs have been used to\ngenerate visually grounded responses. However, they are limited to only\nreferring to a single object category at a time, require users to specify the\nregions, or cannot offer dense pixel-wise object grounding. In this work, we\npresent Grounding LMM (GLaMM), the first model that can generate natural\nlanguage responses seamlessly intertwined with corresponding object\nsegmentation masks. GLaMM not only grounds objects appearing in the\nconversations but is flexible enough to accept both textual and optional visual\nprompts (region of interest) as input. This empowers users to interact with the\nmodel at various levels of granularity, both in textual and visual domains. Due\nto the lack of standard benchmarks for the novel setting of visually Grounded\nConversation Generation (GCG), we introduce a comprehensive evaluation protocol\nwith our curated grounded conversations. Our proposed GCG task requires densely\ngrounded concepts in natural scenes at a large-scale.", + "Due\nto the lack of standard benchmarks for the novel setting of visually Grounded\nConversation Generation (GCG), we introduce a comprehensive evaluation protocol\nwith our curated grounded conversations. Our proposed GCG task requires densely\ngrounded concepts in natural scenes at a large-scale. To this end, we propose a\ndensely annotated Grounding-anything Dataset (GranD) using our proposed\nautomated annotation pipeline that encompasses 7.5M unique concepts grounded in\na total of 810M regions available with segmentation masks. Besides GCG, GLaMM\nalso performs effectively on several downstream tasks, e.g., referring\nexpression segmentation, image and region-level captioning and vision-language\nconversations.", + "Concept Bottleneck Models (CBMs) map the black-box visual representations\nextracted by deep neural networks onto a set of interpretable concepts and use\nthe concepts to make predictions, enhancing the transparency of the\ndecision-making process. Multimodal pre-trained models can match visual\nrepresentations with textual concept embeddings, allowing for obtaining the\ninterpretable concept bottleneck without the expertise concept annotations.\nRecent research has focused on the concept bank establishment and the\nhigh-quality concept selection. However, it is challenging to construct a\ncomprehensive concept bank through humans or large language models, which\nseverely limits the performance of CBMs. In this work, we propose the\nIncremental Residual Concept Bottleneck Model (Res-CBM) to address the\nchallenge of concept completeness. Specifically, the residual concept\nbottleneck model employs a set of optimizable vectors to complete missing\nconcepts, then the incremental concept discovery module converts the\ncomplemented vectors with unclear meanings into potential concepts in the\ncandidate concept bank. 
Our approach can be applied to any user-defined concept\nbank, as a post-hoc processing method to enhance the performance of any CBMs.", + "Our approach can be applied to any user-defined concept\nbank, as a post-hoc processing method to enhance the performance of any CBMs.\nFurthermore, to measure the descriptive efficiency of CBMs, the Concept\nUtilization Efficiency (CUE) metric is proposed. Experiments show that the\nRes-CBM outperforms the current state-of-the-art methods in terms of both\naccuracy and efficiency and achieves comparable performance to black-box models\nacross multiple datasets.", + "We propose Lodge, a network capable of generating extremely long dance\nsequences conditioned on given music. We design Lodge as a two-stage coarse to\nfine diffusion architecture, and propose the characteristic dance primitives\nthat possess significant expressiveness as intermediate representations between\ntwo diffusion models. The first stage is global diffusion, which focuses on\ncomprehending the coarse-level music-dance correlation and production\ncharacteristic dance primitives. In contrast, the second-stage is the local\ndiffusion, which parallelly generates detailed motion sequences under the\nguidance of the dance primitives and choreographic rules. In addition, we\npropose a Foot Refine Block to optimize the contact between the feet and the\nground, enhancing the physical realism of the motion. Our approach can\nparallelly generate dance sequences of extremely long length, striking a\nbalance between global choreographic patterns and local motion quality and\nexpressiveness. Extensive experiments validate the efficacy of our method.", + "Diffusion models have shown remarkable results for image generation, editing\nand inpainting. Recent works explore diffusion models for 3D shape generation\nwith neural implicit functions, i.e., signed distance function and occupancy\nfunction. However, they are limited to shapes with closed surfaces, which\nprevents them from generating diverse 3D real-world contents containing open\nsurfaces. In this work, we present UDiFF, a 3D diffusion model for unsigned\ndistance fields (UDFs) which is capable to generate textured 3D shapes with\nopen surfaces from text conditions or unconditionally. Our key idea is to\ngenerate UDFs in spatial-frequency domain with an optimal wavelet\ntransformation, which produces a compact representation space for UDF\ngeneration. Specifically, instead of selecting an appropriate wavelet\ntransformation which requires expensive manual efforts and still leads to large\ninformation loss, we propose a data-driven approach to learn the optimal\nwavelet transformation for UDFs. We evaluate UDiFF to show our advantages by\nnumerical and visual comparisons with the latest methods on widely used\nbenchmarks. Page: https://weiqi-zhang.github.io/UDiFF.", + "Active Speaker Detection (ASD) aims to identify who is speaking in each frame\nof a video. ASD reasons from audio and visual information from two contexts:\nlong-term intra-speaker context and short-term inter-speaker context. Long-term\nintra-speaker context models the temporal dependencies of the same speaker,\nwhile short-term inter-speaker context models the interactions of speakers in\nthe same scene. These two contexts are complementary to each other and can help\ninfer the active speaker. 
Motivated by these observations, we propose LoCoNet,\na simple yet effective Long-Short Context Network that models the long-term\nintra-speaker context and short-term inter-speaker context. We use\nself-attention to model long-term intra-speaker context due to its\neffectiveness in modeling long-range dependencies, and convolutional blocks\nthat capture local patterns to model short-term inter-speaker context.", + "We use\nself-attention to model long-term intra-speaker context due to its\neffectiveness in modeling long-range dependencies, and convolutional blocks\nthat capture local patterns to model short-term inter-speaker context.\nExtensive experiments show that LoCoNet achieves state-of-the-art performance\non multiple datasets, achieving an mAP of 95.2%(+1.1%) on AVA-ActiveSpeaker,\n68.1%(+22%) on Columbia dataset, 97.2%(+2.8%) on Talkies dataset and\n59.7%(+8.0%) on Ego4D dataset. Moreover, in challenging cases where multiple\nspeakers are present, or face of active speaker is much smaller than other\nfaces in the same scene, LoCoNet outperforms previous state-of-the-art methods\nby 3.4% on the AVA-ActiveSpeaker dataset. The code will be released at\nhttps://github.com/SJTUwxz/LoCoNet_ASD.", + "Deepfake detection faces a critical generalization hurdle, with performance\ndeteriorating when there is a mismatch between the distributions of training\nand testing data. A broadly received explanation is the tendency of these\ndetectors to be overfitted to forgery-specific artifacts, rather than learning\nfeatures that are widely applicable across various forgeries. To address this\nissue, we propose a simple yet effective detector called LSDA\n(\\underline{L}atent \\underline{S}pace \\underline{D}ata\n\\underline{A}ugmentation), which is based on a heuristic idea: representations\nwith a wider variety of forgeries should be able to learn a more generalizable\ndecision boundary, thereby mitigating the overfitting of method-specific\nfeatures (see Fig.~\\ref{fig:toy}). Following this idea, we propose to enlarge\nthe forgery space by constructing and simulating variations within and across\nforgery features in the latent space. This approach encompasses the acquisition\nof enriched, domain-specific features and the facilitation of smoother\ntransitions between different forgery types, effectively bridging domain gaps.", + "This approach encompasses the acquisition\nof enriched, domain-specific features and the facilitation of smoother\ntransitions between different forgery types, effectively bridging domain gaps.\nOur approach culminates in refining a binary classifier that leverages the\ndistilled knowledge from the enhanced features, striving for a generalizable\ndeepfake detector. Comprehensive experiments show that our proposed method is\nsurprisingly effective and transcends state-of-the-art detectors across several\nwidely used benchmarks.", + "Recent significant advances in text-to-image models unlock the possibility of\ntraining vision systems using synthetic images, potentially overcoming the\ndifficulty of collecting curated data at scale. It is unclear, however, how\nthese models behave at scale, as more synthetic data is added to the training\nset. 
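The long-short decomposition in LoCoNet pairs two standard operators: self-attention along the time axis for long-term intra-speaker context, and a small convolution across candidate speakers for short-term inter-speaker context. The block below is a toy version of that pairing on features shaped (batch, speakers, time, dim); the axis layout, head count, and kernel size are assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class LongShortContext(nn.Module):
    """Toy long-short context block: self-attention along time models long-term intra-speaker
    context; a small convolution across speakers models short-term inter-speaker context."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.speaker_conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, speakers, time, dim) audio-visual features per candidate speaker
        b, s, t, d = feats.shape
        x = feats.reshape(b * s, t, d)
        x = x + self.temporal_attn(x, x, x, need_weights=False)[0]   # intra-speaker, long-range
        x = x.reshape(b, s, t, d).permute(0, 2, 3, 1).reshape(b * t, d, s)
        x = x + self.speaker_conv(x)                                 # inter-speaker, local
        return x.reshape(b, t, d, s).permute(0, 3, 1, 2)

out = LongShortContext()(torch.randn(2, 3, 50, 128))
print(out.shape)   # torch.Size([2, 3, 50, 128])
```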
In this paper we study the scaling laws of synthetic images generated by\nstate of the art text-to-image models, for the training of supervised models:\nimage classifiers with label supervision, and CLIP with language supervision.\nWe identify several factors, including text prompts, classifier-free guidance\nscale, and types of text-to-image models, that significantly affect scaling\nbehavior. After tuning these factors, we observe that synthetic images\ndemonstrate a scaling trend similar to, but slightly less effective than, real\nimages in CLIP training, while they significantly underperform in scaling when\ntraining supervised image classifiers. Our analysis indicates that the main\nreason for this underperformance is the inability of off-the-shelf\ntext-to-image models to generate certain concepts, a limitation that\nsignificantly impairs the training of image classifiers.", + "Our analysis indicates that the main\nreason for this underperformance is the inability of off-the-shelf\ntext-to-image models to generate certain concepts, a limitation that\nsignificantly impairs the training of image classifiers. Our findings also\nsuggest that scaling synthetic data can be particularly effective in scenarios\nsuch as: (1) when there is a limited supply of real images for a supervised\nproblem (e.g., fewer than 0.5 million images in ImageNet), (2) when the\nevaluation dataset diverges significantly from the training data, indicating\nthe out-of-distribution scenario, or (3) when synthetic data is used in\nconjunction with real images, as demonstrated in the training of CLIP models.", + "The rapid advancement of deep learning models often attributes to their\nability to leverage massive training data. In contrast, such privilege has not\nyet fully benefited 3D deep learning, mainly due to the limited availability of\nlarge-scale 3D datasets. Merging multiple available data sources and letting\nthem collaboratively train a single model is a potential solution. However, due\nto the large domain gap between 3D point cloud datasets, such mixed supervision\ncould adversely affect the model's performance and lead to degenerated\nperformance (i.e., negative transfer) compared to single-dataset training. In\nview of this challenge, we introduce Point Prompt Training (PPT), a novel\nframework for multi-dataset synergistic learning in the context of 3D\nrepresentation learning that supports multiple pre-training paradigms. Based on\nthis framework, we propose Prompt-driven Normalization, which adapts the model\nto different datasets with domain-specific prompts and Language-guided\nCategorical Alignment that decently unifies the multiple-dataset label spaces\nby leveraging the relationship between label text. Extensive experiments verify\nthat PPT can overcome the negative transfer associated with synergistic\nlearning and produce generalizable representations.", + "Extensive experiments verify\nthat PPT can overcome the negative transfer associated with synergistic\nlearning and produce generalizable representations. Notably, it achieves\nstate-of-the-art performance on each dataset using a single weight-shared model\nwith supervised multi-dataset training. 
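Prompt-driven Normalization, as described above for PPT, can be pictured as a shared normalization layer whose affine parameters are generated from a learnable, dataset-specific prompt, letting one weight-shared backbone adapt to each source dataset. The sketch below is a minimal form of that idea with assumed dimensions and a LayerNorm backbone; the language-guided categorical alignment component is not shown.

```python
import torch
import torch.nn as nn

class PromptDrivenNorm(nn.Module):
    """Toy prompt-driven normalization: a shared LayerNorm whose scale and shift are generated
    from a learnable, dataset-specific prompt (one prompt vector per source dataset)."""

    def __init__(self, dim: int, num_datasets: int, prompt_dim: int = 32):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.prompts = nn.Embedding(num_datasets, prompt_dim)
        self.to_scale_shift = nn.Linear(prompt_dim, 2 * dim)

    def forward(self, x: torch.Tensor, dataset_id: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) point features; dataset_id: (B,) index of each sample's source dataset
        scale, shift = self.to_scale_shift(self.prompts(dataset_id)).chunk(2, dim=-1)
        return self.norm(x) * (1.0 + scale[:, None, :]) + shift[:, None, :]

features = torch.randn(4, 1024, 96)                     # features from a weight-shared 3D backbone
dataset_id = torch.tensor([0, 0, 1, 2])                 # e.g. indices for three different source datasets
print(PromptDrivenNorm(96, num_datasets=3)(features, dataset_id).shape)
```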
Moreover, when served as a pre-training\nframework, it outperforms other pre-training approaches regarding\nrepresentation quality and attains remarkable state-of-the-art performance\nacross over ten diverse downstream tasks spanning both indoor and outdoor 3D\nscenarios.", + "Convolution neural network is successful in pervasive vision tasks, including\nlabel distribution learning, which usually takes the form of learning an\ninjection from the non-linear visual features to the well-defined labels.\nHowever, how the discrepancy between features is mapped to the label\ndiscrepancy is ambient, and its correctness is not guaranteed.To address these\nproblems, we study the mathematical connection between feature and its label,\npresenting a general and simple framework for label distribution learning. We\npropose a so-called Triangular Distribution Transform (TDT) to build an\ninjective function between feature and label, guaranteeing that any symmetric\nfeature discrepancy linearly reflects the difference between labels. The\nproposed TDT can be used as a plug-in in mainstream backbone networks to\naddress different label distribution learning tasks. Experiments on Facial Age\nRecognition, Illumination Chromaticity Estimation, and Aesthetics assessment\nshow that TDT achieves on-par or better results than the prior arts.", + "Today, state-of-the-art deep neural networks that process event-camera data\nfirst convert a temporal window of events into dense, grid-like input\nrepresentations. As such, they exhibit poor generalizability when deployed at\nhigher inference frequencies (i.e., smaller temporal windows) than the ones\nthey were trained on. We address this challenge by introducing state-space\nmodels (SSMs) with learnable timescale parameters to event-based vision. This\ndesign adapts to varying frequencies without the need to retrain the network at\ndifferent frequencies. Additionally, we investigate two strategies to\ncounteract aliasing effects when deploying the model at higher frequencies. We\ncomprehensively evaluate our approach against existing methods based on RNN and\nTransformer architectures across various benchmarks, including Gen1 and 1 Mpx\nevent camera datasets. Our results demonstrate that SSM-based models train 33%\nfaster and also exhibit minimal performance degradation when tested at higher\nfrequencies than the training input. Traditional RNN and Transformer models\nexhibit performance drops of more than 20 mAP, with SSMs having a drop of 3.76\nmAP, highlighting the effectiveness of SSMs in event-based vision tasks.", + "In the realm of computer vision and robotics, embodied agents are expected to\nexplore their environment and carry out human instructions. This necessitates\nthe ability to fully understand 3D scenes given their first-person observations\nand contextualize them into language for interaction. However, traditional\nresearch focuses more on scene-level input and output setups from a global\nview. To address the gap, we introduce EmbodiedScan, a multi-modal, ego-centric\n3D perception dataset and benchmark for holistic 3D scene understanding. It\nencompasses over 5k scans encapsulating 1M ego-centric RGB-D views, 1M language\nprompts, 160k 3D-oriented boxes spanning over 760 categories, some of which\npartially align with LVIS, and dense semantic occupancy with 80 common\ncategories. Building upon this database, we introduce a baseline framework\nnamed Embodied Perceptron. 
It is capable of processing an arbitrary number of\nmulti-modal inputs and demonstrates remarkable 3D perception capabilities, both\nwithin the two series of benchmarks we set up, i.e., fundamental 3D perception\ntasks and language-grounded tasks, and in the wild.", + "It is capable of processing an arbitrary number of\nmulti-modal inputs and demonstrates remarkable 3D perception capabilities, both\nwithin the two series of benchmarks we set up, i.e., fundamental 3D perception\ntasks and language-grounded tasks, and in the wild. Codes, datasets, and\nbenchmarks will be available at https://github.com/OpenRobotLab/EmbodiedScan.", + "We present SHINOBI, an end-to-end framework for the reconstruction of shape,\nmaterial, and illumination from object images captured with varying lighting,\npose, and background. Inverse rendering of an object based on unconstrained\nimage collections is a long-standing challenge in computer vision and graphics\nand requires a joint optimization over shape, radiance, and pose. We show that\nan implicit shape representation based on a multi-resolution hash encoding\nenables faster and robust shape reconstruction with joint camera alignment\noptimization that outperforms prior work. Further, to enable the editing of\nillumination and object reflectance (i.e. material) we jointly optimize BRDF\nand illumination together with the object's shape. Our method is class-agnostic\nand works on in-the-wild image collections of objects to produce relightable 3D\nassets for several use cases such as AR/VR, movies, games, etc. Project page:\nhttps://shinobi.aengelhardt.com Video:\nhttps://www.youtube.com/watch?v=iFENQ6AcYd8&feature=youtu.be", + "Neural Radiance Fields (NeRF) revolutionize the realm of visual media by\nproviding photorealistic Free-Viewpoint Video (FVV) experiences, offering\nviewers unparalleled immersion and interactivity. However, the technology's\nsignificant storage requirements and the computational complexity involved in\ngeneration and rendering currently limit its broader application. To close this\ngap, this paper presents Temporal Tri-Plane Radiance Fields (TeTriRF), a novel\ntechnology that significantly reduces the storage size for Free-Viewpoint Video\n(FVV) while maintaining low-cost generation and rendering. TeTriRF introduces a\nhybrid representation with tri-planes and voxel grids to support scaling up to\nlong-duration sequences and scenes with complex motions or rapid changes. We\npropose a group training scheme tailored to achieving high training efficiency\nand yielding temporally consistent, low-entropy scene representations.\nLeveraging these properties of the representations, we introduce a compression\npipeline with off-the-shelf video codecs, achieving an order of magnitude less\nstorage size compared to the state-of-the-art. Our experiments demonstrate that\nTeTriRF can achieve competitive quality with a higher compression rate.", + "We introduce Motion2VecSets, a 4D diffusion model for dynamic surface\nreconstruction from point cloud sequences. While existing state-of-the-art\nmethods have demonstrated success in reconstructing non-rigid objects using\nneural field representations, conventional feed-forward networks encounter\nchallenges with ambiguous observations from noisy, partial, or sparse point\nclouds. 
To address these challenges, we introduce a diffusion model that
explicitly learns the shape and motion distribution of non-rigid objects
through an iterative denoising process of compressed latent representations.
The diffusion-based priors enable more plausible and probabilistic
reconstructions when handling ambiguous inputs. We parameterize 4D dynamics
with latent sets instead of using global latent codes. This novel 4D
representation allows us to learn local shape and deformation patterns, leading
to more accurate non-linear motion capture and significantly improving
generalizability to unseen motions and identities. For more temporally-coherent
object tracking, we synchronously denoise deformation latent sets and exchange
information across multiple frames. To avoid computational overhead, we
designed an interleaved space and time attention block to alternately aggregate
deformation latents along spatial and temporal domains.", + "For more temporally-coherent
object tracking, we synchronously denoise deformation latent sets and exchange
information across multiple frames. To avoid computational overhead, we
designed an interleaved space and time attention block to alternately aggregate
deformation latents along spatial and temporal domains. Extensive comparisons
against state-of-the-art methods demonstrate the superiority of our
Motion2VecSets in 4D reconstruction from various imperfect observations. More
detailed information can be found at
https://vveicao.github.io/projects/Motion2VecSets/.", + "Multimodal learning has advanced the performance of many vision-language
tasks. However, most existing works in embodied dialog research focus on
navigation and leave the localization task understudied. The few existing
dialog-based localization approaches assume the availability of the entire
dialog prior to localization, which is impractical for deployed dialog-based
localization. In this paper, we propose DiaLoc, a new dialog-based localization
framework which aligns with real human operator behavior. Specifically, we
produce an iterative refinement of location predictions which can visualize
current pose beliefs after each dialog turn. DiaLoc effectively utilizes the
multimodal data for multi-shot localization, where a fusion encoder fuses
vision and dialog information iteratively. We achieve state-of-the-art results
on the embodied dialog-based localization task, in single-shot (+7.08% in
Acc5@valUnseen) and multi-shot settings (+10.85% in Acc5@valUnseen). DiaLoc
narrows the gap between simulation and real-world applications, opening doors
for future research on collaborative localization and navigation.", + "Visual program synthesis is a promising approach to exploit the reasoning
abilities of large language models for compositional computer vision tasks.
Previous work has used few-shot prompting with frozen LLMs to synthesize visual
programs. Training an LLM to write better visual programs is an attractive
prospect, but it is unclear how to accomplish this. No dataset of visual
programs for training exists, and acquisition of a visual program dataset
cannot be easily crowdsourced due to the need for expert annotators. To get
around the lack of direct supervision, we explore improving the program
synthesis abilities of an LLM using feedback from interactive experience.
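Schematically, such interactive feedback can drive a filter-and-finetune loop of the kind described next: sample programs, execute them, score the outcome against existing task annotations, and fine-tune on the high-reward programs. In the sketch below, the callables passed in are hypothetical stand-ins rather than the paper's actual interfaces:

```python
# Schematic reinforced self-training loop; `generate`, `execute_program`, and
# `reward_fn` stand in for the LLM policy, the program interpreter, and the
# annotation-derived coarse reward.

def self_training_round(generate, finetune, execute_program, reward_fn, tasks,
                        samples_per_task=8, reward_threshold=0.5):
    keep = []
    for task in tasks:
        for _ in range(samples_per_task):
            program = generate(task["prompt"])                # LLM acts as the policy
            result = execute_program(program, task["image"])  # interactive feedback
            if reward_fn(result, task["annotation"]) >= reward_threshold:
                keep.append((task["prompt"], program))        # reinforce high-reward programs
    finetune(keep)
    return len(keep)

# Toy stand-ins just to show the control flow.
n = self_training_round(
    generate=lambda p: f"detect('{p}')",
    finetune=lambda data: None,
    execute_program=lambda prog, img: prog,
    reward_fn=lambda result, ann: 1.0 if ann in result else 0.0,
    tasks=[{"prompt": "cat", "image": None, "annotation": "cat"}],
)
print(n)  # 8
```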
We\npropose a method where we exploit existing annotations for a vision-language\ntask to improvise a coarse reward signal for that task, treat the LLM as a\npolicy, and apply reinforced self-training to improve the visual program\nsynthesis ability of the LLM for that task.", + "We\npropose a method where we exploit existing annotations for a vision-language\ntask to improvise a coarse reward signal for that task, treat the LLM as a\npolicy, and apply reinforced self-training to improve the visual program\nsynthesis ability of the LLM for that task. We describe a series of experiments\non object detection, compositional visual question answering, and image-text\nretrieval, and show that in each case, the self-trained LLM outperforms or\nperforms on par with few-shot frozen LLMs that are an order of magnitude\nlarger. Website: https://zaidkhan.me/ViReP", + "Deep Neural Networks (DNNs) have become pivotal in various fields, especially\nin computer vision, outperforming previous methodologies. A critical challenge\nin their deployment is the bias inherent in data across different domains, such\nas image style and environmental conditions, leading to domain gaps. This\nnecessitates techniques for learning general representations from biased\ntraining data, known as domain generalization. This paper presents Attend to\neXpert Prompts (A2XP), a novel approach for domain generalization that\npreserves the privacy and integrity of the network architecture. A2XP consists\nof two phases: Expert Adaptation and Domain Generalization. In the first phase,\nprompts for each source domain are optimized to guide the model towards the\noptimal direction. In the second phase, two embedder networks are trained to\neffectively amalgamate these expert prompts, aiming for an optimal output. Our\nextensive experiments demonstrate that A2XP achieves state-of-the-art results\nover existing non-private domain generalization methods. The experimental\nresults validate that the proposed approach not only tackles the domain\ngeneralization challenge in DNNs but also offers a privacy-preserving,\nefficient solution to the broader field of computer vision.", + "In the realm of video object segmentation (VOS), the challenge of operating\nunder low-light conditions persists, resulting in notably degraded image\nquality and compromised accuracy when comparing query and memory frames for\nsimilarity computation. Event cameras, characterized by their high dynamic\nrange and ability to capture motion information of objects, offer promise in\nenhancing object visibility and aiding VOS methods under such low-light\nconditions. This paper introduces a pioneering framework tailored for low-light\nVOS, leveraging event camera data to elevate segmentation accuracy. Our\napproach hinges on two pivotal components: the Adaptive Cross-Modal Fusion\n(ACMF) module, aimed at extracting pertinent features while fusing image and\nevent modalities to mitigate noise interference, and the Event-Guided Memory\nMatching (EGMM) module, designed to rectify the issue of inaccurate matching\nprevalent in low-light settings. Additionally, we present the creation of a\nsynthetic LLE-DAVIS dataset and the curation of a real-world LLE-VOS dataset,\nencompassing frames and events. 
Experimental evaluations corroborate the\nefficacy of our method across both datasets, affirming its effectiveness in\nlow-light scenarios.", + "The scarcity of annotated data has sparked significant interest in\nunsupervised pre-training methods that leverage medical reports as auxiliary\nsignals for medical visual representation learning. However, existing research\noverlooks the multi-granularity nature of medical visual representation and\nlacks suitable contrastive learning techniques to improve the models'\ngeneralizability across different granularities, leading to the\nunderutilization of image-text information. To address this, we propose MLIP, a\nnovel framework leveraging domain-specific medical knowledge as guiding signals\nto integrate language information into the visual domain through image-text\ncontrastive learning. Our model includes global contrastive learning with our\ndesigned divergence encoder, local token-knowledge-patch alignment contrastive\nlearning, and knowledge-guided category-level contrastive learning with expert\nknowledge. Experimental evaluations reveal the efficacy of our model in\nenhancing transfer performance for tasks such as image classification, object\ndetection, and semantic segmentation. Notably, MLIP surpasses state-of-the-art\nmethods even with limited annotated data, highlighting the potential of\nmultimodal pre-training in advancing medical representation learning.", + "Generative 3D part assembly involves understanding part relationships and\npredicting their 6-DoF poses for assembling a realistic 3D shape. Prior work\noften focus on the geometry of individual parts, neglecting part-whole\nhierarchies of objects. Leveraging two key observations: 1) super-part poses\nprovide strong hints about part poses, and 2) predicting super-part poses is\neasier due to fewer superparts, we propose a part-whole-hierarchy message\npassing network for efficient 3D part assembly. We first introduce super-parts\nby grouping geometrically similar parts without any semantic labels. Then we\nemploy a part-whole hierarchical encoder, wherein a super-part encoder predicts\nlatent super-part poses based on input parts. Subsequently, we transform the\npoint cloud using the latent poses, feeding it to the part encoder for\naggregating super-part information and reasoning about part relationships to\npredict all part poses. In training, only ground-truth part poses are required.\nDuring inference, the predicted latent poses of super-parts enhance\ninterpretability.", + "In training, only ground-truth part poses are required.\nDuring inference, the predicted latent poses of super-parts enhance\ninterpretability. Experimental results on the PartNet dataset show that our\nmethod achieves state-of-the-art performance in part and connectivity accuracy\nand enables an interpretable hierarchical part assembly. Code is available at\nhttps://github.com/pkudba/3DHPA.", + "Diffusion models have made significant advances in generating high-quality\nimages, but their application to video generation has remained challenging due\nto the complexity of temporal motion. Zero-shot video editing offers a solution\nby utilizing pre-trained image diffusion models to translate source videos into\nnew ones. Nevertheless, existing methods struggle to maintain strict temporal\nconsistency and efficient memory consumption. In this work, we propose a novel\napproach to enhance temporal consistency in generated videos by merging\nself-attention tokens across frames. 
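A rough picture of what merging self-attention tokens across frames can look like, under our own simplifying assumptions (cosine-similarity matching against an anchor frame; this is not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def merge_tokens_across_frames(anchor: torch.Tensor, frame: torch.Tensor, sim_thresh: float = 0.9):
    """anchor, frame: (N, D) self-attention tokens from two frames.
    Returns merged anchor tokens and indices of frame tokens that were absorbed."""
    sim = F.normalize(anchor, dim=-1) @ F.normalize(frame, dim=-1).T  # (N_anchor, N_frame) cosine sim
    best_sim, best_idx = sim.max(dim=0)        # best anchor match for each frame token
    absorb = best_sim > sim_thresh             # temporally redundant frame tokens
    merged = anchor.clone()
    # Average each absorbed frame token into its matched anchor token.
    merged.index_add_(0, best_idx[absorb], frame[absorb])
    counts = torch.ones(len(anchor), device=anchor.device)
    counts.index_add_(0, best_idx[absorb], torch.ones(int(absorb.sum()), device=anchor.device))
    merged = merged / counts.unsqueeze(-1)
    return merged, absorb.nonzero(as_tuple=True)[0]

a, f = torch.randn(256, 64), torch.randn(256, 64)
merged, absorbed = merge_tokens_across_frames(a, f)
print(merged.shape, absorbed.numel())  # merged anchor tokens, number of absorbed tokens
```

Absorbed tokens need not be re-processed in subsequent self-attention, which is where the memory savings come from.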
By aligning and compressing temporally\nredundant tokens across frames, our method improves temporal coherence and\nreduces memory consumption in self-attention computations. The merging strategy\nmatches and aligns tokens according to the temporal correspondence between\nframes, facilitating natural temporal consistency in generated video frames. To\nmanage the complexity of video processing, we divide videos into chunks and\ndevelop intra-chunk local token merging and inter-chunk global token merging,\nensuring both short-term video continuity and long-term content consistency.\nOur video editing approach seamlessly extends the advancements in image editing\nto video editing, rendering favorable results in temporal consistency over\nstate-of-the-art methods.", + "When deploying segmentation models in practice, it is critical to evaluate\ntheir behaviors in varied and complex scenes. Different from the previous\nevaluation paradigms only in consideration of global attribute variations (e.g.\nadverse weather), we investigate both local and global attribute variations for\nrobustness evaluation. To achieve this, we construct a mask-preserved attribute\nediting pipeline to edit visual attributes of real images with precise control\nof structural information. Therefore, the original segmentation labels can be\nreused for the edited images. Using our pipeline, we construct a benchmark\ncovering both object and image attributes (e.g. color, material, pattern,\nstyle). We evaluate a broad variety of semantic segmentation models, spanning\nfrom conventional close-set models to recent open-vocabulary large models on\ntheir robustness to different types of variations. We find that both local and\nglobal attribute variations affect segmentation performances, and the\nsensitivity of models diverges across different variation types. We argue that\nlocal attributes have the same importance as global attributes, and should be\nconsidered in the robustness evaluation of segmentation models. Code:\nhttps://github.com/PRIS-CV/Pascal-EA.", + "Diffusion models currently dominate the field of data-driven image synthesis\nwith their unparalleled scaling to large datasets. In this paper, we identify\nand rectify several causes for uneven and ineffective training in the popular\nADM diffusion model architecture, without altering its high-level structure.\nObserving uncontrolled magnitude changes and imbalances in both the network\nactivations and weights over the course of training, we redesign the network\nlayers to preserve activation, weight, and update magnitudes on expectation. We\nfind that systematic application of this philosophy eliminates the observed\ndrifts and imbalances, resulting in considerably better networks at equal\ncomputational complexity. Our modifications improve the previous record FID of\n2.41 in ImageNet-512 synthesis to 1.81, achieved using fast deterministic\nsampling.\n As an independent contribution, we present a method for setting the\nexponential moving average (EMA) parameters post-hoc, i.e., after completing\nthe training run. This allows precise tuning of EMA length without the cost of\nperforming several training runs, and reveals its surprising interactions with\nnetwork architecture, training time, and guidance.", + "We propose a hierarchical correlation clustering method that extends the\nwell-known correlation clustering to produce hierarchical clusters applicable\nto both positive and negative pairwise dissimilarities. 
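As a toy illustration of hierarchical clustering on signed pairwise affinities (a greedy variant of our own, not necessarily the authors' algorithm): repeatedly merge the two clusters with the largest positive total affinity and record the merge order as the hierarchy.

```python
import numpy as np

def hierarchical_correlation_clustering(S: np.ndarray):
    """S: symmetric matrix of signed affinities (positive = attract, negative = repel).
    Greedily merges the two clusters with the largest positive inter-cluster affinity;
    the sequence of merges defines the dendrogram."""
    clusters = [[i] for i in range(len(S))]
    merges = []
    while len(clusters) > 1:
        best, best_gain = None, 0.0
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                gain = S[np.ix_(clusters[a], clusters[b])].sum()
                if gain > best_gain:
                    best, best_gain = (a, b), gain
        if best is None:  # no positively-attracting pair left: stop merging
            break
        a, b = best
        merges.append((clusters[a], clusters[b], best_gain))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters, merges

S = np.array([[0, 2, -1, -2], [2, 0, -1, -1], [-1, -1, 0, 3], [-2, -1, 3, 0]], dtype=float)
print(hierarchical_correlation_clustering(S))  # two clusters: {0,1} and {2,3}
```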
Then, in the following,\nwe study unsupervised representation learning with such hierarchical\ncorrelation clustering. For this purpose, we first investigate embedding the\nrespective hierarchy to be used for tree preserving embedding and feature\nextraction. Thereafter, we study the extension of minimax distance measures to\ncorrelation clustering, as another representation learning paradigm. Finally,\nwe demonstrate the performance of our methods on several datasets.", + "Given a clothing image and a person image, an image-based virtual try-on aims\nto generate a customized image that appears natural and accurately reflects the\ncharacteristics of the clothing image. In this work, we aim to expand the\napplicability of the pre-trained diffusion model so that it can be utilized\nindependently for the virtual try-on task.The main challenge is to preserve the\nclothing details while effectively utilizing the robust generative capability\nof the pre-trained model. In order to tackle these issues, we propose\nStableVITON, learning the semantic correspondence between the clothing and the\nhuman body within the latent space of the pre-trained diffusion model in an\nend-to-end manner. Our proposed zero cross-attention blocks not only preserve\nthe clothing details by learning the semantic correspondence but also generate\nhigh-fidelity images by utilizing the inherent knowledge of the pre-trained\nmodel in the warping process. Through our proposed novel attention total\nvariation loss and applying augmentation, we achieve the sharp attention map,\nresulting in a more precise representation of clothing details. StableVITON\noutperforms the baselines in qualitative and quantitative evaluation, showing\npromising quality in arbitrary person images.", + "Through our proposed novel attention total\nvariation loss and applying augmentation, we achieve the sharp attention map,\nresulting in a more precise representation of clothing details. StableVITON\noutperforms the baselines in qualitative and quantitative evaluation, showing\npromising quality in arbitrary person images. Our code is available at\nhttps://github.com/rlawjdghek/StableVITON.", + "Stable Diffusion has established itself as a foundation model in generative\nAI artistic applications, receiving widespread research and application. Some\nrecent fine-tuning methods have made it feasible for individuals to implant\npersonalized concepts onto the basic Stable Diffusion model with minimal\ncomputational costs on small datasets. However, these innovations have also\ngiven rise to issues like facial privacy forgery and artistic copyright\ninfringement. In recent studies, researchers have explored the addition of\nimperceptible adversarial perturbations to images to prevent potential\nunauthorized exploitation and infringements when personal data is used for\nfine-tuning Stable Diffusion. Although these studies have demonstrated the\nability to protect images, it is essential to consider that these methods may\nnot be entirely applicable in real-world scenarios. In this paper, we\nsystematically evaluate the use of perturbations to protect images within a\npractical threat model. The results suggest that these approaches may not be\nsufficient to safeguard image privacy and copyright effectively. 
Furthermore,\nwe introduce a purification method capable of removing protected perturbations\nwhile preserving the original image structure to the greatest extent possible.\nExperiments reveal that Stable Diffusion can effectively learn from purified\nimages over all protective methods.", + "Human beings possess the capability to multiply a melange of multisensory\ncues while actively exploring and interacting with the 3D world. Current\nmulti-modal large language models, however, passively absorb sensory data as\ninputs, lacking the capacity to actively interact with the objects in the 3D\nenvironment and dynamically collect their multisensory information. To usher in\nthe study of this area, we propose MultiPLY, a multisensory embodied large\nlanguage model that could incorporate multisensory interactive data, including\nvisual, audio, tactile, and thermal information into large language models,\nthereby establishing the correlation among words, actions, and percepts. To\nthis end, we first collect Multisensory Universe, a large-scale multisensory\ninteraction dataset comprising 500k data by deploying an LLM-powered embodied\nagent to engage with the 3D environment.", + "To\nthis end, we first collect Multisensory Universe, a large-scale multisensory\ninteraction dataset comprising 500k data by deploying an LLM-powered embodied\nagent to engage with the 3D environment. To perform instruction tuning with\npre-trained LLM on such generated data, we first encode the 3D scene as\nabstracted object-centric representations and then introduce action tokens\ndenoting that the embodied agent takes certain actions within the environment,\nas well as state tokens that represent the multisensory state observations of\nthe agent at each time step. In the inference time, MultiPLY could generate\naction tokens, instructing the agent to take the action in the environment and\nobtain the next multisensory state observation. The observation is then\nappended back to the LLM via state tokens to generate subsequent text or action\ntokens. We demonstrate that MultiPLY outperforms baselines by a large margin\nthrough a diverse set of embodied tasks involving object retrieval, tool use,\nmultisensory captioning, and task decomposition.", + "The goal of the multi-sound source localization task is to localize sound\nsources from the mixture individually. While recent multi-sound source\nlocalization methods have shown improved performance, they face challenges due\nto their reliance on prior information about the number of objects to be\nseparated. In this paper, to overcome this limitation, we present a novel\nmulti-sound source localization method that can perform localization without\nprior knowledge of the number of sound sources. To achieve this goal, we\npropose an iterative object identification (IOI) module, which can recognize\nsound-making objects in an iterative manner. After finding the regions of\nsound-making objects, we devise object similarity-aware clustering (OSC) loss\nto guide the IOI module to effectively combine regions of the same object but\nalso distinguish between different objects and backgrounds. It enables our\nmethod to perform accurate localization of sound-making objects without any\nprior knowledge. Extensive experimental results on the MUSIC and VGGSound\nbenchmarks show the significant performance improvements of the proposed method\nover the existing methods for both single and multi-source. 
Our code is\navailable at: https://github.com/VisualAIKHU/NoPrior_MultiSSL", + "Recent works in implicit representations, such as Neural Radiance Fields\n(NeRF), have advanced the generation of realistic and animatable head avatars\nfrom video sequences. These implicit methods are still confronted by visual\nartifacts and jitters, since the lack of explicit geometric constraints poses a\nfundamental challenge in accurately modeling complex facial deformations. In\nthis paper, we introduce Dynamic Tetrahedra (DynTet), a novel hybrid\nrepresentation that encodes explicit dynamic meshes by neural networks to\nensure geometric consistency across various motions and viewpoints. DynTet is\nparameterized by the coordinate-based networks which learn signed distance,\ndeformation, and material texture, anchoring the training data into a\npredefined tetrahedra grid. Leveraging Marching Tetrahedra, DynTet efficiently\ndecodes textured meshes with a consistent topology, enabling fast rendering\nthrough a differentiable rasterizer and supervision via a pixel loss. To\nenhance training efficiency, we incorporate classical 3D Morphable Models to\nfacilitate geometry learning and define a canonical space for simplifying\ntexture learning. These advantages are readily achievable owing to the\neffective geometric representation employed in DynTet.", + "To\nenhance training efficiency, we incorporate classical 3D Morphable Models to\nfacilitate geometry learning and define a canonical space for simplifying\ntexture learning. These advantages are readily achievable owing to the\neffective geometric representation employed in DynTet. Compared with prior\nworks, DynTet demonstrates significant improvements in fidelity, lip\nsynchronization, and real-time performance according to various metrics. Beyond\nproducing stable and visually appealing synthesis videos, our method also\noutputs the dynamic meshes which is promising to enable many emerging\napplications.", + "Unsupervised (US) video anomaly detection (VAD) in surveillance applications\nis gaining more popularity recently due to its practical real-world\napplications. As surveillance videos are privacy sensitive and the availability\nof large-scale video data may enable better US-VAD systems, collaborative\nlearning can be highly rewarding in this setting. However, due to the extremely\nchallenging nature of the US-VAD task, where learning is carried out without\nany annotations, privacy-preserving collaborative learning of US-VAD systems\nhas not been studied yet. In this paper, we propose a new baseline for anomaly\ndetection capable of localizing anomalous events in complex surveillance videos\nin a fully unsupervised fashion without any labels on a privacy-preserving\nparticipant-based distributed training configuration. Additionally, we propose\nthree new evaluation protocols to benchmark anomaly detection approaches on\nvarious scenarios of collaborations and data availability. Based on these\nprotocols, we modify existing VAD datasets to extensively evaluate our approach\nas well as existing US SOTA methods on two large-scale datasets including\nUCF-Crime and XD-Violence. All proposed evaluation protocols, dataset splits,\nand codes are available here: https://github.com/AnasEmad11/CLAP", + "Crowd counting has achieved significant progress by training regressors to\npredict instance positions. 
In heavily crowded scenarios, however, regressors\nare challenged by uncontrollable annotation variance, which causes density map\nbias and context information inaccuracy. In this study, we propose mutual\nprompt learning (mPrompt), which leverages a regressor and a segmenter as\nguidance for each other, solving bias and inaccuracy caused by annotation\nvariance while distinguishing foreground from background. In specific, mPrompt\nleverages point annotations to tune the segmenter and predict pseudo head masks\nin a way of point prompt learning. It then uses the predicted segmentation\nmasks, which serve as spatial constraint, to rectify biased point annotations\nas context prompt learning. mPrompt defines a way of mutual information\nmaximization from prompt learning, mitigating the impact of annotation variance\nwhile improving model accuracy. Experiments show that mPrompt significantly\nreduces the Mean Average Error (MAE), demonstrating the potential to be general\nframework for down-stream vision tasks.", + "The perception of 3D motion of surrounding traffic participants is crucial\nfor driving safety. While existing works primarily focus on general large\nmotions, we contend that the instantaneous detection and quantification of\nsubtle motions is equally important as they indicate the nuances in driving\nbehavior that may be safety critical, such as behaviors near a stop sign of\nparking positions. We delve into this under-explored task, examining its unique\nchallenges and developing our solution, accompanied by a carefully designed\nbenchmark. Specifically, due to the lack of correspondences between consecutive\nframes of sparse Lidar point clouds, static objects might appear to be moving -\nthe so-called swimming effect. This intertwines with the true object motion,\nthereby posing ambiguity in accurate estimation, especially for subtle motions.\nTo address this, we propose to leverage local occupancy completion of object\npoint clouds to densify the shape cue, and mitigate the impact of swimming\nartifacts. The occupancy completion is learned in an end-to-end fashion\ntogether with the detection of moving objects and the estimation of their\nmotion, instantaneously as soon as objects start to move.", + "The occupancy completion is learned in an end-to-end fashion\ntogether with the detection of moving objects and the estimation of their\nmotion, instantaneously as soon as objects start to move. Extensive experiments\ndemonstrate superior performance compared to standard 3D motion estimation\napproaches, particularly highlighting our method's specialized treatment of\nsubtle motions.", + "In this paper, we propose a 3D geometry-aware deformable Gaussian Splatting\nmethod for dynamic view synthesis. Existing neural radiance fields (NeRF) based\nsolutions learn the deformation in an implicit manner, which cannot incorporate\n3D scene geometry. Therefore, the learned deformation is not necessarily\ngeometrically coherent, which results in unsatisfactory dynamic view synthesis\nand 3D dynamic reconstruction. Recently, 3D Gaussian Splatting provides a new\nrepresentation of the 3D scene, building upon which the 3D geometry could be\nexploited in learning the complex 3D deformation. Specifically, the scenes are\nrepresented as a collection of 3D Gaussian, where each 3D Gaussian is optimized\nto move and rotate over time to model the deformation. 
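In spirit, such a per-Gaussian deformation can be modeled by a small network that maps a Gaussian's canonical center, optional local geometry features, and a time value to offsets for its position and rotation. The snippet below is only an illustrative reading of that idea, with all names our own:

```python
import torch
import torch.nn as nn

class GaussianDeformationField(nn.Module):
    """Predicts per-Gaussian position and rotation offsets at time t."""
    def __init__(self, geom_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + geom_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 4),  # delta position (3) + delta rotation quaternion (4)
        )

    def forward(self, xyz, geom_feat, t):
        # xyz: (N, 3) canonical centers, geom_feat: (N, geom_dim) local 3D geometry features,
        # t: scalar time in [0, 1] broadcast to every Gaussian.
        t_col = torch.full((xyz.shape[0], 1), float(t), device=xyz.device)
        out = self.mlp(torch.cat([xyz, geom_feat, t_col], dim=-1))
        d_xyz, d_rot = out[:, :3], out[:, 3:]
        return xyz + d_xyz, d_rot  # deformed centers and rotation update

field = GaussianDeformationField()
centers, feats = torch.randn(1000, 3), torch.randn(1000, 32)
new_centers, rot_delta = field(centers, feats, t=0.25)
print(new_centers.shape, rot_delta.shape)  # (1000, 3) (1000, 4)
```

Feeding explicit geometry features alongside the raw centers is one way to make the learned deformation respect local scene structure, in line with the motivation stated next.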
To enforce the 3D scene
geometry constraint during deformation, we explicitly extract 3D geometry
features and integrate them into learning the 3D deformation. In this way, our
solution achieves 3D geometry-aware deformation modeling, which enables
improved dynamic view synthesis and 3D dynamic reconstruction.", + "In this way, our
solution achieves 3D geometry-aware deformation modeling, which enables
improved dynamic view synthesis and 3D dynamic reconstruction. Extensive
experimental results on both synthetic and real datasets prove the superiority
of our solution, which achieves new state-of-the-art performance.
 The project is available at https://npucvr.github.io/GaGS/", + "Real-world systems often encounter new data over time, which leads to
experiencing target domain shifts. Existing Test-Time Adaptation (TTA) methods
tend to apply computationally heavy and memory-intensive backpropagation-based
approaches to handle this. Here, we propose a novel method that uses a
backpropagation-free approach for TTA for the specific case of 3D data. Our
model uses a two-stream architecture to maintain knowledge about the source
domain as well as complementary target-domain-specific information. The
backpropagation-free property of our model helps address the well-known
forgetting problem and mitigates the error accumulation issue. The proposed
method also eliminates the need for the usually noisy process of
pseudo-labeling and reliance on costly self-supervised training. Moreover, our
method leverages subspace learning, effectively reducing the distribution
variance between the two domains. Furthermore, the source-domain-specific and
the target-domain-specific streams are aligned using a novel entropy-based
adaptive fusion strategy. Extensive experiments on popular benchmarks
demonstrate the effectiveness of our method. The code will be available at
\\url{https://github.com/abie-e/BFTT3D}.", + "RAW images are rarely shared mainly due to their excessive data size compared
to their sRGB counterparts obtained by camera ISPs. Learning the forward and
inverse processes of camera ISPs has been recently demonstrated, enabling
physically-meaningful RAW-level image processing on input sRGB images. However,
existing learning-based ISP methods fail to handle the large variations in the
ISP processes with respect to camera parameters such as ISO and exposure time,
and have limitations when used for various applications. In this paper, we
propose ParamISP, a learning-based method for forward and inverse conversion
between sRGB and RAW images, that adopts a novel neural-network module to
utilize camera parameters, which is dubbed ParamNet. Given the camera
parameters provided in the EXIF data, ParamNet converts them into a feature
vector to control the ISP networks. Extensive experiments demonstrate that
ParamISP achieves superior RAW and sRGB reconstruction results compared to
previous methods, and it can be effectively used for a variety of applications
such as deblurring dataset synthesis, raw deblurring, HDR reconstruction, and
camera-to-camera transfer.", + "Diffusion models (DMs) have ushered in a new era of generative modeling and
offer more opportunities for efficiently generating high-quality and realistic
data samples. However, their widespread use has also brought forth new
challenges in model security, which motivates the creation of more effective
adversarial attackers on DMs to understand their vulnerability.
We propose CAAT, a simple but generic\nand efficient approach that does not require costly training to effectively\nfool latent diffusion models (LDMs). The approach is based on the observation\nthat cross-attention layers exhibits higher sensitivity to gradient change,\nallowing for leveraging subtle perturbations on published images to\nsignificantly corrupt the generated images. We show that a subtle perturbation\non an image can significantly impact the cross-attention layers, thus changing\nthe mapping between text and image during the fine-tuning of customized\ndiffusion models. Extensive experiments demonstrate that CAAT is compatible\nwith diverse diffusion models and outperforms baseline attack methods in a more\neffective (more noise) and efficient (twice as fast as Anti-DreamBooth and\nMist) manner.", + "In this paper, we introduce Fairy, a minimalist yet robust adaptation of\nimage-editing diffusion models, enhancing them for video editing applications.\nOur approach centers on the concept of anchor-based cross-frame attention, a\nmechanism that implicitly propagates diffusion features across frames, ensuring\nsuperior temporal coherence and high-fidelity synthesis. Fairy not only\naddresses limitations of previous models, including memory and processing\nspeed. It also improves temporal consistency through a unique data augmentation\nstrategy. This strategy renders the model equivariant to affine transformations\nin both source and target images. Remarkably efficient, Fairy generates\n120-frame 512x384 videos (4-second duration at 30 FPS) in just 14 seconds,\noutpacing prior works by at least 44x. A comprehensive user study, involving\n1000 generated samples, confirms that our approach delivers superior quality,\ndecisively outperforming established methods.", + "Current instruction-based editing methods, such as InstructPix2Pix, often\nfail to produce satisfactory results in complex scenarios due to their\ndependence on the simple CLIP text encoder in diffusion models. To rectify\nthis, this paper introduces SmartEdit, a novel approach to instruction-based\nimage editing that leverages Multimodal Large Language Models (MLLMs) to\nenhance their understanding and reasoning capabilities. However, direct\nintegration of these elements still faces challenges in situations requiring\ncomplex reasoning. To mitigate this, we propose a Bidirectional Interaction\nModule that enables comprehensive bidirectional information interactions\nbetween the input image and the MLLM output. During training, we initially\nincorporate perception data to boost the perception and understanding\ncapabilities of diffusion models. Subsequently, we demonstrate that a small\namount of complex instruction editing data can effectively stimulate\nSmartEdit's editing capabilities for more complex instructions. We further\nconstruct a new evaluation dataset, Reason-Edit, specifically tailored for\ncomplex instruction-based image editing. Both quantitative and qualitative\nresults on this evaluation dataset indicate that our SmartEdit surpasses\nprevious methods, paving the way for the practical application of complex\ninstruction-based image editing.", + "The paper explores the industrial multimodal Anomaly Detection (AD) task,\nwhich exploits point clouds and RGB images to localize anomalies. We introduce\na novel light and fast framework that learns to map features from one modality\nto the other on nominal samples. 
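Read plainly, this amounts to regressing one modality's features from the other on anomaly-free samples and scoring test samples by the residual. A minimal sketch under that assumption (feature dimensions and names are placeholders, not the paper's framework):

```python
import torch
import torch.nn as nn

# Map 2D (RGB) patch features to 3D (point cloud) patch features.
mapper = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 128))
opt = torch.optim.Adam(mapper.parameters(), lr=1e-3)

def train_step(rgb_feat, pc_feat):
    """rgb_feat: (P, 256), pc_feat: (P, 128) features of the same patches on a nominal sample."""
    loss = nn.functional.mse_loss(mapper(rgb_feat), pc_feat)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def anomaly_map(rgb_feat, pc_feat):
    # Per-patch inconsistency between mapped and observed features = anomaly score.
    return (mapper(rgb_feat) - pc_feat).pow(2).mean(dim=-1)

# Toy usage with random stand-in features for a 32x32 patch grid.
rgb, pc = torch.randn(1024, 256), torch.randn(1024, 128)
train_step(rgb, pc)
print(anomaly_map(rgb, pc).reshape(32, 32).shape)  # spatial anomaly map
```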
At test time, anomalies are detected by\npinpointing inconsistencies between observed and mapped features. Extensive\nexperiments show that our approach achieves state-of-the-art detection and\nsegmentation performance in both the standard and few-shot settings on the\nMVTec 3D-AD dataset while achieving faster inference and occupying less memory\nthan previous multimodal AD methods. Moreover, we propose a layer-pruning\ntechnique to improve memory and time efficiency with a marginal sacrifice in\nperformance.", + "Despite noise and caption quality having been acknowledged as important\nfactors impacting vision-language contrastive pre-training, in this paper, we\nshow that the full potential of improving the training process by addressing\nsuch issues is yet to be realized. Specifically, we firstly study and analyze\ntwo issues affecting training: incorrect assignment of negative pairs, and low\ncaption quality and diversity. Then, we devise effective solutions for\naddressing both problems, which essentially require training with multiple true\npositive pairs. Finally, we propose training with sigmoid loss to address such\na requirement. We show very large gains over the current state-of-the-art for\nboth image recognition ($\\sim +6\\%$ on average over 11 datasets) and image\nretrieval ($\\sim +19\\%$ on Flickr30k and $\\sim +15\\%$ on MSCOCO).", + "We aim at finetuning a vision-language model without hurting its\nout-of-distribution (OOD) generalization. We address two types of OOD\ngeneralization, i.e., i) domain shift such as natural to sketch images, and ii)\nzero-shot capability to recognize the category that was not contained in the\nfinetune data. Arguably, the diminished OOD generalization after finetuning\nstems from the excessively simplified finetuning target, which only provides\nthe class information, such as ``a photo of a [CLASS]''. This is distinct from\nthe process in that CLIP was pretrained, where there is abundant text\nsupervision with rich semantic information. Therefore, we propose to compensate\nfor the finetune process using auxiliary supervision with rich semantic\ninformation, which acts as anchors to preserve the OOD generalization.", + "This is distinct from\nthe process in that CLIP was pretrained, where there is abundant text\nsupervision with rich semantic information. Therefore, we propose to compensate\nfor the finetune process using auxiliary supervision with rich semantic\ninformation, which acts as anchors to preserve the OOD generalization.\nSpecifically, two types of anchors are elaborated in our method, including i)\ntext-compensated anchor which uses the images from the finetune set but\nenriches the text supervision from a pretrained captioner, ii) image-text-pair\nanchor which is retrieved from the dataset similar to pretraining data of CLIP\naccording to the downstream task, associating with the original CLIP text with\nrich semantics. Those anchors are utilized as auxiliary semantic information to\nmaintain the original feature space of CLIP, thereby preserving the OOD\ngeneralization capabilities. Comprehensive experiments demonstrate that our\nmethod achieves in-distribution performance akin to conventional finetuning\nwhile attaining new state-of-the-art results on domain shift and zero-shot\nlearning benchmarks.", + "Researchers in natural science need reliable methods for quantifying animal\nbehavior. Recently, numerous computer vision methods emerged to automate the\nprocess. 
However, observing wild species at remote locations remains a\nchallenging task due to difficult lighting conditions and constraints on power\nsupply and data storage. Event cameras offer unique advantages for\nbattery-dependent remote monitoring due to their low power consumption and high\ndynamic range capabilities. We use this novel sensor to quantify a behavior in\nChinstrap penguins called ecstatic display. We formulate the problem as a\ntemporal action detection task, determining the start and end times of the\nbehavior. For this purpose, we recorded a colony of breeding penguins in\nAntarctica for several weeks and labeled event data on 16 nests. The developed\nmethod consists of a generator of candidate time intervals (proposals) and a\nclassifier of the actions within them. The experiments show that the event\ncameras' natural response to motion is effective for continuous behavior\nmonitoring and detection, reaching a mean average precision (mAP) of 58% (which\nincreases to 63% in good weather conditions).", + "The experiments show that the event\ncameras' natural response to motion is effective for continuous behavior\nmonitoring and detection, reaching a mean average precision (mAP) of 58% (which\nincreases to 63% in good weather conditions). The results also demonstrate the\nrobustness against various lighting conditions contained in the challenging\ndataset. The low-power capabilities of the event camera allow it to record\nsignificantly longer than with a conventional camera. This work pioneers the\nuse of event cameras for remote wildlife observation, opening new\ninterdisciplinary opportunities. https://tub-rip.github.io/eventpenguins/", + "Video-based visual relation detection tasks, such as video scene graph\ngeneration, play important roles in fine-grained video understanding. However,\ncurrent video visual relation detection datasets have two main limitations that\nhinder the progress of research in this area. First, they do not explore\ncomplex human-human interactions in multi-person scenarios. Second, the\nrelation types of existing datasets have relatively low-level semantics and can\nbe often recognized by appearance or simple prior information, without the need\nfor detailed spatio-temporal context reasoning. Nevertheless, comprehending\nhigh-level interactions between humans is crucial for understanding complex\nmulti-person videos, such as sports and surveillance videos. To address this\nissue, we propose a new video visual relation detection task: video human-human\ninteraction detection, and build a dataset named SportsHHI for it. SportsHHI\ncontains 34 high-level interaction classes from basketball and volleyball\nsports. 118,075 human bounding boxes and 50,649 interaction instances are\nannotated on 11,398 keyframes. To benchmark this, we propose a two-stage\nbaseline method and conduct extensive experiments to reveal the key factors for\na successful human-human interaction detector.", + "118,075 human bounding boxes and 50,649 interaction instances are\nannotated on 11,398 keyframes. To benchmark this, we propose a two-stage\nbaseline method and conduct extensive experiments to reveal the key factors for\na successful human-human interaction detector. We hope that SportsHHI can\nstimulate research on human interaction understanding in videos and promote the\ndevelopment of spatio-temporal context modeling techniques in video visual\nrelation detection.", + "Hyperspectral 3D imaging aims to acquire both depth and spectral information\nof a scene. 
However, existing methods are either prohibitively expensive and\nbulky or compromise on spectral and depth accuracy. In this work, we present\nDispersed Structured Light (DSL), a cost-effective and compact method for\naccurate hyperspectral 3D imaging. DSL modifies a traditional projector-camera\nsystem by placing a sub-millimeter thick diffraction grating film front of the\nprojector. The grating disperses structured light based on light wavelength. To\nutilize the dispersed structured light, we devise a model for dispersive\nprojection image formation and a per-pixel hyperspectral 3D reconstruction\nmethod. We validate DSL by instantiating a compact experimental prototype. DSL\nachieves spectral accuracy of 18.8nm full-width half-maximum (FWHM) and depth\nerror of 1mm. We demonstrate that DSL outperforms prior work on practical\nhyperspectral 3D imaging. DSL promises accurate and practical hyperspectral 3D\nimaging for diverse application domains, including computer vision and\ngraphics, cultural heritage, geology, and biology.", + "Crowd counting is a fundamental problem in crowd analysis which is typically\naccomplished by estimating a crowd density map and summing over the density\nvalues. However, this approach suffers from background noise accumulation and\nloss of density due to the use of broad Gaussian kernels to create the ground\ntruth density maps. This issue can be overcome by narrowing the Gaussian\nkernel. However, existing approaches perform poorly when trained with ground\ntruth density maps with broad kernels. To deal with this limitation, we propose\nusing conditional diffusion models to predict density maps, as diffusion models\nshow high fidelity to training data during generation. With that, we present\n$CrowdDiff$ that generates the crowd density map as a reverse diffusion\nprocess. Furthermore, as the intermediate time steps of the diffusion process\nare noisy, we incorporate a regression branch for direct crowd estimation only\nduring training to improve the feature learning. In addition, owing to the\nstochastic nature of the diffusion model, we introduce producing multiple\ndensity maps to improve the counting performance contrary to the existing crowd\ncounting pipelines. We conduct extensive experiments on publicly available\ndatasets to validate the effectiveness of our method.", + "In addition, owing to the\nstochastic nature of the diffusion model, we introduce producing multiple\ndensity maps to improve the counting performance contrary to the existing crowd\ncounting pipelines. We conduct extensive experiments on publicly available\ndatasets to validate the effectiveness of our method. $CrowdDiff$ outperforms\nexisting state-of-the-art crowd counting methods on several public crowd\nanalysis benchmarks with significant improvements.", + "This paper proposes a GeneraLIst encoder-Decoder (GLID) pre-training method\nfor better handling various downstream computer vision tasks. While\nself-supervised pre-training approaches, e.g., Masked Autoencoder, have shown\nsuccess in transfer learning, task-specific sub-architectures are still\nrequired to be appended for different downstream tasks, which cannot enjoy the\nbenefits of large-scale pre-training. GLID overcomes this challenge by allowing\nthe pre-trained generalist encoder-decoder to be fine-tuned on various vision\ntasks with minimal task-specific architecture modifications. 
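The kind of minimal task-specific modification described here can be pictured as keeping a pre-trained encoder-decoder and its learned queries intact and swapping only the topmost linear head per task. The code below is a schematic approximation with illustrative sizes, not the GLID implementation:

```python
import torch
import torch.nn as nn

class GeneralistModel(nn.Module):
    """Pre-trained encoder-decoder with learned queries; only the top linear head is task-specific."""
    def __init__(self, dim: int = 256, num_queries: int = 100, out_dim: int = 256):
        super().__init__()
        self.encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, 8, batch_first=True), 2)
        self.decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(dim, 8, batch_first=True), 2)
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.head = nn.Linear(dim, out_dim)  # the only part swapped per downstream task

    def forward(self, tokens):  # tokens: (B, N, dim) image tokens
        memory = self.encoder(tokens)
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        return self.head(self.decoder(q, memory))

model = GeneralistModel()
# "Fine-tuning" for a detection-style task: keep encoder, decoder, and queries; replace the head only.
model.head = nn.Linear(256, 4 + 80)  # e.g. box coordinates + class logits (illustrative sizes)
print(model(torch.randn(2, 196, 256)).shape)  # torch.Size([2, 100, 84])
```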
In the GLID
training scheme, the pre-training pretext task and the downstream tasks are all
modeled as "query-to-answer" problems. We pre-train a task-agnostic encoder-decoder with
query-mask pairs. During fine-tuning, GLID maintains the pre-trained
encoder-decoder and queries, only replacing the topmost linear transformation
layer with task-specific linear heads. This minimizes the pretrain-finetune
architecture inconsistency and enables the pre-trained model to better adapt to
downstream tasks.", + "During fine-tuning, GLID maintains the pre-trained
encoder-decoder and queries, only replacing the topmost linear transformation
layer with task-specific linear heads. This minimizes the pretrain-finetune
architecture inconsistency and enables the pre-trained model to better adapt to
downstream tasks. GLID achieves competitive performance on various vision
tasks, including object detection, image segmentation, pose estimation, and
depth estimation, outperforming or matching specialist models such as
Mask2Former, DETR, ViTPose, and BinsFormer.", + "In the context of autonomous navigation of terrestrial robots, the creation
of realistic models for agent dynamics and sensing is a widespread habit in the
robotics literature and in commercial applications, where they are used for
model-based control and/or for localization and mapping. The more recent
Embodied AI literature, on the other hand, focuses on modular or end-to-end
agents trained in simulators like Habitat or AI-Thor, where the emphasis is put
on photo-realistic rendering and scene diversity, but high-fidelity robot
motion is assigned a less privileged role. The resulting sim2real gap
significantly impacts transfer of the trained models to real robotic platforms.
In this work we explore end-to-end training of agents in simulation in settings
which minimize the sim2real gap both in sensing and in actuation. Our agent
directly predicts (discretized) velocity commands, which are maintained through
closed-loop control in the real robot. The behavior of the real robot
(including the underlying low-level controller) is identified and simulated in
a modified Habitat simulator. Noise models for odometry and localization
further contribute to lowering the sim2real gap.", + "The behavior of the real robot
(including the underlying low-level controller) is identified and simulated in
a modified Habitat simulator. Noise models for odometry and localization
further contribute to lowering the sim2real gap. We evaluate on real navigation
scenarios, explore different localization and point goal calculation methods,
and report significant gains in performance and robustness compared to prior
work.", + "Recently, convolutional neural networks (CNNs) with large size kernels have
attracted much attention in the computer vision field, following the success of
Vision Transformers. Large kernel CNNs have been reported to perform well
in downstream vision tasks as well as in classification performance. The reason
for the high performance of large kernel CNNs in downstream tasks has been
attributed to the large effective receptive field (ERF) produced by large size
kernels, but this view has not been fully tested. We therefore revisit the
performance of large kernel CNNs in downstream tasks, focusing on the weakly
supervised object localization (WSOL) task.
WSOL, a difficult downstream task\nthat is not fully supervised, provides a new angle to explore the capabilities\nof the large kernel CNNs. Our study compares the modern large kernel CNNs\nConvNeXt, RepLKNet, and SLaK to test the validity of the naive expectation that\nERF size is important for improving downstream task performance. Our analysis\nof the factors contributing to high performance provides a different\nperspective, in which the main factor is feature map improvement.", + "Our analysis\nof the factors contributing to high performance provides a different\nperspective, in which the main factor is feature map improvement. Furthermore,\nwe find that modern CNNs are robust to the CAM problems of local regions of\nobjects being activated, which has long been discussed in WSOL. CAM is the most\nclassic WSOL method, but because of the above-mentioned problems, it is often\nused as a baseline method for comparison. However, experiments on the\nCUB-200-2011 dataset show that simply combining a large kernel CNN, CAM, and\nsimple data augmentation methods can achieve performance (90.99% MaxBoxAcc)\ncomparable to the latest WSOL method, which is CNN-based and requires special\ntraining or complex post-processing. The code is available at\nhttps://github.com/snskysk/CAM-Back-Again.", + "We present Cutie, a video object segmentation (VOS) network with object-level\nmemory reading, which puts the object representation from memory back into the\nvideo object segmentation result. Recent works on VOS employ bottom-up\npixel-level memory reading which struggles due to matching noise, especially in\nthe presence of distractors, resulting in lower performance in more challenging\ndata. In contrast, Cutie performs top-down object-level memory reading by\nadapting a small set of object queries. Via those, it interacts with the\nbottom-up pixel features iteratively with a query-based object transformer (qt,\nhence Cutie). The object queries act as a high-level summary of the target\nobject, while high-resolution feature maps are retained for accurate\nsegmentation. Together with foreground-background masked attention, Cutie\ncleanly separates the semantics of the foreground object from the background.\nOn the challenging MOSE dataset, Cutie improves by 8.7 J&F over XMem with a\nsimilar running time and improves by 4.2 J&F over DeAOT while being three times\nfaster. Code is available at: https://hkchengrex.github.io/Cutie", + "While there has been significant progress in customizing text-to-image\ngeneration models, generating images that combine multiple personalized\nconcepts remains challenging. In this work, we introduce Concept Weaver, a\nmethod for composing customized text-to-image diffusion models at inference\ntime. Specifically, the method breaks the process into two steps: creating a\ntemplate image aligned with the semantics of input prompts, and then\npersonalizing the template using a concept fusion strategy. The fusion strategy\nincorporates the appearance of the target concepts into the template image\nwhile retaining its structural details. The results indicate that our method\ncan generate multiple custom concepts with higher identity fidelity compared to\nalternative approaches. 
Furthermore, the method is shown to seamlessly handle\nmore than two concepts and closely follow the semantic meaning of the input\nprompt without blending appearances across different subjects.", + "Cross-Domain Few-Shot Segmentation (CD-FSS) poses the challenge of segmenting\nnovel categories from a distinct domain using only limited exemplars. In this\npaper, we undertake a comprehensive study of CD-FSS and uncover two crucial\ninsights: (i) the necessity of a fine-tuning stage to effectively transfer the\nlearned meta-knowledge across domains, and (ii) the overfitting risk during the\nna\\\"ive fine-tuning due to the scarcity of novel category examples. With these\ninsights, we propose a novel cross-domain fine-tuning strategy that addresses\nthe challenging CD-FSS tasks. We first design Bi-directional Few-shot\nPrediction (BFP), which establishes support-query correspondence in a\nbi-directional manner, crafting augmented supervision to reduce the overfitting\nrisk. Then we further extend BFP into Iterative Few-shot Adaptor (IFA), which\nis a recursive framework to capture the support-query correspondence\niteratively, targeting maximal exploitation of supervisory signals from the\nsparse novel category samples.", + "Then we further extend BFP into Iterative Few-shot Adaptor (IFA), which\nis a recursive framework to capture the support-query correspondence\niteratively, targeting maximal exploitation of supervisory signals from the\nsparse novel category samples. Extensive empirical evaluations show that our\nmethod significantly outperforms the state-of-the-art methods (+7.8\\%), which verifies\nthat IFA tackles the cross-domain challenges and mitigates the overfitting\nsimultaneously. The code is available at: https://github.com/niejiahao1998/IFA.", + "Instance shape reconstruction from a 3D scene involves recovering the full\ngeometries of multiple objects at the semantic instance level. Many methods\nleverage data-driven learning due to the intricacies of scene complexity and\nsignificant indoor occlusions. Training these methods often requires a\nlarge-scale, high-quality dataset with shape annotations aligned and paired\nwith real-world scans. Existing datasets are either synthetic or misaligned,\nrestricting the performance of data-driven methods on real data. To this end,\nwe introduce LASA, a Large-scale Aligned Shape Annotation Dataset comprising\n10,412 high-quality CAD annotations aligned with 920 real-world scene scans\nfrom ArkitScenes, created manually by professional artists. On top of this, we\npropose a novel Diffusion-based Cross-Modal Shape Reconstruction (DisCo)\nmethod. It is empowered by a hybrid feature aggregation design to fuse\nmulti-modal inputs and recover high-fidelity object geometries. Besides, we\npresent an Occupancy-Guided 3D Object Detection (OccGOD) method and demonstrate\nthat our shape annotations provide scene occupancy clues that can further\nimprove 3D object detection.", + "Besides, we\npresent an Occupancy-Guided 3D Object Detection (OccGOD) method and demonstrate\nthat our shape annotations provide scene occupancy clues that can further\nimprove 3D object detection. Supported by LASA, extensive experiments show that\nour methods achieve state-of-the-art performance in both instance-level scene\nreconstruction and 3D object detection tasks.", + "This paper endeavors to advance the precision of snapshot compressive imaging\n(SCI) reconstruction for multispectral image (MSI). 
To achieve this, we\nintegrate the advantageous attributes of established SCI techniques and an\nimage generative model, and propose a novel structured zero-shot diffusion model,\ndubbed DiffSCI. DiffSCI leverages the structural insights from the deep prior\nand optimization-based methodologies, complemented by the generative\ncapabilities offered by the contemporary denoising diffusion model.\nSpecifically, we first employ a pre-trained diffusion model, which has been\ntrained on a substantial corpus of RGB images, as the generative denoiser\nwithin the Plug-and-Play framework for the first time. This integration allows\nfor the successful completion of SCI reconstruction, especially in cases\nthat current methods struggle to address effectively. Secondly, we\nsystematically account for spectral band correlations and introduce a robust\nmethodology to mitigate wavelength mismatch, thus enabling seamless adaptation\nof the RGB diffusion model to MSIs. Thirdly, an accelerated algorithm is\nimplemented to expedite the resolution of the data subproblem.", + "Secondly, we\nsystematically account for spectral band correlations and introduce a robust\nmethodology to mitigate wavelength mismatch, thus enabling seamless adaptation\nof the RGB diffusion model to MSIs. Thirdly, an accelerated algorithm is\nimplemented to expedite the resolution of the data subproblem. This\naugmentation not only accelerates the convergence rate but also elevates the\nquality of the reconstruction process. We present extensive testing to show\nthat DiffSCI exhibits discernible performance enhancements over prevailing\nself-supervised and zero-shot approaches, surpassing even supervised\ntransformer counterparts across both simulated and real datasets. Our code will\nbe available.", + "We propose DiffSHEG, a Diffusion-based approach for Speech-driven Holistic 3D\nExpression and Gesture generation with arbitrary length. While previous works\nfocused on co-speech gesture or expression generation individually, the joint\ngeneration of synchronized expressions and gestures remains barely explored. To\naddress this, our diffusion-based co-speech motion generation transformer\nenables uni-directional information flow from expression to gesture,\nfacilitating improved matching of joint expression-gesture distributions.\nFurthermore, we introduce an outpainting-based sampling strategy for arbitrarily\nlong sequence generation in diffusion models, offering flexibility and\ncomputational efficiency. Our method provides a practical solution that\nproduces high-quality synchronized expression and gesture generation driven by\nspeech. Evaluated on two public datasets, our approach achieves\nstate-of-the-art performance both quantitatively and qualitatively.\nAdditionally, a user study confirms the superiority of DiffSHEG over prior\napproaches. By enabling the real-time generation of expressive and synchronized\nmotions, DiffSHEG showcases its potential for various applications in the\ndevelopment of digital humans and embodied agents.", + "Trajectory prediction is a challenging problem that requires considering\ninteractions among multiple actors and the surrounding environment. While\ndata-driven approaches have been used to address this complex problem, they\nsuffer from unreliable predictions under distribution shifts during test time.\nAccordingly, several online learning methods have been proposed using\nregression loss from the ground truth of observed data leveraging the\nauto-labeling nature of the trajectory prediction task. 
We mainly tackle the\nfollowing two issues. First, previous works underfit and overfit as they only\noptimize the last layer of the motion decoder. To this end, we employ the\nmasked autoencoder (MAE) for representation learning to encourage complex\ninteraction modeling in shifted test distribution for updating deeper layers.\nSecond, utilizing the sequential nature of driving data, we propose an\nactor-specific token memory that enables the test-time learning of actor-wise\nmotion characteristics. Our proposed method has been validated across various\nchallenging cross-dataset distribution shift scenarios including nuScenes,\nLyft, Waymo, and Interaction. Our method surpasses the performance of existing\nstate-of-the-art online learning methods in terms of both prediction accuracy\nand computational efficiency. The code is available at\nhttps://github.com/daeheepark/T4P.", + "Text-to-image person re-identification (TIReID) is a compelling topic in the\ncross-modal community, which aims to retrieve the target person based on a\ntextual query. Although numerous TIReID methods have been proposed and achieved\npromising performance, they implicitly assume the training image-text pairs are\ncorrectly aligned, which is not always the case in real-world scenarios. In\npractice, the image-text pairs inevitably exist under-correlated or even\nfalse-correlated, a.k.a noisy correspondence (NC), due to the low quality of\nthe images and annotation errors. To address this problem, we propose a novel\nRobust Dual Embedding method (RDE) that can learn robust visual-semantic\nassociations even with NC. Specifically, RDE consists of two main components:\n1) A Confident Consensus Division (CCD) module that leverages the dual-grained\ndecisions of dual embedding modules to obtain a consensus set of clean training\ndata, which enables the model to learn correct and reliable visual-semantic\nassociations.", + "2) A Triplet Alignment Loss (TAL) relaxes the conventional\nTriplet Ranking loss with the hardest negative samples to a log-exponential\nupper bound over all negative ones, thus preventing the model collapse under NC\nand can also focus on hard-negative samples for promising performance. We\nconduct extensive experiments on three public benchmarks, namely CUHK-PEDES,\nICFG-PEDES, and RSTPReID, to evaluate the performance and robustness of our\nRDE. Our method achieves state-of-the-art results both with and without\nsynthetic noisy correspondences on all three datasets. Code is available at\nhttps://github.com/QinYang79/RDE.", + "In this paper, we present a novel paradigm to enhance the ability of object\ndetector, e.g., expanding categories or improving detection performance, by\ntraining on synthetic dataset generated from diffusion models. Specifically, we\nintegrate an instance-level grounding head into a pre-trained, generative\ndiffusion model, to augment it with the ability of localising instances in the\ngenerated images. The grounding head is trained to align the text embedding of\ncategory names with the regional visual feature of the diffusion model, using\nsupervision from an off-the-shelf object detector, and a novel self-training\nscheme on (novel) categories not covered by the detector. 
We conduct thorough\nexperiments to show that this enhanced version of the diffusion model, termed\nInstaGen, can serve as a data synthesizer to enhance object detectors by\ntraining on its generated samples, demonstrating superior performance over\nexisting state-of-the-art methods in open-vocabulary (+4.5 AP) and data-sparse\n(+1.2 to 5.2 AP) scenarios. Project page with code:\nhttps://fcjian.github.io/InstaGen.", + "In contrast to extensive studies on general vision, pre-training for scalable\nvisual autonomous driving remains seldom explored. Visual autonomous driving\napplications require features encompassing semantics, 3D geometry, and temporal\ninformation simultaneously for joint perception, prediction, and planning,\nposing dramatic challenges for pre-training. To resolve this, we introduce a new\npre-training task termed visual point cloud forecasting - predicting future\npoint clouds from historical visual input. The key merit of this task is that it captures\nthe synergistic learning of semantics, 3D structures, and temporal dynamics. Hence\nit shows superiority in various downstream tasks. To cope with this new\nproblem, we present ViDAR, a general model to pre-train downstream visual\nencoders. It first extracts historical embeddings by the encoder. These\nrepresentations are then transformed to 3D geometric space via a novel Latent\nRendering operator for future point cloud prediction. Experiments show\nsignificant gains in downstream tasks, e.g., 3.1% NDS on 3D detection, ~10%\nerror reduction on motion forecasting, and ~15% less collision rate on\nplanning.", + "Compared with transferable untargeted attacks, transferable targeted\nadversarial attacks could specify the misclassification categories of\nadversarial samples, posing a greater threat to security-critical tasks.\nMeanwhile, 3D adversarial samples, due to their potential for multi-view\nrobustness, can more comprehensively identify weaknesses in existing deep\nlearning systems, possessing great application value. However, the field of\ntransferable targeted 3D adversarial attacks remains vacant. The goal of this\nwork is to develop a more effective technique that could generate transferable\ntargeted 3D adversarial examples, filling the gap in this field. To achieve\nthis goal, we design a novel framework named TT3D that can rapidly\nreconstruct Transferable Targeted 3D textured meshes from a few multi-view\nimages.", + "To achieve\nthis goal, we design a novel framework named TT3D that can rapidly\nreconstruct Transferable Targeted 3D textured meshes from a few multi-view\nimages. While existing mesh-based texture optimization methods compute\ngradients in the high-dimensional mesh space and easily fall into local optima,\nleading to unsatisfactory transferability and distinct distortions, TT3D\ninnovatively performs dual optimization towards both feature grid and\nMulti-layer Perceptron (MLP) parameters in the grid-based NeRF space, which\nsignificantly enhances black-box transferability while enjoying naturalness.\nExperimental results show that TT3D not only exhibits superior cross-model\ntransferability but also maintains considerable adaptability across different\nrenderers and vision tasks. More importantly, we produce 3D adversarial examples\nwith 3D printing techniques in the real world and verify their robust\nperformance under various scenarios.", + "We introduce a co-designed approach for human portrait relighting that\ncombines a physics-guided architecture with a pre-training framework. 
Drawing\non the Cook-Torrance reflectance model, we have meticulously configured the\narchitecture design to precisely simulate light-surface interactions.\nFurthermore, to overcome the limitation of scarce high-quality lightstage data,\nwe have developed a self-supervised pre-training strategy. This novel\ncombination of accurate physical modeling and expanded training dataset\nestablishes a new benchmark in relighting realism.", + "Recently, leveraging large language models (LLMs) or multimodal large\nlanguage models (MLLMs) for document understanding has been proven very\npromising. However, previous works that employ LLMs/MLLMs for document\nunderstanding have not fully explored and utilized the document layout\ninformation, which is vital for precise document understanding. In this paper,\nwe propose LayoutLLM, an LLM/MLLM based method for document understanding. The\ncore of LayoutLLM is a layout instruction tuning strategy, which is specially\ndesigned to enhance the comprehension and utilization of document layouts. The\nproposed layout instruction tuning strategy consists of two components:\nLayout-aware Pre-training and Layout-aware Supervised Fine-tuning. To capture\nthe characteristics of document layout in Layout-aware Pre-training, three\ngroups of pre-training tasks, corresponding to document-level, region-level and\nsegment-level information, are introduced. Furthermore, a novel module called\nlayout chain-of-thought (LayoutCoT) is devised to enable LayoutLLM to focus on\nregions relevant to the question and generate accurate answers. LayoutCoT is\neffective for boosting the performance of document understanding.", + "Furthermore, a novel module called\nlayout chain-of-thought (LayoutCoT) is devised to enable LayoutLLM to focus on\nregions relevant to the question and generate accurate answers. LayoutCoT is\neffective for boosting the performance of document understanding. Meanwhile, it\nbrings a certain degree of interpretability, which could facilitate manual\ninspection and correction. Experiments on standard benchmarks show that the\nproposed LayoutLLM significantly outperforms existing methods that adopt\nopen-source 7B LLMs/MLLMs for document understanding. The training data of the\nLayoutLLM is publicly available at\nhttps://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/LayoutLLM", + "Visual-language foundation models, like CLIP, learn generalized\nrepresentations that enable zero-shot open-set classification. Few-shot\nadaptation methods, based on prompt tuning, have been shown to further improve\nperformance on downstream datasets. However, these methods do not fare well in\nthe taxonomic open set (TOS) setting, where the classifier is asked to make\npredictions from label sets across different levels of semantic granularity.\nFrequently, they infer incorrect labels at coarser taxonomic class levels, even\nwhen the inference at the leaf level (original class labels) is correct. To\naddress this problem, we propose a prompt tuning technique that calibrates the\nhierarchical consistency of model predictions. A set of metrics of hierarchical\nconsistency, the Hierarchical Consistent Accuracy (HCA) and the Mean Treecut\nAccuracy (MTA), are first proposed to evaluate TOS model performance. A new\nPrompt Tuning for Hierarchical Consistency (ProTeCt) technique is then proposed\nto calibrate classification across label set granularities. 
Results show that\nProTeCt can be combined with existing prompt tuning methods to significantly\nimprove TOS classification without degrading the leaf-level classification\nperformance.", + "Featurizing microscopy images for use in biological research remains a\nsignificant challenge, especially for large-scale experiments spanning millions\nof images. This work explores the scaling properties of weakly supervised\nclassifiers and self-supervised masked autoencoders (MAEs) when training with\nincreasingly larger model backbones and microscopy datasets. Our results show\nthat ViT-based MAEs outperform weakly supervised classifiers on a variety of\ntasks, achieving as much as an 11.5% relative improvement when recalling known\nbiological relationships curated from public databases. Additionally, we\ndevelop a new channel-agnostic MAE architecture (CA-MAE) that allows for\ninputting images of different numbers and orders of channels at inference time.\nWe demonstrate that CA-MAEs effectively generalize by inferring and evaluating\non a microscopy image dataset (JUMP-CP) generated under different experimental\nconditions with a different channel structure than our pretraining data\n(RPI-93M). Our findings motivate continued research into scaling\nself-supervised learning on microscopy data in order to create powerful\nfoundation models of cellular biology that have the potential to catalyze\nadvancements in drug discovery and beyond.", + "In this paper, we delve into the creation of one-shot hand avatars, attaining\nhigh-fidelity and drivable hand representations swiftly from a single image.\nWith the burgeoning domain of digital humans, the need for quick and\npersonalized hand avatar creation has become increasingly critical. Existing\ntechniques typically require extensive input data and may prove cumbersome or\neven impractical in certain scenarios. To enhance accessibility, we present a\nnovel method OHTA (One-shot Hand avaTAr) that enables the creation of detailed\nhand avatars from merely one image. OHTA tackles the inherent difficulties of\nthis data-limited problem by learning and utilizing data-driven hand priors.\nSpecifically, we design a hand prior model initially employed for 1) learning\nvarious hand priors with available data and subsequently for 2) the inversion\nand fitting of the target identity with prior knowledge. OHTA demonstrates the\ncapability to create high-fidelity hand avatars with consistent animatable\nquality, solely relying on a single image. Furthermore, we illustrate the\nversatility of OHTA through diverse applications, encompassing text-to-avatar\nconversion, hand editing, and identity latent space manipulation.", + "We propose a method to efficiently equip the Segment Anything Model (SAM)\nwith the ability to generate regional captions. SAM presents strong\ngeneralizability in segmenting anything but falls short in semantic understanding.\nBy introducing a lightweight query-based feature mixer, we align the\nregion-specific features with the embedding space of language models for later\ncaption generation. As the number of trainable parameters is small (typically\non the order of tens of millions), it costs less computation, less memory\nusage, and less communication bandwidth, resulting in both fast and scalable\ntraining. To address the scarcity problem of regional caption data, we propose\nto first pre-train our model on object detection and segmentation tasks. 
We\ncall this step weak supervision pretraining since the pre-training data only\ncontains category names instead of full-sentence descriptions. The weak\nsupervision pretraining allows us to leverage many publicly available object\ndetection and segmentation datasets. We conduct extensive experiments to\ndemonstrate the superiority of our method and validate each design choice. This\nwork serves as a stepping stone towards scaling up regional captioning data and\nsheds light on exploring efficient ways to augment SAM with regional semantics.", + "We conduct extensive experiments to\ndemonstrate the superiority of our method and validate each design choice. This\nwork serves as a stepping stone towards scaling up regional captioning data and\nsheds light on exploring efficient ways to augment SAM with regional semantics.\nThe project page, along with the associated code, can be accessed via\nhttps://xk-huang.github.io/segment-caption-anything/.", + "Most 3D generation research focuses on up-projecting 2D foundation models\ninto the 3D space, either by minimizing 2D Score Distillation Sampling (SDS)\nloss or fine-tuning on multi-view datasets. Without explicit 3D priors, these\nmethods often lead to geometric anomalies and multi-view inconsistency.\nRecently, researchers have attempted to improve the genuineness of 3D objects\nby directly training on 3D datasets, albeit at the cost of low-quality texture\ngeneration due to the limited texture diversity in 3D datasets. To harness the\nadvantages of both approaches, we propose Bidirectional Diffusion(BiDiff), a\nunified framework that incorporates both a 3D and a 2D diffusion process, to\npreserve both 3D fidelity and 2D texture richness, respectively. Moreover, as a\nsimple combination may yield inconsistent generation results, we further bridge\nthem with novel bidirectional guidance. In addition, our method can be used as\nan initialization of optimization-based models to further improve the quality\nof 3D model and efficiency of optimization, reducing the generation process\nfrom 3.4 hours to 20 minutes.", + "In addition, our method can be used as\nan initialization of optimization-based models to further improve the quality\nof 3D model and efficiency of optimization, reducing the generation process\nfrom 3.4 hours to 20 minutes. Experimental results have shown that our model\nachieves high-quality, diverse, and scalable 3D generation. Project website:\nhttps://bidiff.github.io/.", + "In autonomous driving, behavior prediction is fundamental for safe motion\nplanning, hence the security and robustness of prediction models against\nadversarial attacks are of paramount importance. We propose a novel adversarial\nbackdoor attack against trajectory prediction models as a means of studying\ntheir potential vulnerabilities. Our attack affects the victim at training time\nvia naturalistic, hence stealthy, poisoned samples crafted using a novel\ntwo-step approach. First, the triggers are crafted by perturbing the trajectory\nof attacking vehicle and then disguised by transforming the scene using a\nbi-level optimization technique. The proposed attack does not depend on a\nparticular model architecture and operates in a black-box manner, thus can be\neffective without any knowledge of the victim model. We conduct extensive\nempirical studies using state-of-the-art prediction models on two benchmark\ndatasets using metrics customized for trajectory prediction. 
We show that the\nproposed attack is highly effective, as it can significantly hinder the\nperformance of prediction models, unnoticeable by the victims, and efficient as\nit forces the victim to generate malicious behavior even under constrained\nconditions. Via ablative studies, we analyze the impact of different attack\ndesign choices followed by an evaluation of existing defence mechanisms against\nthe proposed attack.", + "Point cloud filtering is a fundamental 3D vision task, which aims to remove\nnoise while recovering the underlying clean surfaces. State-of-the-art methods\nremove noise by moving noisy points along stochastic trajectories to the clean\nsurfaces. These methods often require regularization within the training\nobjective and/or during post-processing, to ensure fidelity. In this paper, we\nintroduce StraightPCF, a new deep learning based method for point cloud\nfiltering. It works by moving noisy points along straight paths, thus reducing\ndiscretization errors while ensuring faster convergence to the clean surfaces.\nWe model noisy patches as intermediate states between high noise patch variants\nand their clean counterparts, and design the VelocityModule to infer a constant\nflow velocity from the former to the latter. This constant flow leads to\nstraight filtering trajectories. In addition, we introduce a DistanceModule\nthat scales the straight trajectory using an estimated distance scalar to\nattain convergence near the clean surface. Our network is lightweight and only\nhas $\\sim530K$ parameters, being 17% of IterativePFN (a most recent point cloud\nfiltering network).", + "Our network is lightweight and only\nhas $\\sim530K$ parameters, being 17% of IterativePFN (a most recent point cloud\nfiltering network). Extensive experiments on both synthetic and real-world data\nshow our method achieves state-of-the-art results. Our method also demonstrates\nnice distributions of filtered points without the need for regularization. The\nimplementation code can be found at: https://github.com/ddsediri/StraightPCF.", + "One of the main challenges of multimodal learning is the need to combine\nheterogeneous modalities (e.g., video, audio, text). For example, video and\naudio are obtained at much higher rates than text and are roughly aligned in\ntime. They are often not synchronized with text, which comes as a global\ncontext, e.g., a title, or a description. Furthermore, video and audio inputs\nare of much larger volumes, and grow as the video length increases, which\nnaturally requires more compute dedicated to these modalities and makes\nmodeling of long-range dependencies harder.\n We here decouple the multimodal modeling, dividing it into separate, focused\nautoregressive models, processing the inputs according to the characteristics\nof the modalities. We propose a multimodal model, called Mirasol3B, consisting\nof an autoregressive component for the time-synchronized modalities (audio and\nvideo), and an autoregressive component for the context modalities which are\nnot necessarily aligned in time but are still sequential.", + "We propose a multimodal model, called Mirasol3B, consisting\nof an autoregressive component for the time-synchronized modalities (audio and\nvideo), and an autoregressive component for the context modalities which are\nnot necessarily aligned in time but are still sequential. 
To address the\nlong sequences of the video-audio inputs, we propose to further partition the\nvideo and audio sequences into consecutive snippets and autoregressively process\ntheir representations. To that end, we propose a Combiner mechanism, which\nmodels the audio-video information jointly within a timeframe. The Combiner\nlearns to extract audio and video features from raw spatio-temporal signals,\nand then learns to fuse these features, producing compact but expressive\nrepresentations per snippet.\n Our approach achieves the state-of-the-art on well-established multimodal\nbenchmarks, outperforming much larger models. It effectively addresses the high\ncomputational demand of media inputs by learning compact representations,\ncontrolling the sequence length of the audio-video feature representations, and\nmodeling their dependencies in time.", + "Sign Languages (SL) serve as the primary mode of communication for the Deaf\nand Hard of Hearing communities. Deep learning methods for SL recognition and\ntranslation have achieved promising results. However, Sign Language Production\n(SLP) poses a challenge as the generated motions must be realistic and have\nprecise semantic meaning. Most SLP methods rely on 2D data, which hinders their\nrealism. In this work, a diffusion-based SLP model is trained on a curated\nlarge-scale dataset of 4D signing avatars and their corresponding text\ntranscripts. The proposed method can generate dynamic sequences of 3D avatars\nfrom an unconstrained domain of discourse using a diffusion process formed on a\nnovel and anatomically informed graph neural network defined on the SMPL-X body\nskeleton. Through quantitative and qualitative experiments, we show that the\nproposed method considerably outperforms previous methods of SLP. This work\nmakes an important step towards realistic neural sign avatars, bridging the\ncommunication gap between Deaf and hearing communities.", + "Contemporary machine learning requires training large neural networks on\nmassive datasets and thus faces the challenge of high computational demands.\nDataset distillation, a recently emerging strategy, aims to compress\nreal-world datasets for efficient training. However, this line of research\ncurrently struggles with large-scale and high-resolution datasets, hindering its\npracticality and feasibility. To this end, we re-examine the existing dataset\ndistillation methods and identify three properties required for large-scale\nreal-world applications, namely, realism, diversity, and efficiency. As a\nremedy, we propose RDED, a novel computationally-efficient yet effective data\ndistillation paradigm, to enable both diversity and realism of the distilled\ndata. Extensive empirical results over various neural architectures and\ndatasets demonstrate the advancement of RDED: we can distill the full\nImageNet-1K to a small dataset comprising 10 images per class within 7 minutes,\nachieving a notable 42% top-1 accuracy with ResNet-18 on a single RTX-4090 GPU\n(while the SOTA only achieves 21% but requires 6 hours).", + "Capturing and preserving motion semantics is essential to motion retargeting\nbetween animation characters. However, most of the previous works neglect the\nsemantic information or rely on human-designed joint-level representations.\nHere, we present a novel Semantics-aware Motion reTargeting (SMT) method that\nleverages vision-language models to extract and maintain meaningful\nmotion semantics. We utilize a differentiable module to render 3D motions. 
Then\nthe high-level motion semantics are incorporated into the motion retargeting\nprocess by feeding the vision-language model with the rendered images and\naligning the extracted semantic embeddings. To ensure the preservation of\nfine-grained motion details and high-level semantics, we adopt a two-stage\npipeline consisting of skeleton-aware pre-training and fine-tuning with\nsemantics and geometry constraints. Experimental results show the effectiveness\nof the proposed method in producing high-quality motion retargeting results\nwhile accurately preserving motion semantics.", + "Class-incremental learning (CIL) aims to enable models to continuously learn\nnew classes while overcoming catastrophic forgetting. The introduction of\npre-trained models has brought new tuning paradigms to CIL. In this paper, we\nrevisit different parameter-efficient tuning (PET) methods within the context\nof continual learning. We observe that adapter tuning demonstrates superiority\nover prompt-based methods, even without parameter expansion in each learning\nsession. Motivated by this, we propose incrementally tuning the shared adapter\nwithout imposing parameter update constraints, enhancing the learning capacity\nof the backbone. Additionally, we employ feature sampling from stored\nprototypes to retrain a unified classifier, further improving its performance.\nWe estimate the semantic shift of old prototypes without access to past samples\nand update stored prototypes session by session. Our proposed method eliminates\nmodel expansion and avoids retaining any image samples. It surpasses previous\npre-trained model-based CIL methods and demonstrates remarkable continual\nlearning capabilities. Experimental results on five CIL benchmarks validate the\neffectiveness of our approach, achieving state-of-the-art (SOTA) performance.", + "The recently proposed SparseFormer architecture provides an alternative\napproach to visual understanding by utilizing a significantly lower number of\nvisual tokens via adjusting RoIs, greatly reducing computational costs while\nstill achieving promising performance. However, training SparseFormers from\nscratch is still expensive, and scaling up the number of parameters can be\nchallenging. In this paper, we propose to bootstrap SparseFormers from\nViT-based vision foundation models in a simple and efficient way. Since the\nmajority of SparseFormer blocks are the standard transformer ones, we can\ninherit weights from large-scale pre-trained vision transformers and freeze\nthem as much as possible. Therefore, we only need to train the\nSparseFormer-specific lightweight focusing transformer to adjust token RoIs and\nfine-tune a few early pre-trained blocks to align the final token\nrepresentation. In such a way, we can bootstrap SparseFormer architectures from\nvarious large-scale pre-trained models (e.g., IN-21K pre-trained AugRegs or\nCLIPs) using a rather smaller amount of training samples (e.g., IN-1K) and\nwithout labels or captions within just a few hours.", + "As a result, the\nbootstrapped unimodal SparseFormer (from AugReg-ViT-L/16-384) can reach 84.9%\naccuracy on IN-1K with only 49 tokens, and the multimodal SparseFormer from\nCLIPs also demonstrates notable zero-shot performance with highly reduced\ncomputational cost without seeing any caption during the bootstrapping\nprocedure. 
In addition, CLIP-bootstrapped SparseFormers, which align the output\nspace with language without seeing a word, can serve as efficient vision\nencoders in multimodal large language models. Code and models are available at\nhttps://github.com/showlab/sparseformer", + "Traditionally, training neural networks to perform semantic segmentation\nrequired expensive human-made annotations. But more recently, advances in the\nfield of unsupervised learning have made significant progress on this issue and\ntowards closing the gap to supervised algorithms. To achieve this, semantic\nknowledge is distilled by learning to correlate randomly sampled features from\nimages across an entire dataset. In this work, we build upon these advances by\nincorporating information about the structure of the scene into the training\nprocess through the use of depth information. We achieve this by (1) learning\ndepth-feature correlation by spatially correlating the feature maps with the\ndepth maps to induce knowledge about the structure of the scene and (2)\nimplementing farthest-point sampling to more effectively select relevant\nfeatures by utilizing 3D sampling techniques on the depth information of the scene.\nFinally, we demonstrate the effectiveness of our technical contributions\nthrough extensive experimentation and present significant improvements in\nperformance across multiple benchmark datasets.", + "End-to-end motion planning models equipped with deep neural networks have\nshown great potential for enabling full autonomous driving. However, the\noversized neural networks render them impractical for deployment on\nresource-constrained systems, which unavoidably requires more computational\ntime and resources during inference. To handle this, knowledge distillation\noffers a promising approach that compresses models by enabling a smaller\nstudent model to learn from a larger teacher model. Nevertheless, how to apply\nknowledge distillation to compress motion planners has not been explored so\nfar. In this paper, we propose PlanKD, the first knowledge distillation\nframework tailored for compressing end-to-end motion planners. First,\nconsidering that driving scenes are inherently complex, often containing\nplanning-irrelevant or even noisy information, transferring such information is\nnot beneficial for the student planner. Thus, we design an information\nbottleneck-based strategy to only distill planning-relevant information, rather\nthan transfer all information indiscriminately. Second, different waypoints in\nan output planned trajectory may hold varying degrees of importance for motion\nplanning, where a slight deviation in certain crucial waypoints might lead to a\ncollision.", + "Second, different waypoints in\nan output planned trajectory may hold varying degrees of importance for motion\nplanning, where a slight deviation in certain crucial waypoints might lead to a\ncollision. Therefore, we devise a safety-aware waypoint-attentive distillation\nmodule that assigns adaptive weights to different waypoints based on their\nimportance, to encourage the student to accurately mimic more crucial\nwaypoints, thereby improving overall safety. Experiments demonstrate that our\nPlanKD can boost the performance of smaller planners by a large margin, and\nsignificantly reduce their inference time.", + "Recent advancements in diffusion-based models have demonstrated significant\nsuccess in generating images from text. 
However, video editing models have not\nyet reached the same level of visual quality and user control. To address this,\nwe introduce RAVE, a zero-shot video editing method that leverages pre-trained\ntext-to-image diffusion models without additional training. RAVE takes an input\nvideo and a text prompt to produce high-quality videos while preserving the\noriginal motion and semantic structure. It employs a novel noise shuffling\nstrategy, leveraging spatio-temporal interactions between frames, to produce\ntemporally consistent videos faster than existing methods. It is also efficient\nin terms of memory requirements, allowing it to handle longer videos. RAVE is\ncapable of a wide range of edits, from local attribute modifications to shape\ntransformations. In order to demonstrate the versatility of RAVE, we create a\ncomprehensive video evaluation dataset ranging from object-focused scenes to\ncomplex human activities like dancing and typing, and dynamic scenes featuring\nswimming fish and boats. Our qualitative and quantitative experiments highlight\nthe effectiveness of RAVE in diverse video editing scenarios compared to\nexisting methods. Our code, dataset and videos can be found in\nhttps://rave-video.github.io.", + "By leveraging temporal dependency in video sequences, multi-frame human pose\nestimation algorithms have demonstrated remarkable results in complicated\nsituations, such as occlusion, motion blur, and video defocus. These algorithms\nare predominantly based on heatmaps, resulting in high computation and storage\nrequirements per frame, which limits their flexibility and real-time\napplication in video scenarios, particularly on edge devices. In this paper, we\ndevelop an efficient and effective video-based human pose regression method,\nwhich bypasses intermediate representations such as heatmaps and instead\ndirectly maps the input to the output joint coordinates. Despite the inherent\nspatial correlation among adjacent joints of the human pose, the temporal\ntrajectory of each individual joint exhibits relative independence. In light of\nthis, we propose a novel Decoupled Space-Time Aggregation network (DSTA) to\nseparately capture the spatial contexts between adjacent joints and the\ntemporal cues of each individual joint, thereby avoiding the conflation of\nspatiotemporal dimensions. Concretely, DSTA learns a dedicated feature token\nfor each joint to facilitate the modeling of their spatiotemporal dependencies.", + "Concretely, DSTA learns a dedicated feature token\nfor each joint to facilitate the modeling of their spatiotemporal dependencies.\nWith the proposed joint-wise local-awareness attention mechanism, our method is\ncapable of efficiently and flexibly utilizing the spatial dependency of\nadjacent joints and the temporal dependency of each joint itself. Extensive\nexperiments demonstrate the superiority of our method. Compared to previous\nregression-based single-frame human pose estimation methods, DSTA significantly\nenhances performance, achieving an 8.9 mAP improvement on PoseTrack2017.\nFurthermore, our approach either surpasses or is on par with the\nstate-of-the-art heatmap-based multi-frame human pose estimation methods.\nProject page: https://github.com/zgspose/DSTA.", + "When working with 3D facial data, improving fidelity and avoiding the uncanny\nvalley effect is critically dependent on accurate 3D facial performance\ncapture. 
Because such capture methods are expensive and 2D videos are widely\navailable, recent methods have focused on monocular 3D face tracking. However,\nthese methods often fall short in\ncapturing precise facial movements due to limitations in their network\narchitecture, training, and evaluation processes. Addressing these challenges,\nwe propose a novel face tracker, FlowFace, that introduces an innovative 2D\nalignment network for dense per-vertex alignment. Unlike prior work, FlowFace\nis trained on high-quality 3D scan annotations rather than weak supervision or\nsynthetic data. Our 3D model fitting module jointly fits a 3D face model from\none or many observations, integrating existing neutral shape priors for\nenhanced identity and expression disentanglement and per-vertex deformations\nfor detailed facial feature reconstruction. Additionally, we propose a novel\nmetric and benchmark for assessing tracking accuracy. Our method exhibits\nsuperior performance on both custom and publicly available benchmarks.", + "Additionally, we propose a novel\nmetric and benchmark for assessing tracking accuracy. Our method exhibits\nsuperior performance on both custom and publicly available benchmarks. We\nfurther validate the effectiveness of our tracker by generating high-quality 3D\ndata from 2D videos, which leads to performance gains on downstream tasks.", + "Multi-view diffusion models, obtained by applying Supervised Finetuning (SFT)\nto text-to-image diffusion models, have driven recent breakthroughs in\ntext-to-3D research. However, due to the limited size and quality of existing\n3D datasets, they still suffer from multi-view inconsistencies and Neural\nRadiance Field (NeRF) reconstruction artifacts. We argue that multi-view\ndiffusion models can benefit from further Reinforcement Learning Finetuning\n(RLFT), which allows models to learn from the data generated by themselves and\nimprove beyond their dataset limitations during SFT. To this end, we introduce\nCarve3D, an improved RLFT algorithm coupled with a novel Multi-view\nReconstruction Consistency (MRC) metric, to enhance the consistency of\nmulti-view diffusion models. To measure the MRC metric on a set of multi-view\nimages, we compare them with their corresponding NeRF renderings at the same\ncamera viewpoints. The resulting model, which we denote as Carve3DM,\ndemonstrates superior multi-view consistency and NeRF reconstruction quality\ncompared to existing models.", + "The resulting model, which we denote as Carve3DM,\ndemonstrates superior multi-view consistency and NeRF reconstruction quality\ncompared to existing models. Our results suggest that pairing SFT with Carve3D's RLFT\nis essential for developing multi-view-consistent diffusion models, mirroring\nthe standard Large Language Model (LLM) alignment pipeline. Our code, training\nand testing data, and video results are available at:\nhttps://desaixie.github.io/carve-3d.", + "In the realm of image composition, generating realistic shadows for the\ninserted foreground remains a formidable challenge. Previous works have\ndeveloped image-to-image translation models which are trained on paired\ntraining data. However, they struggle to generate shadows with accurate\nshapes and intensities, hindered by data scarcity and inherent task complexity.\nIn this paper, we resort to a foundation model with rich prior knowledge of\nnatural shadow images. 
Specifically, we first adapt ControlNet to our task and\nthen propose intensity modulation modules to improve the shadow intensity.\nMoreover, we extend the small-scale DESOBA dataset to DESOBAv2 using a novel\ndata acquisition pipeline. Experimental results on both DESOBA and DESOBAv2\ndatasets as well as real composite images demonstrate the superior capability\nof our model for the shadow generation task. The dataset, code, and model are\nreleased at https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2.", + "Generative AI has made significant strides in computer vision, particularly\nin text-driven image/video synthesis (T2I/T2V). Despite the notable\nadvancements, it remains challenging for human-centric content synthesis such as\nrealistic dance generation. Current methodologies, primarily tailored for human\nmotion transfer, encounter difficulties when confronted with real-world dance\nscenarios (e.g., social media dance), which require generalizing across a wide\nspectrum of poses and intricate human details. In this paper, we depart from\nthe traditional paradigm of human motion transfer and emphasize two additional\ncritical attributes for the synthesis of human dance content in social media\ncontexts: (i) Generalizability: the model should be able to generalize beyond\ngeneric human viewpoints as well as unseen human subjects, backgrounds, and\nposes; (ii) Compositionality: it should allow for the seamless composition of\nseen/unseen subjects, backgrounds, and poses from different sources. To address\nthese challenges, we introduce DISCO, which includes a novel model architecture\nwith disentangled control to improve the compositionality of dance synthesis,\nand an effective human attribute pre-training for better generalizability to\nunseen humans.", + "To address\nthese challenges, we introduce DISCO, which includes a novel model architecture\nwith disentangled control to improve the compositionality of dance synthesis,\nand an effective human attribute pre-training for better generalizability to\nunseen humans. Extensive qualitative and quantitative results demonstrate that\nDISCO can generate high-quality human dance images and videos with diverse\nappearances and flexible motions. Code is available at\nhttps://disco-dance.github.io/.", + "Deep neural networks have shown great success in representation learning.\nHowever, when learning with noisy labels (LNL), they can easily overfit and\nfail to generalize to new data. This paper introduces a simple and effective\nmethod, named Learning to Bootstrap (L2B), which enables models to bootstrap\nthemselves using their own predictions without being adversely affected by\nerroneous pseudo-labels. It achieves this by dynamically adjusting the\nimportance weight between real observed and generated labels, as well as\nbetween different samples through meta-learning. Unlike existing instance\nreweighting methods, the key to our method lies in a new, versatile objective\nthat enables implicit relabeling concurrently, leading to significant\nimprovements without incurring additional costs.\n L2B offers several benefits over the baseline methods. It yields more robust\nmodels that are less susceptible to the impact of noisy labels by guiding the\nbootstrapping procedure more effectively. It better exploits the valuable\ninformation contained in corrupted instances by adapting the weights of both\ninstances and labels.", + "L2B offers several benefits over the baseline methods. 
It yields more robust\nmodels that are less susceptible to the impact of noisy labels by guiding the\nbootstrapping procedure more effectively. It better exploits the valuable\ninformation contained in corrupted instances by adapting the weights of both\ninstances and labels. Furthermore, L2B is compatible with existing LNL methods\nand delivers competitive results spanning natural and medical imaging tasks\nincluding classification and segmentation under both synthetic and real-world\nnoise. Extensive experiments demonstrate that our method effectively mitigates\nthe challenges of noisy labels, often necessitating few to no validation\nsamples, and generalizes well to other tasks such as image segmentation.\nThis not only positions it as a robust complement to existing LNL techniques\nbut also underscores its practical applicability. The code and models are\navailable at https://github.com/yuyinzhou/l2b.", + "The advent of neural 3D Gaussians has recently brought about a revolution in\nthe field of neural rendering, facilitating the generation of high-quality\nrenderings at real-time speeds. However, the explicit and discrete\nrepresentation encounters challenges when applied to scenes featuring\nreflective surfaces. In this paper, we present GaussianShader, a novel method\nthat applies a simplified shading function on 3D Gaussians to enhance the\nneural rendering in scenes with reflective surfaces while preserving the\ntraining and rendering efficiency. The main challenge in applying the shading\nfunction lies in the accurate normal estimation on discrete 3D Gaussians.\nSpecifically, we propose a novel normal estimation framework based on the\nshortest axis directions of 3D Gaussians with a delicately designed loss to\nenforce consistency between the normals and the geometries of the Gaussian\nspheres. Experiments show that GaussianShader strikes a commendable balance\nbetween efficiency and visual quality. Our method surpasses Gaussian Splatting\nin PSNR on specular object datasets, exhibiting an improvement of 1.57dB.", + "Experiments show that GaussianShader strikes a commendable balance\nbetween efficiency and visual quality. Our method surpasses Gaussian Splatting\nin PSNR on specular object datasets, exhibiting an improvement of 1.57dB. When\ncompared to prior works handling reflective surfaces, such as Ref-NeRF, our\noptimization time is significantly accelerated (23h vs. 0.58h). Please click on\nour project website to see more results.", + "We present a scene representation, which we call a tactile-augmented radiance\nfield (TaRF), that brings vision and touch into a shared 3D space. This\nrepresentation can be used to estimate the visual and tactile signals for a\ngiven 3D position within a scene. We capture a scene's TaRF from a collection\nof photos and sparsely sampled touch probes. Our approach makes use of two\ninsights: (i) common vision-based touch sensors are built on ordinary cameras\nand thus can be registered to images using methods from multi-view geometry,\nand (ii) visually and structurally similar regions of a scene share the same\ntactile features. We use these insights to register touch signals to a captured\nvisual scene, and to train a conditional diffusion model that, provided with an\nRGB-D image rendered from a neural radiance field, generates its corresponding\ntactile signal. To evaluate our approach, we collect a dataset of TaRFs. 
This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal.", + "To evaluate our approach, we collect a dataset of TaRFs. This\ndataset contains more touch samples than previous real-world datasets, and it\nprovides spatially aligned visual signals for each captured touch signal. We\ndemonstrate the accuracy of our cross-modal generative model and the utility of\nthe captured visual-tactile data on several downstream tasks. Project page:\nhttps://dou-yiming.github.io/TaRF", + "Fairness is a critical concern in deep learning, especially in healthcare,\nwhere these models influence diagnoses and treatment decisions. Although\nfairness has been investigated in the vision-only domain, the fairness of\nmedical vision-language (VL) models remains unexplored due to the scarcity of\nmedical VL datasets for studying fairness. To bridge this research gap, we\nintroduce the first fair vision-language medical dataset Harvard-FairVLMed that\nprovides detailed demographic attributes, ground-truth labels, and clinical\nnotes to facilitate an in-depth examination of fairness within VL foundation\nmodels. Using Harvard-FairVLMed, we conduct a comprehensive fairness analysis\nof two widely-used VL models (CLIP and BLIP2), pre-trained on both natural and\nmedical domains, across four different protected attributes. Our results\nhighlight significant biases in all VL models, with Asian, Male, Non-Hispanic,\nand Spanish being the preferred subgroups across the protected attributes of\nrace, gender, ethnicity, and language, respectively.", + "Our results\nhighlight significant biases in all VL models, with Asian, Male, Non-Hispanic,\nand Spanish being the preferred subgroups across the protected attributes of\nrace, gender, ethnicity, and language, respectively. In order to alleviate\nthese biases, we propose FairCLIP, an optimal-transport-based approach that\nachieves a favorable trade-off between performance and fairness by reducing the\nSinkhorn distance between the overall sample distribution and the distributions\ncorresponding to each demographic group. As the first VL dataset of its kind,\nHarvard-FairVLMed holds the potential to catalyze advancements in the\ndevelopment of machine learning models that are both ethically aware and\nclinically effective. Our dataset and code are available at\nhttps://ophai.hms.harvard.edu/datasets/harvard-fairvlmed10k.", + "While neural networks have excelled in video action recognition tasks, their\nblack-box nature often obscures the understanding of their decision-making\nprocesses. Recent approaches used inherently interpretable models to analyze\nvideo actions in a manner akin to human reasoning. These models, however,\nusually fall short in performance compared to their black-box counterparts. In\nthis work, we present a new framework named Language-guided Interpretable\nAction Recognition framework (LaIAR). LaIAR leverages knowledge from language\nmodels to enhance both the recognition capabilities and the interpretability of\nvideo models. In essence, we redefine the problem of understanding video model\ndecisions as a task of aligning video and language models. Using the logical\nreasoning captured by the language model, we steer the training of the video\nmodel. 
This integrated approach not only improves the video model's\nadaptability to different domains but also boosts its overall performance.\nExtensive experiments on two complex video action datasets, Charades & CAD-120,\nvalidate the improved performance and interpretability of our LaIAR framework.\nThe code of LaIAR is available at https://github.com/NingWang2049/LaIAR.", + "The autonomous driving community has shown significant interest in 3D\noccupancy prediction, driven by its exceptional geometric perception and\ngeneral object recognition capabilities. To achieve this, current works try to\nconstruct a Tri-Perspective View (TPV) or Occupancy (OCC) representation\nextending from the Bird-Eye-View perception. However, compressed views like TPV\nrepresentation lose 3D geometry information while raw and sparse OCC\nrepresentation incurs heavy but redundant computational costs. To address the\nabove limitations, we propose Compact Occupancy TRansformer (COTR), with a\ngeometry-aware occupancy encoder and a semantic-aware group decoder to\nreconstruct a compact 3D OCC representation. The occupancy encoder first\ngenerates a compact geometrical OCC feature through efficient explicit-implicit\nview transformation. Then, the occupancy decoder further enhances the semantic\ndiscriminability of the compact OCC representation by a coarse-to-fine semantic\ngrouping strategy. Empirical experiments show that there are evident\nperformance gains across multiple baselines, e.g., COTR outperforms baselines\nwith a relative improvement of 8%-15%, demonstrating the superiority of our\nmethod.", + "Diffusion probabilistic models (DPMs) have shown remarkable performance in\nhigh-resolution image synthesis, but their sampling efficiency still leaves much to be\ndesired due to the typically large number of sampling steps. Recent\nadvancements in high-order numerical ODE solvers for DPMs have enabled the\ngeneration of high-quality images with far fewer sampling steps. While this is\na significant development, most sampling methods still employ uniform time\nsteps, which is not optimal when using a small number of steps. To address this\nissue, we propose a general framework for designing an optimization problem\nthat seeks more appropriate time steps for a specific numerical ODE solver for\nDPMs. This optimization problem aims to minimize the distance between the\nground-truth solution to the ODE and an approximate solution corresponding to\nthe numerical solver. It can be efficiently solved using the constrained trust\nregion method, taking less than $15$ seconds.", + "This optimization problem aims to minimize the distance between the\nground-truth solution to the ODE and an approximate solution corresponding to\nthe numerical solver. It can be efficiently solved using the constrained trust\nregion method, taking less than $15$ seconds. Our extensive experiments on both\nunconditional and conditional sampling using pixel- and latent-space DPMs\ndemonstrate that, when combined with the state-of-the-art sampling method\nUniPC, our optimized time steps significantly improve image generation\nperformance in terms of FID scores for datasets such as CIFAR-10 and ImageNet,\ncompared to using uniform time steps.", + "Current open-source Large Multimodal Models (LMMs) excel at tasks such as\nopen-vocabulary language grounding and segmentation but can suffer under false\npremises when queries imply the existence of something that is not actually\npresent in the image. 
We observe that existing methods that fine-tune an LMM to\nsegment images significantly degrade their ability to reliably determine\n(\"see\") if an object is present and to interact naturally with humans (\"say\"),\na form of catastrophic forgetting. In this work, we propose a cascading and\njoint training approach for LMMs to solve this task, avoiding catastrophic\nforgetting of previous skills. Our resulting model can \"see\" by detecting\nwhether objects are present in an image, \"say\" by telling the user if they are\nnot, proposing alternative queries or correcting semantic errors in the query,\nand finally \"segment\" by outputting the mask of the desired objects if they\nexist. Additionally, we introduce a novel False Premise Correction benchmark\ndataset, an extension of existing RefCOCO(+/g) referring segmentation datasets\n(which we call FP-RefCOCO(+/g)).", + "Additionally, we introduce a novel False Premise Correction benchmark\ndataset, an extension of existing RefCOCO(+/g) referring segmentation datasets\n(which we call FP-RefCOCO(+/g)). The results show that our method not only\ndetects false premises up to 55% better than existing approaches, but under\nfalse premise conditions produces relative cIOU improvements of more than 31%\nover baselines, and produces natural language feedback judged helpful up to 67%\nof the time.", + "End-to-end autonomous driving recently emerged as a promising research\ndirection to target autonomy from a full-stack perspective. Along this line,\nmany of the latest works follow an open-loop evaluation setting on nuScenes to\nstudy the planning behavior. In this paper, we delve deeper into the problem by\nconducting thorough analyses and demystifying more devils in the details. We\ninitially observed that the nuScenes dataset, characterized by relatively\nsimple driving scenarios, leads to an under-utilization of perception\ninformation in end-to-end models incorporating ego status, such as the ego\nvehicle's velocity. These models tend to rely predominantly on the ego\nvehicle's status for future path planning. Beyond the limitations of the\ndataset, we also note that current metrics do not comprehensively assess the\nplanning quality, leading to potentially biased conclusions drawn from existing\nbenchmarks. To address this issue, we introduce a new metric to evaluate\nwhether the predicted trajectories adhere to the road. We further propose a\nsimple baseline able to achieve competitive results without relying on\nperception annotations.", + "To address this issue, we introduce a new metric to evaluate\nwhether the predicted trajectories adhere to the road. We further propose a\nsimple baseline able to achieve competitive results without relying on\nperception annotations. Given the current limitations on the benchmark and\nmetrics, we suggest the community reassess relevant prevailing research and be\ncautious whether the continued pursuit of state-of-the-art would yield\nconvincing and universal conclusions. Code and models are available at\n\\url{https://github.com/NVlabs/BEV-Planner}", + "Unsupervised point cloud shape correspondence aims to establish point-wise\ncorrespondences between source and target point clouds. Existing methods obtain\ncorrespondences directly by computing point-wise feature similarity between\npoint clouds. However, non-rigid objects possess strong deformability and\nunusual shapes, making it a longstanding challenge to directly establish\ncorrespondences between point clouds with unconventional shapes. 
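To make the baseline mentioned above concrete, here is a minimal sketch of establishing correspondences by point-wise feature similarity between two point clouds, assuming per-point features have already been extracted; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def pointwise_correspondence(src_feats, tgt_feats, temperature=0.07):
    """src_feats: (N, C), tgt_feats: (M, C) per-point features.
    Returns a soft correspondence matrix (N, M) and hard matches (N,)."""
    src = F.normalize(src_feats, dim=-1)
    tgt = F.normalize(tgt_feats, dim=-1)
    sim = src @ tgt.t() / temperature        # scaled cosine similarity
    soft_corr = sim.softmax(dim=-1)          # each source point -> distribution over targets
    hard_match = soft_corr.argmax(dim=-1)    # nearest target index per source point
    return soft_corr, hard_match

# Toy example with random per-point features.
soft_corr, matches = pointwise_correspondence(torch.randn(1024, 64), torch.randn(1024, 64))
print(soft_corr.shape, matches.shape)        # torch.Size([1024, 1024]) torch.Size([1024])
```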
To address\nthis challenge, we propose an unsupervised Template-Assisted point cloud shape\ncorrespondence Network, termed TANet, including a template generation module\nand a template assistance module. The proposed TANet enjoys several merits.\nFirstly, the template generation module establishes a set of learnable\ntemplates with explicit structures. Secondly, we introduce a template\nassistance module that extensively leverages the generated templates to\nestablish more accurate shape correspondences from multiple perspectives.\nExtensive experiments on four human and animal datasets demonstrate that TANet\nachieves favorable performance against state-of-the-art methods.", + "We introduce a new family of minimal problems for reconstruction from\nmultiple views. Our primary focus is a novel approach to autocalibration, a\nlong-standing problem in computer vision. Traditional approaches to this\nproblem, such as those based on Kruppa's equations or the modulus constraint,\nrely explicitly on the knowledge of multiple fundamental matrices or a\nprojective reconstruction. In contrast, we consider a novel formulation\ninvolving constraints on image points, the unknown depths of 3D points, and a\npartially specified calibration matrix $K$. For $2$ and $3$ views, we present a\ncomprehensive taxonomy of minimal autocalibration problems obtained by relaxing\nsome of these constraints. These problems are organized into classes according\nto the number of views and any assumed prior knowledge of $K$. Within each\nclass, we determine problems with the fewest -- or a relatively small number of\n-- solutions. From this zoo of problems, we devise three practical solvers.\nExperiments with synthetic and real data and interfacing our solvers with\nCOLMAP demonstrate that we achieve superior accuracy compared to\nstate-of-the-art calibration methods.", + "From this zoo of problems, we devise three practical solvers.\nExperiments with synthetic and real data and interfacing our solvers with\nCOLMAP demonstrate that we achieve superior accuracy compared to\nstate-of-the-art calibration methods. The code is available at\nhttps://github.com/andreadalcin/MinimalPerspectiveAutocalibration", + "Previous works concerning single-view hand-held object reconstruction\ntypically rely on supervision from 3D ground-truth models, which are hard to\ncollect in real world. In contrast, readily accessible hand-object videos offer\na promising training data source, but they only give heavily occluded object\nobservations. In this paper, we present a novel synthetic-to-real framework to\nexploit Multi-view Occlusion-aware supervision from hand-object videos for\nHand-held Object reconstruction (MOHO) from a single image, tackling two\npredominant challenges in such setting: hand-induced occlusion and object's\nself-occlusion. First, in the synthetic pre-training stage, we render a\nlarge-scaled synthetic dataset SOMVideo with hand-object images and multi-view\nocclusion-free supervisions, adopted to address hand-induced occlusion in both\n2D and 3D spaces. Second, in the real-world finetuning stage, MOHO leverages\nthe amodal-mask-weighted geometric supervision to mitigate the unfaithful\nguidance caused by the hand-occluded supervising views in real world.", + "Second, in the real-world finetuning stage, MOHO leverages\nthe amodal-mask-weighted geometric supervision to mitigate the unfaithful\nguidance caused by the hand-occluded supervising views in real world. 
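One way to picture the amodal-mask-weighted geometric supervision described above is as a per-pixel reweighting of a geometric loss that down-weights hand-occluded regions; the sketch below is a schematic guess with hypothetical tensor names and weights, not MOHO's actual formulation.

```python
import torch

def amodal_weighted_geometric_loss(pred_depth, target_depth, amodal_mask, occlusion_mask, eps=1e-6):
    """pred_depth/target_depth: (B, H, W); amodal_mask: full object extent in [0, 1];
    occlusion_mask: 1 where the hand occludes the object. Pixels inside the amodal
    region but hidden by the hand contribute with a reduced weight (hypothetical 0.2)."""
    weight = amodal_mask * (1.0 - 0.8 * occlusion_mask)
    err = (pred_depth - target_depth).abs()
    return (weight * err).sum() / (weight.sum() + eps)

pred = torch.rand(2, 64, 64)
target = torch.rand(2, 64, 64)
amodal = (torch.rand(2, 64, 64) > 0.5).float()
occ = (torch.rand(2, 64, 64) > 0.8).float()
print(amodal_weighted_geometric_loss(pred, target, amodal, occ))
```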
Moreover,\ndomain-consistent occlusion-aware features are amalgamated in MOHO to resist\nobject's self-occlusion for inferring the complete object shape. Extensive\nexperiments on HO3D and DexYCB datasets demonstrate 2D-supervised MOHO gains\nsuperior results against 3D-supervised methods by a large margin.", + "Largely due to their implicit nature, neural fields lack a direct mechanism\nfor filtering, as Fourier analysis from discrete signal processing is not\ndirectly applicable to these representations. Effective filtering of neural\nfields is critical to enable level-of-detail processing in downstream\napplications, and support operations that involve sampling the field on regular\ngrids (e.g. marching cubes). Existing methods that attempt to decompose neural\nfields in the frequency domain either resort to heuristics or require extensive\nmodifications to the neural field architecture. We show that via a simple\nmodification, one can obtain neural fields that are low-pass filtered, and in\nturn show how this can be exploited to obtain a frequency decomposition of the\nentire signal. We demonstrate the validity of our technique by investigating\nlevel-of-detail reconstruction, and showing how coarser representations can be\ncomputed effectively.", + "Can computers perceive the physical properties of objects solely through\nvision? Research in cognitive science and vision science has shown that humans\nexcel at identifying materials and estimating their physical properties based\npurely on visual appearance. In this paper, we present a novel approach for\ndense prediction of the physical properties of objects using a collection of\nimages. Inspired by how humans reason about physics through vision, we leverage\nlarge language models to propose candidate materials for each object. We then\nconstruct a language-embedded point cloud and estimate the physical properties\nof each 3D point using a zero-shot kernel regression approach. Our method is\naccurate, annotation-free, and applicable to any object in the open world.\nExperiments demonstrate the effectiveness of the proposed approach in various\nphysical property reasoning tasks, such as estimating the mass of common\nobjects, as well as other properties like friction and hardness.", + "Understanding the world in first-person view is fundamental in Augmented\nReality (AR). This immersive perspective brings dramatic visual changes and\nunique challenges compared to third-person views. Synthetic data has empowered\nthird-person-view vision models, but its application to embodied egocentric\nperception tasks remains largely unexplored. A critical challenge lies in\nsimulating natural human movements and behaviors that effectively steer the\nembodied cameras to capture a faithful egocentric representation of the 3D\nworld. To address this challenge, we introduce EgoGen, a new synthetic data\ngenerator that can produce accurate and rich ground-truth training data for\negocentric perception tasks. At the heart of EgoGen is a novel human motion\nsynthesis model that directly leverages egocentric visual inputs of a virtual\nhuman to sense the 3D environment. Combined with collision-avoiding motion\nprimitives and a two-stage reinforcement learning approach, our motion\nsynthesis model offers a closed-loop solution where the embodied perception and\nmovement of the virtual human are seamlessly coupled. 
Compared to previous\nworks, our model eliminates the need for a pre-defined global path, and is\ndirectly applicable to dynamic environments.", +    "Compared to previous\nworks, our model eliminates the need for a pre-defined global path, and is\ndirectly applicable to dynamic environments. Combined with our easy-to-use and\nscalable data generation pipeline, we demonstrate EgoGen's efficacy in three\ntasks: mapping and localization for head-mounted cameras, egocentric camera\ntracking, and human mesh recovery from egocentric views. EgoGen will be fully\nopen-sourced, offering a practical solution for creating realistic egocentric\ntraining data and aiming to serve as a useful tool for egocentric computer\nvision research. Refer to our project page: https://ego-gen.github.io/.", +    "Face Anti-Spoofing (FAS) is crucial for securing face recognition systems\nagainst presentation attacks. With advancements in sensor manufacturing and\nmulti-modal learning techniques, many multi-modal FAS approaches have emerged.\nHowever, they face challenges in generalizing to unseen attacks and deployment\nconditions. These challenges arise from (1) modality unreliability, where some\nmodality sensors like depth and infrared undergo significant domain shifts in\nvarying environments, leading to the spread of unreliable information during\ncross-modal feature fusion, and (2) modality imbalance, where over-reliance on\na dominant modality during training hinders the convergence of others, reducing\neffectiveness against attack types that are indistinguishable using solely the\ndominant modality. To address modality unreliability, we propose the\nUncertainty-Guided Cross-Adapter (U-Adapter) to recognize unreliably detected\nregions within each modality and suppress the impact of unreliable regions on\nother modalities. For modality imbalance, we propose a Rebalanced Modality\nGradient Modulation (ReGrad) strategy to rebalance the convergence speed of all\nmodalities by adaptively adjusting their gradients.", +    "For modality imbalance, we propose a Rebalanced Modality\nGradient Modulation (ReGrad) strategy to rebalance the convergence speed of all\nmodalities by adaptively adjusting their gradients. Besides, we provide the\nfirst large-scale benchmark for evaluating multi-modal FAS performance under\ndomain generalization scenarios. Extensive experiments demonstrate that our\nmethod outperforms state-of-the-art methods. Source code and protocols will be\nreleased on https://github.com/OMGGGGG/mmdg.", +    "Most video captioning models are designed to process short video clips of a few\nseconds and output text describing low-level visual concepts (e.g., objects,\nscenes, atomic actions). However, most real-world videos last for minutes or\nhours and have a complex hierarchical structure spanning different temporal\ngranularities. We propose Video ReCap, a recursive video captioning model that\ncan process video inputs of dramatically different lengths (from 1 second to 2\nhours) and output video captions at multiple hierarchy levels. The recursive\nvideo-language architecture exploits the synergy between different video\nhierarchies and can process hour-long videos efficiently. We utilize a\ncurriculum learning training scheme to learn the hierarchical structure of\nvideos, starting from clip-level captions describing atomic actions, then\nfocusing on segment-level descriptions, and concluding with generating\nsummaries for hour-long videos.
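A schematic of the recursive captioning idea above: clip-level captions are produced first, segment-level descriptions are then generated from groups of clip captions, and a video-level summary closes the hierarchy. The `caption` callable is a stand-in for any captioning model and the group size is arbitrary; this is not the Video ReCap implementation.

```python
from typing import Callable, List

def recursive_captions(clips: List[str], caption: Callable[[str], str],
                       group_size: int = 4) -> List[List[str]]:
    """Build a caption hierarchy: level 0 = clip captions, higher levels summarize
    groups of captions from the level below until a single summary remains."""
    levels = [[caption(clip) for clip in clips]]                 # clip-level captions
    while len(levels[-1]) > 1:
        prev = levels[-1]
        grouped = [" ".join(prev[i:i + group_size]) for i in range(0, len(prev), group_size)]
        levels.append([caption(text) for text in grouped])       # segment / video level
    return levels

# Toy run with a dummy "captioner" that just tags its input.
dummy_caption = lambda x: f"summary({len(x)} chars)"
hierarchy = recursive_captions([f"clip_{i}" for i in range(16)], dummy_caption)
for level, caps in enumerate(hierarchy):
    print(f"level {level}: {len(caps)} captions")    # 16 -> 4 -> 1
```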
Furthermore, we introduce Ego4D-HCap dataset by\naugmenting Ego4D with 8,267 manually collected long-range video summaries. Our\nrecursive model can flexibly generate captions at different hierarchy levels\nwhile also being useful for other complex video understanding tasks, such as\nVideoQA on EgoSchema.", + "Our\nrecursive model can flexibly generate captions at different hierarchy levels\nwhile also being useful for other complex video understanding tasks, such as\nVideoQA on EgoSchema. Data, code, and models are available at:\nhttps://sites.google.com/view/vidrecap", + "Diffusion models (DMs) excel in photo-realistic image synthesis, but their\nadaptation to LiDAR scene generation poses a substantial hurdle. This is\nprimarily because DMs operating in the point space struggle to preserve the\ncurve-like patterns and 3D geometry of LiDAR scenes, which consumes much of\ntheir representation power. In this paper, we propose LiDAR Diffusion Models\n(LiDMs) to generate LiDAR-realistic scenes from a latent space tailored to\ncapture the realism of LiDAR scenes by incorporating geometric priors into the\nlearning pipeline. Our method targets three major desiderata: pattern realism,\ngeometry realism, and object realism. Specifically, we introduce curve-wise\ncompression to simulate real-world LiDAR patterns, point-wise coordinate\nsupervision to learn scene geometry, and patch-wise encoding for a full 3D\nobject context. With these three core designs, our method achieves competitive\nperformance on unconditional LiDAR generation in 64-beam scenario and state of\nthe art on conditional LiDAR generation, while maintaining high efficiency\ncompared to point-based DMs (up to 107$\\times$ faster).", + "With these three core designs, our method achieves competitive\nperformance on unconditional LiDAR generation in 64-beam scenario and state of\nthe art on conditional LiDAR generation, while maintaining high efficiency\ncompared to point-based DMs (up to 107$\\times$ faster). Furthermore, by\ncompressing LiDAR scenes into a latent space, we enable the controllability of\nDMs with various conditions such as semantic maps, camera views, and text\nprompts.", + "Reflectance bounds the frequency spectrum of illumination in the object\nappearance. In this paper, we introduce the first stochastic inverse rendering\nmethod, which recovers the attenuated frequency spectrum of an illumination\njointly with the reflectance of an object of known geometry from a single\nimage. Our key idea is to solve this blind inverse problem in the reflectance\nmap, an appearance representation invariant to the underlying geometry, by\nlearning to reverse the image formation with a novel diffusion model which we\nrefer to as the Diffusion Reflectance Map Network (DRMNet). Given an observed\nreflectance map converted and completed from the single input image, DRMNet\ngenerates a reflectance map corresponding to a perfect mirror sphere while\njointly estimating the reflectance. The forward process can be understood as\ngradually filtering a natural illumination with lower and lower frequency\nreflectance and additive Gaussian noise. DRMNet learns to invert this process\nwith two subnetworks, IllNet and RefNet, which work in concert towards this\njoint estimation. The network is trained on an extensive synthetic dataset and\nis demonstrated to generalize to real images, showing state-of-the-art accuracy\non established datasets.", + "This paper aims to achieve universal segmentation of arbitrary semantic\nlevel. 
Despite significant progress in recent years, specialist segmentation\napproaches are limited to specific tasks and data distributions. Retraining a\nnew model to adapt to new scenarios or settings incurs expensive computation\nand time costs, which raises the demand for a versatile and universal\nsegmentation model that can cater to various granularities. Although some\nattempts have been made at unifying different segmentation tasks or\ngeneralizing to various scenarios, limitations in the definition of paradigms\nand input-output spaces make it difficult for them to achieve an accurate\nunderstanding of content at arbitrary granularity. To this end, we present\nUniLSeg, a universal segmentation model that can perform segmentation at any\nsemantic level with the guidance of language instructions. For training\nUniLSeg, we reorganize a group of tasks from original diverse distributions\ninto a unified data format, where images paired with texts describing the\nsegmentation targets serve as input and the corresponding masks as output.\nCombined with an automatic annotation engine for utilizing abundant unlabeled\ndata, UniLSeg achieves excellent performance on various tasks and settings,\nsurpassing both specialist and unified segmentation models.", +    "We introduce GaussianAvatars, a new method to create photorealistic head\navatars that are fully controllable in terms of expression, pose, and\nviewpoint. The core idea is a dynamic 3D representation based on 3D Gaussian\nsplats that are rigged to a parametric morphable face model. This combination\nfacilitates photorealistic rendering while allowing for precise animation\ncontrol via the underlying parametric model, e.g., through expression transfer\nfrom a driving sequence or by manually changing the morphable model parameters.\nWe parameterize each splat by a local coordinate frame of a triangle and\noptimize for an explicit displacement offset to obtain a more accurate geometric\nrepresentation. During avatar reconstruction, we jointly optimize for the\nmorphable model parameters and Gaussian splat parameters in an end-to-end\nfashion. We demonstrate the animation capabilities of our photorealistic avatar\nin several challenging scenarios. For instance, we show reenactments from a\ndriving video, where our method outperforms existing works by a significant\nmargin.", +    "We introduce MMMU: a new benchmark designed to evaluate multimodal models on\nmassive multi-discipline tasks demanding college-level subject knowledge and\ndeliberate reasoning. MMMU includes 11.5K meticulously collected multimodal\nquestions from college exams, quizzes, and textbooks, covering six core\ndisciplines: Art & Design, Business, Science, Health & Medicine, Humanities &\nSocial Science, and Tech & Engineering. These questions span 30 subjects and\n183 subfields, comprising 30 highly heterogeneous image types, such as charts,\ndiagrams, maps, tables, music sheets, and chemical structures. Unlike existing\nbenchmarks, MMMU focuses on advanced perception and reasoning with\ndomain-specific knowledge, challenging models to perform tasks akin to those\nfaced by experts. The evaluation of 14 open-source LMMs as well as the\nproprietary GPT-4V(ision) and Gemini highlights the substantial challenges\nposed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve\naccuracies of 56% and 59% respectively, indicating significant room for\nimprovement.
We believe MMMU will stimulate the community to build\nnext-generation multimodal foundation models towards expert artificial general\nintelligence.", + "Astronaut photography, spanning six decades of human spaceflight, presents a\nunique Earth observations dataset with immense value for both scientific\nresearch and disaster response. Despite its significance, accurately localizing\nthe geographical extent of these images, crucial for effective utilization,\nposes substantial challenges. Current manual localization efforts are\ntime-consuming, motivating the need for automated solutions. We propose a novel\napproach - leveraging image retrieval - to address this challenge efficiently.\nWe introduce innovative training techniques, including Year-Wise Data\nAugmentation and a Neutral-Aware Multi-Similarity Loss, which contribute to the\ndevelopment of a high-performance model, EarthLoc. We develop six evaluation\ndatasets and perform a comprehensive benchmark comparing EarthLoc to existing\nmethods, showcasing its superior efficiency and accuracy. Our approach marks a\nsignificant advancement in automating the localization of astronaut\nphotography, which will help bridge a critical gap in Earth observations data.\nCode and datasets are available at https://github.com/gmberton/EarthLoc", + "The field of generative image inpainting and object insertion has made\nsignificant progress with the recent advent of latent diffusion models.\nUtilizing a precise object mask can greatly enhance these applications.\nHowever, due to the challenges users encounter in creating high-fidelity masks,\nthere is a tendency for these methods to rely on more coarse masks (e.g.,\nbounding box) for these applications. This results in limited control and\ncompromised background content preservation. To overcome these limitations, we\nintroduce SmartMask, which allows any novice user to create detailed masks for\nprecise object insertion. Combined with a ControlNet-Inpaint model, our\nexperiments demonstrate that SmartMask achieves superior object insertion\nquality, preserving the background content more effectively than previous\nmethods. Notably, unlike prior works the proposed approach can also be used\neven without user-mask guidance, which allows it to perform mask-free object\ninsertion at diverse positions and scales. Furthermore, we find that when used\niteratively with a novel instruction-tuning based planning model, SmartMask can\nbe used to design detailed layouts from scratch.", + "Furthermore, we find that when used\niteratively with a novel instruction-tuning based planning model, SmartMask can\nbe used to design detailed layouts from scratch. As compared with user-scribble\nbased layout design, we observe that SmartMask allows for better quality\noutputs with layout-to-image generation methods. Project page is available at\nhttps://smartmask-gen.github.io", + "Diffusion models are generative models with impressive text-to-image\nsynthesis capabilities and have spurred a new wave of creative methods for\nclassical machine learning tasks. However, the best way to harness the\nperceptual knowledge of these generative models for visual tasks is still an\nopen question. Specifically, it is unclear how to use the prompting interface\nwhen applying diffusion backbones to vision tasks. We find that automatically\ngenerated captions can improve text-image alignment and significantly enhance a\nmodel's cross-attention maps, leading to better perceptual performance. 
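The finding above, that automatically generated captions sharpen a diffusion backbone's cross-attention and hence its perceptual features, can be pictured as a two-step pipeline: caption the image, then condition the frozen diffusion feature extractor on that caption instead of an empty prompt. Both `generate_caption` and `diffusion_features` below are hypothetical stand-ins, not the paper's code.

```python
import torch

def generate_caption(image: torch.Tensor) -> str:
    """Stand-in for an off-the-shelf image captioner."""
    return "a photo of a street scene with cars and pedestrians"

def diffusion_features(image: torch.Tensor, prompt: str) -> torch.Tensor:
    """Stand-in for extracting intermediate U-Net / cross-attention features from a
    frozen text-to-image diffusion model conditioned on `prompt`."""
    torch.manual_seed(len(prompt))          # fake dependence on the prompt, for illustration only
    return torch.randn(1, 256, 32, 32)

def caption_aligned_features(image: torch.Tensor) -> torch.Tensor:
    # Conditioning on an image-specific caption rather than a null prompt is the
    # text-image alignment step described above.
    return diffusion_features(image, generate_caption(image))

feats = caption_aligned_features(torch.rand(1, 3, 512, 512))
print(feats.shape)   # features would then feed a segmentation / depth head
```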
Our\napproach improves upon the current state-of-the-art (SOTA) in diffusion-based\nsemantic segmentation on ADE20K and the current overall SOTA for depth\nestimation on NYUv2. Furthermore, our method generalizes to the cross-domain\nsetting. We use model personalization and caption modifications to align our\nmodel to the target domain and find improvements over unaligned baselines. Our\ncross-domain object detection model, trained on Pascal VOC, achieves SOTA\nresults on Watercolor2K. Our cross-domain segmentation method, trained on\nCityscapes, achieves SOTA results on Dark Zurich-val and Nighttime Driving.", +    "Our\ncross-domain object detection model, trained on Pascal VOC, achieves SOTA\nresults on Watercolor2K. Our cross-domain segmentation method, trained on\nCityscapes, achieves SOTA results on Dark Zurich-val and Nighttime Driving.\nProject page: https://www.vision.caltech.edu/tadp/. Code:\nhttps://github.com/damaggu/TADP.", +    "Customizing pre-trained text-to-image generation models has attracted massive\nresearch interest recently, due to their huge potential in real-world\napplications. Although existing methods are able to generate creative content\nfor a novel concept contained in a single user-input image, their capabilities\nare still far from perfect. Specifically, most existing methods require\nfine-tuning the generative model on testing images. Other existing methods do\nnot require fine-tuning, but their performance is unsatisfactory.\nFurthermore, the interaction between users and models is still limited to\ndirective and descriptive prompts such as instructions and captions. In this\nwork, we build a customization assistant based on a pre-trained large language\nmodel and a diffusion model, which can not only perform customized generation in\na tuning-free manner, but also enable more user-friendly interactions: users\ncan chat with the assistant and input either ambiguous text or clear\ninstructions. Specifically, we propose a new framework that consists of a new\nmodel design and a novel training strategy. The resulting assistant can perform\ncustomized generation in 2-5 seconds without any test-time fine-tuning.\nExtensive experiments are conducted, and competitive results are obtained\nacross different domains, illustrating the effectiveness of the proposed\nmethod.", +    "Recently, impressive results have been achieved in 3D scene editing with text\ninstructions based on a 2D diffusion model. However, current diffusion models\nprimarily generate images by predicting noise in the latent space, and the\nediting is usually applied to the whole image, which makes it challenging to\nperform delicate, especially localized, editing for 3D scenes. Inspired by\nrecent 3D Gaussian splatting, we propose a systematic framework, named\nGaussianEditor, to edit 3D scenes delicately via 3D Gaussians with text\ninstructions. Benefiting from the explicit property of 3D Gaussians, we design\na series of techniques to achieve delicate editing. Specifically, we first\nextract the region of interest (RoI) corresponding to the text instruction,\naligning it to 3D Gaussians. The Gaussian RoI is further used to control the\nediting process.
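One plausible reading of how a Gaussian RoI can control the editing process is to restrict optimization to Gaussians whose centers fall inside the RoI, leaving all others untouched; the sketch below uses an axis-aligned box RoI and hypothetical parameter names, not the GaussianEditor implementation.

```python
import torch

def roi_gradient_mask(centers: torch.Tensor, roi_min: torch.Tensor, roi_max: torch.Tensor) -> torch.Tensor:
    """centers: (N, 3) Gaussian centers. Returns a boolean mask of the Gaussians whose
    centers fall inside an axis-aligned RoI box; only these take part in the edit."""
    return ((centers >= roi_min) & (centers <= roi_max)).all(dim=-1)

centers = torch.randn(10_000, 3)
colors = torch.rand(10_000, 3, requires_grad=True)
mask = roi_gradient_mask(centers, torch.tensor([-0.5, -0.5, -0.5]), torch.tensor([0.5, 0.5, 0.5]))

# Toy "make the RoI red" edit: because the loss is computed only over the masked
# Gaussians, gradients outside the RoI are zero and those Gaussians stay untouched.
loss = (colors[mask] - torch.tensor([1.0, 0.0, 0.0])).pow(2).mean()
loss.backward()
print(int(mask.sum()), "Gaussians inside the RoI receive updates")
```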
Our framework can achieve more delicate and precise editing of\n3D scenes than previous methods while enjoying much faster training speed, i.e.\nwithin 20 minutes on a single V100 GPU, more than twice as fast as\nInstruct-NeRF2NeRF (45 minutes -- 2 hours).", + "Optical flow is a classical task that is important to the vision community.\nClassical optical flow estimation uses two frames as input, whilst some recent\nmethods consider multiple frames to explicitly model long-range information.\nThe former ones limit their ability to fully leverage temporal coherence along\nthe video sequence; and the latter ones incur heavy computational overhead,\ntypically not possible for real-time flow estimation. Some multi-frame-based\napproaches even necessitate unseen future frames for current estimation,\ncompromising real-time applicability in safety-critical scenarios. To this end,\nwe present MemFlow, a real-time method for optical flow estimation and\nprediction with memory. Our method enables memory read-out and update modules\nfor aggregating historical motion information in real-time. Furthermore, we\nintegrate resolution-adaptive re-scaling to accommodate diverse video\nresolutions. Besides, our approach seamlessly extends to the future prediction\nof optical flow based on past observations. Leveraging effective historical\nmotion aggregation, our method outperforms VideoFlow with fewer parameters and\nfaster inference speed on Sintel and KITTI-15 datasets in terms of\ngeneralization performance.", + "Besides, our approach seamlessly extends to the future prediction\nof optical flow based on past observations. Leveraging effective historical\nmotion aggregation, our method outperforms VideoFlow with fewer parameters and\nfaster inference speed on Sintel and KITTI-15 datasets in terms of\ngeneralization performance. At the time of submission, MemFlow also leads in\nperformance on the 1080p Spring dataset. Codes and models will be available at:\nhttps://dqiaole.github.io/MemFlow/.", + "Ultra-fine-grained visual categorization (Ultra-FGVC) aims at distinguishing\nhighly similar sub-categories within fine-grained objects, such as different\nsoybean cultivars. Compared to traditional fine-grained visual categorization,\nUltra-FGVC encounters more hurdles due to the small inter-class and large\nintra-class variation. Given these challenges, relying on human annotation for\nUltra-FGVC is impractical. To this end, our work introduces a novel task termed\nUltra-Fine-Grained Novel Class Discovery (UFG-NCD), which leverages partially\nannotated data to identify new categories of unlabeled images for Ultra-FGVC.\nTo tackle this problem, we devise a Region-Aligned Proxy Learning (RAPL)\nframework, which comprises a Channel-wise Region Alignment (CRA) module and a\nSemi-Supervised Proxy Learning (SemiPL) strategy. The CRA module is designed to\nextract and utilize discriminative features from local regions, facilitating\nknowledge transfer from labeled to unlabeled classes. Furthermore, SemiPL\nstrengthens representation learning and knowledge transfer with proxy-guided\nsupervised learning and proxy-guided contrastive learning.", + "The CRA module is designed to\nextract and utilize discriminative features from local regions, facilitating\nknowledge transfer from labeled to unlabeled classes. Furthermore, SemiPL\nstrengthens representation learning and knowledge transfer with proxy-guided\nsupervised learning and proxy-guided contrastive learning. 
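As a generic sketch of the proxy-guided contrastive learning referred to above, each class keeps a learnable proxy vector and sample embeddings are pulled toward their own proxy and pushed away from the others; the formulation and names below are illustrative, not the RAPL code.

```python
import torch
import torch.nn.functional as F

def proxy_contrastive_loss(embeddings, labels, proxies, temperature=0.1):
    """embeddings: (B, D) features; labels: (B,) class ids; proxies: (C, D) learnable
    class proxies. InfoNCE-style objective against the proxies."""
    z = F.normalize(embeddings, dim=-1)
    p = F.normalize(proxies, dim=-1)
    logits = z @ p.t() / temperature          # similarity of each sample to every proxy
    return F.cross_entropy(logits, labels)    # pull to own proxy, push from the rest

proxies = torch.nn.Parameter(torch.randn(50, 128))     # e.g. 50 ultra-fine-grained classes
feats = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 50, (32,))
proxy_contrastive_loss(feats, labels, proxies).backward()
```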
Such techniques\nleverage class distribution information in the embedding space, improving the\nmining of subtle differences between labeled and unlabeled ultra-fine-grained\nclasses. Extensive experiments demonstrate that RAPL significantly outperforms\nbaselines across various datasets, indicating its effectiveness in handling the\nchallenges of UFG-NCD. Code is available at\nhttps://github.com/SSDUT-Caiyq/UFG-NCD.", + "Text-guided domain adaptation and generation of 3D-aware portraits find many\napplications in various fields. However, due to the lack of training data and\nthe challenges in handling the high variety of geometry and appearance, the\nexisting methods for these tasks suffer from issues like inflexibility,\ninstability, and low fidelity. In this paper, we propose a novel framework\nDiffusionGAN3D, which boosts text-guided 3D domain adaptation and generation by\ncombining 3D GANs and diffusion priors. Specifically, we integrate the\npre-trained 3D generative models (e.g., EG3D) and text-to-image diffusion\nmodels. The former provides a strong foundation for stable and high-quality\navatar generation from text. And the diffusion models in turn offer powerful\npriors and guide the 3D generator finetuning with informative direction to\nachieve flexible and efficient text-guided domain adaptation. To enhance the\ndiversity in domain adaptation and the generation capability in text-to-avatar,\nwe introduce the relative distance loss and case-specific learnable triplane\nrespectively.", + "To enhance the\ndiversity in domain adaptation and the generation capability in text-to-avatar,\nwe introduce the relative distance loss and case-specific learnable triplane\nrespectively. Besides, we design a progressive texture refinement module to\nimprove the texture quality for both tasks above. Extensive experiments\ndemonstrate that the proposed framework achieves excellent results in both\ndomain adaptation and text-to-avatar tasks, outperforming existing methods in\nterms of generation quality and efficiency. The project homepage is at\nhttps://younglbw.github.io/DiffusionGAN3D-homepage/.", + "The credibility and practicality of a reconstructed hand-object interaction\nsequence depend largely on its physical plausibility. However, due to high\nocclusions during hand-object interaction, physical plausibility remains a\nchallenging criterion for purely vision-based tracking methods. To address this\nissue and enhance the results of existing hand trackers, this paper proposes a\nnovel physically-aware hand motion de-noising method. Specifically, we\nintroduce two learned loss terms that explicitly capture two crucial aspects of\nphysical plausibility: grasp credibility and manipulation feasibility. These\nterms are used to train a physically-aware de-noising network. Qualitative and\nquantitative experiments demonstrate that our approach significantly improves\nboth fine-grained physical plausibility and overall pose accuracy, surpassing\ncurrent state-of-the-art de-noising methods.", + "Existing NeRF-based methods for large scene reconstruction often have\nlimitations in visual quality and rendering speed. While the recent 3D Gaussian\nSplatting works well on small-scale and object-centric scenes, scaling it up to\nlarge scenes poses challenges due to limited video memory, long optimization\ntime, and noticeable appearance variations. 
To address these challenges, we\npresent VastGaussian, the first method for high-quality reconstruction and\nreal-time rendering on large scenes based on 3D Gaussian Splatting. We propose\na progressive partitioning strategy to divide a large scene into multiple\ncells, where the training cameras and point cloud are properly distributed with\nan airspace-aware visibility criterion. These cells are merged into a complete\nscene after parallel optimization. We also introduce decoupled appearance\nmodeling into the optimization process to reduce appearance variations in the\nrendered images. Our approach outperforms existing NeRF-based methods and\nachieves state-of-the-art results on multiple large scene datasets, enabling\nfast optimization and high-fidelity real-time rendering.", +    "In recent years, image editing has advanced remarkably. With increased human\ncontrol, it is now possible to edit an image in a plethora of ways; from\nspecifying in text what we want to change, to straight up dragging the contents\nof the image in an interactive point-based manner. However, most of the focus\nhas remained on editing single images at a time. Whether and how we can\nsimultaneously edit large batches of images has remained understudied. With the\ngoal of minimizing human supervision in the editing process, this paper\npresents a novel method for interactive batch image editing using StyleGAN as\nthe medium. Given an edit specified by users in an example image (e.g., make\nthe face frontal), our method can automatically transfer that edit to other\ntest images, so that regardless of their initial state (pose), they all arrive\nat the same final state (e.g., all facing front). Extensive experiments\ndemonstrate that edits performed using our method have similar visual quality\nto existing single-image-editing methods, while having more visual consistency\nand saving significant time and human effort.", +    "Oriented object detection has developed rapidly in the past few years,\nwhere rotation equivariance is crucial for detectors to predict rotated boxes.\nIt is expected that the prediction can maintain the corresponding rotation when\nobjects rotate, but severe mutation in angular prediction is sometimes observed\nwhen objects rotate near the boundary angle, which is the well-known boundary\ndiscontinuity problem. The problem has long been believed to be caused by the\nsharp loss increase at the angular boundary, and widely used joint-optim\nIoU-like methods deal with this problem by loss-smoothing. However, we\nexperimentally find that even state-of-the-art IoU-like methods actually fail\nto solve the problem. On further analysis, we find that the key to the solution\nlies in the encoding mode of the smoothing function rather than in joint or\nindependent optimization. In existing IoU-like methods, the model essentially\nattempts to fit the angular relationship between box and object, where the\nbreak point at the angular boundary makes the predictions highly unstable. To deal\nwith this issue, we propose a dual-optimization paradigm for angles.", +    "In existing IoU-like methods, the model essentially\nattempts to fit the angular relationship between box and object, where the\nbreak point at the angular boundary makes the predictions highly unstable. To deal\nwith this issue, we propose a dual-optimization paradigm for angles.
We\ndecouple reversibility and joint-optim from a single smoothing function into two\ndistinct entities, which for the first time achieves the objectives of both\ncorrecting the angular boundary and blending the angle with other parameters.\nExtensive experiments on multiple datasets show that the boundary discontinuity\nproblem is well addressed. Moreover, typical IoU-like methods are improved to\nthe same level without an obvious performance gap. The code is available at\nhttps://github.com/hangxu-cv/cvpr24acm.", +    "This paper addresses the complex issue of one-shot face stylization, focusing\non the simultaneous consideration of appearance and structure, where previous\nmethods have fallen short. We explore deformation-aware face stylization that\ndiverges from traditional single-image style reference, opting for a real-style\nimage pair instead. The cornerstone of our method is the utilization of a\nself-supervised vision transformer, specifically DINO-ViT, to establish a\nrobust and consistent facial structure representation across both real and\nstyle domains. Our stylization process begins by adapting the StyleGAN\ngenerator to be deformation-aware through the integration of spatial\ntransformers (STN). We then introduce two innovative constraints for generator\nfine-tuning under the guidance of DINO semantics: i) a directional deformation\nloss that regulates directional vectors in DINO space, and ii) a relative\nstructural consistency constraint based on DINO token self-similarities,\nensuring diverse generation. Additionally, style-mixing is employed to align\nthe color generation with the reference, minimizing inconsistent\ncorrespondences. This framework delivers enhanced deformability for general\none-shot face stylization, achieving notable efficiency with a fine-tuning\nduration of approximately 10 minutes.", +    "Additionally, style-mixing is employed to align\nthe color generation with the reference, minimizing inconsistent\ncorrespondences. This framework delivers enhanced deformability for general\none-shot face stylization, achieving notable efficiency with a fine-tuning\nduration of approximately 10 minutes. Extensive qualitative and quantitative\ncomparisons demonstrate our superiority over state-of-the-art one-shot face\nstylization methods. Code is available at https://github.com/zichongc/DoesFS", +    "Advances in camera-based physiological monitoring have enabled the robust,\nnon-contact measurement of respiration and the cardiac pulse, which are known\nto be indicative of the sleep stage. This has led to research into camera-based\nsleep monitoring as a promising alternative to \"gold-standard\" polysomnography,\nwhich is cumbersome, expensive to administer, and hence unsuitable for\nlonger-term clinical studies. In this paper, we introduce SleepVST, a\ntransformer model which enables state-of-the-art performance in camera-based\nsleep stage classification (sleep staging). After pre-training on contact\nsensor data, SleepVST outperforms existing methods for cardio-respiratory sleep\nstaging on the SHHS and MESA datasets, achieving total Cohen's kappa scores of\n0.75 and 0.77 respectively. We then show that SleepVST can be successfully\ntransferred to cardio-respiratory waveforms extracted from video, enabling\nfully contact-free sleep staging.
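To make the pipeline above more concrete, here is a minimal sketch of four-class sleep staging from windowed cardio-respiratory waveforms with a small transformer encoder; the architecture, sampling rate, window length, and names are illustrative guesses rather than the SleepVST design.

```python
import torch
import torch.nn as nn

class TinySleepStager(nn.Module):
    """Toy stand-in: embeds 30-second cardio-respiratory windows (assumed 32 Hz,
    2 channels: pulse + respiration) and classifies each epoch into 4 sleep stages."""
    def __init__(self, window_samples=30 * 32, d_model=64, n_stages=4):
        super().__init__()
        self.embed = nn.Linear(2 * window_samples, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_stages)

    def forward(self, windows):               # windows: (B, T_epochs, 2, window_samples)
        x = self.embed(windows.flatten(2))    # (B, T, d_model)
        x = self.encoder(x)                   # temporal context across epochs
        return self.head(x)                   # (B, T, n_stages) per-epoch stage logits

model = TinySleepStager()
logits = model(torch.randn(1, 20, 2, 30 * 32))   # 20 consecutive 30-second epochs
print(logits.shape)                              # torch.Size([1, 20, 4])
```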
Using a video dataset of 50 nights, we\nachieve a total accuracy of 78.8\\% and a Cohen's $\\kappa$ of 0.71 in four-class\nvideo-based sleep staging, setting a new state-of-the-art in the domain.", + "Diffusion model is a promising approach to image generation and has been\nemployed for Pose-Guided Person Image Synthesis (PGPIS) with competitive\nperformance. While existing methods simply align the person appearance to the\ntarget pose, they are prone to overfitting due to the lack of a high-level\nsemantic understanding on the source person image. In this paper, we propose a\nnovel Coarse-to-Fine Latent Diffusion (CFLD) method for PGPIS. In the absence\nof image-caption pairs and textual prompts, we develop a novel training\nparadigm purely based on images to control the generation process of a\npre-trained text-to-image diffusion model. A perception-refined decoder is\ndesigned to progressively refine a set of learnable queries and extract\nsemantic understanding of person images as a coarse-grained prompt. This allows\nfor the decoupling of fine-grained appearance and pose information controls at\ndifferent stages, and thus circumventing the potential overfitting problem. To\ngenerate more realistic texture details, a hybrid-granularity attention module\nis proposed to encode multi-scale fine-grained appearance features as bias\nterms to augment the coarse-grained prompt.", + "To\ngenerate more realistic texture details, a hybrid-granularity attention module\nis proposed to encode multi-scale fine-grained appearance features as bias\nterms to augment the coarse-grained prompt. Both quantitative and qualitative\nexperimental results on the DeepFashion benchmark demonstrate the superiority\nof our method over the state of the arts for PGPIS. Code is available at\nhttps://github.com/YanzuoLu/CFLD.", + "Diffusion Models (DMs) have shown remarkable capabilities in various\nimage-generation tasks. However, there are growing concerns that DMs could be\nused to imitate unauthorized creations and thus raise copyright issues. To\naddress this issue, we propose a novel framework that embeds personal\nwatermarks in the generation of adversarial examples. Such examples can force\nDMs to generate images with visible watermarks and prevent DMs from imitating\nunauthorized images. We construct a generator based on conditional adversarial\nnetworks and design three losses (adversarial loss, GAN loss, and perturbation\nloss) to generate adversarial examples that have subtle perturbation but can\neffectively attack DMs to prevent copyright violations. Training a generator\nfor a personal watermark by our method only requires 5-10 samples within 2-3\nminutes, and once the generator is trained, it can generate adversarial\nexamples with that watermark significantly fast (0.2s per image). We conduct\nextensive experiments in various conditional image-generation scenarios.", + "We conduct\nextensive experiments in various conditional image-generation scenarios.\nCompared to existing methods that generate images with chaotic textures, our\nmethod adds visible watermarks on the generated images, which is a more\nstraightforward way to indicate copyright violations. We also observe that our\nadversarial examples exhibit good transferability across unknown generative\nmodels. Therefore, this work provides a simple yet powerful way to protect\ncopyright from DM-based imitation.", + "Prompt tuning represents a valuable technique for adapting pre-trained\nvisual-language models (VLM) to various downstream tasks. 
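The three-term objective mentioned above (adversarial, GAN, and perturbation losses) is typically combined as a weighted sum; the sketch below shows one such combination with placeholder inputs and hypothetical weights, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def watermark_objective(adv_image, clean_image, dm_output, watermark_target,
                        disc_logits, w_adv=1.0, w_gan=0.1, w_pert=10.0):
    """adv_image: protected image from the generator; dm_output: what a diffusion model
    produces from it; watermark_target: image bearing the visible watermark;
    disc_logits: discriminator scores for the protected image."""
    # Adversarial loss: push the diffusion model's output toward the watermarked target.
    loss_adv = F.mse_loss(dm_output, watermark_target)
    # GAN loss: keep the protected image realistic for the discriminator.
    loss_gan = F.binary_cross_entropy_with_logits(disc_logits, torch.ones_like(disc_logits))
    # Perturbation loss: keep the added perturbation subtle.
    loss_pert = F.mse_loss(adv_image, clean_image)
    return w_adv * loss_adv + w_gan * loss_gan + w_pert * loss_pert

x = torch.rand(1, 3, 256, 256)
loss = watermark_objective(x + 0.01 * torch.randn_like(x), x,
                           torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256),
                           torch.randn(1, 1))
print(float(loss))
```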
Recent advancements\nin CoOp-based methods propose a set of learnable domain-shared or\nimage-conditional textual tokens to facilitate the generation of task-specific\ntextual classifiers. However, those textual tokens have a limited\ngeneralization ability regarding unseen domains, as they cannot dynamically\nadjust to the distribution of testing classes. To tackle this issue, we present\na novel Textual-based Class-aware Prompt tuning(TCP) that explicitly\nincorporates prior knowledge about classes to enhance their discriminability.\nThe critical concept of TCP involves leveraging Textual Knowledge Embedding\n(TKE) to map the high generalizability of class-level textual knowledge into\nclass-aware textual tokens. By seamlessly integrating these class-aware prompts\ninto the Text Encoder, a dynamic class-aware classifier is generated to enhance\ndiscriminability for unseen domains. During inference, TKE dynamically\ngenerates class-aware prompts related to the unseen classes. Comprehensive\nevaluations demonstrate that TKE serves as a plug-and-play module effortlessly\ncombinable with existing methods. Furthermore, TCP consistently achieves\nsuperior performance while demanding less training time.", + "During inference, TKE dynamically\ngenerates class-aware prompts related to the unseen classes. Comprehensive\nevaluations demonstrate that TKE serves as a plug-and-play module effortlessly\ncombinable with existing methods. Furthermore, TCP consistently achieves\nsuperior performance while demanding less training time.\nCode:https://github.com/htyao89/Textual-based_Class-aware_prompt_tuning/", + "We have recently seen tremendous progress in realistic text-to-motion\ngeneration. Yet, the existing methods often fail or produce implausible motions\nwith unseen text inputs, which limits the applications. In this paper, we\npresent OMG, a novel framework, which enables compelling motion generation from\nzero-shot open-vocabulary text prompts. Our key idea is to carefully tailor the\npretrain-then-finetune paradigm into the text-to-motion generation. At the\npre-training stage, our model improves the generation ability by learning the\nrich out-of-domain inherent motion traits. To this end, we scale up a large\nunconditional diffusion model up to 1B parameters, so as to utilize the massive\nunlabeled motion data up to over 20M motion instances. At the subsequent\nfine-tuning stage, we introduce motion ControlNet, which incorporates text\nprompts as conditioning information, through a trainable copy of the\npre-trained model and the proposed novel Mixture-of-Controllers (MoC) block.\nMoC block adaptively recognizes various ranges of the sub-motions with a\ncross-attention mechanism and processes them separately with the\ntext-token-specific experts.", + "MoC block adaptively recognizes various ranges of the sub-motions with a\ncross-attention mechanism and processes them separately with the\ntext-token-specific experts. Such a design effectively aligns the CLIP token\nembeddings of text prompts to various ranges of compact and expressive motion\nfeatures. Extensive experiments demonstrate that our OMG achieves significant\nimprovements over the state-of-the-art methods on zero-shot text-to-motion\ngeneration. Project page: https://tr3e.github.io/omg-page.", + "This work proposes TimeChat, a time-sensitive multimodal large language model\nspecifically designed for long video understanding. 
Our model incorporates two\nkey architectural contributions: (1) a timestamp-aware frame encoder that binds\nvisual content with the timestamp of each frame, and (2) a sliding video\nQ-Former that produces a video token sequence of varying lengths to accommodate\nvideos of various durations. Additionally, we construct an instruction-tuning\ndataset, encompassing 6 tasks and a total of 125K instances, to further enhance\nTimeChat's instruction-following performance. Experiment results across various\nvideo understanding tasks, such as dense captioning, temporal grounding, and\nhighlight detection, demonstrate TimeChat's strong zero-shot temporal\nlocalization and reasoning capabilities. For example, it achieves +9.2 F1 score\nand +2.8 CIDEr on YouCook2, +5.8 HIT@1 on QVHighlights, and +27.5 R@1 (IoU=0.5)\non Charades-STA, compared to state-of-the-art video large language models,\nholding the potential to serve as a versatile video assistant for long-form\nvideo comprehension tasks and satisfy realistic user requirements.", + "Text-guided diffusion models have revolutionized image and video generation\nand have also been successfully used for optimization-based 3D object\nsynthesis. Here, we instead focus on the underexplored text-to-4D setting and\nsynthesize dynamic, animated 3D objects using score distillation methods with\nan additional temporal dimension. Compared to previous work, we pursue a novel\ncompositional generation-based approach, and combine text-to-image,\ntext-to-video, and 3D-aware multiview diffusion models to provide feedback\nduring 4D object optimization, thereby simultaneously enforcing temporal\nconsistency, high-quality visual appearance and realistic geometry. Our method,\ncalled Align Your Gaussians (AYG), leverages dynamic 3D Gaussian Splatting with\ndeformation fields as 4D representation. Crucial to AYG is a novel method to\nregularize the distribution of the moving 3D Gaussians and thereby stabilize\nthe optimization and induce motion. We also propose a motion amplification\nmechanism as well as a new autoregressive synthesis scheme to generate and\ncombine multiple 4D sequences for longer generation.", + "We also propose a motion amplification\nmechanism as well as a new autoregressive synthesis scheme to generate and\ncombine multiple 4D sequences for longer generation. These techniques allow us\nto synthesize vivid dynamic scenes, outperform previous work qualitatively and\nquantitatively and achieve state-of-the-art text-to-4D performance. Due to the\nGaussian 4D representation, different 4D animations can be seamlessly combined,\nas we demonstrate. AYG opens up promising avenues for animation, simulation and\ndigital content creation as well as synthetic data generation.", + "Existing point cloud semantic segmentation networks cannot identify unknown\nclasses and update their knowledge, due to a closed-set and static perspective\nof the real world, which would induce the intelligent agent to make bad\ndecisions. To address this problem, we propose a Probability-Driven Framework\n(PDF) for open world semantic segmentation that includes (i) a lightweight\nU-decoder branch to identify unknown classes by estimating the uncertainties,\n(ii) a flexible pseudo-labeling scheme to supply geometry features along with\nprobability distribution features of unknown classes by generating pseudo\nlabels, and (iii) an incremental knowledge distillation strategy to incorporate\nnovel classes into the existing knowledge base gradually. 
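A simple way to read step (i) above is that a per-point uncertainty score, for instance the entropy of the predicted class distribution, is thresholded to flag candidate unknown points; the snippet below shows that thresholding logic in isolation, with an arbitrary threshold.

```python
import torch
import torch.nn.functional as F

def flag_unknown_points(logits: torch.Tensor, threshold: float = 0.7) -> torch.Tensor:
    """logits: (N, C) per-point class logits over the known classes. Returns a boolean
    mask of points whose normalized entropy exceeds the threshold, treated as
    candidates for the 'unknown' class."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    normalized = entropy / torch.log(torch.tensor(float(logits.shape[-1])))
    return normalized > threshold

logits = torch.randn(100_000, 13)          # e.g. 13 known S3DIS classes
unknown_mask = flag_unknown_points(logits)
print(f"{unknown_mask.float().mean():.2%} of points flagged as unknown")
```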
Our framework enables\nthe model to behave like human beings, which could recognize unknown objects\nand incrementally learn them with the corresponding knowledge. Experimental\nresults on the S3DIS and ScanNetv2 datasets demonstrate that the proposed PDF\noutperforms other methods by a large margin in both important tasks of open\nworld semantic segmentation.", + "Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition\nsystems against presentation attacks. While domain generalization (DG) methods\nhave been developed to enhance FAS performance, they predominantly focus on\nlearning domain-invariant features during training, which may not guarantee\ngeneralizability to unseen data that differs largely from the source\ndistributions. Our insight is that testing data can serve as a valuable\nresource to enhance the generalizability beyond mere evaluation for DG FAS. In\nthis paper, we introduce a novel Test-Time Domain Generalization (TTDG)\nframework for FAS, which leverages the testing data to boost the model's\ngeneralizability. Our method, consisting of Test-Time Style Projection (TTSP)\nand Diverse Style Shifts Simulation (DSSS), effectively projects the unseen\ndata to the seen domain space. In particular, we first introduce the innovative\nTTSP to project the styles of the arbitrarily unseen samples of the testing\ndistribution to the known source space of the training distributions. We then\ndesign the efficient DSSS to synthesize diverse style shifts via learnable\nstyle bases with two specifically designed losses in a hyperspherical feature\nspace.", + "We then\ndesign the efficient DSSS to synthesize diverse style shifts via learnable\nstyle bases with two specifically designed losses in a hyperspherical feature\nspace. Our method eliminates the need for model updates at the test time and\ncan be seamlessly integrated into not only the CNN but also ViT backbones.\nComprehensive experiments on widely used cross-domain FAS benchmarks\ndemonstrate our method's state-of-the-art performance and effectiveness.", + "Recently, there has been an increased interest in the practical problem of\nlearning multiple dense scene understanding tasks from partially annotated\ndata, where each training sample is only labeled for a subset of the tasks. The\nmissing of task labels in training leads to low-quality and noisy predictions,\nas can be observed from state-of-the-art methods. To tackle this issue, we\nreformulate the partially-labeled multi-task dense prediction as a pixel-level\ndenoising problem, and propose a novel multi-task denoising diffusion framework\ncoined as DiffusionMTL. It designs a joint diffusion and denoising paradigm to\nmodel a potential noisy distribution in the task prediction or feature maps and\ngenerate rectified outputs for different tasks. To exploit multi-task\nconsistency in denoising, we further introduce a Multi-Task Conditioning\nstrategy, which can implicitly utilize the complementary nature of the tasks to\nhelp learn the unlabeled tasks, leading to an improvement in the denoising\nperformance of the different tasks.", + "To exploit multi-task\nconsistency in denoising, we further introduce a Multi-Task Conditioning\nstrategy, which can implicitly utilize the complementary nature of the tasks to\nhelp learn the unlabeled tasks, leading to an improvement in the denoising\nperformance of the different tasks. 
Extensive quantitative and qualitative\nexperiments demonstrate that the proposed multi-task denoising diffusion model\ncan significantly improve multi-task prediction maps, and outperform the\nstate-of-the-art methods on three challenging multi-task benchmarks, under two\ndifferent partial-labeling evaluation settings. The code is available at\nhttps://prismformore.github.io/diffusionmtl/.", +    "Recent works have shown that generative models leave traces of their\nunderlying generative process on the generated samples, broadly referred to as\nfingerprints of a generative model, and have studied their utility in detecting\nsynthetic images from real ones. However, the extent to which these\nfingerprints can distinguish between various types of synthetic images and help\nidentify the underlying generative process remains under-explored. In\nparticular, the very definition of a fingerprint remains unclear, to our\nknowledge. To that end, in this work, we formalize the definition of artifact\nand fingerprint in generative models, propose an algorithm for computing them\nin practice, and finally study its effectiveness in distinguishing a large\narray of different generative models. We find that using our proposed\ndefinition can significantly improve the performance on the task of identifying\nthe underlying generative process from samples (model attribution) compared to\nexisting methods. Additionally, we study the structure of the fingerprints, and\nobserve that it is very predictive of the effect of different design choices on\nthe generative process.", +    "Traffic scene perception in computer vision is a critically important task to\nachieve intelligent cities. To date, most existing datasets focus on autonomous\ndriving scenes. We observe that the models trained on those driving datasets\noften yield unsatisfactory results on traffic monitoring scenes. However,\nlittle effort has been put into improving the traffic monitoring scene\nunderstanding, mainly due to the lack of specific datasets. To fill this gap,\nwe introduce a specialized traffic monitoring dataset, termed TSP6K, containing\nimages from the traffic monitoring scenario, with high-quality pixel-level and\ninstance-level annotations. The TSP6K dataset captures more crowded traffic\nscenes with several times more traffic participants than the existing driving\nscenes. We perform a detailed analysis of the dataset and comprehensively\nevaluate previous popular scene parsing methods, instance segmentation methods\nand unsupervised domain adaptation methods. Furthermore, considering the vast\ndifference in instance sizes, we propose a detail refining decoder for scene\nparsing, which recovers the details of different semantic regions in traffic\nscenes owing to the proposed TSP6K dataset. Experiments show its effectiveness\nin parsing the traffic monitoring scenes.", +    "Furthermore, considering the vast\ndifference in instance sizes, we propose a detail refining decoder for scene\nparsing, which recovers the details of different semantic regions in traffic\nscenes owing to the proposed TSP6K dataset. Experiments show its effectiveness\nin parsing the traffic monitoring scenes.
Code and dataset are available at\nhttps://github.com/PengtaoJiang/TSP6K.", + "Large-scale Text-to-Image (T2I) models have rapidly gained prominence across\ncreative fields, generating visually compelling outputs from textual prompts.\nHowever, controlling these models to ensure consistent style remains\nchallenging, with existing methods necessitating fine-tuning and manual\nintervention to disentangle content and style. In this paper, we introduce\nStyleAligned, a novel technique designed to establish style alignment among a\nseries of generated images. By employing minimal `attention sharing' during the\ndiffusion process, our method maintains style consistency across images within\nT2I models. This approach allows for the creation of style-consistent images\nusing a reference style through a straightforward inversion operation. Our\nmethod's evaluation across diverse styles and text prompts demonstrates\nhigh-quality synthesis and fidelity, underscoring its efficacy in achieving\nconsistent style across various inputs.", + "With the immense growth of dataset sizes and computing resources in recent\nyears, so-called foundation models have become popular in NLP and vision tasks.\nIn this work, we propose to explore foundation models for the task of keypoint\ndetection on 3D shapes. A unique characteristic of keypoint detection is that\nit requires semantic and geometric awareness while demanding high localization\naccuracy. To address this problem, we propose, first, to back-project features\nfrom large pre-trained 2D vision models onto 3D shapes and employ them for this\ntask. We show that we obtain robust 3D features that contain rich semantic\ninformation and analyze multiple candidate features stemming from different 2D\nfoundation models. Second, we employ a keypoint candidate optimization module\nwhich aims to match the average observed distribution of keypoints on the shape\nand is guided by the back-projected features. The resulting approach achieves a\nnew state of the art for few-shot keypoint detection on the KeyPointNet\ndataset, almost doubling the performance of the previous best methods.", + "Stereo matching is a core task for many computer vision and robotics\napplications. Despite their dominance in traditional stereo methods, the\nhand-crafted Markov Random Field (MRF) models lack sufficient modeling accuracy\ncompared to end-to-end deep models. While deep learning representations have\ngreatly improved the unary terms of the MRF models, the overall accuracy is\nstill severely limited by the hand-crafted pairwise terms and message passing.\nTo address these issues, we propose a neural MRF model, where both potential\nfunctions and message passing are designed using data-driven neural networks.\nOur fully data-driven model is built on the foundation of variational inference\ntheory, to prevent convergence issues and retain stereo MRF's graph inductive\nbias. To make the inference tractable and scale well to high-resolution images,\nwe also propose a Disparity Proposal Network (DPN) to adaptively prune the\nsearch space of disparity. The proposed approach ranks $1^{st}$ on both KITTI\n2012 and 2015 leaderboards among all published methods while running faster\nthan 100 ms.", + "The proposed approach ranks $1^{st}$ on both KITTI\n2012 and 2015 leaderboards among all published methods while running faster\nthan 100 ms. This approach significantly outperforms prior global methods,\ne.g., lowering D1 metric by more than 50% on KITTI 2015. 
In addition, our\nmethod exhibits strong cross-domain generalization and can recover sharp edges.\nThe code is available at https://github.com/aeolusguan/NMRF.", + "In autonomous driving, predicting future events in advance and evaluating the\nforeseeable risks empowers autonomous vehicles to better plan their actions,\nenhancing safety and efficiency on the road. To this end, we propose Drive-WM,\nthe first driving world model compatible with existing end-to-end planning\nmodels. Through a joint spatial-temporal modeling facilitated by view\nfactorization, our model generates high-fidelity multiview videos in driving\nscenes. Building on its powerful generation ability, we showcase the potential\nof applying the world model for safe driving planning for the first time.\nParticularly, our Drive-WM enables driving into multiple futures based on\ndistinct driving maneuvers, and determines the optimal trajectory according to\nthe image-based rewards. Evaluation on real-world driving datasets verifies\nthat our method could generate high-quality, consistent, and controllable\nmultiview videos, opening up possibilities for real-world simulations and safe\nplanning.", + "Event-based semantic segmentation (ESS) is a fundamental yet challenging task\nfor event camera sensing. The difficulties in interpreting and annotating event\ndata limit its scalability. While domain adaptation from images to event data\ncan help to mitigate this issue, there exist data representational differences\nthat require additional effort to resolve. In this work, for the first time, we\nsynergize information from image, text, and event-data domains and introduce\nOpenESS to enable scalable ESS in an open-world, annotation-efficient manner.\nWe achieve this goal by transferring the semantically rich CLIP knowledge from\nimage-text pairs to event streams. To pursue better cross-modality adaptation,\nwe propose a frame-to-event contrastive distillation and a text-to-event\nsemantic consistency regularization. Experimental results on popular ESS\nbenchmarks show that our approach outperforms existing methods. Notably, we\nachieve 53.93% and 43.31% mIoU on DDD17 and DSEC-Semantic without using either\nevent or frame labels.", + "Aligned text-image encoders such as CLIP have become the de facto model for\nvision-language tasks. Furthermore, modality-specific encoders achieve\nimpressive performances in their respective domains. This raises a central\nquestion: does an alignment exist between uni-modal vision and language\nencoders since they fundamentally represent the same physical world? Analyzing\nthe latent space structure of vision and language models on image-caption\nbenchmarks using the Centered Kernel Alignment (CKA), we find that the\nrepresentation spaces of unaligned and aligned encoders are semantically\nsimilar. In the absence of statistical similarity in aligned encoders like\nCLIP, we show that a possible matching of unaligned encoders exists without any\ntraining. We frame this as a seeded graph-matching problem exploiting the\nsemantic similarity between graphs and propose two methods - a Fast Quadratic\nAssignment Problem optimization, and a novel localized CKA metric-based\nmatching/retrieval. We demonstrate the effectiveness of this on several\ndownstream tasks including cross-lingual, cross-domain caption matching and\nimage classification.
Code available at github.com/mayug/0-shot-llm-vision.", + "Currently, high-definition (HD) map construction leans towards a lightweight\nonline generation tendency, which aims to preserve timely and reliable road\nscene information. However, map elements contain strong shape priors. Subtle\nand sparse annotations make current detection-based frameworks ambiguous in\nlocating relevant feature scopes and cause the loss of detailed structures in\nprediction. To alleviate these problems, we propose MGMap, a mask-guided\napproach that effectively highlights the informative regions and achieves\nprecise map element localization by introducing the learned masks.\nSpecifically, MGMap employs learned masks based on the enhanced multi-scale BEV\nfeatures from two perspectives. At the instance level, we propose the\nMask-activated instance (MAI) decoder, which incorporates global instance and\nstructural information into instance queries by the activation of instance\nmasks. At the point level, a novel position-guided mask patch refinement\n(PG-MPR) module is designed to refine point locations from a finer-grained\nperspective, enabling the extraction of point-specific patch information.\nCompared to the baselines, our proposed MGMap achieves a notable improvement of\naround 10 mAP for different input modalities.", + "Compared to the baselines, our proposed MGMap achieves a notable improvement of\naround 10 mAP for different input modalities. Extensive experiments also\ndemonstrate that our approach showcases strong robustness and generalization\ncapabilities. Our code can be found at https://github.com/xiaolul2/MGMap.", + "We introduce SUPIR (Scaling-UP Image Restoration), a groundbreaking image\nrestoration method that harnesses generative prior and the power of model\nscaling up. Leveraging multi-modal techniques and advanced generative prior,\nSUPIR marks a significant advance in intelligent and realistic image\nrestoration. As a pivotal catalyst within SUPIR, model scaling dramatically\nenhances its capabilities and demonstrates new potential for image restoration.\nWe collect a dataset comprising 20 million high-resolution, high-quality images\nfor model training, each enriched with descriptive text annotations. SUPIR\nprovides the capability to restore images guided by textual prompts, broadening\nits application scope and potential. Moreover, we introduce negative-quality\nprompts to further improve perceptual quality. We also develop a\nrestoration-guided sampling method to suppress the fidelity issue encountered\nin generative-based restoration. Experiments demonstrate SUPIR's exceptional\nrestoration effects and its novel capacity to manipulate restoration through\ntextual prompts.", + "In this paper, we propose VidLA, an approach for video-language alignment at\nscale. There are two major limitations of previous video-language alignment\napproaches. First, they do not capture both short-range and long-range temporal\ndependencies and typically employ complex hierarchical deep network\narchitectures that are hard to integrate with existing pretrained image-text\nfoundation models. To effectively address this limitation, we instead keep the\nnetwork architecture simple and use a set of data tokens that operate at\ndifferent temporal resolutions in a hierarchical manner, accounting for the\ntemporally hierarchical nature of videos. 
By employing a simple two-tower\narchitecture, we are able to initialize our video-language model with\npretrained image-text foundation models, thereby boosting the final\nperformance. Second, existing video-language alignment works struggle due to\nthe lack of semantically aligned large-scale training data. To overcome this\nlimitation, we leverage recent LLMs to curate the largest video-language\ndataset to date with better visual grounding. Furthermore, unlike existing\nvideo-text datasets which only contain short clips, our dataset is enriched\nwith video clips of varying durations to aid our temporally hierarchical data\ntokens in extracting better representations at varying temporal scales.", + "Furthermore, unlike existing video-text datasets which\nonly contain short clips, our dataset is enriched with video clips of varying\ndurations to aid our temporally hierarchical data tokens in extracting better\nrepresentations at varying temporal scales. Overall, empirical results show\nthat our proposed approach surpasses state-of-the-art methods on multiple\nretrieval benchmarks, especially on longer videos, and performs competitively\non classification benchmarks.", + "Self-Supervised Learning (SSL) has demonstrated promising results in 3D\nmedical image analysis. However, the lack of high-level semantics in\npre-training still heavily hinders the performance of downstream tasks. We\nobserve that 3D medical images contain relatively consistent contextual\nposition information, i.e., consistent geometric relations between different\norgans, which offers a potential way for us to learn consistent semantic\nrepresentations in pre-training. In this paper, we propose a\nsimple-yet-effective Volume Contrast (VoCo) framework to leverage the\ncontextual position priors for pre-training. Specifically, we first generate a\ngroup of base crops from different regions while enforcing feature discrepancy\namong them, and employ them as class assignments for different regions.\nThen, we randomly crop sub-volumes and predict which class (i.e., which\nregion) each sub-volume belongs to by contrasting its similarity to the\ndifferent base crops, which can be seen as predicting the contextual positions\nof different sub-volumes. Through this pretext task, VoCo implicitly encodes the contextual\nposition priors into model representations without the guidance of annotations,\nenabling us to effectively improve the performance of downstream tasks that\nrequire high-level semantics.", + "Through this pretext task, VoCo implicitly encodes the contextual\nposition priors into model representations without the guidance of annotations,\nenabling us to effectively improve the performance of downstream tasks that\nrequire high-level semantics. Extensive experimental results on six downstream\ntasks demonstrate the superior effectiveness of VoCo. Code will be available at\nhttps://github.com/Luffy03/VoCo.", + "In this paper, we present CCEdit, a versatile generative video editing\nframework based on diffusion models. Our approach employs a novel trident\nnetwork structure that separates structure and appearance control, ensuring\nprecise and creative editing capabilities. Utilizing the foundational\nControlNet architecture, we maintain the structural integrity of the video\nduring editing. The incorporation of an additional appearance branch enables\nusers to exert fine-grained control over the edited key frame.
These two side\nbranches seamlessly integrate into the main branch, which is constructed upon\nexisting text-to-image (T2I) generation models, through learnable temporal\nlayers. The versatility of our framework is demonstrated through a diverse\nrange of choices in both structure representations and personalized T2I models,\nas well as the option to provide the edited key frame. To facilitate\ncomprehensive evaluation, we introduce the BalanceCC benchmark dataset,\ncomprising 100 videos and 4 target prompts for each video. Our extensive user\nstudies compare CCEdit with eight state-of-the-art video editing methods. The\noutcomes demonstrate CCEdit's substantial superiority over all other methods.", + "Diffusion models have achieved remarkable image generation quality surpassing\nprevious generative models. However, a notable limitation of diffusion models,\nin comparison to GANs, is their difficulty in smoothly interpolating between\ntwo image samples, due to their highly unstructured latent space. Such a smooth\ninterpolation is intriguing as it naturally serves as a solution for the image\nmorphing task with many applications. In this work, we present DiffMorpher, the\nfirst approach enabling smooth and natural image interpolation using diffusion\nmodels. Our key idea is to capture the semantics of the two images by fitting\ntwo LoRAs to them respectively, and interpolate between both the LoRA\nparameters and the latent noises to ensure a smooth semantic transition, where\ncorrespondence automatically emerges without the need for annotation. In\naddition, we propose an attention interpolation and injection technique and a\nnew sampling schedule to further enhance the smoothness between consecutive\nimages. Extensive experiments demonstrate that DiffMorpher achieves starkly\nbetter image morphing effects than previous methods across a variety of object\ncategories, bridging a critical functional gap that distinguished diffusion\nmodels from GANs.", + "As an important and practical way to obtain high dynamic range (HDR) video,\nHDR video reconstruction from sequences with alternating exposures is still\nless explored, mainly due to the lack of large-scale real-world datasets.\nExisting methods are mostly trained on synthetic datasets, which perform poorly\nin real scenes. In this work, to facilitate the development of real-world HDR\nvideo reconstruction, we present Real-HDRV, a large-scale real-world benchmark\ndataset for HDR video reconstruction, featuring various scenes, diverse motion\npatterns, and high-quality labels. Specifically, our dataset contains 500\nLDRs-HDRs video pairs, comprising about 28,000 LDR frames and 4,000 HDR labels,\ncovering daytime, nighttime, indoor, and outdoor scenes. To our best knowledge,\nour dataset is the largest real-world HDR video reconstruction dataset.\nCorrespondingly, we propose an end-to-end network for HDR video reconstruction,\nwhere a novel two-stage strategy is designed to perform alignment sequentially.\nSpecifically, the first stage performs global alignment with the adaptively\nestimated global offsets, reducing the difficulty of subsequent alignment.", + "Correspondingly, we propose an end-to-end network for HDR video reconstruction,\nwhere a novel two-stage strategy is designed to perform alignment sequentially.\nSpecifically, the first stage performs global alignment with the adaptively\nestimated global offsets, reducing the difficulty of subsequent alignment. 
The\nsecond stage implicitly performs local alignment in a coarse-to-fine manner at\nthe feature level using the adaptive separable convolution. Extensive\nexperiments demonstrate that: (1) models trained on our dataset can achieve\nbetter performance on real scenes than those trained on synthetic datasets; (2)\nour method outperforms previous state-of-the-art methods. Our dataset is\navailable at https://github.com/yungsyu99/Real-HDRV.", + "3D head avatars built with neural implicit volumetric representations have\nachieved unprecedented levels of photorealism. However, the computational cost\nof these methods remains a significant barrier to their widespread adoption,\nparticularly in real-time applications such as virtual reality and\nteleconferencing. While attempts have been made to develop fast neural\nrendering approaches for static scenes, these methods cannot be simply employed\nto support realistic facial expressions, such as in the case of a dynamic\nfacial performance. To address these challenges, we propose a novel fast 3D\nneural implicit head avatar model that achieves real-time rendering while\nmaintaining fine-grained controllability and high rendering quality. Our key\nidea lies in the introduction of local hash table blendshapes, which are\nlearned and attached to the vertices of an underlying face parametric model.\nThese per-vertex hash-tables are linearly merged with weights predicted via a\nCNN, resulting in expression dependent embeddings. Our novel representation\nenables efficient density and color predictions using a lightweight MLP, which\nis further accelerated by a hierarchical nearest neighbor search method.", + "These per-vertex hash-tables are linearly merged with weights predicted via a\nCNN, resulting in expression dependent embeddings. Our novel representation\nenables efficient density and color predictions using a lightweight MLP, which\nis further accelerated by a hierarchical nearest neighbor search method.\nExtensive experiments show that our approach runs in real-time while achieving\ncomparable rendering quality to state-of-the-arts and decent results on\nchallenging expressions.", + "Low-precision quantization is recognized for its efficacy in neural network\noptimization. Our analysis reveals that non-quantized elementwise operations\nwhich are prevalent in layers such as parameterized activation functions, batch\nnormalization, and quantization scaling dominate the inference cost of\nlow-precision models. These non-quantized elementwise operations are commonly\noverlooked in SOTA efficiency metrics such as Arithmetic Computation Effort\n(ACE). In this paper, we propose ACEv2 - an extended version of ACE which\noffers a better alignment with the inference cost of quantized models and their\nenergy consumption on ML hardware. Moreover, we introduce PikeLPN, a model that\naddresses these efficiency issues by applying quantization to both elementwise\noperations and multiply-accumulate operations. In particular, we present a\nnovel quantization technique for batch normalization layers named QuantNorm\nwhich allows for quantizing the batch normalization parameters without\ncompromising the model performance. Additionally, we propose applying Double\nQuantization where the quantization scaling parameters are quantized.", + "In particular, we present a\nnovel quantization technique for batch normalization layers named QuantNorm\nwhich allows for quantizing the batch normalization parameters without\ncompromising the model performance. 
Additionally, we propose applying Double\nQuantization where the quantization scaling parameters are quantized.\nFurthermore, we recognize and resolve the issue of distribution mismatch in\nSeparable Convolution layers by introducing Distribution-Heterogeneous\nQuantization which enables quantizing them to low-precision. PikeLPN achieves\nPareto-optimality in efficiency-accuracy trade-off with up to 3X efficiency\nimprovement compared to SOTA low-precision models.", + "Modern depth sensors such as LiDAR operate by sweeping laser-beams across the\nscene, resulting in a point cloud with notable 1D curve-like structures. In\nthis work, we introduce a new point cloud processing scheme and backbone,\ncalled CurveCloudNet, which takes advantage of the curve-like structure\ninherent to these sensors. While existing backbones discard the rich 1D\ntraversal patterns and rely on generic 3D operations, CurveCloudNet\nparameterizes the point cloud as a collection of polylines (dubbed a \"curve\ncloud\"), establishing a local surface-aware ordering on the points. By\nreasoning along curves, CurveCloudNet captures lightweight curve-aware priors\nto efficiently and accurately reason in several diverse 3D environments. We\nevaluate CurveCloudNet on multiple synthetic and real datasets that exhibit\ndistinct 3D size and structure. We demonstrate that CurveCloudNet outperforms\nboth point-based and sparse-voxel backbones in various segmentation settings,\nnotably scaling to large scenes better than point-based alternatives while\nexhibiting improved single-object performance over sparse-voxel alternatives.", + "We demonstrate that CurveCloudNet outperforms\nboth point-based and sparse-voxel backbones in various segmentation settings,\nnotably scaling to large scenes better than point-based alternatives while\nexhibiting improved single-object performance over sparse-voxel alternatives.\nIn all, CurveCloudNet is an efficient and accurate backbone that can handle a\nlarger variety of 3D environments than past works.", + "We address the challenge of generating 3D articulated objects in a\ncontrollable fashion. Currently, modeling articulated 3D objects is either\nachieved through laborious manual authoring, or using methods from prior work\nthat are hard to scale and control directly. We leverage the interplay between\npart shape, connectivity, and motion using a denoising diffusion-based method\nwith attention modules designed to extract correlations between part\nattributes. Our method takes an object category label and a part connectivity\ngraph as input and generates an object's geometry and motion parameters. The\ngenerated objects conform to user-specified constraints on the object category,\npart shape, and part articulation. Our experiments show that our method\noutperforms the state-of-the-art in articulated object generation, producing\nmore realistic objects while conforming better to user constraints.\n Video Summary at: http://youtu.be/cH_rbKbyTpE", + "To reduce the reliance on large-scale datasets, recent works in 3D\nsegmentation resort to few-shot learning. Current 3D few-shot segmentation\nmethods first pre-train models on 'seen' classes, and then evaluate their\ngeneralization performance on 'unseen' classes. However, the prior pre-training\nstage not only introduces excessive time overhead but also incurs a significant\ndomain gap on 'unseen' classes. To tackle these issues, we propose a\nNon-parametric Network for few-shot 3D Segmentation, Seg-NN, and its Parametric\nvariant, Seg-PN. 
Without training, Seg-NN extracts dense representations by\nhand-crafted filters and achieves comparable performance to existing parametric\nmodels. Due to the elimination of pre-training, Seg-NN can alleviate the domain\ngap issue and save a substantial amount of time. Based on Seg-NN, Seg-PN only\nrequires training a lightweight QUEry-Support Transferring (QUEST) module,\nwhich enhances the interaction between the support set and query set.", + "Based on Seg-NN, Seg-PN only\nrequires training a lightweight QUEry-Support Transferring (QUEST) module,\nwhich enhances the interaction between the support set and query set.\nExperiments suggest that Seg-PN outperforms the previous state-of-the-art\nmethod by +4.19% and +7.71% mIoU on the S3DIS and ScanNet datasets,\nrespectively, while reducing training time by 90%, indicating its effectiveness\nand efficiency.", + "We introduce PhysGaussian, a new method that seamlessly integrates physically\ngrounded Newtonian dynamics within 3D Gaussians to achieve high-quality novel\nmotion synthesis. Employing a custom Material Point Method (MPM), our approach\nenriches 3D Gaussian kernels with physically meaningful kinematic deformation\nand mechanical stress attributes, all evolved in line with continuum mechanics\nprinciples. A defining characteristic of our method is the seamless integration\nbetween physical simulation and visual rendering: both components utilize the\nsame 3D Gaussian kernels as their discrete representations. This negates the\nnecessity for triangle/tetrahedron meshing, marching cubes, \"cage meshes,\" or\nany other geometry embedding, highlighting the principle of \"what you see is\nwhat you simulate (WS$^2$).\" Our method demonstrates exceptional versatility\nacross a wide variety of materials--including elastic entities, metals,\nnon-Newtonian fluids, and granular materials--showcasing its strong\ncapabilities in creating diverse visual content with novel viewpoints and\nmovements. Our project page is at: https://xpandora.github.io/PhysGaussian/", + "Recovering images distorted by atmospheric turbulence is a challenging\ninverse problem due to the stochastic nature of turbulence. Although numerous\nturbulence mitigation (TM) algorithms have been proposed, their efficiency and\ngeneralization to real-world dynamic scenarios remain severely limited.\nBuilding upon the intuitions of classical TM algorithms, we present the Deep\nAtmospheric TUrbulence Mitigation network (DATUM). DATUM aims to overcome major\nchallenges when transitioning from classical to deep learning approaches. By\ncarefully integrating the merits of classical multi-frame TM methods into a\ndeep network structure, we demonstrate that DATUM can efficiently perform\nlong-range temporal aggregation in a recurrent fashion, while deformable\nattention and temporal-channel attention seamlessly facilitate pixel\nregistration and lucky imaging. With additional supervision, tilt and blur\ndegradation can be jointly mitigated. These inductive biases empower DATUM to\nsignificantly outperform existing methods while delivering a tenfold increase\nin processing speed. A large-scale training dataset, ATSyn, is presented as a\nco-invention to enable generalization in real turbulence. Our code and datasets\nare available at https://xg416.github.io/DATUM.", + "In recent years, automated Gallbladder Cancer (GBC) detection has gained the\nattention of researchers.
Current state-of-the-art (SOTA) methodologies relying\non ultrasound sonography (US) images exhibit limited generalization,\nemphasizing the need for transformative approaches. We observe that individual\nUS frames may lack sufficient information to capture disease manifestation.\nThis study advocates for a paradigm shift towards video-based GBC detection,\nleveraging the inherent advantages of spatiotemporal representations. Employing\nthe Masked Autoencoder (MAE) for representation learning, we address\nshortcomings in conventional image-based methods. We propose a novel design\ncalled FocusMAE to systematically bias the selection of masking tokens from\nhigh-information regions, fostering a more refined representation of\nmalignancy. Additionally, we contribute the most extensive US video dataset for\nGBC detection. We also note that this is the first study on US video-based GBC\ndetection.", + "Additionally, we contribute the most extensive US video dataset for\nGBC detection. We also note that this is the first study on US video-based GBC\ndetection. We validate the proposed methods on the curated dataset, and report\na new state-of-the-art (SOTA) accuracy of 96.4% for the GBC detection problem,\ncompared with an accuracy of 84% for the current image-based SOTA methods,\nGBCNet and RadFormer, and 94.7% for the video-based SOTA, AdaMAE. We further\ndemonstrate the generality of the proposed FocusMAE on a public CT-based Covid\ndetection dataset, reporting an improvement in accuracy by 3.3% over current\nbaselines. The source code and pretrained models are available at:\nhttps://gbc-iitd.github.io/focusmae", + "Driven by the scalable diffusion models trained on large-scale datasets,\ntext-to-image synthesis methods have shown compelling results. However, these\nmodels still fail to precisely follow the text prompt involving multiple\nobjects, attributes, or spatial compositions. In this paper, we reveal the\npotential causes in the diffusion model's cross-attention and self-attention\nlayers. We propose two novel losses to refocus attention maps according to a\ngiven spatial layout during sampling. Creating the layouts manually requires\nadditional effort and can be tedious. Therefore, we explore using large\nlanguage models (LLMs) to produce these layouts for our method. We conduct\nextensive experiments on the DrawBench, HRS, and TIFA benchmarks to evaluate\nour proposed method. We show that our proposed attention refocusing effectively\nimproves the controllability of existing approaches.", + "Understanding what deep network models capture in their learned\nrepresentations is a fundamental challenge in computer vision. We present a new\nmethodology for understanding such vision models, the Visual Concept Connectome\n(VCC), which discovers human-interpretable concepts and their interlayer\nconnections in a fully unsupervised manner. Our approach simultaneously reveals\nfine-grained concepts at a layer, connection weightings across all layers and\nis amenable to global analysis of network structure (e.g., branching pattern\nof hierarchical concept assemblies). Previous work yielded ways to extract\ninterpretable concepts from single layers and examine their impact on\nclassification, but did not afford multilayer concept analysis across an entire\nnetwork architecture. Quantitative and qualitative empirical results show the\neffectiveness of VCCs in the domain of image classification.
Also, we leverage\nVCCs for the application of failure mode debugging to reveal where mistakes\narise in deep networks.", + "The increasing use of transformer-based large language models brings forward\nthe challenge of processing long sequences. In document visual question\nanswering (DocVQA), leading methods focus on the single-page setting, while\ndocuments can span hundreds of pages. We present GRAM, a method that seamlessly\nextends pre-trained single-page models to the multi-page setting, without\nrequiring computationally heavy pretraining. To do so, we leverage a\nsingle-page encoder for local page-level understanding, and enhance it with\ndocument-level designated layers and learnable tokens, facilitating the flow of\ninformation across pages for global reasoning. To ensure that our model utilizes\nthe newly introduced document tokens, we propose a tailored bias adaptation\nmethod. For additional computational savings during decoding, we introduce an\noptional compression stage using our compression-transformer\n(C-Former), reducing the encoded sequence length, thereby allowing a tradeoff\nbetween quality and latency. Extensive experiments showcase GRAM's\nstate-of-the-art performance on the benchmarks for multi-page DocVQA,\ndemonstrating the effectiveness of our approach.", + "This study addresses the challenge of performing visual localization in\ndemanding conditions such as night-time scenarios, adverse weather, and\nseasonal changes. While many prior studies have focused on improving\nimage-matching performance to facilitate reliable dense keypoint matching\nbetween images, existing methods often heavily rely on predefined feature\npoints on a reconstructed 3D model. Consequently, they tend to overlook\nunobserved keypoints during the matching process. Therefore, dense keypoint\nmatches are not fully exploited, leading to a notable reduction in accuracy,\nparticularly in noisy scenes. To tackle this issue, we propose a novel\nlocalization method that extracts reliable semi-dense 2D-3D matching points\nbased on dense keypoint matches. This approach involves regressing semi-dense\n2D keypoints into 3D scene coordinates using a point inference network. The\nnetwork utilizes both geometric and visual cues to effectively infer 3D\ncoordinates for unobserved keypoints from the observed ones. The abundance of\nmatching information significantly enhances the accuracy of camera pose\nestimation, even in scenarios involving noisy or sparse 3D models.", + "The\nnetwork utilizes both geometric and visual cues to effectively infer 3D\ncoordinates for unobserved keypoints from the observed ones. The abundance of\nmatching information significantly enhances the accuracy of camera pose\nestimation, even in scenarios involving noisy or sparse 3D models.\nComprehensive evaluations demonstrate that the proposed method outperforms\nother methods in challenging scenes and achieves competitive results in\nlarge-scale visual localization benchmarks. The code will be available.", + "This paper studies amodal image segmentation: predicting entire object\nsegmentation masks including both visible and invisible (occluded) parts. In\nprevious work, the amodal segmentation ground truth on real images is usually\npredicted by manual annotation and thus is subjective. In contrast, we use 3D\ndata to establish an automatic pipeline to determine authentic ground truth\namodal masks for partially occluded objects in real images.
This pipeline is\nused to construct an amodal completion evaluation benchmark, MP3D-Amodal,\nconsisting of a variety of object categories and labels. To better handle the\namodal completion task in the wild, we explore two architecture variants: a\ntwo-stage model that first infers the occluder, followed by amodal mask\ncompletion; and a one-stage model that exploits the representation power of\nStable Diffusion for amodal segmentation across many categories. Without bells\nand whistles, our method achieves a new state-of-the-art performance on amodal\nsegmentation datasets that cover a large variety of objects, including COCOA\nand our new MP3D-Amodal dataset.", + "Without bells\nand whistles, our method achieves a new state-of-the-art performance on amodal\nsegmentation datasets that cover a large variety of objects, including COCOA\nand our new MP3D-Amodal dataset. The dataset, model, and code are available at\nhttps://www.robots.ox.ac.uk/~vgg/research/amodal/.", + "While pre-trained large-scale vision models have shown significant promise\nfor semantic correspondence, their features often struggle to grasp the\ngeometry and orientation of instances. This paper identifies the importance of\nbeing geometry-aware for semantic correspondence and reveals a limitation of\nthe features of current foundation models under simple post-processing. We show\nthat incorporating this information can markedly enhance semantic\ncorrespondence performance with simple but effective solutions in both\nzero-shot and supervised settings. We also construct a new challenging\nbenchmark for semantic correspondence built from an existing animal pose\nestimation dataset, for both pre-training and validating models. Our method\nachieves a PCK@0.10 score of 65.4 (zero-shot) and 85.6 (supervised) on the\nchallenging SPair-71k dataset, outperforming the state of the art by absolute\ngains of 5.5 and 11.0 points, respectively. Our code and datasets are publicly\navailable at: https://telling-left-from-right.github.io/.", + "Human avatars have become a novel type of 3D asset with various applications.\nIdeally, a human avatar should be fully customizable to accommodate different\nsettings and environments. In this work, we introduce NECA, an approach capable\nof learning versatile human representation from monocular or sparse-view\nvideos, enabling granular customization across aspects such as pose, shadow,\nshape, lighting and texture. The core of our approach is to represent humans in\ncomplementary dual spaces and predict disentangled neural fields of geometry,\nalbedo, shadow, as well as external lighting, from which we are able to\nderive realistic rendering with high-frequency details via volumetric\nrendering. Extensive experiments demonstrate the advantage of our method over\nthe state-of-the-art methods in photorealistic rendering, as well as various\nediting tasks such as novel pose synthesis and relighting. The code is\navailable at https://github.com/iSEE-Laboratory/NECA.", + "Adversarial examples mislead deep neural networks with imperceptible\nperturbations and have brought significant threats to deep learning. An\nimportant aspect is their transferability, which refers to their ability to\ndeceive other models, thus enabling attacks in the black-box setting. Though\nvarious methods have been proposed to boost transferability, the performance\nstill falls short compared with white-box attacks.
In this work, we observe\nthat existing input transformation based attacks, one of the mainstream\ntransfer-based attacks, result in different attention heatmaps on various\nmodels, which might limit the transferability. We also find that breaking the\nintrinsic relation of the image can disrupt the attention heatmap of the\noriginal image. Based on this finding, we propose a novel input transformation\nbased attack called block shuffle and rotation (BSR). Specifically, BSR splits\nthe input image into several blocks, then randomly shuffles and rotates these\nblocks to construct a set of new images for gradient calculation. Empirical\nevaluations on the ImageNet dataset demonstrate that BSR could achieve\nsignificantly better transferability than the existing input transformation\nbased methods under single-model and ensemble-model settings.", + "Empirical\nevaluations on the ImageNet dataset demonstrate that BSR could achieve\nsignificantly better transferability than the existing input transformation\nbased methods under single-model and ensemble-model settings. Combining BSR\nwith the current input transformation method can further improve the\ntransferability, which significantly outperforms the state-of-the-art methods.\nCode is available at https://github.com/Trustworthy-AI-Group/BSR", + "Vision-centric autonomous driving has recently raised wide attention due to\nits lower cost. Pre-training is essential for extracting a universal\nrepresentation. However, current vision-centric pre-training typically relies\non either 2D or 3D pre-text tasks, overlooking the temporal characteristics of\nautonomous driving as a 4D scene understanding task. In this paper, we address\nthis challenge by introducing a world model-based autonomous driving 4D\nrepresentation learning framework, dubbed \\emph{DriveWorld}, which is capable\nof pre-training from multi-camera driving videos in a spatio-temporal fashion.\nSpecifically, we propose a Memory State-Space Model for spatio-temporal\nmodelling, which consists of a Dynamic Memory Bank module for learning\ntemporal-aware latent dynamics to predict future changes and a Static Scene\nPropagation module for learning spatial-aware latent statics to offer\ncomprehensive scene contexts. We additionally introduce a Task Prompt to\ndecouple task-aware features for various downstream tasks. The experiments\ndemonstrate that DriveWorld delivers promising results on various autonomous\ndriving tasks.", + "We additionally introduce a Task Prompt to\ndecouple task-aware features for various downstream tasks. The experiments\ndemonstrate that DriveWorld delivers promising results on various autonomous\ndriving tasks. When pre-trained with the OpenScene dataset, DriveWorld achieves\na 7.5% increase in mAP for 3D object detection, a 3.0% increase in IoU for\nonline mapping, a 5.0% increase in AMOTA for multi-object tracking, a 0.1m\ndecrease in minADE for motion forecasting, a 3.0% increase in IoU for occupancy\nprediction, and a 0.34m reduction in average L2 error for planning.", + "Modularity plays a crucial role in the development and maintenance of complex\nsystems. While end-to-end text spotting efficiently mitigates the issues of\nerror accumulation and sub-optimal performance seen in traditional two-step\nmethodologies, the two-step methods continue to be favored in many competitions\nand practical settings due to their superior modularity. 
In this paper, we\nintroduce Bridging Text Spotting, a novel approach that resolves the error\naccumulation and suboptimal performance issues in two-step methods while\nretaining modularity. To achieve this, we adopt a well-trained detector and\nrecognizer that are developed and trained independently and then lock their\nparameters to preserve their already acquired capabilities. Subsequently, we\nintroduce a Bridge that connects the locked detector and recognizer through a\nzero-initialized neural network. This zero-initialized neural network,\ninitialized with weights set to zeros, ensures seamless integration of the\nlarge receptive field features in detection into the locked recognizer.\nFurthermore, since the fixed detector and recognizer cannot naturally acquire\nend-to-end optimization features, we adopt the Adapter to facilitate their\nefficient learning of these features.", + "This zero-initialized neural network,\ninitialized with weights set to zeros, ensures seamless integration of the\nlarge receptive field features in detection into the locked recognizer.\nFurthermore, since the fixed detector and recognizer cannot naturally acquire\nend-to-end optimization features, we adopt the Adapter to facilitate their\nefficient learning of these features. We demonstrate the effectiveness of the\nproposed method through extensive experiments: Connecting the latest detector\nand recognizer through Bridging Text Spotting, we achieved an accuracy of 83.3%\non Total-Text, 69.8% on CTW1500, and 89.5% on ICDAR 2015. The code is available\nat https://github.com/mxin262/Bridging-Text-Spotting.", + "Learning generalizable visual representations from Internet data has yielded\npromising results for robotics. Yet, prevailing approaches focus on\npre-training 2D representations, being sub-optimal to deal with occlusions and\naccurately localize objects in complex 3D scenes. Meanwhile, 3D representation\nlearning has been limited to single-object understanding. To address these\nlimitations, we introduce a novel 3D pre-training framework for robotics named\nSUGAR that captures semantic, geometric and affordance properties of objects\nthrough 3D point clouds. We underscore the importance of cluttered scenes in 3D\nrepresentation learning, and automatically construct a multi-object dataset\nbenefiting from cost-free supervision in simulation. SUGAR employs a versatile\ntransformer-based model to jointly address five pre-training tasks, namely\ncross-modal knowledge distillation for semantic learning, masked point modeling\nto understand geometry structures, grasping pose synthesis for object\naffordance, 3D instance segmentation and referring expression grounding to\nanalyze cluttered scenes. We evaluate our learned representation on three\nrobotic-related tasks, namely, zero-shot 3D object recognition, referring\nexpression grounding, and language-driven robotic manipulation.", + "We evaluate our learned representation on three\nrobotic-related tasks, namely, zero-shot 3D object recognition, referring\nexpression grounding, and language-driven robotic manipulation. 
Experimental\nresults show that SUGAR's 3D representation outperforms state-of-the-art 2D and\n3D representations.", + "Photorealistic simulation plays a crucial role in applications such as\nautonomous driving, where advances in neural radiance fields (NeRFs) may allow\nbetter scalability through the automatic creation of digital 3D assets.\nHowever, reconstruction quality suffers on street scenes due to largely\ncollinear camera motions and sparser samplings at higher speeds. On the other\nhand, the application often demands rendering from camera views that deviate\nfrom the inputs to accurately simulate behaviors like lane changes. In this\npaper, we propose several insights that allow a better utilization of Lidar\ndata to improve NeRF quality on street scenes. First, our framework learns a\ngeometric scene representation from Lidar, which is fused with the implicit\ngrid-based representation for radiance decoding, thereby supplying stronger\ngeometric information offered by explicit point cloud. Second, we put forth a\nrobust occlusion-aware depth supervision scheme, which allows utilizing\ndensified Lidar points by accumulation. Third, we generate augmented training\nviews from Lidar points for further improvement. Our insights translate to\nlargely improved novel view synthesis under real driving scenes.", + "Current vision-language pre-training (VLP) methodologies predominantly depend\non paired image-text datasets, a resource that is challenging to acquire in\nradiology due to privacy considerations and labelling complexities. Data\naugmentation provides a practical solution to overcome the issue of data\nscarcity, however, most augmentation methods exhibit a limited focus,\nprioritising either image or text augmentation exclusively. Acknowledging this\nlimitation, our objective is to devise a framework capable of concurrently\naugmenting medical image and text data. We design a Pairwise Augmentation\n(PairAug) approach that contains an Inter-patient Augmentation (InterAug)\nbranch and an Intra-patient Augmentation (IntraAug) branch. Specifically, the\nInterAug branch of our approach generates radiology images using synthesised\nyet plausible reports derived from a Large Language Model (LLM). The generated\npairs can be considered a collection of new patient cases since they are\nartificially created and may not exist in the original dataset. In contrast,\nthe IntraAug branch uses newly generated reports to manipulate images. This\nprocess allows us to create new paired data for each individual with diverse\nmedical conditions.", + "In contrast,\nthe IntraAug branch uses newly generated reports to manipulate images. This\nprocess allows us to create new paired data for each individual with diverse\nmedical conditions. Our extensive experiments on various downstream tasks\ncovering medical image classification zero-shot and fine-tuning analysis\ndemonstrate that our PairAug, concurrently expanding both image and text data,\nsubstantially outperforms image-/text-only expansion baselines and advanced\nmedical VLP baselines. Our code is released at\n\\url{https://github.com/YtongXie/PairAug}.", + "Implicit Neural Representation (INR), which utilizes a neural network to map\ncoordinate inputs to corresponding attributes, is causing a revolution in the\nfield of signal processing. However, current INR techniques suffer from a\nrestricted capability to tune their supported frequency set, resulting in\nimperfect performance when representing complex signals with multiple\nfrequencies. 
We have identified that this frequency-related problem can be\ngreatly alleviated by introducing variable-periodic activation functions, for\nwhich we propose FINER. By initializing the bias of the neural network within\ndifferent ranges, sub-functions with various frequencies in the\nvariable-periodic function are selected for activation. Consequently, the\nsupported frequency set of FINER can be flexibly tuned, leading to improved\nperformance in signal representation. We demonstrate the capabilities of FINER\nin the contexts of 2D image fitting, 3D signed distance field representation,\nand 5D neural radiance fields optimization, and we show that it outperforms\nexisting INRs.", + "Video anomaly detection (VAD) aims to temporally locate abnormal events in a\nvideo. Existing works mostly rely on training deep models to learn the\ndistribution of normality with either video-level supervision, one-class\nsupervision, or in an unsupervised setting. Training-based methods are prone to\nbe domain-specific, thus being costly for practical deployment as any domain\nchange will involve data collection and model training. In this paper, we\nradically depart from previous efforts and propose LAnguage-based VAD (LAVAD),\na method tackling VAD in a novel, training-free paradigm, exploiting the\ncapabilities of pre-trained large language models (LLMs) and existing\nvision-language models (VLMs). We leverage VLM-based captioning models to\ngenerate textual descriptions for each frame of any test video. With the\ntextual scene description, we then devise a prompting mechanism to unlock the\ncapability of LLMs in terms of temporal aggregation and anomaly score\nestimation, turning LLMs into an effective video anomaly detector.", + "With the\ntextual scene description, we then devise a prompting mechanism to unlock the\ncapability of LLMs in terms of temporal aggregation and anomaly score\nestimation, turning LLMs into an effective video anomaly detector. We further\nleverage modality-aligned VLMs and propose effective techniques based on\ncross-modal similarity for cleaning noisy captions and refining the LLM-based\nanomaly scores. We evaluate LAVAD on two large datasets featuring real-world\nsurveillance scenarios (UCF-Crime and XD-Violence), showing that it outperforms\nboth unsupervised and one-class methods without requiring any training or data\ncollection.", + "Diffusion-based text-to-image generative models, e.g., Stable Diffusion, have\nrevolutionized the field of content generation, enabling significant\nadvancements in areas like image editing and video synthesis. Despite their\nformidable capabilities, these models are not without their limitations. It is\nstill challenging to synthesize an image that aligns well with the input text,\nand multiple runs with carefully crafted prompts are required to achieve\nsatisfactory results. To mitigate these limitations, numerous studies have\nendeavored to fine-tune the pre-trained diffusion models, i.e., UNet, utilizing\nvarious technologies. Yet, amidst these efforts, a pivotal question of\ntext-to-image diffusion model training has remained largely unexplored: Is it\npossible and feasible to fine-tune the text encoder to improve the performance\nof text-to-image diffusion models? 
Our findings reveal that, instead of\nreplacing the CLIP text encoder used in Stable Diffusion with other large\nlanguage models, we can enhance it through our proposed fine-tuning approach,\nTextCraftor, leading to substantial improvements in quantitative benchmarks and\nhuman assessments.", + "Our findings reveal that, instead of\nreplacing the CLIP text encoder used in Stable Diffusion with other large\nlanguage models, we can enhance it through our proposed fine-tuning approach,\nTextCraftor, leading to substantial improvements in quantitative benchmarks and\nhuman assessments. Interestingly, our technique also empowers controllable\nimage generation through the interpolation of different text encoders\nfine-tuned with various rewards. We also demonstrate that TextCraftor is\northogonal to UNet finetuning, and can be combined to further improve\ngenerative quality.", + "Existing action quality assessment (AQA) methods mainly learn deep\nrepresentations at the video level for scoring diverse actions. Due to the lack\nof a fine-grained understanding of actions in videos, they harshly suffer from\nlow credibility and interpretability, thus insufficient for stringent\napplications, such as Olympic diving events. We argue that a fine-grained\nunderstanding of actions requires the model to perceive and parse actions in\nboth time and space, which is also the key to the credibility and\ninterpretability of the AQA technique. Based on this insight, we propose a new\nfine-grained spatial-temporal action parser named \\textbf{FineParser}. It\nlearns human-centric foreground action representations by focusing on target\naction regions within each frame and exploiting their fine-grained alignments\nin time and space to minimize the impact of invalid backgrounds during the\nassessment. In addition, we construct fine-grained annotations of human-centric\nforeground action masks for the FineDiving dataset, called\n\\textbf{FineDiving-HM}. With refined annotations on diverse target action\nprocedures, FineDiving-HM can promote the development of real-world AQA\nsystems.", + "With refined annotations on diverse target action\nprocedures, FineDiving-HM can promote the development of real-world AQA\nsystems. Through extensive experiments, we demonstrate the effectiveness of\nFineParser, which outperforms state-of-the-art methods while supporting more\ntasks of fine-grained action understanding. Data and code are available at\n\\url{https://github.com/PKU-ICST-MIPL/FineParser_CVPR2024}.", + "The creation of new datasets often presents new challenges for video\nrecognition and can inspire novel ideas while addressing these challenges.\nWhile existing datasets mainly comprise landscape mode videos, our paper seeks\nto introduce portrait mode videos to the research community and highlight the\nunique challenges associated with this video format. With the growing\npopularity of smartphones and social media applications, recognizing portrait\nmode videos is becoming increasingly important. To this end, we have developed\nthe first dataset dedicated to portrait mode video recognition, namely\nPortraitMode-400. 
The taxonomy of PortraitMode-400 was constructed in a\ndata-driven manner, comprising 400 fine-grained categories, and rigorous\nquality assurance was implemented to ensure the accuracy of human annotations.\nIn addition to the new dataset, we conducted a comprehensive analysis of the\nimpact of video format (portrait mode versus landscape mode) on recognition\naccuracy and spatial bias due to the different formats. Furthermore, we\ndesigned extensive experiments to explore key aspects of portrait mode video\nrecognition, including the choice of data augmentation, evaluation procedure,\nthe importance of temporal information, and the role of audio modality.", + "Furthermore, we\ndesigned extensive experiments to explore key aspects of portrait mode video\nrecognition, including the choice of data augmentation, evaluation procedure,\nthe importance of temporal information, and the role of audio modality.\nBuilding on the insights from our experimental results and the introduction of\nPortraitMode-400, our paper aims to inspire further research efforts in this\nemerging research area.", + "Universal image restoration is a practical and potential computer vision task\nfor real-world applications. The main challenge of this task is handling the\ndifferent degradation distributions at once. Existing methods mainly utilize\ntask-specific conditions (e.g., prompt) to guide the model to learn different\ndistributions separately, named multi-partite mapping. However, it is not\nsuitable for universal model learning as it ignores the shared information\nbetween different tasks. In this work, we propose an advanced selective\nhourglass mapping strategy based on diffusion model, termed DiffUIR. Two novel\nconsiderations make our DiffUIR non-trivial. Firstly, we equip the model with\nstrong condition guidance to obtain accurate generation direction of diffusion\nmodel (selective). More importantly, DiffUIR integrates a flexible shared\ndistribution term (SDT) into the diffusion algorithm elegantly and naturally,\nwhich gradually maps different distributions into a shared one. In the reverse\nprocess, combined with SDT and strong condition guidance, DiffUIR iteratively\nguides the shared distribution to the task-specific distribution with high\nimage quality (hourglass).", + "In the reverse\nprocess, combined with SDT and strong condition guidance, DiffUIR iteratively\nguides the shared distribution to the task-specific distribution with high\nimage quality (hourglass). Without bells and whistles, by only modifying the\nmapping strategy, we achieve state-of-the-art performance on five image\nrestoration tasks, 22 benchmarks in the universal setting and zero-shot\ngeneralization setting. Surprisingly, by only using a lightweight model (only\n0.89M), we could achieve outstanding performance. The source code and\npre-trained models are available at https://github.com/iSEE-Laboratory/DiffUIR", + "Vision-language models (VLMs) pre-trained on web-scale datasets have\ndemonstrated remarkable capabilities on downstream tasks when fine-tuned with\nminimal data. However, many VLMs rely on proprietary data and are not\nopen-source, which restricts the use of white-box approaches for fine-tuning.\nAs such, we aim to develop a black-box approach to optimize VLMs through\nnatural language prompts, thereby avoiding the need to access model parameters,\nfeature embeddings, or even output logits. We propose employing chat-based LLMs\nto search for the best text prompt for VLMs. 
Specifically, we adopt an\nautomatic hill-climbing procedure that converges to an effective prompt by\nevaluating the performance of current prompts and asking LLMs to refine them\nbased on textual feedback, all within a conversational process without\nhuman-in-the-loop. In a challenging 1-shot image classification setup, our\nsimple approach surpasses the white-box continuous prompting method (CoOp) by\nan average of 1.5% across 11 datasets including ImageNet. Our approach also\noutperforms both human-engineered and LLM-generated prompts.", + "In a challenging 1-shot image classification setup, our\nsimple approach surpasses the white-box continuous prompting method (CoOp) by\nan average of 1.5% across 11 datasets including ImageNet. Our approach also\noutperforms both human-engineered and LLM-generated prompts. We highlight the\nadvantage of conversational feedback that incorporates both positive and\nnegative prompts, suggesting that LLMs can utilize the implicit gradient\ndirection in textual feedback for a more efficient search. In addition, we find\nthat the text prompts generated through our strategy are not only more\ninterpretable but also transfer well across different VLM architectures in a\nblack-box manner. Lastly, we apply our framework to optimize the\nstate-of-the-art black-box VLM (DALL-E 3) for text-to-image generation, prompt\ninversion, and personalization.", + "Large Vision-Language Models (LVLMs) have advanced considerably, intertwining\nvisual recognition and language understanding to generate content that is not\nonly coherent but also contextually attuned. Despite their success, LVLMs still\nsuffer from the issue of object hallucinations, where models generate plausible\nyet incorrect outputs that include objects that do not exist in the images. To\nmitigate this issue, we introduce Visual Contrastive Decoding (VCD), a simple\nand training-free method that contrasts output distributions derived from\noriginal and distorted visual inputs. The proposed VCD effectively reduces the\nover-reliance on statistical bias and unimodal priors, two essential causes of\nobject hallucinations. This adjustment ensures the generated content is closely\ngrounded to visual inputs, resulting in contextually accurate outputs. Our\nexperiments show that VCD, without either additional training or the usage of\nexternal tools, significantly mitigates the object hallucination issue across\ndifferent LVLM families. Beyond mitigating object hallucinations, VCD also\nexcels in general LVLM benchmarks, highlighting its wide-ranging applicability.", + "Generative object compositing emerges as a promising new avenue for\ncompositional image editing. However, the requirement of object identity\npreservation poses a significant challenge, limiting practical usage of most\nexisting methods. In response, this paper introduces IMPRINT, a novel\ndiffusion-based generative model trained with a two-stage learning framework\nthat decouples learning of identity preservation from that of compositing. The\nfirst stage is targeted for context-agnostic, identity-preserving pretraining\nof the object encoder, enabling the encoder to learn an embedding that is both\nview-invariant and conducive to enhanced detail preservation. The subsequent\nstage leverages this representation to learn seamless harmonization of the\nobject composited to the background. In addition, IMPRINT incorporates a\nshape-guidance mechanism offering user-directed control over the compositing\nprocess. 
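The automatic hill-climbing prompt search described in the black-box VLM abstract above lends itself to a compact sketch. The following Python sketch is illustrative only, not the authors' implementation; `evaluate_prompt` (few-shot accuracy of a candidate prompt under the black-box VLM) and `chat_llm` (one round of a chat LLM conversation) are hypothetical, user-supplied callables.

```python
# Illustrative sketch of a conversational hill-climbing prompt search
# (not the paper's code). `evaluate_prompt` and `chat_llm` are assumed,
# user-supplied callables: the former scores a prompt on a small labeled
# split, the latter returns a chat LLM's reply to a single message.

def hill_climb_prompt(seed_prompts, evaluate_prompt, chat_llm, iterations=20):
    history = [(p, evaluate_prompt(p)) for p in seed_prompts]  # scored prompt pool
    for _ in range(iterations):
        history.sort(key=lambda item: item[1], reverse=True)
        feedback = "\n".join(f"{score:.1f}%: {prompt}" for prompt, score in history[:10])
        reply = chat_llm(
            "These prompts for an image classifier scored the listed accuracies:\n"
            + feedback
            + "\nWrite one new prompt likely to score higher. Reply with the prompt only."
        )
        candidate = reply.strip()
        history.append((candidate, evaluate_prompt(candidate)))  # textual feedback -> new point
    return max(history, key=lambda item: item[1])  # best (prompt, score) found
```

Showing the model both high- and low-scoring prompts, as in the feedback string above, is one simple way to expose the "implicit gradient direction" the abstract refers to.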
Extensive experiments demonstrate that IMPRINT significantly\noutperforms existing methods and various baselines on identity preservation and\ncomposition quality.", + "Audio-visual segmentation (AVS) aims to segment the sounding objects in video\nframes. Although great progress has been witnessed, we experimentally reveal\nthat current methods reach marginal performance gain within the use of the\nunlabeled frames, leading to the underutilization issue. To fully explore the\npotential of the unlabeled frames for AVS, we explicitly divide them into two\ncategories based on their temporal characteristics, i.e., neighboring frame\n(NF) and distant frame (DF). NFs, temporally adjacent to the labeled frame,\noften contain rich motion information that assists in the accurate localization\nof sounding objects. Contrary to NFs, DFs have long temporal distances from the\nlabeled frame, which share semantic-similar objects with appearance variations.\nConsidering their unique characteristics, we propose a versatile framework that\neffectively leverages them to tackle AVS. Specifically, for NFs, we exploit the\nmotion cues as the dynamic guidance to improve the objectness localization.\nBesides, we exploit the semantic cues in DFs by treating them as valid\naugmentations to the labeled frames, which are then used to enrich data\ndiversity in a self-training manner.", + "Besides, we exploit the semantic cues in DFs by treating them as valid\naugmentations to the labeled frames, which are then used to enrich data\ndiversity in a self-training manner. Extensive experimental results demonstrate\nthe versatility and superiority of our method, unleashing the power of the\nabundant unlabeled frames.", + "This paper presents DriveTrack, a new benchmark and data generation framework\nfor long-range keypoint tracking in real-world videos. DriveTrack is motivated\nby the observation that the accuracy of state-of-the-art trackers depends\nstrongly on visual attributes around the selected keypoints, such as texture\nand lighting. The problem is that these artifacts are especially pronounced in\nreal-world videos, but these trackers are unable to train on such scenes due to\na dearth of annotations. DriveTrack bridges this gap by building a framework to\nautomatically annotate point tracks on autonomous driving datasets. We release\na dataset consisting of 1 billion point tracks across 24 hours of video, which\nis seven orders of magnitude greater than prior real-world benchmarks and on\npar with the scale of synthetic benchmarks. DriveTrack unlocks new use cases\nfor point tracking in real-world videos. First, we show that fine-tuning\nkeypoint trackers on DriveTrack improves accuracy on real-world scenes by up to\n7%. Second, we analyze the sensitivity of trackers to visual artifacts in real\nscenes and motivate the idea of running assistive keypoint selectors alongside\ntrackers.", + "Infrared physical adversarial examples are of great significance for studying\nthe security of infrared AI systems that are widely used in our lives such as\nautonomous driving. Previous infrared physical attacks mainly focused on 2D\ninfrared pedestrian detection which may not fully manifest its destructiveness\nto AI systems. In this work, we propose a physical attack method against\ninfrared detectors based on 3D modeling, which is applied to a real car. The\ngoal is to design a set of infrared adversarial stickers to make cars invisible\nto infrared detectors at various viewing angles, distances, and scenes. 
We\nbuild a 3D infrared car model with real infrared characteristics and propose an\ninfrared adversarial pattern generation method based on 3D mesh shadow. We\npropose a 3D control points-based mesh smoothing algorithm and use a set of\nsmoothness loss functions to enhance the smoothness of adversarial meshes and\nfacilitate the sticker implementation. Besides, we designed the aluminum\nstickers and conducted physical experiments on two real Mercedes-Benz A200L\ncars. Our adversarial stickers hid the cars from Faster RCNN, an object\ndetector, at various viewing angles, distances, and scenes.", + "Besides, we designed the aluminum\nstickers and conducted physical experiments on two real Mercedes-Benz A200L\ncars. Our adversarial stickers hid the cars from Faster RCNN, an object\ndetector, at various viewing angles, distances, and scenes. The attack success\nrate (ASR) was 91.49% for real cars. In comparison, the ASRs of random stickers\nand no sticker were only 6.21% and 0.66%, respectively. In addition, the ASRs\nof the designed stickers against six unseen object detectors such as YOLOv3 and\nDeformable DETR were between 73.35% and 95.80%, showing good transferability of the\nattack performance across detectors.", + "Recent works on text-to-3D generation show that using only 2D diffusion\nsupervision for 3D generation tends to produce results with inconsistent\nappearances (e.g., faces on the back view) and inaccurate shapes (e.g., animals\nwith extra legs). Existing methods mainly address this issue by retraining\ndiffusion models with images rendered from 3D data to ensure multi-view\nconsistency while struggling to balance 2D generation quality with 3D\nconsistency. In this paper, we present a new framework, Sculpt3D, that equips the\ncurrent pipeline with explicit injection of 3D priors from retrieved reference\nobjects without re-training the 2D diffusion model. Specifically, we\ndemonstrate that high-quality and diverse 3D geometry can be guaranteed by\nkeypoint supervision through a sparse ray sampling approach. Moreover, to\nensure accurate appearances of different views, we further modulate the output\nof the 2D diffusion model to the correct patterns of the template views without\naltering the generated object's style.", + "Moreover, to\nensure accurate appearances of different views, we further modulate the output\nof the 2D diffusion model to the correct patterns of the template views without\naltering the generated object's style. These two decoupled designs effectively\nharness 3D information from reference objects to generate 3D objects while\npreserving the generation quality of the 2D diffusion model. Extensive\nexperiments show our method can largely improve the multi-view consistency\nwhile retaining fidelity and diversity. Our project page is available at:\nhttps://stellarcheng.github.io/Sculpt3D/.", + "Estimating the 3D structure of the human body from natural scenes is a\nfundamental aspect of visual perception. 3D human pose estimation is a vital\nstep in advancing fields like AIGC and human-robot interaction, serving as a\ncrucial technique for understanding and interacting with human actions in\nreal-world settings. However, the current datasets, often collected under\nsingle laboratory conditions using complex motion capture equipment and\nunvarying backgrounds, are insufficient. The absence of datasets on variable\nconditions is stalling the progress of this crucial task.
To facilitate the\ndevelopment of 3D pose estimation, we present FreeMan, the first large-scale,\nmulti-view dataset collected under real-world conditions. FreeMan was\ncaptured by synchronizing 8 smartphones across diverse scenarios. It comprises\n11M frames from 8000 sequences, viewed from different perspectives. These\nsequences cover 40 subjects across 10 different scenarios, each with varying\nlighting conditions. We have also established a semi-automated pipeline\ncontaining error detection to reduce the workload of manual checks and ensure\nprecise annotation.", + "These\nsequences cover 40 subjects across 10 different scenarios, each with varying\nlighting conditions. We have also established a semi-automated pipeline\ncontaining error detection to reduce the workload of manual checks and ensure\nprecise annotation. We provide comprehensive evaluation baselines for a range\nof tasks, underlining the significant challenges posed by FreeMan. Further\nevaluations of standard indoor/outdoor human sensing datasets reveal that\nFreeMan offers robust representation transferability in real and complex\nscenes. Code and data are available at https://wangjiongw.github.io/freeman.", + "Model Inversion (MI) attacks aim to reconstruct private training data by\nabusing access to machine learning models. Contemporary MI attacks have\nachieved impressive attack performance, posing serious threats to privacy.\nMeanwhile, all existing MI defense methods rely on regularization that is in\ndirect conflict with the training objective, resulting in noticeable\ndegradation in model utility. In this work, we take a different perspective,\nand propose a novel and simple Transfer Learning-based Defense against Model\nInversion (TL-DMI) to render MI-robust models. Particularly, by leveraging TL,\nwe limit the number of layers encoding sensitive information from the private\ntraining dataset, thereby degrading the performance of MI attacks. We conduct an\nanalysis using Fisher Information to justify our method. Our defense is\nremarkably simple to implement. Without bells and whistles, we show in\nextensive experiments that TL-DMI achieves state-of-the-art (SOTA) MI\nrobustness. Our code, pre-trained models, demo and inverted data are available\nat: https://hosytuyen.github.io/projects/TL-DMI", + "Machine learning models struggle with generalization when encountering\nout-of-distribution (OOD) samples with unexpected distribution shifts. For\nvision tasks, recent studies have shown that test-time adaptation employing\ndiffusion models can achieve state-of-the-art accuracy improvements on OOD\nsamples by generating new samples that align with the model's domain without\nthe need to modify the model's weights. Unfortunately, those studies have\nprimarily focused on pixel-level corruptions, thereby lacking the\ngeneralization to adapt to a broader range of OOD types. We introduce\nGeneralized Diffusion Adaptation (GDA), a novel diffusion-based test-time\nadaptation method robust against diverse OOD types. Specifically, GDA\niteratively guides the diffusion by applying a marginal entropy loss derived\nfrom the model, in conjunction with style and content preservation losses\nduring the reverse sampling process.
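The guidance signal described in the GDA abstract above (a marginal entropy loss combined with preservation terms, applied during reverse sampling) can be sketched roughly as follows. This is a hedged sketch, not the released GDA code: the style-preservation term is omitted, `denoiser(x_t, t)` is assumed to return an estimate of the clean image, and the caller is expected to fold the returned gradient into its own reverse-sampling update.

```python
import torch

def guidance_gradient(x_t, t, denoiser, classifier, x_ref, w_content=1.0):
    # Sketch only: gradient of (marginal entropy + content preservation) w.r.t.
    # the current noisy sample x_t. `denoiser(x_t, t)` is assumed to return an
    # estimate of the clean image; `x_ref` is the original (corrupted) input.
    x_t = x_t.detach().requires_grad_(True)
    x0_hat = denoiser(x_t, t)                                   # predicted clean image
    probs = classifier(x0_hat).softmax(dim=-1).mean(dim=0)      # marginal prediction over the batch
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()     # marginal entropy loss
    content = (x0_hat - x_ref).pow(2).mean()                    # crude content-preservation term
    grad, = torch.autograd.grad(entropy + w_content * content, x_t)
    return grad  # a sampler would subtract a scaled version of this from its update
```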
In other words, GDA considers the model's\noutput behavior with the semantic information of the samples as a whole, which\ncan reduce ambiguity in downstream tasks during the generation process.\nEvaluation across various popular model architectures and OOD benchmarks shows\nthat GDA consistently outperforms prior work on diffusion-driven adaptation.", + "In other words, GDA considers the model's\noutput behavior with the semantic information of the samples as a whole, which\ncan reduce ambiguity in downstream tasks during the generation process.\nEvaluation across various popular model architectures and OOD benchmarks shows\nthat GDA consistently outperforms prior work on diffusion-driven adaptation.\nNotably, it achieves the highest classification accuracy improvements, ranging\nfrom 4.4\\% to 5.02\\% on ImageNet-C and 2.5\\% to 7.4\\% on Rendition, Sketch, and\nStylized benchmarks. This performance highlights GDA's generalization to a\nbroader range of OOD benchmarks.", + "Gestures play a key role in human communication. Recent methods for co-speech\ngesture generation, while managing to generate beat-aligned motions, struggle\ngenerating gestures that are semantically aligned with the utterance. Compared\nto beat gestures that align naturally to the audio signal, semantically\ncoherent gestures require modeling the complex interactions between the\nlanguage and human motion, and can be controlled by focusing on certain words.\nTherefore, we present ConvoFusion, a diffusion-based approach for multi-modal\ngesture synthesis, which can not only generate gestures based on multi-modal\nspeech inputs, but can also facilitate controllability in gesture synthesis.\nOur method proposes two guidance objectives that allow the users to modulate\nthe impact of different conditioning modalities (e.g. audio vs text) as well as\nto choose certain words to be emphasized during gesturing. Our method is\nversatile in that it can be trained either for generating monologue gestures or\neven the conversational gestures. To further advance the research on\nmulti-party interactive gestures, the DnD Group Gesture dataset is released,\nwhich contains 6 hours of gesture data showing 5 people interacting with one\nanother.", + "To further advance the research on\nmulti-party interactive gestures, the DnD Group Gesture dataset is released,\nwhich contains 6 hours of gesture data showing 5 people interacting with one\nanother. We compare our method with several recent works and demonstrate\neffectiveness of our method on a variety of tasks. We urge the reader to watch\nour supplementary video at our website.", + "We study the problem of single-image zero-shot 3D shape reconstruction.\nRecent works learn zero-shot shape reconstruction through generative modeling\nof 3D assets, but these models are computationally expensive at train and\ninference time. In contrast, the traditional approach to this problem is\nregression-based, where deterministic models are trained to directly regress\nthe object shape. Such regression methods possess much higher computational\nefficiency than generative methods. This raises a natural question: is\ngenerative modeling necessary for high performance, or conversely, are\nregression-based approaches still competitive? To answer this, we design a\nstrong regression-based model, called ZeroShape, based on the converging\nfindings in this field and a novel insight. 
We also curate a large real-world\nevaluation benchmark, with objects from three different real-world 3D datasets.\nThis evaluation benchmark is more diverse and an order of magnitude larger than\nwhat prior works use to quantitatively evaluate their models, aiming at\nreducing the evaluation variance in our field. We show that ZeroShape not only\nachieves superior performance over state-of-the-art methods, but also\ndemonstrates significantly higher computational and data efficiency.", + "Analyzing and forecasting trajectories of agents like pedestrians and cars in\ncomplex scenes has become more and more significant in many intelligent systems\nand applications. The diversity and uncertainty in socially interactive\nbehaviors among a rich variety of agents make this task more challenging than\nother deterministic computer vision tasks. Researchers have made a lot of\nefforts to quantify the effects of these interactions on future trajectories\nthrough different mathematical models and network structures, but this problem\nhas not been well solved. Inspired by marine animals that localize the\npositions of their companions underwater through echoes, we build a new\nanglebased trainable social interaction representation, named SocialCircle, for\ncontinuously reflecting the context of social interactions at different angular\norientations relative to the target agent. We validate the effect of the\nproposed SocialCircle by training it along with several newly released\ntrajectory prediction models, and experiments show that the SocialCircle not\nonly quantitatively improves the prediction performance, but also qualitatively\nhelps better simulate social interactions when forecasting pedestrian\ntrajectories in a way that is consistent with human intuitions.", + "Implicit neural representations (INRs) have emerged as a promising approach\nfor video storage and processing, showing remarkable versatility across various\nvideo tasks. However, existing methods often fail to fully leverage their\nrepresentation capabilities, primarily due to inadequate alignment of\nintermediate features during target frame decoding. This paper introduces a\nuniversal boosting framework for current implicit video representation\napproaches. Specifically, we utilize a conditional decoder with a\ntemporal-aware affine transform module, which uses the frame index as a prior\ncondition to effectively align intermediate features with target frames.\nBesides, we introduce a sinusoidal NeRV-like block to generate diverse\nintermediate features and achieve a more balanced parameter distribution,\nthereby enhancing the model's capacity. With a high-frequency\ninformation-preserving reconstruction loss, our approach successfully boosts\nmultiple baseline INRs in the reconstruction quality and convergence speed for\nvideo regression, and exhibits superior inpainting and interpolation results.\nFurther, we integrate a consistent entropy minimization technique and develop\nvideo codecs based on these boosted INRs. Experiments on the UVG dataset\nconfirm that our enhanced codecs significantly outperform baseline INRs and\noffer competitive rate-distortion performance compared to traditional and\nlearning-based codecs.", + "Further, we integrate a consistent entropy minimization technique and develop\nvideo codecs based on these boosted INRs. 
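To make the temporal-aware affine transform from the implicit-video abstract above concrete, here is a minimal FiLM-style PyTorch module conditioned on a normalized frame index. It is an illustrative sketch of the general mechanism, not the authors' module; the conditioning MLP and its sizes are assumptions.

```python
import torch
import torch.nn as nn

class TemporalAffine(nn.Module):
    # Sketch: per-channel scale and shift predicted from the frame index,
    # applied to intermediate decoder features (not the authors' implementation).
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, 2 * channels),  # -> concatenated (scale, shift)
        )

    def forward(self, feat, t_norm):
        # feat: (B, C, H, W) intermediate features; t_norm: (B,) frame index scaled to [0, 1]
        scale, shift = self.mlp(t_norm.unsqueeze(-1)).chunk(2, dim=-1)
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]
```

Usage would look like `TemporalAffine(256)(feat, idx / (num_frames - 1))`, aligning the intermediate features with the target frame before further decoding.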
Experiments on the UVG dataset\nconfirm that our enhanced codecs significantly outperform baseline INRs and\noffer competitive rate-distortion performance compared to traditional and\nlearning-based codecs. Code is available at\nhttps://github.com/Xinjie-Q/Boosting-NeRV.", + "We present a framework for generating full-bodied photorealistic avatars that\ngesture according to the conversational dynamics of a dyadic interaction. Given\nspeech audio, we output multiple possibilities of gestural motion for an\nindividual, including face, body, and hands. The key behind our method is in\ncombining the benefits of sample diversity from vector quantization with the\nhigh-frequency details obtained through diffusion to generate more dynamic,\nexpressive motion. We visualize the generated motion using highly\nphotorealistic avatars that can express crucial nuances in gestures (e.g.\nsneers and smirks). To facilitate this line of research, we introduce a\nfirst-of-its-kind multi-view conversational dataset that allows for\nphotorealistic reconstruction. Experiments show our model generates appropriate\nand diverse gestures, outperforming both diffusion- and VQ-only methods.\nFurthermore, our perceptual evaluation highlights the importance of\nphotorealism (vs. meshes) in accurately assessing subtle motion details in\nconversational gestures. Code and dataset available online.", + "In this work, we explore a novel task of generating human grasps based on\nsingle-view scene point clouds, which more accurately mirrors the typical\nreal-world situation of observing objects from a single viewpoint. Due to the\nincompleteness of object point clouds and the presence of numerous scene\npoints, the generated hand is prone to penetrating into the invisible parts of\nthe object and the model is easily affected by scene points. Thus, we introduce\nS2HGrasp, a framework composed of two key modules: the Global Perception module\nthat globally perceives partial object point clouds, and the DiffuGrasp module\ndesigned to generate high-quality human grasps based on complex inputs that\ninclude scene points. Additionally, we introduce S2HGD dataset, which comprises\napproximately 99,000 single-object single-view scene point clouds of 1,668\nunique objects, each annotated with one human grasp. Our extensive experiments\ndemonstrate that S2HGrasp can not only generate natural human grasps regardless\nof scene points, but also effectively prevent penetration between the hand and\ninvisible parts of the object. Moreover, our model showcases strong\ngeneralization capability when applied to unseen objects.", + "Our extensive experiments\ndemonstrate that S2HGrasp can not only generate natural human grasps regardless\nof scene points, but also effectively prevent penetration between the hand and\ninvisible parts of the object. Moreover, our model showcases strong\ngeneralization capability when applied to unseen objects. Our code and dataset\nare available at https://github.com/iSEE-Laboratory/S2HGrasp.", + "Diffusion models generate high-quality images but require dozens of forward\npasses. We introduce Distribution Matching Distillation (DMD), a procedure to\ntransform a diffusion model into a one-step image generator with minimal impact\non image quality. 
We enforce the one-step image generator to match the diffusion\nmodel at the distribution level, by minimizing an approximate KL divergence whose\ngradient can be expressed as the difference between two score functions, one of\nthe target distribution and the other of the synthetic distribution being\nproduced by our one-step generator. The score functions are parameterized as\ntwo diffusion models trained separately on each distribution. Combined with a\nsimple regression loss matching the large-scale structure of the multi-step\ndiffusion outputs, our method outperforms all published few-step diffusion\napproaches, reaching 2.62 FID on ImageNet 64x64 and 11.49 FID on zero-shot\nCOCO-30k, comparable to Stable Diffusion but orders of magnitude faster.\nUtilizing FP16 inference, our model generates images at 20 FPS on modern\nhardware.", + "This paper, for the first time, explores text-to-image diffusion models for\nZero-Shot Sketch-based Image Retrieval (ZS-SBIR). We highlight a pivotal\ndiscovery: the capacity of text-to-image diffusion models to seamlessly bridge\nthe gap between sketches and photos. This proficiency is underpinned by their\nrobust cross-modal capabilities and shape bias, findings that are substantiated\nthrough our pilot studies. In order to harness pre-trained diffusion models\neffectively, we introduce a straightforward yet powerful strategy focused on\ntwo key aspects: selecting optimal feature layers and utilising visual and\ntextual prompts. For the former, we identify which layers are most enriched\nwith information and are best suited for the specific retrieval requirements\n(category-level or fine-grained). Then we employ visual and textual prompts to\nguide the model's feature extraction process, enabling it to generate more\ndiscriminative and contextually relevant cross-modal representations. Extensive\nexperiments on several benchmark datasets validate significant performance\nimprovements.", + "Recently, numerous approaches have achieved notable success in compressed\nvideo quality enhancement (VQE). However, these methods usually ignore the\nutilization of valuable coding priors inherently embedded in compressed videos,\nsuch as motion vectors and residual frames, which carry abundant temporal and\nspatial information. To remedy this problem, we propose the Coding\nPriors-Guided Aggregation (CPGA) network to utilize temporal and spatial\ninformation from coding priors. The CPGA mainly consists of an inter-frame\ntemporal aggregation (ITA) module and a multi-scale non-local aggregation (MNA)\nmodule. Specifically, the ITA module aggregates temporal information from\nconsecutive frames and coding priors, while the MNA module globally captures\nspatial information guided by residual frames. In addition, to facilitate\nresearch in the VQE task, we newly construct the Video Coding Priors (VCP) dataset,\ncomprising 300 videos with various coding priors extracted from corresponding\nbitstreams. It remedies the lack of coding information in previous datasets.\nExperimental results demonstrate the superiority of our method\ncompared to existing state-of-the-art methods. The code and dataset will be\nreleased at https://github.com/CPGA/CPGA.git.", + "We present MicroCinema, a straightforward yet effective framework for\nhigh-quality and coherent text-to-video generation.
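Written out schematically, the DMD gradient described above takes the form below (generic notation, not the paper's exact formulation), where $G_\theta$ is the one-step generator, $z$ its input noise, and $s_{\mathrm{real}}$, $s_{\mathrm{fake}}$ are score functions of the target and synthetic distributions, each parameterized by a separately trained diffusion model:

```latex
\[
\nabla_\theta \, D_{\mathrm{KL}}\!\left(p_{\mathrm{fake}} \,\|\, p_{\mathrm{real}}\right)
\;\approx\;
\mathbb{E}_{z}\!\left[\big(s_{\mathrm{fake}}(G_\theta(z)) - s_{\mathrm{real}}(G_\theta(z))\big)\,
\frac{\partial G_\theta(z)}{\partial \theta}\right]
\]
```

The simple regression loss mentioned in the abstract is added on top of this distribution-matching term to pin down large-scale structure.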
Unlike existing approaches\nthat align text prompts with video directly, MicroCinema introduces a\nDivide-and-Conquer strategy which divides the text-to-video into a two-stage\nprocess: text-to-image generation and image\\&text-to-video generation. This\nstrategy offers two significant advantages. a) It allows us to take full\nadvantage of the recent advances in text-to-image models, such as Stable\nDiffusion, Midjourney, and DALLE, to generate photorealistic and highly\ndetailed images. b) Leveraging the generated image, the model can allocate less\nfocus to fine-grained appearance details, prioritizing the efficient learning\nof motion dynamics. To implement this strategy effectively, we introduce two\ncore designs. First, we propose the Appearance Injection Network, enhancing the\npreservation of the appearance of the given image. Second, we introduce the\nAppearance Noise Prior, a novel mechanism aimed at maintaining the capabilities\nof pre-trained 2D diffusion models.", + "First, we propose the Appearance Injection Network, enhancing the\npreservation of the appearance of the given image. Second, we introduce the\nAppearance Noise Prior, a novel mechanism aimed at maintaining the capabilities\nof pre-trained 2D diffusion models. These design elements empower MicroCinema\nto generate high-quality videos with precise motion, guided by the provided\ntext prompts. Extensive experiments demonstrate the superiority of the proposed\nframework. Concretely, MicroCinema achieves SOTA zero-shot FVD of 342.86 on\nUCF-101 and 377.40 on MSR-VTT. See\nhttps://wangyanhui666.github.io/MicroCinema.github.io/ for video samples.", + "Multi-instance point cloud registration estimates the poses of multiple\ninstances of a model point cloud in a scene point cloud. Extracting accurate\npoint correspondence is to the center of the problem. Existing approaches\nusually treat the scene point cloud as a whole, overlooking the separation of\ninstances. Therefore, point features could be easily polluted by other points\nfrom the background or different instances, leading to inaccurate\ncorrespondences oblivious to separate instances, especially in cluttered\nscenes. In this work, we propose MIRETR, Multi-Instance REgistration\nTRansformer, a coarse-to-fine approach to the extraction of instance-aware\ncorrespondences. At the coarse level, it jointly learns instance-aware\nsuperpoint features and predicts per-instance masks. With instance masks, the\ninfluence from outside of the instance being concerned is minimized, such that\nhighly reliable superpoint correspondences can be extracted. The superpoint\ncorrespondences are then extended to instance candidates at the fine level\naccording to the instance masks. At last, an efficient candidate selection and\nrefinement algorithm is devised to obtain the final registrations.", + "The superpoint\ncorrespondences are then extended to instance candidates at the fine level\naccording to the instance masks. At last, an efficient candidate selection and\nrefinement algorithm is devised to obtain the final registrations. Extensive\nexperiments on three public benchmarks demonstrate the efficacy of our\napproach. In particular, MIRETR outperforms the state of the arts by 16.6\npoints on F1 score on the challenging ROBI benchmark. 
Code and models are\navailable at https://github.com/zhiyuanYU134/MIRETR.", + "Denoising diffusion probabilistic models for image inpainting aim to add\nnoise to the image texture during the forward process and recover masked\nregions from the unmasked texture via the reverse denoising process.\nDespite generating meaningful semantics, existing methods suffer from the\nsemantic discrepancy between masked and unmasked regions, since the\nsemantically dense unmasked texture fails to be completely degraded while the\nmasked regions turn into pure noise during the diffusion process, leading to the\nlarge discrepancy between them. In this paper, we aim to answer how unmasked\nsemantics guide the texture denoising process, together with how to tackle the\nsemantic discrepancy, to facilitate consistent and meaningful semantics\ngeneration.", + "In this paper, we aim to answer how unmasked\nsemantics guide the texture denoising process, together with how to tackle the\nsemantic discrepancy, to facilitate consistent and meaningful semantics\ngeneration. To this end, we propose a novel structure-guided diffusion model\nnamed StrDiffusion, to reformulate the conventional texture denoising process\nunder structure guidance to derive a simplified denoising objective for image\ninpainting, while revealing: 1) the semantically sparse structure is beneficial\nto tackle the semantic discrepancy in the early stage, while dense texture generates\nreasonable semantics in the late stage; 2) the semantics from unmasked regions\nessentially offer the time-dependent structure guidance for the texture\ndenoising process, benefiting from the time-dependent sparsity of the structure\nsemantics. For the denoising process, a structure-guided neural network is\ntrained to estimate the simplified denoising objective by exploiting the\nconsistency of the denoised structure between masked and unmasked regions.\nBesides, we devise an adaptive resampling strategy as a formal criterion for\nwhether the structure is competent to guide the texture denoising process, while\nregulating their semantic correlations.", + "Besides, we devise an adaptive resampling strategy as a formal criterion for\nwhether the structure is competent to guide the texture denoising process, while\nregulating their semantic correlations. Extensive experiments validate the merits\nof StrDiffusion over the state of the art. Our code is available at\nhttps://github.com/htyjers/StrDiffusion.", + "Understanding social interactions involving both verbal and non-verbal cues\nis essential for effectively interpreting social situations. However, most\nprior works on multimodal social cues focus predominantly on single-person\nbehaviors or rely on holistic visual representations that are not aligned to\nutterances in multi-party environments. Consequently, they are limited in\nmodeling the intricate dynamics of multi-party interactions. In this paper, we\nintroduce three new challenging tasks to model the fine-grained dynamics\nbetween multiple people: speaking target identification, pronoun coreference\nresolution, and mentioned player prediction. We contribute extensive data\nannotations to curate these new challenges in social deduction game settings.\nFurthermore, we propose a novel multimodal baseline that leverages densely\naligned language-visual representations by synchronizing visual features with\ntheir corresponding utterances. This facilitates concurrently capturing verbal\nand non-verbal cues pertinent to social reasoning.
Experiments demonstrate the\neffectiveness of the proposed approach with densely aligned multimodal\nrepresentations in modeling fine-grained social interactions. Project website:\nhttps://sangmin-git.github.io/projects/MMSI.", + "In recent decades, the vision community has witnessed remarkable progress in\nvisual recognition, partially owing to advancements in dataset benchmarks.\nNotably, the established COCO benchmark has propelled the development of modern\ndetection and segmentation systems. However, the COCO segmentation benchmark\nhas seen comparatively slow improvement over the last decade. Originally\nequipped with coarse polygon annotations for thing instances, it gradually\nincorporated coarse superpixel annotations for stuff regions, which were\nsubsequently heuristically amalgamated to yield panoptic segmentation\nannotations. These annotations, executed by different groups of raters, have\nresulted not only in coarse segmentation masks but also in inconsistencies\nbetween segmentation types. In this study, we undertake a comprehensive\nreevaluation of the COCO segmentation annotations. By enhancing the annotation\nquality and expanding the dataset to encompass 383K images with more than 5.18M\npanoptic masks, we introduce COCONut, the COCO Next Universal segmenTation\ndataset. COCONut harmonizes segmentation annotations across semantic, instance,\nand panoptic segmentation with meticulously crafted high-quality masks, and\nestablishes a robust benchmark for all segmentation tasks.", + "COCONut harmonizes segmentation annotations across semantic, instance,\nand panoptic segmentation with meticulously crafted high-quality masks, and\nestablishes a robust benchmark for all segmentation tasks. To our knowledge,\nCOCONut stands as the inaugural large-scale universal segmentation dataset,\nverified by human raters. We anticipate that the release of COCONut will\nsignificantly contribute to the community's ability to assess the progress of\nnovel neural networks.", + "A novel algorithm, called semantic line combination detector (SLCD), to find\nan optimal combination of semantic lines is proposed in this paper. It\nprocesses all lines in each line combination at once to assess the overall\nharmony of the lines. First, we generate various line combinations from\nreliable lines. Second, we estimate the score of each line combination and\ndetermine the best one. Experimental results demonstrate that the proposed SLCD\noutperforms existing semantic line detectors on various datasets. Moreover, it\nis shown that SLCD can be applied effectively to three vision tasks of\nvanishing point detection, symmetry axis detection, and composition-based image\nretrieval. Our codes are available at https://github.com/Jinwon-Ko/SLCD.", + "Single-domain generalization aims to learn a model from single source domain\ndata to achieve generalized performance on other unseen target domains.\nExisting works primarily focus on improving the generalization ability of\nstatic networks. However, static networks are unable to dynamically adapt to\nthe diverse variations in different image scenes, leading to limited\ngeneralization capability. Different scenes exhibit varying levels of\ncomplexity, and the complexity of images further varies significantly in\ncross-domain scenarios. In this paper, we propose a dynamic object-centric\nperception network based on prompt learning, aiming to adapt to the variations\nin image complexity. 
Specifically, we propose an object-centric gating module\nbased on prompt learning to focus attention on the object-centric features\nguided by the various scene prompts. Then, with the object-centric gating\nmasks, the dynamic selective module dynamically selects highly correlated\nfeature regions in both spatial and channel dimensions, enabling the model to\nadaptively perceive object-centric relevant features, thereby enhancing the\ngeneralization capability. Extensive experiments were conducted on\nsingle-domain generalization tasks in image classification and object\ndetection. The experimental results demonstrate that our approach outperforms\nstate-of-the-art methods, which validates the effectiveness and generality of\nour proposed method.", + "In the context of pose-invariant object recognition and retrieval, we\ndemonstrate that it is possible to achieve significant improvements in\nperformance if both the category-based and the object-identity-based embeddings\nare learned simultaneously during training. In hindsight, that sounds intuitive\nbecause learning about the categories is more fundamental than learning about\nthe individual objects that correspond to those categories. However, to the\nbest of our knowledge, no prior work in pose-invariant learning has demonstrated\nthis effect. This paper presents an attention-based dual-encoder architecture\nwith specially designed loss functions that optimize the inter- and intra-class\ndistances simultaneously in two different embedding spaces, one for the\ncategory embeddings and the other for the object-level embeddings. The loss\nfunctions we have proposed are pose-invariant ranking losses that are designed\nto minimize the intra-class distances and maximize the inter-class distances in\nthe dual representation spaces. We demonstrate the power of our approach with\nthree challenging multi-view datasets, ModelNet-40, ObjectPI, and FG3D.", + "We demonstrate the power of our approach with\nthree challenging multi-view datasets, ModelNet-40, ObjectPI, and FG3D. With\nour dual approach, for single-view object recognition, we outperform the\nprevious best by 20.0% on ModelNet40, 2.0% on ObjectPI, and 46.5% on FG3D. On\nthe other hand, for single-view object retrieval, we outperform the previous\nbest by 33.7% on ModelNet40, 18.8% on ObjectPI, and 56.9% on FG3D.", + "We present DRESS, a large vision language model (LVLM) that innovatively\nexploits Natural Language feedback (NLF) from Large Language Models to enhance\nits alignment and interactions by addressing two key limitations in the\nstate-of-the-art LVLMs. First, prior LVLMs generally rely only on the\ninstruction finetuning stage to enhance alignment with human preferences.\nWithout incorporating extra feedback, they are still prone to generating\nunhelpful, hallucinated, or harmful responses. Second, while the visual\ninstruction tuning data is generally structured in a multi-turn dialogue\nformat, the connections and dependencies among consecutive conversational turns\nare weak. This reduces the capacity for effective multi-turn interactions. To\ntackle these, we propose a novel categorization of the NLF into two key types:\ncritique and refinement. The critique NLF identifies the strengths and\nweaknesses of the responses and is used to align the LVLMs with human\npreferences.
The refinement NLF offers concrete suggestions for improvement and\nis adopted to improve the interaction ability of the LVLMs -- which focuses on\nthe LVLMs' ability to refine responses by incorporating feedback in multi-turn\ninteractions.", + "The refinement NLF offers concrete suggestions for improvement and\nis adopted to improve the interaction ability of the LVLMs -- which focuses on\nthe LVLMs' ability to refine responses by incorporating feedback in multi-turn\ninteractions. To address the non-differentiable nature of NLF, we generalize\nconditional reinforcement learning for training. Our experimental results\ndemonstrate that DRESS can generate more helpful (9.76%), honest (11.52%), and\nharmless (21.03%) responses, and more effectively learn from feedback during\nmulti-turn interactions compared to SOTA LVLMs.", + "In this work, we introduce two types of makeup prior models to extend\nexisting 3D face prior models: PCA-based and StyleGAN2-based priors. The\nPCA-based prior model is a linear model that is easy to construct and is\ncomputationally efficient. However, it retains only low-frequency information.\nConversely, the StyleGAN2-based model can represent high-frequency information\nat a relatively higher computational cost than the PCA-based model. Although\nthere is a trade-off between the two models, both are applicable to 3D facial\nmakeup estimation and related applications. By leveraging makeup prior models\nand designing a makeup consistency module, we effectively address the\nchallenges that previous methods faced in robustly estimating makeup,\nparticularly in the context of handling self-occluded faces. In experiments, we\ndemonstrate that our approach reduces computational costs by several orders of\nmagnitude, achieving speeds up to 180 times faster. In addition, by improving\nthe accuracy of the estimated makeup, we confirm that our methods are highly\nadvantageous for various 3D facial makeup applications such as 3D makeup face\nreconstruction, user-friendly makeup editing, makeup transfer, and\ninterpolation.", + "DETR-like methods have significantly increased detection performance in an\nend-to-end manner. Their mainstream two-stage frameworks perform dense\nself-attention and select a fraction of queries for sparse cross-attention,\nwhich is proven effective for improving performance but also introduces a heavy\ncomputational burden and high dependence on stable query selection. This paper\ndemonstrates that suboptimal two-stage selection strategies result in scale\nbias and redundancy due to the mismatch between selected queries and objects in\ntwo-stage initialization. To address these issues, we propose hierarchical\nsalience filtering refinement, which performs transformer encoding only on\nfiltered discriminative queries, for a better trade-off between computational\nefficiency and precision. The filtering process overcomes scale bias through a\nnovel scale-independent salience supervision. To compensate for the semantic\nmisalignment among queries, we introduce elaborate query refinement modules for\nstable two-stage initialization.
Based on above improvements, the proposed\nSalience DETR achieves significant improvements of +4.0% AP, +0.2% AP, +4.4% AP\non three challenging task-specific detection datasets, as well as 49.2% AP on\nCOCO 2017 with less FLOPs. The code is available at\nhttps://github.com/xiuqhou/Salience-DETR.", + "The rapid advancement of large language models (LLMs) has accelerated the\nemergence of in-context learning (ICL) as a cutting-edge approach in the\nnatural language processing domain. Recently, ICL has been employed in visual\nunderstanding tasks, such as semantic segmentation and image captioning,\nyielding promising results. However, existing visual ICL framework can not\nenable producing content across multiple modalities, which limits their\npotential usage scenarios. To address this issue, we present a new ICL\nframework for visual understanding with multi-modal output enabled. First, we\nquantize and embed both text and visual prompt into a unified representational\nspace, structured as interleaved in-context sequences. Then a decoder-only\nsparse transformer architecture is employed to perform generative modeling on\nthem, facilitating in-context learning. Thanks to this design, the model is\ncapable of handling in-context vision understanding tasks with multimodal\noutput in a unified pipeline.Experimental results demonstrate that our model\nachieves competitive performance compared with specialized models and previous\nICL baselines. Overall, our research takes a further step toward unified\nmultimodal in-context learning.", + "3D reconstruction methods such as Neural Radiance Fields (NeRFs) excel at\nrendering photorealistic novel views of complex scenes. However, recovering a\nhigh-quality NeRF typically requires tens to hundreds of input images,\nresulting in a time-consuming capture process. We present ReconFusion to\nreconstruct real-world scenes using only a few photos. Our approach leverages a\ndiffusion prior for novel view synthesis, trained on synthetic and multiview\ndatasets, which regularizes a NeRF-based 3D reconstruction pipeline at novel\ncamera poses beyond those captured by the set of input images. Our method\nsynthesizes realistic geometry and texture in underconstrained regions while\npreserving the appearance of observed regions. We perform an extensive\nevaluation across various real-world datasets, including forward-facing and\n360-degree scenes, demonstrating significant performance improvements over\nprevious few-view NeRF reconstruction approaches.", + "Multi-Instance Learning (MIL) has shown impressive performance for\nhistopathology whole slide image (WSI) analysis using bags or pseudo-bags. It\ninvolves instance sampling, feature representation, and decision-making.\nHowever, existing MIL-based technologies at least suffer from one or more of\nthe following problems: 1) requiring high storage and intensive pre-processing\nfor numerous instances (sampling); 2) potential over-fitting with limited\nknowledge to predict bag labels (feature representation); 3) pseudo-bag counts\nand prior biases affect model robustness and generalizability\n(decision-making). Inspired by clinical diagnostics, using the past sampling\ninstances can facilitate the final WSI analysis, but it is barely explored in\nprior technologies. 
To break free of these limitations, we integrate dynamic\ninstance sampling and reinforcement learning into a unified framework to\nimprove the instance selection and feature aggregation, forming a novel Dynamic\nPolicy Instance Selection (DPIS) scheme for better and more credible\ndecision-making. Specifically, the measurement of feature distance and a reward\nfunction are employed to boost continuous instance sampling.", + "Specifically, the measurement of feature distance and a reward\nfunction are employed to boost continuous instance sampling. To alleviate\nover-fitting, we explore the latent global relations among instances for more\nrobust and discriminative feature representation while establishing reward and\npunishment mechanisms to correct biases in pseudo-bags using contrastive\nlearning. These strategies form the final Dynamic Policy-Driven Adaptive\nMulti-Instance Learning (PAMIL) method for WSI tasks. Extensive experiments\nreveal that our PAMIL method outperforms the state of the art by 3.8\\% on\nCAMELYON16 and 4.4\\% on TCGA lung cancer datasets.", + "The exponential growth of large language models (LLMs) has opened up numerous\npossibilities for multimodal AGI systems. However, the progress in vision and\nvision-language foundation models, which are also critical elements of\nmulti-modal AGI, has not kept pace with LLMs. In this work, we design a\nlarge-scale vision-language foundation model (InternVL), which scales up the\nvision foundation model to 6 billion parameters and progressively aligns it\nwith the LLM, using web-scale image-text data from various sources. This model\ncan be broadly applied to and achieve state-of-the-art performance on 32\ngeneric visual-linguistic benchmarks, including visual perception tasks such as\nimage-level or pixel-level recognition, vision-language tasks such as zero-shot\nimage/video classification and zero-shot image/video-text retrieval, and can be\nlinked with LLMs to create multi-modal dialogue systems. It has powerful visual\ncapabilities and can be a good alternative to the ViT-22B. We hope that our\nresearch could contribute to the development of multi-modal large models. Code\nand models are available at https://github.com/OpenGVLab/InternVL.", + "We present Multi-View Attentive Contextualization (MvACon), a simple yet\neffective method for improving 2D-to-3D feature lifting in query-based\nmulti-view 3D (MV3D) object detection. Despite remarkable progress witnessed in\nthe field of query-based MV3D object detection, prior art often suffers from\neither the lack of exploiting high-resolution 2D features in dense\nattention-based lifting, due to high computational costs, or from\ninsufficiently dense grounding of 3D queries to multi-scale 2D features in\nsparse attention-based lifting. Our proposed MvACon hits two birds with one\nstone using a representationally dense yet computationally sparse attentive\nfeature contextualization scheme that is agnostic to specific 2D-to-3D feature\nlifting approaches. In experiments, the proposed MvACon is thoroughly tested on\nthe nuScenes benchmark, using both the BEVFormer and its recent 3D deformable\nattention (DFA3D) variant, as well as the PETR, showing consistent detection\nperformance improvement, especially in enhancing performance in location,\norientation, and velocity prediction. It is also tested on the Waymo-mini\nbenchmark using BEVFormer with similar improvement.
We qualitatively and\nquantitatively show that global cluster-based contexts effectively encode dense\nscene-level contexts for MV3D object detection. The promising results of our\nproposed MvACon reinforce the adage in computer vision -- ``(contextualized)\nfeature matters''.", + "Although neural radiance fields (NeRFs) have achieved triumphs in image novel\nview synthesis (NVS), LiDAR NVS remains largely unexplored. Previous LiDAR NVS\nmethods employ a simple shift from image NVS methods while ignoring the dynamic\nnature and the large-scale reconstruction problem of LiDAR point clouds. In\nlight of this, we propose LiDAR4D, a differentiable LiDAR-only framework for\nnovel space-time LiDAR view synthesis. In consideration of the sparsity and\nlarge-scale characteristics, we design a 4D hybrid representation combined with\nmulti-planar and grid features to achieve effective reconstruction in a\ncoarse-to-fine manner. Furthermore, we introduce geometric constraints derived\nfrom point clouds to improve temporal consistency. For the realistic synthesis\nof LiDAR point clouds, we incorporate the global optimization of ray-drop\nprobability to preserve cross-region patterns. Extensive experiments on\nKITTI-360 and NuScenes datasets demonstrate the superiority of our method in\naccomplishing geometry-aware and time-consistent dynamic reconstruction. Codes\nare available at https://github.com/ispc-lab/LiDAR4D.", + "Contents generated by recent advanced Text-to-Image (T2I) diffusion models\nare sometimes too imaginative for existing off-the-shelf dense predictors to\nestimate due to the immitigable domain gap. We introduce DMP, a pipeline\nutilizing pre-trained T2I models as a prior for dense prediction tasks. To\naddress the misalignment between deterministic prediction tasks and stochastic\nT2I models, we reformulate the diffusion process through a sequence of\ninterpolations, establishing a deterministic mapping between input RGB images\nand output prediction distributions. To preserve generalizability, we use\nlow-rank adaptation to fine-tune pre-trained models. Extensive experiments\nacross five tasks, including 3D property estimation, semantic segmentation, and\nintrinsic image decomposition, showcase the efficacy of the proposed method.\nDespite limited-domain training data, the approach yields faithful estimations\nfor arbitrary images, surpassing existing state-of-the-art algorithms.", + "Diffusion models trained on large-scale text-image datasets have demonstrated\na strong capability of controllable high-quality image generation from\narbitrary text prompts. However, the generation quality and generalization\nability of 3D diffusion models are hindered by the scarcity of high-quality and\nlarge-scale 3D datasets. In this paper, we present PI3D, a framework that fully\nleverages the pre-trained text-to-image diffusion models' ability to generate\nhigh-quality 3D shapes from text prompts in minutes. The core idea is to\nconnect the 2D and 3D domains by representing a 3D shape as a set of Pseudo RGB\nImages. We fine-tune an existing text-to-image diffusion model to produce such\npseudo-images using a small number of text-3D pairs. Surprisingly, we find that\nit can already generate meaningful and consistent 3D shapes given complex text\ndescriptions.
We further take the generated shapes as the starting point for a\nlightweight iterative refinement using score distillation sampling to achieve\nhigh-quality generation under a low budget.", + "Surprisingly, we find that\nit can already generate meaningful and consistent 3D shapes given complex text\ndescriptions. We further take the generated shapes as the starting point for a\nlightweight iterative refinement using score distillation sampling to achieve\nhigh-quality generation under a low budget. PI3D generates a single 3D shape\nfrom text in only 3 minutes and the quality is validated to outperform existing\n3D generative models by a large margin.", + "Customization techniques for text-to-image models have paved the way for a\nwide range of previously unattainable applications, enabling the generation of\nspecific concepts across diverse contexts and styles. While existing methods\nfacilitate high-fidelity customization for individual concepts or a limited,\npre-defined set of them, they fall short of achieving scalability, where a\nsingle model can seamlessly render countless concepts. In this paper, we\naddress a new problem called Modular Customization, with the goal of\nefficiently merging customized models that were fine-tuned independently for\nindividual concepts. This allows the merged model to jointly synthesize\nconcepts in one image without compromising fidelity or incurring any additional\ncomputational costs.\n To address this problem, we introduce Orthogonal Adaptation, a method\ndesigned to encourage the customized models, which do not have access to each\nother during fine-tuning, to have orthogonal residual weights. This ensures\nthat during inference time, the customized models can be summed with minimal\ninterference.\n Our proposed method is both simple and versatile, applicable to nearly all\noptimizable weights in the model architecture.", + "This ensures\nthat during inference time, the customized models can be summed with minimal\ninterference.\n Our proposed method is both simple and versatile, applicable to nearly all\noptimizable weights in the model architecture. Through an extensive set of\nquantitative and qualitative evaluations, our method consistently outperforms\nrelevant baselines in terms of efficiency and identity preservation,\ndemonstrating a significant leap toward scalable customization of diffusion\nmodels.", + "We introduce pixelSplat, a feed-forward model that learns to reconstruct 3D\nradiance fields parameterized by 3D Gaussian primitives from pairs of images.\nOur model features real-time and memory-efficient rendering for scalable\ntraining as well as fast 3D reconstruction at inference time. To overcome local\nminima inherent to sparse and locally supported representations, we predict a\ndense probability distribution over 3D and sample Gaussian means from that\nprobability distribution. We make this sampling operation differentiable via a\nreparameterization trick, allowing us to back-propagate gradients through the\nGaussian splatting representation. We benchmark our method on wide-baseline\nnovel view synthesis on the real-world RealEstate10k and ACID datasets, where\nwe outperform state-of-the-art light field transformers and accelerate\nrendering by 2.5 orders of magnitude while reconstructing an interpretable and\neditable 3D radiance field.", + "Video generation has witnessed significant advancements, yet evaluating these\nmodels remains a challenge. 
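A tiny sketch of the merging step implied by the Orthogonal Adaptation abstract above (an illustration of the idea, not the paper's procedure): independently fine-tuned residual weight updates are summed onto the shared base weights at inference, and the closer to orthogonal two residuals are, the less their sum perturbs either concept.

```python
import torch

def merge_customized_weights(base_weight, residuals):
    # Inference-time merge: base weights plus the sum of per-concept residual
    # updates (sketch of the idea described above, not the paper's code).
    return base_weight + sum(residuals)

def residual_interference(residual_a, residual_b):
    # Diagnostic: cosine similarity between two concepts' residual updates.
    # Values near zero (orthogonal residuals) mean summation causes minimal
    # cross-concept interference.
    a, b = residual_a.flatten(), residual_b.flatten()
    return (torch.dot(a, b) / (a.norm() * b.norm() + 1e-8)).item()
```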
A comprehensive evaluation benchmark for video\ngeneration is indispensable for two reasons: 1) Existing metrics do not fully\nalign with human perceptions; 2) An ideal evaluation system should provide\ninsights to inform future developments of video generation. To this end, we\npresent VBench, a comprehensive benchmark suite that dissects \"video generation\nquality\" into specific, hierarchical, and disentangled dimensions, each with\ntailored prompts and evaluation methods. VBench has three appealing properties:\n1) Comprehensive Dimensions: VBench comprises 16 dimensions in video generation\n(e.g., subject identity inconsistency, motion smoothness, temporal flickering,\nand spatial relationship, etc). The evaluation metrics with fine-grained levels\nreveal individual models' strengths and weaknesses. 2) Human Alignment: We also\nprovide a dataset of human preference annotations to validate our benchmarks'\nalignment with human perception, for each evaluation dimension respectively. 3)\nValuable Insights: We look into current models' ability across various\nevaluation dimensions, and various content types. We also investigate the gaps\nbetween video and image generation models.", + "3)\nValuable Insights: We look into current models' ability across various\nevaluation dimensions, and various content types. We also investigate the gaps\nbetween video and image generation models. We will open-source VBench,\nincluding all prompts, evaluation methods, generated videos, and human\npreference annotations, and also include more video generation models in VBench\nto drive forward the field of video generation.", + "We propose Diffusion Noise Optimization (DNO), a new method that effectively\nleverages existing motion diffusion models as motion priors for a wide range of\nmotion-related tasks. Instead of training a task-specific diffusion model for\neach new task, DNO operates by optimizing the diffusion latent noise of an\nexisting pre-trained text-to-motion model. Given the corresponding latent noise\nof a human motion, it propagates the gradient from the target criteria defined\non the motion space through the whole denoising process to update the diffusion\nlatent noise. As a result, DNO supports any use cases where criteria can be\ndefined as a function of motion. In particular, we show that, for motion\nediting and control, DNO outperforms existing methods in both achieving the\nobjective and preserving the motion content. DNO accommodates a diverse range\nof editing modes, including changing trajectory, pose, joint locations, or\navoiding newly added obstacles. In addition, DNO is effective in motion\ndenoising and completion, producing smooth and realistic motion from noisy and\npartial inputs. DNO achieves these results at inference time without the need\nfor model retraining, offering great versatility for any defined reward or loss\nfunction on the motion representation.", + "Deep learning has achieved remarkable progress in various applications,\nheightening the importance of safeguarding the intellectual property (IP) of\nwell-trained models. It entails not only authorizing usage but also ensuring\nthe deployment of models in authorized data domains, i.e., making models\nexclusive to certain target domains. 
Previous methods necessitate concurrent\naccess to source training data and target unauthorized data when performing IP\nprotection, making them risky and inefficient for decentralized private data.\nIn this paper, we target a practical setting where only a well-trained source\nmodel is available and investigate how we can realize IP protection. To achieve\nthis, we propose a novel MAsk Pruning (MAP) framework. MAP stems from an\nintuitive hypothesis, i.e., there are target-related parameters in a\nwell-trained model, locating and pruning them is the key to IP protection.\nTechnically, MAP freezes the source model and learns a target-specific binary\nmask to prevent unauthorized data usage while minimizing performance\ndegradation on authorized data. Moreover, we introduce a new metric aimed at\nachieving a better balance between source and target performance degradation.", + "Technically, MAP freezes the source model and learns a target-specific binary\nmask to prevent unauthorized data usage while minimizing performance\ndegradation on authorized data. Moreover, we introduce a new metric aimed at\nachieving a better balance between source and target performance degradation.\nTo verify the effectiveness and versatility, we have evaluated MAP in a variety\nof scenarios, including vanilla source-available, practical source-free, and\nchallenging data-free. Extensive experiments indicate that MAP yields new\nstate-of-the-art performance.", + "In this work, we tackle the problem of domain generalization for object\ndetection, specifically focusing on the scenario where only a single source\ndomain is available. We propose an effective approach that involves two key\nsteps: diversifying the source domain and aligning detections based on class\nprediction confidence and localization. Firstly, we demonstrate that by\ncarefully selecting a set of augmentations, a base detector can outperform\nexisting methods for single domain generalization by a good margin. This\nhighlights the importance of domain diversification in improving the\nperformance of object detectors. Secondly, we introduce a method to align\ndetections from multiple views, considering both classification and\nlocalization outputs. This alignment procedure leads to better generalized and\nwell-calibrated object detector models, which are crucial for accurate\ndecision-making in safety-critical applications. Our approach is\ndetector-agnostic and can be seamlessly applied to both single-stage and\ntwo-stage detectors. To validate the effectiveness of our proposed methods, we\nconduct extensive experiments and ablations on challenging domain-shift\nscenarios. The results consistently demonstrate the superiority of our approach\ncompared to existing methods. Our code and models are available at:\nhttps://github.com/msohaildanish/DivAlign", + "In the realm of food computing, segmenting ingredients from images poses\nsubstantial challenges due to the large intra-class variance among the same\ningredients, the emergence of new ingredients, and the high annotation costs\nassociated with large food segmentation datasets. Existing approaches primarily\nutilize a closed-vocabulary and static text embeddings setting. These methods\noften fall short in effectively handling the ingredients, particularly new and\ndiverse ones. In response to these limitations, we introduce OVFoodSeg, a\nframework that adopts an open-vocabulary setting and enhances text embeddings\nwith visual context. 
By integrating vision-language models (VLMs), our approach\nenriches text embedding with image-specific information through two innovative\nmodules, eg, an image-to-text learner FoodLearner and an Image-Informed Text\nEncoder. The training process of OVFoodSeg is divided into two stages: the\npre-training of FoodLearner and the subsequent learning phase for segmentation.\nThe pre-training phase equips FoodLearner with the capability to align visual\ninformation with corresponding textual representations that are specifically\nrelated to food, while the second phase adapts both the FoodLearner and the\nImage-Informed Text Encoder for the segmentation task.", + "The pre-training phase equips FoodLearner with the capability to align visual\ninformation with corresponding textual representations that are specifically\nrelated to food, while the second phase adapts both the FoodLearner and the\nImage-Informed Text Encoder for the segmentation task. By addressing the\ndeficiencies of previous models, OVFoodSeg demonstrates a significant\nimprovement, achieving an 4.9\\% increase in mean Intersection over Union (mIoU)\non the FoodSeg103 dataset, setting a new milestone for food image segmentation.", + "We introduce a lightweight and accurate architecture for resource-efficient\nvisual correspondence. Our method, dubbed XFeat (Accelerated Features),\nrevisits fundamental design choices in convolutional neural networks for\ndetecting, extracting, and matching local features. Our new model satisfies a\ncritical need for fast and robust algorithms suitable to resource-limited\ndevices. In particular, accurate image matching requires sufficiently large\nimage resolutions - for this reason, we keep the resolution as large as\npossible while limiting the number of channels in the network. Besides, our\nmodel is designed to offer the choice of matching at the sparse or semi-dense\nlevels, each of which may be more suitable for different downstream\napplications, such as visual navigation and augmented reality. Our model is the\nfirst to offer semi-dense matching efficiently, leveraging a novel match\nrefinement module that relies on coarse local descriptors. XFeat is versatile\nand hardware-independent, surpassing current deep learning-based local features\nin speed (up to 5x faster) with comparable or better accuracy, proven in pose\nestimation and visual localization. We showcase it running in real-time on an\ninexpensive laptop CPU without specialized hardware optimizations.", + "We showcase it running in real-time on an\ninexpensive laptop CPU without specialized hardware optimizations. Code and\nweights are available at www.verlab.dcc.ufmg.br/descriptors/xfeat_cvpr24.", + "The emergence of attention-based transformer models has led to their\nextensive use in various tasks, due to their superior generalization and\ntransfer properties. Recent research has demonstrated that such models, when\nprompted appropriately, are excellent for few-shot inference. However, such\ntechniques are under-explored for dense prediction tasks like semantic\nsegmentation. In this work, we examine the effectiveness of prompting a\ntransformer-decoder with learned visual prompts for the generalized few-shot\nsegmentation (GFSS) task. Our goal is to achieve strong performance not only on\nnovel categories with limited examples, but also to retain performance on base\ncategories. We propose an approach to learn visual prompts with limited\nexamples. 
These learned visual prompts are used to prompt a multiscale\ntransformer decoder to facilitate accurate dense predictions. Additionally, we\nintroduce a unidirectional causal attention mechanism between the novel\nprompts, learned with limited examples, and the base prompts, learned with\nabundant data. This mechanism enriches the novel prompts without deteriorating\nthe base class performance.", + "Additionally, we\nintroduce a unidirectional causal attention mechanism between the novel\nprompts, learned with limited examples, and the base prompts, learned with\nabundant data. This mechanism enriches the novel prompts without deteriorating\nthe base class performance. Overall, this form of prompting helps us achieve\nstate-of-the-art performance for GFSS on two different benchmark datasets:\nCOCO-$20^i$ and Pascal-$5^i$, without the need for test-time optimization (or\ntransduction). Furthermore, test-time optimization leveraging unlabelled test\ndata can be used to improve the prompts, which we refer to as transductive\nprompt tuning.", + "We present ARTrackV2, which integrates two pivotal aspects of tracking:\ndetermining where to look (localization) and how to describe (appearance\nanalysis) the target object across video frames. Building on the foundation of\nits predecessor, ARTrackV2 extends the concept by introducing a unified\ngenerative framework to \"read out\" object's trajectory and \"retell\" its\nappearance in an autoregressive manner. This approach fosters a time-continuous\nmethodology that models the joint evolution of motion and visual features,\nguided by previous estimates. Furthermore, ARTrackV2 stands out for its\nefficiency and simplicity, obviating the less efficient intra-frame\nautoregression and hand-tuned parameters for appearance updates. Despite its\nsimplicity, ARTrackV2 achieves state-of-the-art performance on prevailing\nbenchmark datasets while demonstrating remarkable efficiency improvement. In\nparticular, ARTrackV2 achieves AO score of 79.5\\% on GOT-10k, and AUC of 86.1\\%\non TrackingNet while being $3.6 \\times$ faster than ARTrack. The code will be\nreleased.", + "What does learning to model relationships between strings teach large\nlanguage models (LLMs) about the visual world? We systematically evaluate LLMs'\nabilities to generate and recognize an assortment of visual concepts of\nincreasing complexity and then demonstrate how a preliminary visual\nrepresentation learning system can be trained using models of text. As language\nmodels lack the ability to consume or output visual information as pixels, we\nuse code to represent images in our study. Although LLM-generated images do not\nlook like natural images, results on image generation and the ability of models\nto correct these generated images indicate that precise modeling of strings can\nteach language models about numerous aspects of the visual world. Furthermore,\nexperiments on self-supervised visual representation learning, utilizing images\ngenerated with text models, highlight the potential to train vision models\ncapable of making semantic assessments of natural images using just LLMs.", + "In this paper, we propose a new framework for online 3D scene perception.\nConventional 3D scene perception methods are offline, i.e., take an already\nreconstructed 3D scene geometry as input, which is not applicable in robotic\napplications where the input data is streaming RGB-D videos rather than a\ncomplete 3D scene reconstructed from pre-collected RGB-D videos. 
To deal with\nonline 3D scene perception tasks where data collection and perception should be\nperformed simultaneously, the model should be able to process 3D scenes frame\nby frame and make use of the temporal information. To this end, we propose an\nadapter-based plug-and-play module for the backbone of a 3D scene perception\nmodel, which constructs memory to cache and aggregate the extracted RGB-D\nfeatures to empower offline models with temporal learning ability.\nSpecifically, we propose a queued memory mechanism to cache the supporting\npoint cloud and image features. Then we devise aggregation modules that\noperate directly on the memory and pass temporal information to the current frame.\nWe further propose a 3D-to-2D adapter to enhance image features with strong\nglobal context.", + "Specifically, we propose a queued memory mechanism to cache the supporting\npoint cloud and image features. Then we devise aggregation modules that\noperate directly on the memory and pass temporal information to the current frame.\nWe further propose a 3D-to-2D adapter to enhance image features with strong\nglobal context. Our adapters can be easily inserted into mainstream offline\narchitectures of different tasks and significantly boost their performance on\nonline tasks. Extensive experiments on ScanNet and SceneNN datasets demonstrate\nthat our approach achieves leading performance on three 3D scene perception tasks\ncompared with state-of-the-art online methods by simply finetuning existing\noffline models, without any model- or task-specific designs.\n\\href{https://xuxw98.github.io/Online3D/}{Project page}.", + "Vision-language models (VLMs) have made significant strides in cross-modal\nunderstanding through large-scale paired datasets. However, in the fashion domain,\ndatasets often exhibit a disparity between the information conveyed in image\nand text. This issue stems from datasets containing multiple images of a single\nfashion item all paired with one text, leading to cases where some textual\ndetails are not visible in individual images. This mismatch, particularly when\nnon-co-occurring elements are masked, undermines the training of conventional\nVLM objectives like Masked Language Modeling and Masked Image Modeling, thereby\nhindering the model's ability to accurately align fine-grained visual and\ntextual features. Addressing this problem, we propose Synchronized attentional\nMasking (SyncMask), which generates masks that pinpoint the image patches and\nword tokens where the information co-occurs in both image and text. This\nsynchronization is accomplished by harnessing cross-attentional features\nobtained from a momentum model, ensuring a precise alignment between the two\nmodalities.", + "This\nsynchronization is accomplished by harnessing cross-attentional features\nobtained from a momentum model, ensuring a precise alignment between the two\nmodalities. Additionally, we enhance grouped batch sampling with semi-hard\nnegatives, effectively mitigating false negative issues in Image-Text Matching\nand Image-Text Contrastive learning objectives within fashion datasets. Our\nexperiments demonstrate the effectiveness of the proposed approach,\noutperforming existing methods in three downstream tasks.", + "Advanced Audio-Visual Speech Recognition (AVSR) systems have been observed to\nbe sensitive to missing video frames, performing even worse than\nsingle-modality models.
While applying the dropout technique to the video\nmodality enhances robustness to missing frames, it simultaneously results in a\nperformance loss when dealing with complete data input. In this paper, we\ninvestigate this contrasting phenomenon from the perspective of modality bias\nand reveal that an excessive modality bias on the audio caused by dropout is\nthe underlying reason. Moreover, we present the Modality Bias Hypothesis (MBH)\nto systematically describe the relationship between modality bias and\nrobustness against missing modality in multimodal systems. Building on these\nfindings, we propose a novel Multimodal Distribution Approximation with\nKnowledge Distillation (MDA-KD) framework to reduce over-reliance on the audio\nmodality and to maintain performance and robustness simultaneously. Finally, to\naddress an entirely missing modality, we adopt adapters to dynamically switch\ndecision strategies. The effectiveness of our proposed approach is evaluated\nand validated through a series of comprehensive experiments using the MISP2021\nand MISP2022 datasets.", + "Finally, to\naddress an entirely missing modality, we adopt adapters to dynamically switch\ndecision strategies. The effectiveness of our proposed approach is evaluated\nand validated through a series of comprehensive experiments using the MISP2021\nand MISP2022 datasets. Our code is available at\nhttps://github.com/dalision/ModalBiasAVSR", + "Point cloud upsampling (PCU) enriches the representation of raw point clouds,\nsignificantly improving the performance in downstream tasks such as\nclassification and reconstruction. Most of the existing point cloud upsampling\nmethods focus on sparse point cloud feature extraction and upsampling module\ndesign. In a different way, we dive deeper into directly modelling the gradient\nof data distribution from dense point clouds. In this paper, we proposed a\nconditional denoising diffusion probability model (DDPM) for point cloud\nupsampling, called PUDM. Specifically, PUDM treats the sparse point cloud as a\ncondition, and iteratively learns the transformation relationship between the\ndense point cloud and the noise. Simultaneously, PUDM aligns with a dual\nmapping paradigm to further improve the discernment of point features. In this\ncontext, PUDM enables learning complex geometry details in the ground truth\nthrough the dominant features, while avoiding an additional upsampling module\ndesign. Furthermore, to generate high-quality arbitrary-scale point clouds\nduring inference, PUDM exploits the prior knowledge of the scale between sparse\npoint clouds and dense point clouds during training by parameterizing a rate\nfactor.", + "Furthermore, to generate high-quality arbitrary-scale point clouds\nduring inference, PUDM exploits the prior knowledge of the scale between sparse\npoint clouds and dense point clouds during training by parameterizing a rate\nfactor. Moreover, PUDM exhibits strong noise robustness in experimental\nresults. In the quantitative and qualitative evaluations on PU1K and PUGAN,\nPUDM significantly outperformed existing methods in terms of Chamfer Distance\n(CD) and Hausdorff Distance (HD), achieving state of the art (SOTA)\nperformance.", + "Neural Radiance Fields (NeRFs) excel in photorealistically rendering static\nscenes. 
However, rendering dynamic, long-duration radiance fields on ubiquitous\ndevices remains challenging, due to data storage and computational constraints.\nIn this paper, we introduce VideoRF, the first approach to enable real-time\nstreaming and rendering of dynamic radiance fields on mobile platforms. At the\ncore is a serialized 2D feature image stream representing the 4D radiance field\nall in one. We introduce a tailored training scheme directly applied to this 2D\ndomain to impose the temporal and spatial redundancy of the feature image\nstream. By leveraging the redundancy, we show that the feature image stream can\nbe efficiently compressed by 2D video codecs, which allows us to exploit video\nhardware accelerators to achieve real-time decoding. On the other hand, based\non the feature image stream, we propose a novel rendering pipeline for VideoRF,\nwhich has specialized space mappings to query radiance properties efficiently.\nPaired with a deferred shading model, VideoRF has the capability of real-time\nrendering on mobile devices thanks to its efficiency.", + "On the other hand, based\non the feature image stream, we propose a novel rendering pipeline for VideoRF,\nwhich has specialized space mappings to query radiance properties efficiently.\nPaired with a deferred shading model, VideoRF has the capability of real-time\nrendering on mobile devices thanks to its efficiency. We have developed a\nreal-time interactive player that enables online streaming and rendering of\ndynamic scenes, offering a seamless and immersive free-viewpoint experience\nacross a range of devices, from desktops to mobile phones.", + "We introduce Diffusion Parametric Head Models (DPHMs), a generative model\nthat enables robust volumetric head reconstruction and tracking from monocular\ndepth sequences. While recent volumetric head models, such as NPHMs, can now\nexcel in representing high-fidelity head geometries, tracking and\nreconstructing heads from real-world single-view depth sequences remains very\nchallenging, as the fitting to partial and noisy observations is\nunderconstrained. To tackle these challenges, we propose a latent\ndiffusion-based prior to regularize volumetric head reconstruction and\ntracking. This prior-based regularizer effectively constrains the identity and\nexpression codes to lie on the underlying latent manifold which represents\nplausible head shapes. To evaluate the effectiveness of the diffusion-based\nprior, we collect a dataset of monocular Kinect sequences consisting of various\ncomplex facial expression motions and rapid transitions. We compare our method\nto state-of-the-art tracking methods and demonstrate improved head identity\nreconstruction as well as robust expression tracking.", + "Current perceptive models heavily depend on resource-intensive datasets,\nprompting the need for innovative solutions. Leveraging recent advances in\ndiffusion models, synthetic data, by constructing image inputs from various\nannotations, proves beneficial for downstream tasks. While prior methods have\nseparately addressed generative and perceptive models, DetDiffusion, for the\nfirst time, harmonizes both, tackling the challenges in generating effective\ndata for perceptive models. To enhance image generation with perceptive models,\nwe introduce perception-aware loss (P.A. loss) through segmentation, improving\nboth quality and controllability. 
To boost the performance of specific\nperceptive models, our method customizes data augmentation by extracting and\nutilizing perception-aware attribute (P.A. Attr) during generation.\nExperimental results from the object detection task highlight DetDiffusion's\nsuperior performance, establishing a new state-of-the-art in layout-guided\ngeneration. Furthermore, image syntheses from DetDiffusion can effectively\naugment training data, significantly enhancing downstream detection\nperformance.", + "Previous methods for Video Frame Interpolation (VFI) have encountered\nchallenges, notably the manifestation of blur and ghosting effects. These\nissues can be traced back to two pivotal factors: unavoidable motion errors and\nmisalignment in supervision. In practice, motion estimates often prove to be\nerror-prone, resulting in misaligned features. Furthermore, the reconstruction\nloss tends to bring blurry results, particularly in misaligned regions. To\nmitigate these challenges, we propose a new paradigm called PerVFI\n(Perception-oriented Video Frame Interpolation). Our approach incorporates an\nAsymmetric Synergistic Blending module (ASB) that utilizes features from both\nsides to synergistically blend intermediate features. One reference frame\nemphasizes primary content, while the other contributes complementary\ninformation. To impose a stringent constraint on the blending process, we\nintroduce a self-learned sparse quasi-binary mask which effectively mitigates\nghosting and blur artifacts in the output. Additionally, we employ a\nnormalizing flow-based generator and utilize the negative log-likelihood loss\nto learn the conditional distribution of the output, which further facilitates\nthe generation of clear and fine details.", + "Additionally, we employ a\nnormalizing flow-based generator and utilize the negative log-likelihood loss\nto learn the conditional distribution of the output, which further facilitates\nthe generation of clear and fine details. Experimental results validate the\nsuperiority of PerVFI, demonstrating significant improvements in perceptual\nquality compared to existing methods. Codes are available at\n\\url{https://github.com/mulns/PerVFI}", + "In recent years, there has been a growing interest in training Neural\nNetworks to approximate Unsigned Distance Fields (UDFs) for representing open\nsurfaces in the context of 3D reconstruction. However, UDFs are\nnon-differentiable at the zero level set which leads to significant errors in\ndistances and gradients, generally resulting in fragmented and discontinuous\nsurfaces. In this paper, we propose to learn a hyperbolic scaling of the\nunsigned distance field, which defines a new Eikonal problem with distinct\nboundary conditions. This allows our formulation to integrate seamlessly with\nstate-of-the-art continuously differentiable implicit neural representation\nnetworks, largely applied in the literature to represent signed distance\nfields. Our approach not only addresses the challenge of open surface\nrepresentation but also demonstrates significant improvement in reconstruction\nquality and training performance. Moreover, the unlocked field's\ndifferentiability allows the accurate computation of essential topological\nproperties such as normal directions and curvatures, pervasive in downstream\ntasks such as rendering. Through extensive experiments, we validate our\napproach across various data sets and against competitive baselines. 
The\nresults demonstrate enhanced accuracy and up to an order of magnitude increase\nin speed compared to previous methods.", + "The vision-language model has brought great improvement to few-shot\nindustrial anomaly detection, which usually requires designing hundreds of\nprompts through prompt engineering. For automated scenarios, we first use\nconventional prompt learning with the many-class paradigm as the baseline to\nautomatically learn prompts, but find that it does not work well in one-class\nanomaly detection. To address the above problem, this paper proposes a\none-class prompt learning method for few-shot anomaly detection, termed\nPromptAD. First, we propose semantic concatenation, which can transpose normal\nprompts into anomaly prompts by concatenating normal prompts with anomaly\nsuffixes, thus constructing a large number of negative samples used to guide\nprompt learning in the one-class setting. Furthermore, to mitigate the training\nchallenge caused by the absence of anomaly images, we introduce the concept of\nexplicit anomaly margin, which is used to explicitly control the margin between\nnormal prompt features and anomaly prompt features through a hyper-parameter.\nFor image-level/pixel-level anomaly detection, PromptAD achieves first place in\n11/12 few-shot settings on MVTec and VisA.", + "In the absence of parallax cues, a learning-based single image depth\nestimation (SIDE) model relies heavily on shading and contextual cues in the\nimage. While this simplicity is attractive, it is necessary to train such\nmodels on large and varied datasets, which are difficult to capture. It has\nbeen shown that using embeddings from pre-trained foundational models, such as\nCLIP, improves zero-shot transfer in several applications. Taking inspiration\nfrom this, in our paper we explore the use of global image priors generated\nfrom a pre-trained ViT model to provide more detailed contextual information.\nWe argue that the embedding vector from a ViT model, pre-trained on a large\ndataset, captures greater relevant information for SIDE than the usual route of\ngenerating pseudo image captions, followed by CLIP-based text embeddings. Based\non this idea, we propose a new SIDE model using a diffusion backbone which is\nconditioned on ViT embeddings. Our proposed design establishes a new\nstate-of-the-art (SOTA) for SIDE on the NYUv2 dataset, achieving an Abs Rel error of\n0.059 (14% improvement) compared to 0.069 by the current SOTA (VPD).", + "Our proposed design establishes a new\nstate-of-the-art (SOTA) for SIDE on the NYUv2 dataset, achieving an Abs Rel error of\n0.059 (14% improvement) compared to 0.069 by the current SOTA (VPD). On the\nKITTI dataset, it achieves a Sq Rel error of 0.139 (2% improvement) compared to\n0.142 by the current SOTA (GEDepth). For zero-shot transfer with a model\ntrained on NYUv2, we report mean relative improvement of (20%, 23%, 81%, 25%)\nover NeWCRFs on (Sun-RGBD, iBims1, DIODE, HyperSim) datasets, compared to (16%,\n18%, 45%, 9%) by ZoeDepth. The project page is available at\nhttps://ecodepth-iitd.github.io", + "The YOLO series has become the most popular framework for real-time object\ndetection due to its reasonable trade-off between speed and accuracy. However,\nwe observe that the speed and accuracy of YOLOs are negatively affected by the\nNMS. Recently, end-to-end Transformer-based detectors (DETRs) have provided an\nalternative to eliminating NMS.
Nevertheless, the high computational cost\nlimits their practicality and hinders them from fully exploiting the advantage\nof excluding NMS. In this paper, we propose the Real-Time DEtection TRansformer\n(RT-DETR), the first real-time end-to-end object detector to our best knowledge\nthat addresses the above dilemma. We build RT-DETR in two steps, drawing on the\nadvanced DETR: first we focus on maintaining accuracy while improving speed,\nfollowed by maintaining speed while improving accuracy. Specifically, we design\nan efficient hybrid encoder to expeditiously process multi-scale features by\ndecoupling intra-scale interaction and cross-scale fusion to improve speed.\nThen, we propose the uncertainty-minimal query selection to provide\nhigh-quality initial queries to the decoder, thereby improving accuracy.", + "Specifically, we design\nan efficient hybrid encoder to expeditiously process multi-scale features by\ndecoupling intra-scale interaction and cross-scale fusion to improve speed.\nThen, we propose the uncertainty-minimal query selection to provide\nhigh-quality initial queries to the decoder, thereby improving accuracy. In\naddition, RT-DETR supports flexible speed tuning by adjusting the number of\ndecoder layers to adapt to various scenarios without retraining. Our\nRT-DETR-R50 / R101 achieves 53.1% / 54.3% AP on COCO and 108 / 74 FPS on T4\nGPU, outperforming previously advanced YOLOs in both speed and accuracy. We\nalso develop scaled RT-DETRs that outperform the lighter YOLO detectors (S and\nM models). Furthermore, RT-DETR-R50 outperforms DINO-R50 by 2.2% AP in accuracy\nand about 21 times in FPS. After pre-training with Objects365, RT-DETR-R50 /\nR101 achieves 55.3% / 56.2% AP. The project page:\nhttps://zhao-yian.github.io/RTDETR.", + "Despite the recent advances in unified image segmentation (IS), developing a\nunified video segmentation (VS) model remains a challenge. This is mainly\nbecause generic category-specified VS tasks need to detect all objects and\ntrack them across consecutive frames, while prompt-guided VS tasks require\nre-identifying the target with visual/text prompts throughout the entire video,\nmaking it hard to handle the different tasks with the same architecture. We\nmake an attempt to address these issues and present a novel unified VS\narchitecture, namely UniVS, by using prompts as queries. UniVS averages the\nprompt features of the target from previous frames as its initial query to\nexplicitly decode masks, and introduces a target-wise prompt cross-attention\nlayer in the mask decoder to integrate prompt features in the memory pool. By\ntaking the predicted masks of entities from previous frames as their visual\nprompts, UniVS converts different VS tasks into prompt-guided target\nsegmentation, eliminating the heuristic inter-frame matching process. Our\nframework not only unifies the different VS tasks but also naturally achieves\nuniversal training and testing, ensuring robust performance across different\nscenarios.", + "Our\nframework not only unifies the different VS tasks but also naturally achieves\nuniversal training and testing, ensuring robust performance across different\nscenarios. UniVS shows a commendable balance between performance and\nuniversality on 10 challenging VS benchmarks, covering video instance,\nsemantic, panoptic, object, and referring segmentation tasks. 
Code can be found\nat \\url{https://github.com/MinghanLi/UniVS}.", + "Domain shift is a formidable issue in Machine Learning that causes a model to\nsuffer from performance degradation when tested on unseen domains. Federated\nDomain Generalization (FedDG) attempts to train a global model using\ncollaborative clients in a privacy-preserving manner that can generalize well\nto unseen clients possibly with domain shift. However, most existing FedDG\nmethods either cause additional privacy risks of data leakage or induce\nsignificant costs in client communication and computation, which are major\nconcerns in the Federated Learning paradigm. To circumvent these challenges,\nhere we introduce a novel architectural method for FedDG, namely gPerXAN, which\nrelies on a normalization scheme working with a guiding regularizer. In\nparticular, we carefully design Personalized eXplicitly Assembled Normalization\nto enforce client models selectively filtering domain-specific features that\nare biased towards local data while retaining discrimination of those features.\nThen, we incorporate a simple yet effective regularizer to guide these models\nin directly capturing domain-invariant representations that the global model's\nclassifier can leverage.", + "Then, we incorporate a simple yet effective regularizer to guide these models\nin directly capturing domain-invariant representations that the global model's\nclassifier can leverage. Extensive experimental results on two benchmark\ndatasets, i.e., PACS and Office-Home, and a real-world medical dataset,\nCamelyon17, indicate that our proposed method outperforms other existing\nmethods in addressing this particular problem.", + "Recovering a clear image from a single hazy image is an open inverse problem.\nAlthough significant research progress has been made, most existing methods\nignore the effect that downstream tasks play in promoting upstream dehazing.\nFrom the perspective of the haze generation mechanism, there is a potential\nrelationship between the depth information of the scene and the hazy image.\nBased on this, we propose a dual-task collaborative mutual promotion framework\nto achieve the dehazing of a single image. This framework integrates depth\nestimation and dehazing by a dual-task interaction mechanism and achieves\nmutual enhancement of their performance. To realize the joint optimization of\nthe two tasks, an alternative implementation mechanism with the difference\nperception is developed. On the one hand, the difference perception between the\ndepth maps of the dehazing result and the ideal image is proposed to promote\nthe dehazing network to pay attention to the non-ideal areas of the dehazing.\nOn the other hand, by improving the depth estimation performance in the\ndifficult-to-recover areas of the hazy image, the dehazing network can\nexplicitly use the depth information of the hazy image to assist the clear\nimage recovery.", + "On the other hand, by improving the depth estimation performance in the\ndifficult-to-recover areas of the hazy image, the dehazing network can\nexplicitly use the depth information of the hazy image to assist the clear\nimage recovery. To promote the depth estimation, we propose to use the\ndifference between the dehazed image and the ground truth to guide the depth\nestimation network to focus on the dehazed unideal areas. It allows dehazing\nand depth estimation to leverage their strengths in a mutually reinforcing\nmanner. 
Experimental results show that the proposed method can achieve better\nperformance than that of the state-of-the-art approaches.", + "Listening head generation aims to synthesize a non-verbal responsive listener\nhead by modeling the correlation between the speaker and the listener in\ndynamic conversation. The applications of listener agent generation in virtual\ninteraction have prompted many works that achieve diverse and fine-grained\nmotion generation. However, they can only manipulate motions through simple\nemotional labels and cannot freely control the listener's motions. Since\nlistener agents should have human-like attributes (e.g., identity, personality)\nthat users can freely customize, this restriction limits their realism. In this\npaper, we propose a user-friendly framework called CustomListener to realize\nfree-form text-prior-guided listener generation. To achieve\nspeaker-listener coordination, we design a Static to Dynamic Portrait module\n(SDP), which interacts with speaker information to transform static text into a\ndynamic portrait token with completion rhythm and amplitude information. To\nachieve coherence between segments, we design a Past Guided Generation Module\n(PGG) to maintain the consistency of customized listener attributes through the\nmotion prior, and utilize a diffusion-based structure conditioned on the\nportrait token and the motion prior to realize controllable generation.", + "To\nachieve coherence between segments, we design a Past Guided Generation Module\n(PGG) to maintain the consistency of customized listener attributes through the\nmotion prior, and utilize a diffusion-based structure conditioned on the\nportrait token and the motion prior to realize controllable generation. To\ntrain and evaluate our model, we have constructed two text-annotated listening\nhead datasets based on ViCo and RealTalk, which provide text-video paired\nlabels. Extensive experiments have verified the effectiveness of our model.", + "Principal component analysis (PCA), along with its extensions to manifolds\nand outlier-contaminated data, has been indispensable in computer vision and\nmachine learning. In this work, we present a unifying formalism for PCA and its\nvariants, and introduce a framework based on the flags of linear subspaces, i.e.,\na hierarchy of nested linear subspaces of increasing dimension, which not only\nallows for a common implementation but also yields novel variants, not explored\npreviously. We begin by generalizing traditional PCA methods that either\nmaximize variance or minimize reconstruction error. We expand these\ninterpretations to develop a wide array of new dimensionality reduction\nalgorithms by accounting for outliers and the data manifold. To devise a common\ncomputational approach, we recast robust and dual forms of PCA as optimization\nproblems on flag manifolds. We then integrate tangent space approximations of\nprincipal geodesic analysis (tangent-PCA) into this flag-based framework,\ncreating novel robust and dual geodesic PCA variations. The remarkable\nflexibility offered by the 'flagification' introduced here enables even more\nalgorithmic variants identified by specific flag types.", + "The remarkable\nflexibility offered by the 'flagification' introduced here enables even more\nalgorithmic variants identified by specific flag types. Last but not least, we\npropose an effective convergent solver for these flag-formulations employing\nthe Stiefel manifold.
Our empirical results on both real-world and synthetic\nscenarios demonstrate the superiority of our novel algorithms, especially in\nterms of robustness to outliers on manifolds.", + "This paper addresses the challenge of example-based non-stationary texture\nsynthesis. We introduce a novel two-step approach wherein users first modify a\nreference texture using standard image editing tools, yielding an initial rough\ntarget for the synthesis. Subsequently, our proposed method, termed\n\"self-rectification\", automatically refines this target into a coherent,\nseamless texture, while faithfully preserving the distinct visual\ncharacteristics of the reference exemplar. Our method leverages a pre-trained\ndiffusion network and uses self-attention mechanisms to gradually align the\nsynthesized texture with the reference, ensuring the retention of the\nstructures in the provided target. Through experimental validation, our\napproach exhibits exceptional proficiency in handling non-stationary textures,\ndemonstrating significant advancements in texture synthesis when compared to\nexisting state-of-the-art techniques. Code is available at\nhttps://github.com/xiaorongjun000/Self-Rectification", + "Contemporary models for generating images show remarkable quality and\nversatility. Swayed by these advantages, the research community repurposes them\nto generate videos. Since video content is highly redundant, we argue that\nnaively bringing advances of image models to the video generation domain\nreduces motion fidelity and visual quality and impairs scalability. In this work,\nwe build Snap Video, a video-first model that systematically addresses these\nchallenges. To do that, we first extend the EDM framework to take into account\nspatially and temporally redundant pixels and naturally support video\ngeneration. Second, we show that a U-Net - a workhorse behind image generation\n- scales poorly when generating videos, requiring significant computational\noverhead. Hence, we propose a new transformer-based architecture that trains\n3.31 times faster than U-Nets (and is ~4.5 times faster at inference). This allows us\nto efficiently train a text-to-video model with billions of parameters for the\nfirst time, reach state-of-the-art results on a number of benchmarks, and\ngenerate videos with substantially higher quality, temporal consistency, and\nmotion complexity.", + "This allows us\nto efficiently train a text-to-video model with billions of parameters for the\nfirst time, reach state-of-the-art results on a number of benchmarks, and\ngenerate videos with substantially higher quality, temporal consistency, and\nmotion complexity. User studies showed that our model was favored by a\nlarge margin over the most recent methods. See our website at\nhttps://snap-research.github.io/snapvideo/.", + "Human-centric Point Cloud Video Understanding (PVU) is an emerging field\nfocused on extracting and interpreting human-related features from sequences of\nhuman point clouds, further advancing downstream human-centric tasks and\napplications. Previous works usually focus on tackling one specific task and\nrely on huge amounts of labeled data, which leads to poor generalization capability.\nConsidering that humans have specific characteristics, including the structural\nsemantics of the human body and the dynamics of human motions, we propose a unified\nframework to make full use of the prior knowledge and explore the inherent\nfeatures in the data itself for generalized human-centric point cloud video\nunderstanding.
Extensive experiments demonstrate that our method achieves\nstate-of-the-art performance on various human-related tasks, including action\nrecognition and 3D pose estimation. All datasets and code will be released\nsoon.", + "Collaborative perception in automated vehicles leverages the exchange of\ninformation between agents, aiming to elevate perception results. Previous\ncamera-based collaborative 3D perception methods typically employ 3D bounding\nboxes or bird's eye views as representations of the environment. However, these\napproaches fall short in offering a comprehensive 3D environmental prediction.\nTo bridge this gap, we introduce the first method for collaborative 3D semantic\noccupancy prediction. Particularly, it improves local 3D semantic occupancy\npredictions by hybrid fusion of (i) semantic and occupancy task features, and\n(ii) compressed orthogonal attention features shared between vehicles.\nAdditionally, due to the lack of a collaborative perception dataset designed\nfor semantic occupancy prediction, we augment a current collaborative\nperception dataset to include 3D collaborative semantic occupancy labels for a\nmore robust evaluation. The experimental findings highlight that: (i) our\ncollaborative semantic occupancy predictions excel above the results from\nsingle vehicles by over 30%, and (ii) models anchored on semantic occupancy\noutpace state-of-the-art collaborative 3D detection techniques in subsequent\nperception applications, showcasing enhanced accuracy and enriched\nsemantic-awareness in road environments.", + "In class-incremental learning (CIL) scenarios, the phenomenon of catastrophic\nforgetting caused by the classifier's bias towards the current task has long\nposed a significant challenge. It is mainly caused by the characteristic of\ndiscriminative models. With the growing popularity of the generative\nmulti-modal models, we would explore replacing discriminative models with\ngenerative ones for CIL. However, transitioning from discriminative to\ngenerative models requires addressing two key challenges. The primary challenge\nlies in transferring the generated textual information into the classification\nof distinct categories. Additionally, it requires formulating the task of CIL\nwithin a generative framework. To this end, we propose a novel generative\nmulti-modal model (GMM) framework for class-incremental learning. Our approach\ndirectly generates labels for images using an adapted generative model. After\nobtaining the detailed text, we use a text encoder to extract text features and\nemploy feature matching to determine the most similar label as the\nclassification prediction. In the conventional CIL settings, we achieve\nsignificantly better results in long-sequence task scenarios.", + "After\nobtaining the detailed text, we use a text encoder to extract text features and\nemploy feature matching to determine the most similar label as the\nclassification prediction. In the conventional CIL settings, we achieve\nsignificantly better results in long-sequence task scenarios. Under the\nFew-shot CIL setting, we have improved by at least 14\\% accuracy over all the\ncurrent state-of-the-art methods with significantly less forgetting. Our code\nis available at \\url{https://github.com/DoubleClass/GMM}.", + "Low-resource settings are well-established in natural language processing,\nwhere many languages lack sufficient data for deep learning at scale. However,\nlow-resource problems are under-explored in computer vision. 
In this paper, we\naddress this gap and explore the challenges of low-resource image tasks with\nvision foundation models. We first collect a benchmark of genuinely\nlow-resource image data, covering historic maps, circuit diagrams, and\nmechanical drawings. These low-resource settings all share three challenges:\ndata scarcity, fine-grained differences, and the distribution shift from\nnatural images to the specialized domain of interest. While existing foundation\nmodels have shown impressive generalizability, we find they cannot transfer\nwell to our low-resource tasks. To begin to tackle the challenges of\nlow-resource vision, we introduce one simple baseline per challenge.\nSpecifically, we i) enlarge the data space by generative models, ii) adopt the\nbest sub-kernels to encode local regions for fine-grained difference discovery\nand iii) learn attention for specialized domains. Experiments on our three\nlow-resource tasks demonstrate our proposals already provide a better baseline\nthan transfer learning, data augmentation, and fine-grained methods.", + "Experiments on our three\nlow-resource tasks demonstrate our proposals already provide a better baseline\nthan transfer learning, data augmentation, and fine-grained methods. This\nhighlights the unique characteristics and challenges of low-resource vision for\nfoundation models that warrant further investigation. Project page:\nhttps://xiaobai1217.github.io/Low-Resource-Vision/.", + "Tumor synthesis enables the creation of artificial tumors in medical images,\nfacilitating the training of AI models for tumor detection and segmentation.\nHowever, success in tumor synthesis hinges on creating visually realistic\ntumors that are generalizable across multiple organs and, furthermore, the\nresulting AI models being capable of detecting real tumors in images sourced\nfrom different domains (e.g., hospitals). This paper made a progressive stride\ntoward generalizable tumor synthesis by leveraging a critical observation:\nearly-stage tumors (< 2cm) tend to have similar imaging characteristics in\ncomputed tomography (CT), whether they originate in the liver, pancreas, or\nkidneys. We have ascertained that generative AI models, e.g., Diffusion Models,\ncan create realistic tumors generalized to a range of organs even when trained\non a limited number of tumor examples from only one organ. Moreover, we have\nshown that AI models trained on these synthetic tumors can be generalized to\ndetect and segment real tumors from CT volumes, encompassing a broad spectrum\nof patient demographics, imaging protocols, and healthcare facilities.", + "For image super-resolution (SR), bridging the gap between the performance on\nsynthetic datasets and real-world degradation scenarios remains a challenge.\nThis work introduces a novel \"Low-Res Leads the Way\" (LWay) training framework,\nmerging Supervised Pre-training with Self-supervised Learning to enhance the\nadaptability of SR models to real-world images. Our approach utilizes a\nlow-resolution (LR) reconstruction network to extract degradation embeddings\nfrom LR images, merging them with super-resolved outputs for LR reconstruction.\nLeveraging unseen LR images for self-supervised learning guides the model to\nadapt its modeling space to the target domain, facilitating fine-tuning of SR\nmodels without requiring paired high-resolution (HR) images. The integration of\nDiscrete Wavelet Transform (DWT) further refines the focus on high-frequency\ndetails. 
Extensive evaluations show that our method significantly improves the\ngeneralization and detail restoration capabilities of SR models on unseen\nreal-world datasets, outperforming existing methods. Our training regime is\nuniversally compatible, requiring no network architecture modifications, making\nit a practical solution for real-world SR applications.", + "The recently emerging text-to-motion advances have spurred numerous attempts\nat convenient and interactive human motion generation. Yet, existing methods\nare largely limited to generating body motions only, without considering the\nrich two-hand motions, let alone handling various conditions like body dynamics\nor texts. To break the data bottleneck, we propose BOTH57M, a novel multi-modal\ndataset for two-hand motion generation. Our dataset includes accurate motion\ntracking for the human body and hands and provides pair-wise finger-level hand\nannotations and body descriptions. We further provide a strong baseline method,\nBOTH2Hands, for the novel task: generating vivid two-hand motions from both\nimplicit body dynamics and explicit text prompts. We first warm up two parallel\nbody-to-hand and text-to-hand diffusion models and then utilize the\ncross-attention transformer for motion blending. Extensive experiments and\ncross-validations demonstrate the effectiveness of our approach and dataset for\ngenerating convincing two-hand motions from the hybrid body-and-textual\nconditions. Our dataset and code will be disseminated to the community for\nfuture research.", + "Generating multiview images from a single view facilitates the rapid\ngeneration of a 3D mesh conditioned on a single image. Recent methods that\nintroduce 3D global representation into diffusion models have shown the\npotential to generate consistent multiviews, but they have reduced generation\nspeed and face challenges in maintaining generalizability and quality. To\naddress this issue, we propose EpiDiff, a localized interactive multiview\ndiffusion model. The core of the proposed approach is to insert a\nlightweight epipolar attention block into the frozen diffusion model,\nleveraging epipolar constraints to enable cross-view interaction among feature\nmaps of neighboring views. The newly initialized 3D modeling module preserves\nthe original feature distribution of the diffusion model, exhibiting\ncompatibility with a variety of base diffusion models. Experiments show that\nEpiDiff generates 16 multiview images in just 12 seconds, and it surpasses\nprevious methods in quality evaluation metrics, including PSNR, SSIM and LPIPS.\nAdditionally, EpiDiff can generate a more diverse distribution of views,\nimproving the reconstruction quality from generated multiviews.", + "Additionally, EpiDiff can generate a more diverse distribution of views,\nimproving the reconstruction quality from generated multiviews. Please see our\nproject page at https://huanngzh.github.io/EpiDiff/.", + "To interpret Vision Transformers, post-hoc explanations assign salience\nscores to input pixels, providing human-understandable heatmaps. However,\nwhether these interpretations reflect true rationales behind the model's output\nis still underexplored. To address this gap, we study the faithfulness\ncriterion of explanations: the assigned salience scores should represent the\ninfluence of the corresponding input pixels on the model's predictions.
To\nevaluate faithfulness, we introduce Salience-guided Faithfulness Coefficient\n(SaCo), a novel evaluation metric leveraging essential information of salience\ndistribution. Specifically, we conduct pair-wise comparisons among distinct\npixel groups and then aggregate the differences in their salience scores,\nresulting in a coefficient that indicates the explanation's degree of\nfaithfulness. Our explorations reveal that current metrics struggle to\ndifferentiate between advanced explanation methods and Random Attribution,\nthereby failing to capture the faithfulness property. In contrast, our proposed\nSaCo offers a reliable faithfulness measurement, establishing a robust metric\nfor interpretations. Furthermore, our SaCo demonstrates that the use of\ngradient and multi-layer aggregation can markedly enhance the faithfulness of\nattention-based explanation, shedding light on potential paths for advancing\nVision Transformer explainability.", + "We propose a self-supervised method for learning representations based on\nspatial audio-visual correspondences in egocentric videos. Our method uses a\nmasked auto-encoding framework to synthesize masked binaural (multi-channel)\naudio through the synergy of audio and vision, thereby learning useful spatial\nrelationships between the two modalities. We use our pretrained features to\ntackle two downstream video tasks requiring spatial understanding in social\nscenarios: active speaker detection and spatial audio denoising. Through\nextensive experiments, we show that our features are generic enough to improve\nover multiple state-of-the-art baselines on both tasks on two challenging\negocentric video datasets that offer binaural audio, EgoCom and EasyCom.\nProject: http://vision.cs.utexas.edu/projects/ego_av_corr.", + "We present DreamAvatar, a text-and-shape guided framework for generating\nhigh-quality 3D human avatars with controllable poses. While encouraging\nresults have been reported by recent methods on text-guided 3D common object\ngeneration, generating high-quality human avatars remains an open challenge due\nto the complexity of the human body's shape, pose, and appearance. We propose\nDreamAvatar to tackle this challenge, which utilizes a trainable NeRF for\npredicting density and color for 3D points and pretrained text-to-image\ndiffusion models for providing 2D self-supervision. Specifically, we leverage\nthe SMPL model to provide shape and pose guidance for the generation. We\nintroduce a dual-observation-space design that involves the joint optimization\nof a canonical space and a posed space that are related by a learnable\ndeformation field. This facilitates the generation of more complete textures\nand geometry faithful to the target pose. We also jointly optimize the losses\ncomputed from the full body and from the zoomed-in 3D head to alleviate the\ncommon multi-face ''Janus'' problem and improve facial details in the generated\navatars.", + "This facilitates the generation of more complete textures\nand geometry faithful to the target pose. We also jointly optimize the losses\ncomputed from the full body and from the zoomed-in 3D head to alleviate the\ncommon multi-face ''Janus'' problem and improve facial details in the generated\navatars. 
Extensive evaluations demonstrate that DreamAvatar significantly\noutperforms existing methods, establishing a new state-of-the-art for\ntext-and-shape guided 3D human avatar generation.", + "Histopathological whole slide image (WSI) classification has become a\nfoundational task in medical microscopic image processing. Prevailing\napproaches involve learning WSIs as instance-bag representations, emphasizing\nsignificant instances but struggling to capture the interactions between\ninstances. Additionally, conventional graph representation methods utilize\nexplicit spatial positions to construct topological structures but restrict the\nflexible interaction capabilities between instances at arbitrary locations,\nparticularly when spatially distant. In response, we propose a novel dynamic\ngraph representation algorithm that conceptualizes WSIs as a form of the\nknowledge graph structure. Specifically, we dynamically construct neighbors and\ndirected edge embeddings based on the head and tail relationships between\ninstances. Then, we devise a knowledge-aware attention mechanism that can\nupdate the head node features by learning the joint attention score of each\nneighbor and edge. Finally, we obtain a graph-level embedding through the\nglobal pooling process of the updated head, serving as an implicit\nrepresentation for WSI classification. Our end-to-end graph representation\nlearning approach outperforms state-of-the-art WSI analysis methods on\nthree TCGA benchmark datasets and in-house test sets. Our code is available at\nhttps://github.com/WonderLandxD/WiKG.", + "We developed a tool for visualizing and analyzing large pre-trained vision\nmodels by mapping them onto the brain, thus exposing what is hidden inside\nthem. Our innovation arises from a surprising usage of brain encoding:\npredicting brain fMRI measurements in response to images. We report two\nfindings. First, explicit mapping between the brain and deep-network features\nacross dimensions of space, layers, scales, and channels is crucial. This\nmapping method, FactorTopy, is plug-and-play for any deep network; with it, one\ncan paint a picture of the network onto the brain (literally!). Second, our\nvisualization shows how different training methods matter: they lead to\nremarkable differences in hierarchical organization and scaling behavior,\ngrowing with more data or network capacity. It also provides insight into\nfine-tuning: how pre-trained models change when adapting to small datasets. We\nfind that brain-like, hierarchically organized networks suffer less from\ncatastrophic forgetting after fine-tuning.", + "This paper addresses an interesting yet challenging problem -- source-free\nunsupervised domain adaptation (SFUDA) for pinhole-to-panoramic semantic\nsegmentation -- given only a pinhole image-trained model (i.e., source) and\nunlabeled panoramic images (i.e., target). Tackling this problem is nontrivial\ndue to the semantic mismatches, style discrepancies, and inevitable distortion\nof panoramic images. To this end, we propose a novel method that utilizes\nTangent Projection (TP) as it has less distortion and meanwhile splits the\nequirectangular projection (ERP) with a fixed FoV to mimic the pinhole images.\nBoth projections are shown to be effective in extracting knowledge from the\nsource model.
However, the distinct projection discrepancies between source and target\ndomains impede the direct knowledge transfer; thus, we propose a panoramic\nprototype adaptation module (PPAM) to integrate panoramic prototypes from the\nextracted knowledge for adaptation. We then impose the loss constraints on both\npredictions and prototypes and propose a cross-dual attention module (CDAM) at\nthe feature level to better align the spatial and channel characteristics\nacross the domains and projections.", + "We then impose the loss constraints on both\npredictions and prototypes and propose a cross-dual attention module (CDAM) at\nthe feature level to better align the spatial and channel characteristics\nacross the domains and projections. Both knowledge extraction and transfer\nprocesses are synchronously updated to reach the best performance. Extensive\nexperiments on the synthetic and real-world benchmarks, including outdoor and\nindoor scenarios, demonstrate that our method achieves significantly better\nperformance than prior SFUDA methods for pinhole-to-panoramic adaptation.", + "Large-scale visual-language pre-trained models have achieved significant\nsuccess in various video tasks. However, most existing methods follow an \"adapt\nthen align\" paradigm, which adapts pre-trained image encoders to model\nvideo-level representations and utilizes one-hot or text embedding of the\naction labels for supervision. This paradigm overlooks the challenge of mapping\nfrom static images to complicated activity concepts. In this paper, we propose\na novel \"Align before Adapt\" (ALT) paradigm. Prior to adapting to video\nrepresentation learning, we exploit the entity-to-region alignments for each\nframe. The alignments are fulfilled by matching the region-aware image\nembeddings to an offline-constructed text corpus. With the aligned entities, we\nfeed their text embeddings to a transformer-based video adapter as the queries,\nwhich can help extract the semantics of the most important entities from a\nvideo to a vector. This paradigm reuses the visual-language alignment of VLP\nduring adaptation and tries to explain an action by the underlying entities.\nThis helps understand actions by bridging the gap with complex activity\nsemantics, particularly when facing unfamiliar or unseen categories. ALT\ndemonstrates competitive performance while maintaining remarkably low\ncomputational costs.", + "This helps understand actions by bridging the gap with complex activity\nsemantics, particularly when facing unfamiliar or unseen categories. ALT\ndemonstrates competitive performance while maintaining remarkably low\ncomputational costs. In fully supervised experiments, it achieves 88.1% top-1\naccuracy on Kinetics-400 with only 4947 GFLOPs. Moreover, ALT outperforms the\nprevious state-of-the-art methods in both zero-shot and few-shot experiments,\nemphasizing its superior generalizability across various learning scenarios.", + "The remarkable efficacy of text-to-image diffusion models has motivated\nextensive exploration of their potential application in video domains.\nZero-shot methods seek to extend image diffusion models to videos without\nnecessitating model training. Recent methods mainly focus on incorporating\ninter-frame correspondence into attention mechanisms. However, the soft\nconstraint imposed on determining where to attend to valid features can\nsometimes be insufficient, resulting in temporal inconsistency. 
In this paper,\nwe introduce FRESCO, which combines intra-frame correspondence with inter-frame\ncorrespondence to establish a more robust spatial-temporal constraint. This\nenhancement ensures a more consistent transformation of semantically similar\ncontent across frames. Beyond mere attention guidance, our approach involves an\nexplicit update of features to achieve high spatial-temporal consistency with\nthe input video, significantly improving the visual coherence of the resulting\ntranslated videos. Extensive experiments demonstrate the effectiveness of our\nproposed framework in producing high-quality, coherent videos, marking a\nnotable improvement over existing zero-shot methods.", + "Single-pixel imaging (SPI) is a promising computational imaging technique\nthat produces an image by solving an ill-posed reconstruction problem from a\nfew measurements captured by a single-pixel detector. Deep learning has\nachieved impressive success on SPI reconstruction. However, the poor\nreconstruction performance and impractical imaging models of previous\napproaches limit its real-world applications. In this paper, we propose a deep\nunfolding network with a hybrid-attention Transformer on the Kronecker SPI\nmodel, dubbed HATNet, to improve the imaging quality of real SPI cameras.\nSpecifically, we unfold the computation graph of the iterative\nshrinkage-thresholding algorithm (ISTA) into two alternating modules: efficient\ntensor gradient descent and hybrid-attention multiscale denoising. By virtue of\nKronecker SPI, the gradient descent module can avoid the high computational\noverheads rooted in previous gradient descent modules based on vectorized SPI.\nThe denoising module is an encoder-decoder architecture powered by dual-scale\nspatial attention for high- and low-frequency aggregation and channel attention\nfor global information recalibration. Moreover, we build an SPI prototype to\nverify the effectiveness of the proposed method.", + "The denoising module is an encoder-decoder architecture\npowered by dual-scale spatial attention for high- and low-frequency aggregation\nand channel attention for global information recalibration. Moreover, we build\nan SPI prototype to verify the effectiveness of the proposed method. Extensive\nexperiments on synthetic and real data demonstrate that our method achieves\nstate-of-the-art performance. The source code and pre-trained models are\navailable at https://github.com/Gang-Qu/HATNet-SPI.", + "Leveraging multi-view diffusion models as priors for 3D optimization has\nalleviated 3D consistency problems, e.g., the Janus face problem and the\ncontent drift problem, in zero-shot text-to-3D models. However, the 3D\ngeometric fidelity of the output remains an unresolved issue; although the\nrendered 2D views are realistic, the underlying geometry may contain errors\nsuch as unreasonable concavities. In this work, we propose CorrespondentDream,\nan effective method to leverage annotation-free, cross-view correspondences\nyielded from the diffusion U-Net to provide an additional 3D prior for the NeRF\noptimization process. We find that these correspondences are strongly\nconsistent with human perception, and by adopting them in our loss design, we\nare able to produce NeRF models with geometries that are more coherent with\ncommon sense, e.g., smoother object surfaces, yielding higher 3D fidelity.
We\ndemonstrate the efficacy of our approach through various comparative\nqualitative results and a solid user study.", + "We present SplattingAvatar, a hybrid 3D representation of photorealistic\nhuman avatars with Gaussian Splatting embedded on a triangle mesh, which\nrenders over 300 FPS on a modern GPU and 30 FPS on a mobile device. We\ndisentangle the motion and appearance of a virtual human with explicit mesh\ngeometry and implicit appearance modeling with Gaussian Splatting. The\nGaussians are defined by barycentric coordinates and displacement on a triangle\nmesh as Phong surfaces. We extend lifted optimization to simultaneously\noptimize the parameters of the Gaussians while walking on the triangle mesh.\nSplattingAvatar is a hybrid representation of virtual humans where the mesh\nrepresents low-frequency motion and surface deformation, while the Gaussians\ntake over the high-frequency geometry and detailed appearance. Unlike existing\ndeformation methods that rely on an MLP-based linear blend skinning (LBS) field\nfor motion, we control the rotation and translation of the Gaussians directly\nby mesh, which empowers its compatibility with various animation techniques,\ne.g., skeletal animation, blend shapes, and mesh editing.", + "Unlike existing\ndeformation methods that rely on an MLP-based linear blend skinning (LBS) field\nfor motion, we control the rotation and translation of the Gaussians directly\nby mesh, which empowers its compatibility with various animation techniques,\ne.g., skeletal animation, blend shapes, and mesh editing. Trainable from\nmonocular videos for both full-body and head avatars, SplattingAvatar shows\nstate-of-the-art rendering quality across multiple datasets.", + "Reconstructing an avatar from a portrait image has many applications in\nmultimedia, but remains a challenging research problem. Extracting reflectance\nmaps and geometry from one image is ill-posed: recovering geometry is a\none-to-many mapping problem and reflectance and light are difficult to\ndisentangle. Accurate geometry and reflectance can be captured under the\ncontrolled conditions of a light stage, but it is costly to acquire large\ndatasets in this fashion. Moreover, training solely with this type of data\nleads to poor generalization with in-the-wild images. This motivates the\nintroduction of MoSAR, a method for 3D avatar generation from monocular images.\nWe propose a semi-supervised training scheme that improves generalization by\nlearning from both light stage and in-the-wild datasets. This is achieved using\na novel differentiable shading formulation. We show that our approach\neffectively disentangles the intrinsic face parameters, producing relightable\navatars. As a result, MoSAR estimates a richer set of skin reflectance maps,\nand generates more realistic avatars than existing state-of-the-art methods.", + "We show that our approach\neffectively disentangles the intrinsic face parameters, producing relightable\navatars. As a result, MoSAR estimates a richer set of skin reflectance maps,\nand generates more realistic avatars than existing state-of-the-art methods. We\nalso introduce a new dataset, named FFHQ-UV-Intrinsics, the first public\ndataset providing intrinsic face attributes at scale (diffuse, specular,\nambient occlusion and translucency maps) for a total of 10k subjects. 
The\nproject website and the dataset are available on the following link:\nhttps://ubisoft-laforge.github.io/character/mosar/", + "In the realm of geospatial analysis, the diversity of remote sensors,\nencompassing both optical and microwave technologies, offers a wealth of\ndistinct observational capabilities. Recognizing this, we present msGFM, a\nmultisensor geospatial foundation model that effectively unifies data from four\nkey sensor modalities. This integration spans an expansive dataset of two\nmillion multisensor images. msGFM is uniquely adept at handling both paired and\nunpaired sensor data. For data originating from identical geolocations, our\nmodel employs an innovative cross-sensor pretraining approach in masked image\nmodeling, enabling the synthesis of joint representations from diverse sensors.\nmsGFM, incorporating four remote sensors, upholds strong performance, forming a\ncomprehensive model adaptable to various sensor types. msGFM has demonstrated\nenhanced proficiency in a range of both single-sensor and multisensor\ndownstream tasks. These include scene classification, segmentation, cloud\nremoval, and pan-sharpening.", + "msGFM has demonstrated\nenhanced proficiency in a range of both single-sensor and multisensor\ndownstream tasks. These include scene classification, segmentation, cloud\nremoval, and pan-sharpening. A key discovery of our research is that\nrepresentations derived from natural images are not always compatible with the\ndistinct characteristics of geospatial remote sensors, underscoring the\nlimitations of existing representations in this field. Our work can serve as a\nguide for developing multisensor geospatial pretraining models, paving the way\nfor more advanced geospatial capabilities.", + "We study visually grounded VideoQA in response to the emerging trends of\nutilizing pretraining techniques for video-language understanding.\nSpecifically, by forcing vision-language models (VLMs) to answer questions and\nsimultaneously provide visual evidence, we seek to ascertain the extent to\nwhich the predictions of such techniques are genuinely anchored in relevant\nvideo content, versus spurious correlations from language or irrelevant visual\ncontext. Towards this, we construct NExT-GQA -- an extension of NExT-QA with\n10.5$K$ temporal grounding (or location) labels tied to the original QA pairs.\nWith NExT-GQA, we scrutinize a series of state-of-the-art VLMs. Through\npost-hoc attention analysis, we find that these models are extremely weak in\nsubstantiating the answers despite their strong QA performance. This exposes\nthe limitation of current VLMs in making reliable predictions. As a remedy, we\nfurther explore and propose a grounded-QA method via Gaussian mask optimization\nand cross-modal learning. Experiments with different backbones demonstrate that\nthis grounding mechanism improves both grounding and QA.", + "This exposes\nthe limitation of current VLMs in making reliable predictions. As a remedy, we\nfurther explore and propose a grounded-QA method via Gaussian mask optimization\nand cross-modal learning. Experiments with different backbones demonstrate that\nthis grounding mechanism improves both grounding and QA. With these efforts, we\naim to push towards trustworthy VLMs in VQA systems. 
Our dataset and code are\navailable at https://github.com/doc-doc/NExT-GQA.", + "Detecting edges in images suffers from the problems of (P1) heavy imbalance\nbetween positive and negative classes as well as (P2) label uncertainty owing\nto disagreement between different annotators. Existing solutions address P1\nusing class-balanced cross-entropy loss and dice loss and P2 by only predicting\nedges agreed upon by most annotators. In this paper, we propose RankED, a\nunified ranking-based approach that addresses both the imbalance problem (P1)\nand the uncertainty problem (P2). RankED tackles these two problems with two\ncomponents: one component ranks positive pixels over negative pixels, and the\nsecond promotes high-confidence edge pixels to have more label certainty. We\nshow that RankED outperforms previous studies and sets a new state of the art\non the NYUD-v2, BSDS500, and Multi-cue datasets. Code is available at\nhttps://ranked-cvpr24.github.io.", + "We present DiffHuman, a probabilistic method for photorealistic 3D human\nreconstruction from a single RGB image. Despite the ill-posed nature of this\nproblem, most methods are deterministic and output a single solution, often\nresulting in a lack of geometric detail and blurriness in unseen or uncertain\nregions. In contrast, DiffHuman predicts a probability distribution over 3D\nreconstructions conditioned on an input 2D image, which allows us to sample\nmultiple detailed 3D avatars that are consistent with the image. DiffHuman is\nimplemented as a conditional diffusion model that denoises pixel-aligned 2D\nobservations of an underlying 3D shape representation. During inference, we may\nsample 3D avatars by iteratively denoising 2D renders of the predicted 3D\nrepresentation. Furthermore, we introduce a generator neural network that\napproximates rendering with considerably reduced runtime (55x speed up),\nresulting in a novel dual-branch diffusion framework. Our experiments show that\nDiffHuman can produce diverse and detailed reconstructions for the parts of the\nperson that are unseen or uncertain in the input image, while remaining\ncompetitive with the state-of-the-art when reconstructing visible surfaces.", + "Owing to their powerful generative priors, pre-trained text-to-image (T2I)\ndiffusion models have become increasingly popular in solving the real-world\nimage super-resolution problem. However, as a consequence of the heavy quality\ndegradation of input low-resolution (LR) images, the destruction of local\nstructures can lead to ambiguous image semantics. As a result, the content of\nthe reproduced high-resolution image may contain semantic errors, deteriorating\nthe super-resolution performance. To address this issue, we present a\nsemantics-aware approach to better preserve the semantic fidelity of generative\nreal-world image super-resolution. First, we train a degradation-aware prompt\nextractor, which can generate accurate soft and hard semantic prompts even\nunder strong degradation. The hard semantic prompts refer to the image tags,\naiming to enhance the local perception ability of the T2I model, while the soft\nsemantic prompts compensate for the hard ones to provide additional\nrepresentation information.
These semantic prompts encourage the T2I model to\ngenerate detailed and semantically accurate results.", + "The hard semantic prompts refer to the image tags,\naiming to enhance the local perception ability of the T2I model, while the soft\nsemantic prompts compensate for the hard ones to provide additional\nrepresentation information. These semantic prompts encourage the T2I model to\ngenerate detailed and semantically accurate results. Furthermore, during the\ninference process, we integrate the LR images into the initial sampling noise\nto mitigate the diffusion model's tendency to generate excessive random\ndetails. The experiments show that our method can reproduce more realistic\nimage details and hold better the semantics. The source code of our method can\nbe found at https://github.com/cswry/SeeSR.", + "Revolutionizing the field of deep learning, Transformer-based models have\nachieved remarkable performance in many tasks. Recent research has recognized\nthese models are robust to shuffling but are limited to inter-token permutation\nin the forward propagation. In this work, we propose our definition of\npermutation equivariance, a broader concept covering both inter- and intra-\ntoken permutation in the forward and backward propagation of neural networks.\nWe rigorously proved that such permutation equivariance property can be\nsatisfied on most vanilla Transformer-based models with almost no adaptation.\nWe examine the property over a range of state-of-the-art models including ViT,\nBert, GPT, and others, with experimental validations. Further, as a\nproof-of-concept, we explore how real-world applications including\nprivacy-enhancing split learning, and model authorization, could exploit the\npermutation equivariance property, which implicates wider, intriguing\napplication scenarios.", + "Establishing an automatic evaluation metric that closely aligns with human\njudgments is essential for effectively developing image captioning models.\nRecent data-driven metrics have demonstrated a stronger correlation with human\njudgments than classic metrics such as CIDEr; however they lack sufficient\ncapabilities to handle hallucinations and generalize across diverse images and\ntexts partially because they compute scalar similarities merely using\nembeddings learned from tasks unrelated to image captioning evaluation. In this\nstudy, we propose Polos, a supervised automatic evaluation metric for image\ncaptioning models. Polos computes scores from multimodal inputs, using a\nparallel feature extraction mechanism that leverages embeddings trained through\nlarge-scale contrastive learning. To train Polos, we introduce Multimodal\nMetric Learning from Human Feedback (M$^2$LHF), a framework for developing\nmetrics based on human feedback. We constructed the Polaris dataset, which\ncomprises 131K human judgments from 550 evaluators, which is approximately ten\ntimes larger than standard datasets. Our approach achieved state-of-the-art\nperformance on Composite, Flickr8K-Expert, Flickr8K-CF, PASCAL-50S, FOIL, and\nthe Polaris dataset, thereby demonstrating its effectiveness and robustness.", + "We introduce the video detours problem for navigating instructional videos.\nGiven a source video and a natural language query asking to alter the how-to\nvideo's current path of execution in a certain way, the goal is to find a\nrelated ''detour video'' that satisfies the requested alteration. 
To address\nthis challenge, we propose VidDetours, a novel video-language approach that\nlearns to retrieve the targeted temporal segments from a large repository of\nhow-to's using video-and-text conditioned queries. Furthermore, we devise a\nlanguage-based pipeline that exploits how-to video narration text to create\nweakly supervised training data. We demonstrate our idea applied to the domain\nof how-to cooking videos, where a user can detour from their current recipe to\nfind steps with alternate ingredients, tools, and techniques. Validating on a\nground truth annotated dataset of 16K samples, we show our model's significant\nimprovements over the best available methods for video retrieval and question\nanswering, with recall rates exceeding the state of the art by 35%.", + "Many surface reconstruction methods incorporate normal integration, which is\na process to obtain a depth map from surface gradients. In this process, the\ninput may represent a surface with discontinuities, e.g., due to\nself-occlusion. To reconstruct an accurate depth map from the input normal map,\nhidden surface gradients arising from the jumps must be handled. To model\nthese jumps correctly, we design a novel discretization scheme for the domain\nof normal integration. Our key idea is to introduce auxiliary edges, which\nbridge between piecewise-smooth patches in the domain so that the magnitude of\nhidden jumps can be explicitly expressed. Using the auxiliary edges, we design\na novel algorithm to optimize the discontinuity and the depth map from the\ninput normal map. Our method optimizes discontinuities by using a combination\nof iterative re-weighted least squares and iterative filtering of the jump\nmagnitudes on auxiliary edges to provide strong sparsity regularization.\nCompared to previous discontinuity-preserving normal integration methods, which\nmodel the magnitudes of jumps only implicitly, our method reconstructs subtle\ndiscontinuities accurately thanks to our explicit representation of jumps\nallowing for strong sparsity regularization.", + "We present DrivingGaussian, an efficient and effective framework for\nrepresenting surrounding dynamic autonomous driving scenes. For complex scenes\nwith moving objects, we first sequentially and progressively model the static\nbackground of the entire scene with incremental static 3D Gaussians. We then\nleverage a composite dynamic Gaussian graph to handle multiple moving objects,\nindividually reconstructing each object and restoring their accurate positions\nand occlusion relationships within the scene. We further use a LiDAR prior for\nGaussian Splatting to reconstruct scenes with greater detail and to maintain\npanoramic consistency. DrivingGaussian outperforms existing methods in dynamic\ndriving scene reconstruction and enables photorealistic surround-view synthesis\nwith high fidelity and multi-camera consistency. Our project page is at:\nhttps://github.com/VDIGPKU/DrivingGaussian.", + "In this paper, we propose a novel concept of path consistency to learn robust\nobject matching without using manual object identity supervision. Our key idea\nis that, to track an object through frames, we can obtain multiple different\nassociation results from a model by varying the frames it can observe, i.e.,\nskipping frames in observation. As the differences in observations do not alter\nthe identities of objects, the obtained association results should be\nconsistent.
Based on this rationale, we generate multiple observation paths,\neach specifying a different set of frames to be skipped, and formulate the Path\nConsistency Loss that enforces the association results are consistent across\ndifferent observation paths. We use the proposed loss to train our object\nmatching model with only self-supervision. By extensive experiments on three\ntracking datasets (MOT17, PersonPath22, KITTI), we demonstrate that our method\noutperforms existing unsupervised methods with consistent margins on various\nevaluation metrics, and even achieves performance close to supervised methods.", + "Unsupervised learning of keypoints and landmarks has seen significant\nprogress with the help of modern neural network architectures, but performance\nis yet to match the supervised counterpart, making their practicability\nquestionable. We leverage the emergent knowledge within text-to-image diffusion\nmodels, towards more robust unsupervised keypoints. Our core idea is to find\ntext embeddings that would cause the generative model to consistently attend to\ncompact regions in images (i.e. keypoints). To do so, we simply optimize the\ntext embedding such that the cross-attention maps within the denoising network\nare localized as Gaussians with small standard deviations. We validate our\nperformance on multiple datasets: the CelebA, CUB-200-2011, Tai-Chi-HD,\nDeepFashion, and Human3.6m datasets. We achieve significantly improved\naccuracy, sometimes even outperforming supervised ones, particularly for data\nthat is non-aligned and less curated. Our code is publicly available and can be\nfound through our project page: https://ubc-vision.github.io/StableKeypoints/", + "Single-photon Light Detection and Ranging (LiDAR) systems are often equipped\nwith an array of detectors for improved spatial resolution and sensing speed.\nHowever, given a fixed amount of flux produced by the laser transmitter across\nthe scene, the per-pixel Signal-to-Noise Ratio (SNR) will decrease when more\npixels are packed in a unit space. This presents a fundamental trade-off\nbetween the spatial resolution of the sensor array and the SNR received at each\npixel. Theoretical characterization of this fundamental limit is explored. By\nderiving the photon arrival statistics and introducing a series of new\napproximation techniques, the Mean Squared Error (MSE) of the\nmaximum-likelihood estimator of the time delay is derived. The theoretical\npredictions align well with simulations and real data.", + "Cross-domain few-shot learning (CDFSL) aims to acquire knowledge from limited\ntraining data in the target domain by leveraging prior knowledge transferred\nfrom source domains with abundant training samples. CDFSL faces challenges in\ntransferring knowledge across dissimilar domains and fine-tuning models with\nlimited training data. To address these challenges, we initially extend the\nanalysis of loss landscapes from the parameter space to the representation\nspace, which allows us to simultaneously interpret the transferring and\nfine-tuning difficulties of CDFSL models. We observe that sharp minima in the\nloss landscapes of the representation space result in representations that are\nhard to transfer and fine-tune. Moreover, existing flatness-based methods have\nlimited generalization ability due to their short-range flatness. To enhance\nthe transferability and facilitate fine-tuning, we introduce a simple yet\neffective approach to achieve long-range flattening of the minima in the loss\nlandscape. 
This approach considers representations that are differently\nnormalized as minima in the loss landscape and flattens the high-loss region in\nthe middle by randomly sampling interpolated representations. We implement this\nmethod as a new normalization layer that replaces the original one in both CNNs\nand ViTs.", + "This approach considers representations that are differently\nnormalized as minima in the loss landscape and flattens the high-loss region in\nthe middle by randomly sampling interpolated representations. We implement this\nmethod as a new normalization layer that replaces the original one in both CNNs\nand ViTs. This layer is simple and lightweight, introducing only a minimal\nnumber of additional parameters. Experimental results on 8 datasets demonstrate\nthat our approach outperforms state-of-the-art methods in terms of average\naccuracy. Moreover, our method achieves performance improvements of up to 9\\%\ncompared to the current best approaches on individual datasets. Our code will\nbe released.", + "Improving the detection of distant 3D objects is an important yet challenging\ntask. For camera-based 3D perception, the annotation of 3D bounding boxes\nrelies heavily on LiDAR for accurate depth information. As such, the annotation\nrange is often limited due to the sparsity of LiDAR points on distant objects,\nwhich hampers the capability of existing detectors for long-range scenarios. We\naddress this challenge by considering only 2D box supervision for distant\nobjects since they are easy to annotate. We propose LR3D, a framework that\nlearns to recover the missing depth of distant objects. LR3D adopts an implicit\nprojection head to learn a mapping between 2D boxes and depth using the 3D\nsupervision on close objects. This mapping allows the depth estimation of\ndistant objects conditioned on their 2D boxes, making long-range 3D detection\nwith 2D supervision feasible. Experiments show that without distant 3D\nannotations, LR3D allows camera-based methods to detect distant objects (over\n200m) with comparable accuracy to full 3D supervision.", + "Experiments show that\nwithout distant 3D annotations, LR3D allows camera-based methods to detect\ndistant objects (over 200m) with comparable accuracy to full 3D supervision.\nOur framework is general and could benefit a wide range of 3D detection\nmethods.", + "Recovering degraded low-resolution text images is challenging, especially for\nChinese text images with complex strokes and severe degradation in real-world\nscenarios. Ensuring both text fidelity and style realism is crucial for\nhigh-quality text image super-resolution. Recently, diffusion models have\nachieved great success in natural image synthesis and restoration due to their\npowerful data distribution modeling abilities and data generation capabilities.\nIn this work, we propose an Image Diffusion Model (IDM) to restore text images\nwith realistic styles. Diffusion models are not only suitable for modeling\nrealistic image distributions but also appropriate for learning text\ndistributions. Since a text prior is important for guaranteeing the correctness\nof the restored text structure, as shown by existing works, we also propose a\nText Diffusion Model (TDM) for text recognition, which can guide the IDM to\ngenerate text images with correct structures.
We further propose a Mixture of Multi-modality\nmodule (MoM) to make these two diffusion models cooperate with each other in\nall the diffusion steps.", + "We further propose a Mixture of Multi-modality\nmodule (MoM) to make these two diffusion models cooperate with each other in\nall the diffusion steps. Extensive experiments on synthetic and real-world\ndatasets demonstrate that our Diffusion-based Blind Text Image Super-Resolution\n(DiffTSR) can restore text images with more accurate text structures as well as\nmore realistic appearances simultaneously.", + "Continual learning empowers models to adapt autonomously to the ever-changing\nenvironment or data streams without forgetting old knowledge. Prompt-based\napproaches are built on frozen pre-trained models to learn the task-specific\nprompts and classifiers efficiently. Existing prompt-based methods are\ninconsistent between training and testing, limiting their effectiveness. Two\ntypes of inconsistency are revealed. Test predictions are made from all\nclassifiers while training only focuses on the current task classifier without\nholistic alignment, leading to Classifier inconsistency. Prompt inconsistency\nindicates that the prompt selected during testing may not correspond to the one\nassociated with this task during training. In this paper, we propose a novel\nprompt-based method, Consistent Prompting (CPrompt), for more aligned training\nand testing. Specifically, all existing classifiers are exposed to prompt\ntraining, resulting in classifier consistency learning. In addition, prompt\nconsistency learning is proposed to enhance prediction robustness and boost\nprompt selection accuracy. Our Consistent Prompting surpasses its prompt-based\ncounterparts and achieves state-of-the-art performance on multiple continual\nlearning benchmarks. Detailed analysis shows that improvements come from more\nconsistent training and testing.", + "In the context of autonomous driving, the significance of effective feature\nlearning is widely acknowledged. While conventional 3D self-supervised\npre-training methods have shown widespread success, most methods follow the\nideas originally designed for 2D images. In this paper, we present UniPAD, a\nnovel self-supervised learning paradigm applying 3D volumetric differentiable\nrendering. UniPAD implicitly encodes 3D space, facilitating the reconstruction\nof continuous 3D shape structures and the intricate appearance characteristics\nof their 2D projections. The flexibility of our method enables seamless\nintegration into both 2D and 3D frameworks, enabling a more holistic\ncomprehension of the scenes. We manifest the feasibility and effectiveness of\nUniPAD by conducting extensive experiments on various downstream 3D tasks. Our\nmethod significantly improves lidar-, camera-, and lidar-camera-based baseline\nby 9.1, 7.7, and 6.9 NDS, respectively.", + "We manifest the feasibility and effectiveness of\nUniPAD by conducting extensive experiments on various downstream 3D tasks. Our\nmethod significantly improves lidar-, camera-, and lidar-camera-based baseline\nby 9.1, 7.7, and 6.9 NDS, respectively. Notably, our pre-training pipeline\nachieves 73.2 NDS for 3D object detection and 79.4 mIoU for 3D semantic\nsegmentation on the nuScenes validation set, achieving state-of-the-art results\nin comparison with previous methods. 
The code will be available at\nhttps://github.com/Nightmare-n/UniPAD.", + "Generative Adversarial Networks (GANs) have been widely used to recover vivid\ntextures in image super-resolution (SR) tasks. In particular, a discriminator\nis utilized to enable the SR network to learn the distribution of real-world\nhigh-quality images in an adversarial training manner. However, this\ndistribution learning is overly coarse-grained, making it susceptible to\nvirtual textures and causing counter-intuitive generation results. To mitigate\nthis, we propose the simple and effective Semantic-aware Discriminator (denoted\nas SeD), which encourages the SR network to learn the fine-grained\ndistributions by introducing the semantics of images as a condition.\nConcretely, we aim to excavate the semantics of images from a well-trained\nsemantic extractor. Under different semantics, the discriminator is able to\ndistinguish real from fake images individually and adaptively, which guides the\nSR network to learn more fine-grained, semantic-aware textures. To obtain\naccurate and abundant semantics, we take full advantage of recently popular\npretrained vision models (PVMs) with extensive datasets, and then incorporate\ntheir semantic features into the discriminator through a well-designed spatial\ncross-attention module.", + "To obtain accurate and abundant\nsemantics, we take full advantage of recently popular pretrained vision models\n(PVMs) with extensive datasets, and then incorporate their semantic features\ninto the discriminator through a well-designed spatial cross-attention module.\nIn this way, our proposed semantic-aware discriminator empowers the SR network\nto produce more photo-realistic and pleasing images. Extensive experiments on\ntwo typical tasks, i.e., SR and Real SR, have demonstrated the effectiveness of\nour proposed method.", + "While vision-language models (VLMs) have achieved remarkable performance\nimprovements recently, there is growing evidence that these models also possess\nharmful biases with respect to social attributes such as gender and race. Prior\nstudies have primarily focused on probing such bias attributes individually\nwhile ignoring biases associated with intersections between social attributes.\nThis could be due to the difficulty of collecting an exhaustive set of\nimage-text pairs for various combinations of social attributes. To address this\nchallenge, we employ text-to-image diffusion models to produce counterfactual\nexamples for probing intersectional social biases at scale. Our approach\nutilizes Stable Diffusion with cross-attention control to produce sets of\ncounterfactual image-text pairs that are highly similar in their depiction of a\nsubject (e.g., a given occupation) while differing only in their depiction of\nintersectional social attributes (e.g., race & gender). Through our\nover-generate-then-filter methodology, we produce SocialCounterfactuals, a\nhigh-quality dataset containing 171k image-text pairs for probing\nintersectional biases related to gender, race, and physical characteristics.
We\nconduct extensive experiments to demonstrate the usefulness of our generated\ndataset for probing and mitigating intersectional social biases in\nstate-of-the-art VLMs.", + "As with many machine learning problems, the progress of image generation\nmethods hinges on good evaluation metrics. One of the most popular is the\nFrechet Inception Distance (FID). FID estimates the distance between a\ndistribution of Inception-v3 features of real images, and those of images\ngenerated by the algorithm. We highlight important drawbacks of FID:\nInception's poor representation of the rich and varied content generated by\nmodern text-to-image models, incorrect normality assumptions, and poor sample\ncomplexity. We call for a reevaluation of FID's use as the primary quality\nmetric for generated images. We empirically demonstrate that FID contradicts\nhuman raters, it does not reflect gradual improvement of iterative\ntext-to-image models, it does not capture distortion levels, and that it\nproduces inconsistent results when varying the sample size. We also propose an\nalternative new metric, CMMD, based on richer CLIP embeddings and the maximum\nmean discrepancy distance with the Gaussian RBF kernel. It is an unbiased\nestimator that does not make any assumptions on the probability distribution of\nthe embeddings and is sample efficient.", + "We also propose an\nalternative new metric, CMMD, based on richer CLIP embeddings and the maximum\nmean discrepancy distance with the Gaussian RBF kernel. It is an unbiased\nestimator that does not make any assumptions on the probability distribution of\nthe embeddings and is sample efficient. Through extensive experiments and\nanalysis, we demonstrate that FID-based evaluations of text-to-image models may\nbe unreliable, and that CMMD offers a more robust and reliable assessment of\nimage quality.", + "Joint camera pose and dense geometry estimation from a set of images or a\nmonocular video remains a challenging problem due to its computational\ncomplexity and inherent visual ambiguities. Most dense incremental\nreconstruction systems operate directly on image pixels and solve for their 3D\npositions using multi-view geometry cues. Such pixel-level approaches suffer\nfrom ambiguities or violations of multi-view consistency (e.g. caused by\ntextureless or specular surfaces).\n We address this issue with a new image representation which we call a\nSuperPrimitive. SuperPrimitives are obtained by splitting images into\nsemantically correlated local regions and enhancing them with estimated surface\nnormal directions, both of which are predicted by state-of-the-art single image\nneural networks. This provides a local geometry estimate per SuperPrimitive,\nwhile their relative positions are adjusted based on multi-view observations.\n We demonstrate the versatility of our new representation by addressing three\n3D reconstruction tasks: depth completion, few-view structure from motion, and\nmonocular dense visual odometry.", + "While recent model-free Reinforcement Learning (RL) methods have demonstrated\nhuman-level effectiveness in gaming environments, their success in everyday\ntasks like visual navigation has been limited, particularly under significant\nappearance variations. This limitation arises from (i) poor sample efficiency\nand (ii) over-fitting to training scenarios. To address these challenges, we\npresent a world model that learns invariant features using (i) contrastive\nunsupervised learning and (ii) an intervention-invariant regularizer. 
Learning\nan explicit representation of the world dynamics i.e. a world model, improves\nsample efficiency while contrastive learning implicitly enforces learning of\ninvariant features, which improves generalization. However, the na\\\"ive\nintegration of contrastive loss to world models is not good enough, as\nworld-model-based RL methods independently optimize representation learning and\nagent policy. To overcome this issue, we propose an intervention-invariant\nregularizer in the form of an auxiliary task such as depth prediction, image\ndenoising, image segmentation, etc., that explicitly enforces invariance to\nstyle interventions.", + "To overcome this issue, we propose an intervention-invariant\nregularizer in the form of an auxiliary task such as depth prediction, image\ndenoising, image segmentation, etc., that explicitly enforces invariance to\nstyle interventions. Our method outperforms current state-of-the-art\nmodel-based and model-free RL methods and significantly improves on\nout-of-distribution point navigation tasks evaluated on the iGibson benchmark.\nWith only visual observations, we further demonstrate that our approach\noutperforms recent language-guided foundation models for point navigation,\nwhich is essential for deployment on robots with limited computation\ncapabilities. Finally, we demonstrate that our proposed model excels at the\nsim-to-real transfer of its perception module on the Gibson benchmark.", + "Images produced by text-to-image diffusion models might not always faithfully\nrepresent the semantic intent of the provided text prompt, where the model\nmight overlook or entirely fail to produce certain objects. Existing solutions\noften require customly tailored functions for each of these problems, leading\nto sub-optimal results, especially for complex prompts. Our work introduces a\nnovel perspective by tackling this challenge in a contrastive context. Our\napproach intuitively promotes the segregation of objects in attention maps\nwhile also maintaining that pairs of related attributes are kept close to each\nother. We conduct extensive experiments across a wide variety of scenarios,\neach involving unique combinations of objects, attributes, and scenes. These\nexperiments effectively showcase the versatility, efficiency, and flexibility\nof our method in working with both latent and pixel-based diffusion models,\nincluding Stable Diffusion and Imagen. Moreover, we publicly share our source\ncode to facilitate further research.", + "Self-supervised pre-training has been proved to be effective in learning\ntransferable representations that benefit various visual tasks. This paper asks\nthis question: can self-supervised pre-training learn general facial\nrepresentations for various facial analysis tasks? Recent efforts toward this\ngoal are limited to treating each face image as a whole, i.e., learning\nconsistent facial representations at the image-level, which overlooks the\nconsistency of local facial representations (i.e., facial regions like eyes,\nnose, etc). In this work, we make a first attempt to propose a novel\nself-supervised facial representation learning framework to learn consistent\nglobal and local facial representations, Facial Region Awareness (FRA).\nSpecifically, we explicitly enforce the consistency of facial regions by\nmatching the local facial representations across views, which are extracted\nwith learned heatmaps highlighting the facial regions. 
Inspired by the mask\nprediction in supervised semantic segmentation, we obtain the heatmaps via\ncosine similarity between the per-pixel projection of feature maps and facial\nmask embeddings computed from learnable positional embeddings, which leverage\nthe attention mechanism to globally look up the facial image for facial\nregions. To learn such heatmaps, we formulate the learning of facial mask\nembeddings as a deep clustering problem by assigning the pixel features from\nthe feature maps to them. The transfer learning results on facial\nclassification and regression tasks show that our FRA outperforms previous\npre-trained models; more importantly, using ResNet as the unified backbone for\nvarious tasks, our FRA achieves comparable or even better performance than SOTA\nmethods on facial analysis tasks.", + "In recent times, the generation of 3D assets from text prompts has shown\nimpressive results. Both 2D and 3D diffusion models can help generate decent 3D\nobjects based on prompts. 3D diffusion models have good 3D consistency, but\ntheir quality and generalization are limited as trainable 3D data is expensive\nand hard to obtain. 2D diffusion models enjoy strong generalization and\nfine-grained generation abilities, but their 3D consistency is hard to\nguarantee. This paper attempts to bridge the power of the two types of\ndiffusion models via the recent explicit and efficient 3D Gaussian splatting\nrepresentation. A fast 3D object generation framework, named GaussianDreamer,\nis proposed, where the 3D diffusion model provides priors for initialization\nand the 2D diffusion model enriches the geometry and appearance. Operations of\nnoisy point growing and color perturbation are introduced to enhance the\ninitialized Gaussians. Our GaussianDreamer can generate a high-quality 3D\ninstance or 3D avatar within 15 minutes on one GPU, much faster than previous\nmethods, while the generated instances can be directly rendered in real time.", + "Our GaussianDreamer can generate a high-quality 3D instance or 3D\navatar within 15 minutes on one GPU, much faster than previous methods, while\nthe generated instances can be directly rendered in real time. Demos and code\nare available at https://taoranyi.com/gaussiandreamer/.", + "Hallucination, a pervasive challenge for multi-modal large language models\n(MLLMs), has significantly impeded their use in real-world applications that\ndemand precise judgment. Existing methods mitigate this issue either by\ntraining with specially designed data or by performing inference with external\nknowledge from other sources, both incurring inevitable additional costs. In\nthis paper, we present OPERA, a novel MLLM decoding method grounded in an\nOver-trust Penalty and a Retrospection-Allocation strategy, serving as a nearly\nfree lunch to alleviate the hallucination issue without additional data,\nknowledge, or training. Our approach begins with an interesting observation:\nmost hallucinations are closely tied to the knowledge aggregation patterns\nmanifested in the self-attention matrix, i.e., MLLMs tend to generate new\ntokens by focusing on a few summary tokens, but not all the previous tokens.
Such a partial over-trust\ninclination results in the neglect of image tokens and leads to hallucinated\ndescriptions of the image content. Based on this observation, OPERA introduces\na penalty term on the model logits during the beam-search decoding to mitigate\nthe over-trust issue, along with a rollback strategy that retrospects the\npresence of summary tokens in the previously generated tokens and re-allocates\nthe token selection if necessary. With extensive experiments, OPERA shows\nsignificant hallucination-mitigating performance on different MLLMs and\nmetrics, proving its effectiveness and generality. Our code is available at:\nhttps://github.com/shikiw/OPERA.", + "Vision-language navigation (VLN) requires an agent to navigate through a 3D\nenvironment based on visual observations and natural language instructions. It\nis clear that the pivotal factor for successful navigation lies in\ncomprehensive scene understanding. Previous VLN agents employ monocular\nframeworks to extract 2D features of perspective views directly. Though\nstraightforward, they struggle to capture 3D geometry and semantics, leading\nto a partial and incomplete environment representation. To achieve a\ncomprehensive 3D representation with fine-grained details, we introduce a\nVolumetric Environment Representation (VER), which voxelizes the physical world\ninto structured 3D cells. For each cell, VER aggregates multi-view 2D features\ninto this unified 3D space via 2D-3D sampling. Through coarse-to-fine feature\nextraction and multi-task learning for VER, our agent predicts 3D occupancy, 3D\nroom layout, and 3D bounding boxes jointly. Based on online collected VERs, our\nagent performs volume state estimation and builds episodic memory for\npredicting the next step.", + "Based on online collected VERs, our\nagent performs volume state estimation and builds episodic memory for\npredicting the next step. Experimental results show that our environment\nrepresentations from multi-task learning lead to evident performance gains on\nVLN. Our model achieves state-of-the-art performance across VLN benchmarks\n(R2R, REVERIE, and R4R).", + "Utilizing pre-trained 2D large-scale generative models, recent works are\ncapable of generating high-quality novel views from a single in-the-wild image.\nHowever, due to the lack of information from multiple views, these works\nencounter difficulties in generating controllable novel views. In this paper,\nwe present DreamComposer, a flexible and scalable framework that can enhance\nexisting view-aware diffusion models by injecting multi-view conditions.\nSpecifically, DreamComposer first uses a view-aware 3D lifting module to obtain\n3D representations of an object from multiple views. Then, it renders the\nlatent features of the target view from the 3D representations with the\nmulti-view feature fusion module. Finally, the target view features extracted\nfrom multi-view inputs are injected into a pre-trained diffusion model.
Experiments\nshow that DreamComposer is compatible with state-of-the-art diffusion models\nfor zero-shot novel view synthesis, further enhancing them to generate\nhigh-fidelity novel view images with multi-view conditions, ready for\ncontrollable 3D object reconstruction and various other applications.", + "Human pose and shape (HPS) estimation with lensless imaging is not only\nbeneficial to privacy protection but can also be used in covert surveillance\nscenarios due to the small size and simple structure of the device. However,\nthis task presents significant challenges due to the inherent ambiguity of the\ncaptured measurements, and effective methods for directly estimating human pose\nand shape from lensless data are lacking. In this paper, we propose, to our\nknowledge, the first end-to-end framework to recover 3D human poses and shapes\nfrom lensless measurements. We specifically design a multi-scale lensless\nfeature decoder to decode the lensless measurements through the optically\nencoded mask for efficient feature extraction. We also propose a double-head\nauxiliary supervision mechanism to improve the estimation accuracy of human\nlimb ends. Besides, we establish a lensless imaging system and verify the\neffectiveness of our method on various datasets acquired by our lensless\nimaging system.", + "In human-centric content generation, pre-trained text-to-image models\nstruggle to produce the portrait images users want, namely images that retain\nthe identity of individuals while exhibiting diverse expressions. This paper\nintroduces our efforts towards personalized face generation. To this end, we\npropose a novel multi-modal face generation framework, capable of simultaneous\nidentity-expression control and more fine-grained expression synthesis. Our\nexpression control is sophisticated enough to be specialized by a fine-grained\nemotional vocabulary. We devise a novel diffusion model that can undertake the\ntask of simultaneous face swapping and reenactment. Due to the entanglement of\nidentity and expression, it is nontrivial to control them separately and\nprecisely in one framework, and this has therefore not been explored yet. To\novercome this, we propose several innovative designs in the conditional\ndiffusion model, including a balancing identity and expression encoder,\nimproved midpoint sampling, and explicit background conditioning. Extensive\nexperiments have demonstrated the controllability and scalability of the\nproposed framework, in comparison with state-of-the-art text-to-image, face\nswapping, and face reenactment methods.", + "Modern video generation models like Sora have achieved remarkable success in\nproducing high-quality videos. However, a significant limitation is their\ninability to offer interactive control to users, a feature that promises to\nopen up unprecedented applications and creativity. In this work, we introduce\nthe first solution to equip diffusion-based video generation models with\nspatio-temporal control. We present Peekaboo, a novel masked attention module,\nwhich seamlessly integrates with current video generation models, offering\ncontrol without the need for additional training or inference overhead. To\nfacilitate future research, we also introduce a comprehensive benchmark for\ninteractive video generation. This benchmark offers a standardized framework\nfor the community to assess the efficacy of emerging interactive video\ngeneration models.
Our extensive qualitative and quantitative assessments\nreveal that Peekaboo achieves up to a 3.8x improvement in mIoU over baseline\nmodels, all while maintaining the same latency. Code and benchmark are\navailable on the webpage.", + "Computer vision techniques play a central role in the perception stack of\nautonomous vehicles. Such methods are employed to perceive the vehicle\nsurroundings given sensor data. 3D LiDAR sensors are commonly used to collect\nsparse 3D point clouds from the scene. However, compared to human perception,\nsuch systems struggle to deduce the unseen parts of the scene given those\nsparse point clouds. In this matter, the scene completion task aims at\npredicting the gaps in the LiDAR measurements to achieve a more complete scene\nrepresentation. Given the promising results of recent diffusion models as\ngenerative models for images, we propose extending them to achieve scene\ncompletion from a single 3D LiDAR scan. Previous works used diffusion models\nover range images extracted from LiDAR data, directly applying image-based\ndiffusion methods. Distinctly, we propose to directly operate on the points,\nreformulating the noising and denoising diffusion process such that it can\nefficiently work at scene scale. Together with our approach, we propose a\nregularization loss to stabilize the noise predicted during the denoising\nprocess.", + "Distinctly, we propose to directly operate on the points,\nreformulating the noising and denoising diffusion process such that it can\nefficiently work at scene scale. Together with our approach, we propose a\nregularization loss to stabilize the noise predicted during the denoising\nprocess. Our experimental evaluation shows that our method can complete the\nscene given a single LiDAR scan as input, producing a scene with more details\ncompared to state-of-the-art scene completion methods. We believe that our\nproposed diffusion process formulation can support further research in\ndiffusion models applied to scene-scale point cloud data.", + "In this paper, we propose Image Downscaling Assessment by Rate-Distortion\n(IDA-RD), a novel measure to quantitatively evaluate image downscaling\nalgorithms. In contrast to image-based methods that measure the quality of\ndownscaled images, ours is process-based that draws ideas from rate-distortion\ntheory to measure the distortion incurred during downscaling. Our main idea is\nthat downscaling and super-resolution (SR) can be viewed as the encoding and\ndecoding processes in the rate-distortion model, respectively, and that a\ndownscaling algorithm that preserves more details in the resulting\nlow-resolution (LR) images should lead to less distorted high-resolution (HR)\nimages in SR. In other words, the distortion should increase as the downscaling\nalgorithm deteriorates. However, it is non-trivial to measure this distortion\nas it requires the SR algorithm to be blind and stochastic. Our key insight is\nthat such requirements can be met by recent SR algorithms based on deep\ngenerative models that can find all matching HR images for a given LR image on\ntheir learned image manifolds. Extensive experimental results show the\neffectiveness of our IDA-RD measure.", + "Backdoor attacks have been well-studied in visible light object detection\n(VLOD) in recent years. However, VLOD can not effectively work in dark and\ntemperature-sensitive scenarios. Instead, thermal infrared object detection\n(TIOD) is the most accessible and practical in such environments. 
In this\npaper, our team is the first to investigate the security vulnerabilities\nassociated with TIOD in the context of backdoor attacks, spanning both the\ndigital and physical realms. We introduce two novel types of backdoor attacks\non TIOD, each offering unique capabilities: Object-affecting Attack and\nRange-affecting Attack. We conduct a comprehensive analysis of key factors\ninfluencing trigger design, which include temperature, size, material, and\nconcealment. These factors, especially temperature, significantly impact the\nefficacy of backdoor attacks on TIOD. A thorough understanding of these factors\nwill serve as a foundation for designing physical triggers and temperature\ncontrolling experiments. Our study includes extensive experiments conducted in\nboth digital and physical environments.", + "These factors, especially temperature, significantly impact the\nefficacy of backdoor attacks on TIOD. A thorough understanding of these factors\nwill serve as a foundation for designing physical triggers and temperature\ncontrolling experiments. Our study includes extensive experiments conducted in\nboth digital and physical environments. In the digital realm, we evaluate our\napproach using benchmark datasets for TIOD, achieving an Attack Success Rate\n(ASR) of up to 98.21%. In the physical realm, we test our approach in two\nreal-world settings: a traffic intersection and a parking lot, using a thermal\ninfrared camera. Here, we attain an ASR of up to 98.38%.", + "Deep Neural Networks (DNNs) are powerful tools for various computer vision\ntasks, yet they often struggle with reliable uncertainty quantification - a\ncritical requirement for real-world applications. Bayesian Neural Networks\n(BNN) are equipped for uncertainty estimation but cannot scale to large DNNs\nthat are highly unstable to train. To address this challenge, we introduce the\nAdaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to\nseamlessly transform DNNs into BNNs in a post-hoc manner with minimal\ncomputational and training overheads. ABNN preserves the main predictive\nproperties of DNNs while enhancing their uncertainty quantification abilities\nthrough simple BNN adaptation layers (attached to normalization layers) and a\nfew fine-tuning steps on pre-trained models. We conduct extensive experiments\nacross multiple datasets for image classification and semantic segmentation\ntasks, and our results demonstrate that ABNN achieves state-of-the-art\nperformance without the computational budget typically associated with ensemble\nmethods.", + "The existing facial datasets, while having plentiful images at near frontal\nviews, lack images with extreme head poses, leading to the downgraded\nperformance of deep learning models when dealing with profile or pitched faces.\nThis work aims to address this gap by introducing a novel dataset named Extreme\nPose Face High-Quality Dataset (EFHQ), which includes a maximum of 450k\nhigh-quality images of faces at extreme poses. To produce such a massive\ndataset, we utilize a novel and meticulous dataset processing pipeline to\ncurate two publicly available datasets, VFHQ and CelebV-HQ, which contain many\nhigh-resolution face videos captured in various settings. Our dataset can\ncomplement existing datasets on various facial-related tasks, such as facial\nsynthesis with 2D/3D-aware GAN, diffusion-based text-to-image face generation,\nand face reenactment. 
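One ingredient of such a curation pipeline can be sketched very simply: a frame is kept only when an external head-pose estimator reports a sufficiently extreme yaw or pitch. The thresholds below are arbitrary and the yaw/pitch values are assumed to come from some head-pose model; the actual EFHQ pipeline involves considerably more filtering and quality checks than this.

```python
# Cartoon of pose-based frame filtering for an extreme-pose subset.
# Thresholds are illustrative; yaw/pitch are assumed to be precomputed externally.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Frame:
    video_id: str
    index: int
    yaw: float    # degrees, from an external head-pose estimator
    pitch: float  # degrees

def select_extreme_pose_frames(
    frames: Iterable[Frame],
    yaw_thresh: float = 60.0,
    pitch_thresh: float = 30.0,
) -> List[Frame]:
    """Keep frames whose estimated head pose is far from frontal."""
    return [
        f for f in frames
        if abs(f.yaw) >= yaw_thresh or abs(f.pitch) >= pitch_thresh
    ]

# Toy usage with hand-written pose estimates.
candidates = [
    Frame("vid_a", 0, yaw=5.0, pitch=2.0),     # near-frontal, dropped
    Frame("vid_a", 1, yaw=75.0, pitch=4.0),    # profile view, kept
    Frame("vid_b", 0, yaw=10.0, pitch=-40.0),  # strongly pitched, kept
]
print([(f.video_id, f.index) for f in select_extreme_pose_frames(candidates)])
# [('vid_a', 1), ('vid_b', 0)]
```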
Specifically, training with EFHQ helps models generalize\nwell across diverse poses, significantly improving performance in scenarios\ninvolving extreme views, confirmed by extensive experiments.", + "Specifically, training with EFHQ helps models generalize\nwell across diverse poses, significantly improving performance in scenarios\ninvolving extreme views, confirmed by extensive experiments. Additionally, we\nutilize EFHQ to define a challenging cross-view face verification benchmark, in\nwhich the performance of SOTA face recognition models drops 5-37% compared to\nfrontal-to-frontal scenarios, aiming to stimulate studies on face recognition\nunder severe pose conditions in the wild.", + "Current subject-driven image generation methods encounter significant\nchallenges in person-centric image generation. The reason is that they learn\nthe semantic scene and person generation by fine-tuning a common pre-trained\ndiffusion, which involves an irreconcilable training imbalance. Precisely, to\ngenerate realistic persons, they need to sufficiently tune the pre-trained\nmodel, which inevitably causes the model to forget the rich semantic scene\nprior and makes scene generation over-fit to the training data. Moreover, even\nwith sufficient fine-tuning, these methods can still not generate high-fidelity\npersons since joint learning of the scene and person generation also lead to\nquality compromise. In this paper, we propose Face-diffuser, an effective\ncollaborative generation pipeline to eliminate the above training imbalance and\nquality compromise. Specifically, we first develop two specialized pre-trained\ndiffusion models, i.e., Text-driven Diffusion Model (TDM) and Subject-augmented\nDiffusion Model (SDM), for scene and person generation, respectively. The\nsampling process is divided into three sequential stages, i.e., semantic scene\nconstruction, subject-scene fusion, and subject enhancement.", + "The\nsampling process is divided into three sequential stages, i.e., semantic scene\nconstruction, subject-scene fusion, and subject enhancement. The first and last\nstages are performed by TDM and SDM respectively. The subject-scene fusion\nstage, that is the collaboration achieved through a novel and highly effective\nmechanism, Saliency-adaptive Noise Fusion (SNF). Specifically, it is based on\nour key observation that there exists a robust link between classifier-free\nguidance responses and the saliency of generated images. In each time step, SNF\nleverages the unique strengths of each model and allows for the spatial\nblending of predicted noises from both models automatically in a saliency-aware\nmanner. Extensive experiments confirm the impressive effectiveness and\nrobustness of the Face-diffuser.", + "Recent advancements in large vision-language models enabled visual object\ndetection in open-vocabulary scenarios, where object classes are defined in\nfree-text formats during inference. In this paper, we aim to probe the\nstate-of-the-art methods for open-vocabulary object detection to determine to\nwhat extent they understand fine-grained properties of objects and their parts.\nTo this end, we introduce an evaluation protocol based on dynamic vocabulary\ngeneration to test whether models detect, discern, and assign the correct\nfine-grained description to objects in the presence of hard-negative classes.\nWe contribute with a benchmark suite of increasing difficulty and probing\ndifferent properties like color, pattern, and material. 
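The dynamic-vocabulary probing above boils down to ranking one correct fine-grained caption against hard negatives that differ in a single attribute. The sketch below builds such a vocabulary and checks whether a detector's scores place the positive first; the caption template, attribute pools, and the score stub are assumptions, not the benchmark's actual generation procedure.

```python
# Illustrative sketch of a dynamic-vocabulary probe with attribute hard negatives.
# Templates, attribute pools, and the fake detector scores are assumptions.
import random
from typing import Dict, List

ATTRIBUTE_POOLS: Dict[str, List[str]] = {
    "color": ["red", "blue", "green", "black"],
    "material": ["wooden", "metal", "plastic", "leather"],
}

def make_vocabulary(noun: str, color: str, material: str, n_negatives: int = 3) -> List[str]:
    """Return [positive, hard negatives...] where each negative swaps one attribute."""
    positive = f"a {color} {material} {noun}"
    negatives: List[str] = []
    while len(negatives) < n_negatives:
        attr = random.choice(list(ATTRIBUTE_POOLS))
        wrong = random.choice([v for v in ATTRIBUTE_POOLS[attr]
                               if v not in (color, material)])
        cand = (f"a {wrong} {material} {noun}" if attr == "color"
                else f"a {color} {wrong} {noun}")
        if cand != positive and cand not in negatives:
            negatives.append(cand)
    return [positive] + negatives

def is_correctly_discerned(scores: List[float]) -> bool:
    """The detection counts as fine-grained-correct if the positive (index 0) wins."""
    return scores.index(max(scores)) == 0

vocab = make_vocabulary("chair", color="red", material="wooden")
fake_scores = [0.71, 0.30, 0.65, 0.22]   # stand-in for per-caption detector scores
print(vocab)
print(is_correctly_discerned(fake_scores))  # True
```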
We further enhance our\ninvestigation by evaluating several state-of-the-art open-vocabulary object\ndetectors using the proposed protocol and find that most existing solutions,\nwhich shine in standard open-vocabulary benchmarks, struggle to accurately\ncapture and distinguish finer object details. We conclude the paper by\nhighlighting the limitations of current methodologies and exploring promising\nresearch directions to overcome the discovered drawbacks. Data and code are\navailable at https://lorebianchi98.github.io/FG-OVD/.", + "Weakly-supervised action segmentation is a task of learning to partition a\nlong video into several action segments, where training videos are only\naccompanied by transcripts (ordered list of actions). Most of existing methods\nneed to infer pseudo segmentation for training by serial alignment between all\nframes and the transcript, which is time-consuming and hard to be parallelized\nwhile training. In this work, we aim to escape from this inefficient alignment\nwith massive but redundant frames, and instead to directly localize a few\naction transitions for pseudo segmentation generation, where a transition\nrefers to the change from an action segment to its next adjacent one in the\ntranscript. As the true transitions are submerged in noisy boundaries due to\nintra-segment visual variation, we propose a novel Action-Transition-Aware\nBoundary Alignment (ATBA) framework to efficiently and effectively filter out\nnoisy boundaries and detect transitions. In addition, to boost the semantic\nlearning in the case that noise is inevitably present in the pseudo\nsegmentation, we also introduce video-level losses to utilize the trusted\nvideo-level supervision. Extensive experiments show the effectiveness of our\napproach on both performance and training speed.", + "The ability to learn from context with novel concepts, and deliver\nappropriate responses are essential in human conversations. Despite current\nMultimodal Large Language Models (MLLMs) and Large Language Models (LLMs) being\ntrained on mega-scale datasets, recognizing unseen images or understanding\nnovel concepts in a training-free manner remains a challenge. In-Context\nLearning (ICL) explores training-free few-shot learning, where models are\nencouraged to ``learn to learn\" from limited tasks and generalize to unseen\ntasks. In this work, we propose link-context learning (LCL), which emphasizes\n\"reasoning from cause and effect\" to augment the learning capabilities of\nMLLMs. LCL goes beyond traditional ICL by explicitly strengthening the causal\nrelationship between the support set and the query set. By providing\ndemonstrations with causal links, LCL guides the model to discern not only the\nanalogy but also the underlying causal associations between data points, which\nempowers MLLMs to recognize unseen images and understand novel concepts more\neffectively.", + "By providing\ndemonstrations with causal links, LCL guides the model to discern not only the\nanalogy but also the underlying causal associations between data points, which\nempowers MLLMs to recognize unseen images and understand novel concepts more\neffectively. To facilitate the evaluation of this novel approach, we introduce\nthe ISEKAI dataset, comprising exclusively of unseen generated image-label\npairs designed for link-context learning. Extensive experiments show that our\nLCL-MLLM exhibits strong link-context learning capabilities to novel concepts\nover vanilla MLLMs. 
Code and data will be released at\nhttps://github.com/isekai-portal/Link-Context-Learning.", + "Large language models have achieved great success in recent years, so as\ntheir variants in vision. Existing vision-language models can describe images\nin natural languages, answer visual-related questions, or perform complex\nreasoning about the image. However, it is yet unclear how localization tasks,\nsuch as word grounding or referring localization, can be performed using large\nlanguage models. In this work, we aim to develop a vision-language model that\ncan take locations, for example, a set of points or boxes, as either inputs or\noutputs. When taking locations as inputs, the model performs\nlocation-conditioned captioning, which generates captions for the indicated\nobject or region. When generating locations as outputs, our model regresses\npixel coordinates for each output word generated by the language model, and\nthus performs dense word grounding. Our model is pre-trained on the Localized\nNarrative dataset, which contains pixel-word-aligned captioning from human\nattention. We show our model can be applied to various location-aware\nvision-language tasks, including referring localization, location-conditioned\ncaptioning, and dense object captioning, archiving state-of-the-art performance\non RefCOCO and Visual Genome.", + "We show our model can be applied to various location-aware\nvision-language tasks, including referring localization, location-conditioned\ncaptioning, and dense object captioning, archiving state-of-the-art performance\non RefCOCO and Visual Genome. Project page: https://jerryxu.net/PixelLLM .", + "Extracting keypoint locations from input hand frames, known as 3D hand pose\nestimation, is a critical task in various human-computer interaction\napplications. Essentially, the 3D hand pose estimation can be regarded as a 3D\npoint subset generative problem conditioned on input frames. Thanks to the\nrecent significant progress on diffusion-based generative models, hand pose\nestimation can also benefit from the diffusion model to estimate keypoint\nlocations with high quality. However, directly deploying the existing diffusion\nmodels to solve hand pose estimation is non-trivial, since they cannot achieve\nthe complex permutation mapping and precise localization. Based on this\nmotivation, this paper proposes HandDiff, a diffusion-based hand pose\nestimation model that iteratively denoises accurate hand pose conditioned on\nhand-shaped image-point clouds. In order to recover keypoint permutation and\naccurate location, we further introduce joint-wise condition and local detail\ncondition. Experimental results demonstrate that the proposed HandDiff\nsignificantly outperforms the existing approaches on four challenging hand pose\nbenchmark datasets. Codes and pre-trained models are publicly available at\nhttps://github.com/cwc1260/HandDiff.", + "Recent advances in instruction tuning have led to the development of\nState-of-the-Art Large Multimodal Models (LMMs). Given the novelty of these\nmodels, the impact of visual adversarial attacks on LMMs has not been\nthoroughly examined. We conduct a comprehensive study of the robustness of\nvarious LMMs against different adversarial attacks, evaluated across tasks\nincluding image classification, image captioning, and Visual Question Answer\n(VQA). We find that in general LMMs are not robust to visual adversarial\ninputs. 
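The attacks referred to above are standard visual adversarial perturbations; as a reference point, the snippet below crafts a one-step FGSM perturbation against a small surrogate classifier. It is a generic illustration of the kind of input being studied, not the specific attacks, budgets, or models evaluated in the paper.

```python
# Plain FGSM on a surrogate classifier, as a generic example of the visual
# adversarial inputs that such robustness studies feed to multimodal models.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """One-step sign-gradient perturbation, clipped back to the valid pixel range."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Toy usage with a tiny stand-in classifier.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
y = torch.tensor([3])
x_adv = fgsm_attack(surrogate, x, y)
print((x_adv - x).abs().max().item() <= 8 / 255 + 1e-6)  # True
```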
However, our findings suggest that context provided to the model via\nprompts, such as questions in a QA pair helps to mitigate the effects of visual\nadversarial inputs. Notably, the LMMs evaluated demonstrated remarkable\nresilience to such attacks on the ScienceQA task with only an 8.10% drop in\nperformance compared to their visual counterparts which dropped 99.73%. We also\npropose a new approach to real-world image classification which we term query\ndecomposition. By incorporating existence queries into our input prompt we\nobserve diminished attack effectiveness and improvements in image\nclassification accuracy.", + "We also\npropose a new approach to real-world image classification which we term query\ndecomposition. By incorporating existence queries into our input prompt we\nobserve diminished attack effectiveness and improvements in image\nclassification accuracy. This research highlights a previously under-explored\nfacet of LMM robustness and sets the stage for future work aimed at\nstrengthening the resilience of multimodal systems in adversarial environments.", + "We propose a novel self-supervised embedding to learn how actions sound from\nnarrated in-the-wild egocentric videos. Whereas existing methods rely on\ncurated data with known audio-visual correspondence, our multimodal\ncontrastive-consensus coding (MC3) embedding reinforces the associations\nbetween audio, language, and vision when all modality pairs agree, while\ndiminishing those associations when any one pair does not. We show our approach\ncan successfully discover how the long tail of human actions sound from\negocentric video, outperforming an array of recent multimodal embedding\ntechniques on two datasets (Ego4D and EPIC-Sounds) and multiple cross-modal\ntasks.", + "Semantic scene completion, also known as semantic occupancy prediction, can\nprovide dense geometric and semantic information for autonomous vehicles, which\nattracts the increasing attention of both academia and industry. Unfortunately,\nexisting methods usually formulate this task as a voxel-wise classification\nproblem and treat each voxel equally in 3D space during training. As the hard\nvoxels have not been paid enough attention, the performance in some challenging\nregions is limited. The 3D dense space typically contains a large number of\nempty voxels, which are easy to learn but require amounts of computation due to\nhandling all the voxels uniformly for the existing models. Furthermore, the\nvoxels in the boundary region are more challenging to differentiate than those\nin the interior. In this paper, we propose HASSC approach to train the semantic\nscene completion model with hardness-aware design. The global hardness from the\nnetwork optimization process is defined for dynamical hard voxel selection.\nThen, the local hardness with geometric anisotropy is adopted for voxel-wise\nrefinement. Besides, self-distillation strategy is introduced to make training\nprocess stable and consistent.", + "The global hardness from the\nnetwork optimization process is defined for dynamical hard voxel selection.\nThen, the local hardness with geometric anisotropy is adopted for voxel-wise\nrefinement. Besides, self-distillation strategy is introduced to make training\nprocess stable and consistent. Extensive experiments show that our HASSC scheme\ncan effectively promote the accuracy of the baseline model without incurring\nthe extra inference cost. 
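The hard-voxel selection idea can be illustrated with a loss-ranking sketch: voxels with the largest per-voxel loss are treated as hard and upweighted during training. The actual global/local hardness definitions and the self-distillation scheme are richer than this minimal version, whose ratio and weight values are arbitrary.

```python
# Minimal illustration of hardness-aware voxel weighting: voxels with the highest
# per-voxel loss are treated as "hard" and upweighted. HASSC's global/local
# hardness (with geometric anisotropy) is more involved than this sketch.
import torch
import torch.nn.functional as F

def hardness_weighted_loss(logits: torch.Tensor, target: torch.Tensor,
                           hard_ratio: float = 0.1, hard_weight: float = 2.0) -> torch.Tensor:
    """
    logits: (N, C) per-voxel class scores, target: (N,) labels.
    The top `hard_ratio` fraction of voxels by loss get `hard_weight`, the rest 1.0.
    """
    per_voxel = F.cross_entropy(logits, target, reduction="none")   # (N,)
    k = max(1, int(hard_ratio * per_voxel.numel()))
    hard_idx = per_voxel.topk(k).indices
    weights = torch.ones_like(per_voxel)
    weights[hard_idx] = hard_weight
    return (weights * per_voxel).mean()

# Toy usage on 1000 random voxels with 20 semantic classes.
logits = torch.randn(1000, 20, requires_grad=True)
labels = torch.randint(0, 20, (1000,))
loss = hardness_weighted_loss(logits, labels)
loss.backward()
print(loss.item() > 0)  # True
```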
Source code is available at:\nhttps://github.com/songw-zju/HASSC.", + "Recent innovations on text-to-3D generation have featured Score Distillation\nSampling (SDS), which enables the zero-shot learning of implicit 3D models\n(NeRF) by directly distilling prior knowledge from 2D diffusion models.\nHowever, current SDS-based models still struggle with intricate text prompts\nand commonly result in distorted 3D models with unrealistic textures or\ncross-view inconsistency issues. In this work, we introduce a novel Visual\nPrompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the\nvisual appearance knowledge in 2D visual prompt to boost text-to-3D generation.\nInstead of solely supervising SDS with text prompt, VP3D first capitalizes on\n2D diffusion model to generate a high-quality image from input text, which\nsubsequently acts as visual prompt to strengthen SDS optimization with explicit\nvisual appearance. Meanwhile, we couple the SDS optimization with additional\ndifferentiable reward function that encourages rendering images of 3D models to\nbetter visually align with 2D visual prompt and semantically match with text\nprompt.", + "Meanwhile, we couple the SDS optimization with additional\ndifferentiable reward function that encourages rendering images of 3D models to\nbetter visually align with 2D visual prompt and semantically match with text\nprompt. Through extensive experiments, we show that the 2D Visual Prompt in our\nVP3D significantly eases the learning of visual appearance of 3D models and\nthus leads to higher visual fidelity with more detailed textures. It is also\nappealing in view that when replacing the self-generating visual prompt with a\ngiven reference image, VP3D is able to trigger a new task of stylized\ntext-to-3D generation. Our project page is available at\nhttps://vp3d-cvpr24.github.io.", + "Undoubtedly, high-fidelity 3D hair is crucial for achieving realism, artistic\nexpression, and immersion in computer graphics. While existing 3D hair modeling\nmethods have achieved impressive performance, the challenge of achieving\nhigh-quality hair reconstruction persists: they either require strict capture\nconditions, making practical applications difficult, or heavily rely on learned\nprior data, obscuring fine-grained details in images. To address these\nchallenges, we propose MonoHair,a generic framework to achieve high-fidelity\nhair reconstruction from a monocular video, without specific requirements for\nenvironments. Our approach bifurcates the hair modeling process into two main\nstages: precise exterior reconstruction and interior structure inference. The\nexterior is meticulously crafted using our Patch-based Multi-View Optimization\n(PMVO). This method strategically collects and integrates hair information from\nmultiple views, independent of prior data, to produce a high-fidelity exterior\n3D line map. This map not only captures intricate details but also facilitates\nthe inference of the hair's inner structure. For the interior, we employ a\ndata-driven, multi-view 3D hair reconstruction method.", + "This map not only captures intricate details but also facilitates\nthe inference of the hair's inner structure. For the interior, we employ a\ndata-driven, multi-view 3D hair reconstruction method. This method utilizes 2D\nstructural renderings derived from the reconstructed exterior, mirroring the\nsynthetic 2D inputs used during training. 
This alignment effectively bridges\nthe domain gap between our training data and real-world data, thereby enhancing\nthe accuracy and reliability of our interior structure inference. Lastly, we\ngenerate a strand model and resolve the directional ambiguity by our hair\ngrowth algorithm. Our experiments demonstrate that our method exhibits\nrobustness across diverse hairstyles and achieves state-of-the-art performance.\nFor more results, please refer to our project page\nhttps://keyuwu-cs.github.io/MonoHair/.", + "The absence of real targets to guide the model training is one of the main\nproblems with the makeup transfer task. Most existing methods tackle this\nproblem by synthesizing pseudo ground truths (PGTs). However, the generated\nPGTs are often sub-optimal and their imprecision will eventually lead to\nperformance degradation. To alleviate this issue, in this paper, we propose a\nnovel Content-Style Decoupled Makeup Transfer (CSD-MT) method, which works in a\npurely unsupervised manner and thus eliminates the negative effects of\ngenerating PGTs. Specifically, based on the frequency characteristics analysis,\nwe assume that the low-frequency (LF) component of a face image is more\nassociated with its makeup style information, while the high-frequency (HF)\ncomponent is more related to its content details. This assumption allows CSD-MT\nto decouple the content and makeup style information in each face image through\nthe frequency decomposition. After that, CSD-MT realizes makeup transfer by\nmaximizing the consistency of these two types of information between the\ntransferred result and input images, respectively. Two newly designed loss\nfunctions are also introduced to further improve the transfer performance.", + "After that, CSD-MT realizes makeup transfer by\nmaximizing the consistency of these two types of information between the\ntransferred result and input images, respectively. Two newly designed loss\nfunctions are also introduced to further improve the transfer performance.\nExtensive quantitative and qualitative analyses show the effectiveness of our\nCSD-MT method. Our code is available at\nhttps://github.com/Snowfallingplum/CSD-MT.", + "Large pre-trained Vision-Language Models (VLMs) like CLIP, despite having\nremarkable generalization ability, are highly vulnerable to adversarial\nexamples. This work studies the adversarial robustness of VLMs from the novel\nperspective of the text prompt instead of the extensively studied model weights\n(frozen in this work). We first show that the effectiveness of both adversarial\nattack and defense are sensitive to the used text prompt. Inspired by this, we\npropose a method to improve resilience to adversarial attacks by learning a\nrobust text prompt for VLMs. The proposed method, named Adversarial Prompt\nTuning (APT), is effective while being both computationally and data efficient.\nExtensive experiments are conducted across 15 datasets and 4 data sparsity\nschemes (from 1-shot to full training data settings) to show APT's superiority\nover hand-engineered prompts and other state-of-the-art adaption methods. 
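A schematic of prompt tuning under adversarial training, in the spirit of the approach above: only a few context vectors prepended to class tokens are learned, the encoders stay frozen, and each update is computed on images perturbed against the current prompt. The tiny stand-in encoders, the single-step attack, and all hyperparameters are assumptions, not APT's implementation.

```python
# Schematic adversarial prompt tuning: frozen encoders, learnable context vectors
# prepended to class tokens, trained on adversarially perturbed images.
# Encoders are tiny stand-ins; APT itself works on a frozen CLIP model.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
D, n_cls, n_ctx = 32, 5, 4

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, D))  # frozen stand-in
text_encoder = nn.Linear(D, D)                                          # frozen stand-in
class_tokens = torch.randn(n_cls, D)                                    # frozen "class name" embeddings
context = nn.Parameter(torch.zeros(n_ctx, D))                           # the only trainable part
for p in list(image_encoder.parameters()) + list(text_encoder.parameters()):
    p.requires_grad_(False)

def text_features() -> torch.Tensor:
    # Prepend the shared learnable context to each class token, encode, and pool.
    prompts = torch.cat([context.unsqueeze(0).expand(n_cls, -1, -1),
                         class_tokens.unsqueeze(1)], dim=1)             # (n_cls, n_ctx+1, D)
    return F.normalize(text_encoder(prompts).mean(dim=1), dim=-1)       # (n_cls, D)

def classify(images: torch.Tensor) -> torch.Tensor:
    img_feat = F.normalize(image_encoder(images), dim=-1)               # (B, D)
    return img_feat @ text_features().t()                               # cosine logits

opt = torch.optim.SGD([context], lr=0.1)
images, labels = torch.rand(8, 3, 16, 16), torch.randint(0, n_cls, (8,))

for _ in range(5):
    # Single-step attack against the current prompt (epsilon = 4/255, as in the abstract).
    adv = images.clone().requires_grad_(True)
    F.cross_entropy(classify(adv), labels).backward()
    adv = (adv + (4 / 255) * adv.grad.sign()).clamp(0, 1).detach()

    opt.zero_grad()
    loss = F.cross_entropy(classify(adv), labels)   # tune the prompt on adversarial images
    loss.backward()
    opt.step()
print(round(loss.item(), 3))
```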
APT\ndemonstrated excellent abilities in terms of the in-distribution performance\nand the generalization under input distribution shift and across datasets.", + "APT\ndemonstrated excellent abilities in terms of the in-distribution performance\nand the generalization under input distribution shift and across datasets.\nSurprisingly, by simply adding one learned word to the prompts, APT can\nsignificantly boost the accuracy and robustness (epsilon=4/255) over the\nhand-engineered prompts by +13% and +8.5% on average respectively. The\nimprovement further increases, in our most effective setting, to +26.4% for\naccuracy and +16.7% for robustness. Code is available at\nhttps://github.com/TreeLLi/APT.", + "Safety and robustness are crucial factors in developing trustworthy\nautonomous vehicles. One essential aspect of addressing these factors is to\nequip vehicles with the capability to predict future trajectories for all\nmoving objects in the surroundings and quantify prediction uncertainties. In\nthis paper, we propose the Sequential Neural Variational Agent (SeNeVA), a\ngenerative model that describes the distribution of future trajectories for a\nsingle moving object. Our approach can distinguish Out-of-Distribution data\nwhile quantifying uncertainty and achieving competitive performance compared to\nstate-of-the-art methods on the Argoverse 2 and INTERACTION datasets.\nSpecifically, a 0.446 meters minimum Final Displacement Error, a 0.203 meters\nminimum Average Displacement Error, and a 5.35% Miss Rate are achieved on the\nINTERACTION test set. Extensive qualitative and quantitative analysis is also\nprovided to evaluate the proposed model. Our open-source code is available at\nhttps://github.com/PurdueDigitalTwin/seneva.", + "Vision-Language Models (VLMs) are pretrained on large, diverse, and noisy\nweb-crawled datasets. This underscores the critical need for dataset pruning,\nas the quality of these datasets is strongly correlated with the performance of\nVLMs on downstream tasks. Using CLIPScore from a pretrained model to only train\nmodels using highly-aligned samples is one of the most successful methods for\npruning. We argue that this approach suffers from multiple limitations\nincluding: false positives and negatives due to CLIP's pretraining on noisy\nlabels. We propose a pruning signal, Sieve, that employs synthetic captions\ngenerated by image-captioning models pretrained on small, diverse, and\nwell-aligned image-text pairs to evaluate the alignment of noisy image-text\npairs. To bridge the gap between the limited diversity of generated captions\nand the high diversity of alternative text (alt-text), we estimate the semantic\ntextual similarity in the embedding space of a language model pretrained on\nunlabeled text corpus.", + "To bridge the gap between the limited diversity of generated captions\nand the high diversity of alternative text (alt-text), we estimate the semantic\ntextual similarity in the embedding space of a language model pretrained on\nunlabeled text corpus. Using DataComp, a multimodal dataset filtering\nbenchmark, when evaluating on 38 downstream tasks, our pruning approach,\nsurpasses CLIPScore by 2.6\\% and 1.7\\% on medium and large scale respectively.\nIn addition, on retrieval tasks, Sieve leads to a significant improvement of\n2.7% and 4.5% on medium and large scale respectively.", + "In this paper, we propose the first generalizable view synthesis approach\nthat specifically targets multi-view stereo-camera images. 
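Returning briefly to the pruning signal described above: its core is comparing a sample's alt-text with captions generated for the image inside a sentence-embedding space. The sketch below uses a toy bag-of-words embedding as a stand-in for that language model, and pooling the per-caption scores by max (rather than mean) is an assumption.

```python
# Sketch of a Sieve-style pruning signal: alt-text of a web image is compared, in an
# embedding space, with captions a captioning model produced for the image.
# `embed_text` is a toy stand-in for the pretrained language-model embedding.
import hashlib
import numpy as np
from typing import List

def embed_text(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in: hash each token into a fixed-size bag-of-words vector."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def sieve_score(alt_text: str, generated_captions: List[str]) -> float:
    """Alignment estimate: best similarity between alt-text and any generated caption."""
    a = embed_text(alt_text)
    return max(float(a @ embed_text(c)) for c in generated_captions)

# Toy usage: the first pair looks aligned, the second does not.
print(sieve_score("a brown dog running on the beach",
                  ["a dog runs along the beach", "a brown dog on sand"]))
print(sieve_score("best prices on car insurance",
                  ["a dog runs along the beach", "a brown dog on sand"]))
```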
Since recent stereo\nmatching has demonstrated accurate geometry prediction, we introduce stereo\nmatching into novel-view synthesis for high-quality geometry reconstruction. To\nthis end, this paper proposes a novel framework, dubbed StereoNeRF, which\nintegrates stereo matching into a NeRF-based generalizable view synthesis\napproach. StereoNeRF is equipped with three key components to effectively\nexploit stereo matching in novel-view synthesis: a stereo feature extractor, a\ndepth-guided plane-sweeping, and a stereo depth loss. Moreover, we propose the\nStereoNVS dataset, the first multi-view dataset of stereo-camera images,\nencompassing a wide variety of both real and synthetic scenes. Our experimental\nresults demonstrate that StereoNeRF surpasses previous approaches in\ngeneralizable view synthesis.", + "We introduce DyNFL, a novel neural field-based approach for high-fidelity\nre-simulation of LiDAR scans in dynamic driving scenes. DyNFL processes LiDAR\nmeasurements from dynamic environments, accompanied by bounding boxes of moving\nobjects, to construct an editable neural field. This field, comprising\nseparately reconstructed static background and dynamic objects, allows users to\nmodify viewpoints, adjust object positions, and seamlessly add or remove\nobjects in the re-simulated scene. A key innovation of our method is the neural\nfield composition technique, which effectively integrates reconstructed neural\nassets from various scenes through a ray drop test, accounting for occlusions\nand transparent surfaces. Our evaluation with both synthetic and real-world\nenvironments demonstrates that DyNFL substantially improves dynamic scene LiDAR\nsimulation, offering a combination of physical fidelity and flexible editing\ncapabilities.", + "Test-time adaptation (TTA) has emerged as a viable solution to adapt\npre-trained models to domain shifts using unlabeled test data. However, TTA\nfaces challenges of adaptation failures due to its reliance on blind adaptation\nto unknown test samples in dynamic scenarios. Traditional methods for\nout-of-distribution performance estimation are limited by unrealistic\nassumptions in the TTA context, such as requiring labeled data or re-training\nmodels. To address this issue, we propose AETTA, a label-free accuracy\nestimation algorithm for TTA. We propose the prediction disagreement as the\naccuracy estimate, calculated by comparing the target model prediction with\ndropout inferences. We then improve the prediction disagreement to extend the\napplicability of AETTA under adaptation failures. Our extensive evaluation with\nfour baselines and six TTA methods demonstrates that AETTA shows an average of\n19.8%p more accurate estimation compared with the baselines. We further\ndemonstrate the effectiveness of accuracy estimation with a model recovery case\nstudy, showcasing the practicality of our model recovery based on accuracy\nestimation. The source code is available at https://github.com/taeckyung/AETTA.", + "In this work, we present Digital Life Project, a framework utilizing language\nas the universal medium to build autonomous 3D characters, who are capable of\nengaging in social interactions and expressing with articulated body motions,\nthereby simulating life in a digital environment. 
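The label-free estimate described above can be approximated by a simple disagreement measure: compare the model's deterministic predictions with several dropout-perturbed forward passes and report the mean agreement. The exact AETTA formulation, including its extension for adaptation failures, is richer than this sketch.

```python
# Simplified illustration of accuracy estimation from prediction disagreement:
# the target model's predictions are compared against several dropout-perturbed
# forward passes, and mean agreement serves as the accuracy proxy.
import torch
import torch.nn as nn

def estimate_accuracy(model: nn.Module, x: torch.Tensor, n_dropout: int = 8) -> float:
    model.eval()
    with torch.no_grad():
        base_pred = model(x).argmax(dim=1)            # deterministic predictions
        for m in model.modules():                     # re-enable dropout only
            if isinstance(m, nn.Dropout):
                m.train()
        agree = torch.zeros(x.shape[0])
        for _ in range(n_dropout):
            agree += (model(x).argmax(dim=1) == base_pred).float()
    return (agree / n_dropout).mean().item()

# Toy usage: an MLP with dropout on random "test-time" data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 10))
x = torch.randn(32, 3, 8, 8)
print(round(estimate_accuracy(model, x), 3))          # data dependent, e.g. 0.6-0.9
```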
Our framework comprises two\nprimary components: 1) SocioMind: a meticulously crafted digital brain that\nmodels personalities with systematic few-shot exemplars, incorporates a\nreflection process based on psychology principles, and emulates autonomy by\ninitiating dialogue topics; 2) MoMat-MoGen: a text-driven motion synthesis\nparadigm for controlling the character's digital body. It integrates motion\nmatching, a proven industry technique to ensure motion quality, with\ncutting-edge advancements in motion generation for diversity. Extensive\nexperiments demonstrate that each module achieves state-of-the-art performance\nin its respective domain. Collectively, they enable virtual characters to\ninitiate and sustain dialogues autonomously, while evolving their\nsocio-psychological states. Concurrently, these characters can perform\ncontextually relevant bodily movements. Additionally, a motion captioning\nmodule further allows the virtual character to recognize and appropriately\nrespond to human players' actions.", + "Concurrently, these characters can perform\ncontextually relevant bodily movements. Additionally, a motion captioning\nmodule further allows the virtual character to recognize and appropriately\nrespond to human players' actions. Homepage: https://digital-life-project.com/", + "3D Object Detectors (3D-OD) are crucial for understanding the environment in\nmany robotic tasks, especially autonomous driving. Including 3D information via\nLidar sensors improves accuracy greatly. However, such detectors perform poorly\non domains they were not trained on, i.e. different locations, sensors,\nweather, etc., limiting their reliability in safety-critical applications.\nThere exist methods to adapt 3D-ODs to these domains; however, these methods\ntreat 3D-ODs as a black box, neglecting underlying architectural decisions and\nsource-domain training strategies. Instead, we dive deep into the details of\n3D-ODs, focusing our efforts on fundamental factors that influence robustness\nprior to domain adaptation.\n We systematically investigate four design choices (and the interplay between\nthem) often overlooked in 3D-OD robustness and domain adaptation: architecture,\nvoxel encoding, data augmentations, and anchor strategies. We assess their\nimpact on the robustness of nine state-of-the-art 3D-ODs across six benchmarks\nencompassing three types of domain gaps - sensor type, weather, and location.", + "We assess their\nimpact on the robustness of nine state-of-the-art 3D-ODs across six benchmarks\nencompassing three types of domain gaps - sensor type, weather, and location.\n Our main findings are: (1) transformer backbones with local point features\nare more robust than 3D CNNs, (2) test-time anchor size adjustment is crucial\nfor adaptation across geographical locations, significantly boosting scores\nwithout retraining, (3) source-domain augmentations allow the model to\ngeneralize to low-resolution sensors, and (4) surprisingly, robustness to bad\nweather is improved when training directly on more clean weather data than on\ntraining with bad weather data. 
We outline our main conclusions and findings to\nprovide practical guidance on developing more robust 3D-ODs.", + "Several unsupervised image segmentation approaches have been proposed which\neliminate the need for dense manually-annotated segmentation masks; current\nmodels separately handle either semantic segmentation (e.g., STEGO) or\nclass-agnostic instance segmentation (e.g., CutLER), but not both (i.e.,\npanoptic segmentation). We propose an Unsupervised Universal Segmentation model\n(U2Seg) adept at performing various image segmentation tasks -- instance,\nsemantic and panoptic -- using a novel unified framework. U2Seg generates\npseudo semantic labels for these segmentation tasks via leveraging\nself-supervised models followed by clustering; each cluster represents\ndifferent semantic and/or instance membership of pixels. We then self-train the\nmodel on these pseudo semantic labels, yielding substantial performance gains\nover specialized methods tailored to each task: a +2.6 AP$^{\\text{box}}$ boost\nvs. CutLER in unsupervised instance segmentation on COCO and a +7.0 PixelAcc\nincrease (vs. STEGO) in unsupervised semantic segmentation on COCOStuff.\nMoreover, our method sets up a new baseline for unsupervised panoptic\nsegmentation, which has not been previously explored.", + "STEGO) in unsupervised semantic segmentation on COCOStuff.\nMoreover, our method sets up a new baseline for unsupervised panoptic\nsegmentation, which has not been previously explored. U2Seg is also a strong\npretrained model for few-shot segmentation, surpassing CutLER by +5.0\nAP$^{\\text{mask}}$ when trained on a low-data regime, e.g., only 1% COCO\nlabels. We hope our simple yet effective method can inspire more research on\nunsupervised universal image segmentation.", + "Few-shot segmentation remains challenging due to the limitations of its\nlabeling information for unseen classes. Most previous approaches rely on\nextracting high-level feature maps from the frozen visual encoder to compute\nthe pixel-wise similarity as a key prior guidance for the decoder. However,\nsuch a prior representation suffers from coarse granularity and poor\ngeneralization to new classes since these high-level feature maps have obvious\ncategory bias. In this work, we propose to replace the visual prior\nrepresentation with the visual-text alignment capacity to capture more reliable\nguidance and enhance the model generalization. Specifically, we design two\nkinds of training-free prior information generation strategy that attempts to\nutilize the semantic alignment capability of the Contrastive Language-Image\nPre-training model (CLIP) to locate the target class. Besides, to acquire more\naccurate prior guidance, we build a high-order relationship of attention maps\nand utilize it to refine the initial prior information. Experiments on both the\nPASCAL-5{i} and COCO-20{i} datasets show that our method obtains a clearly\nsubstantial improvement and reaches the new state-of-the-art performance.", + "There are five types of trajectory prediction tasks: deterministic,\nstochastic, domain adaptation, momentary observation, and few-shot. These\nassociated tasks are defined by various factors, such as the length of input\npaths, data split and pre-processing methods. Interestingly, even though they\ncommonly take sequential coordinates of observations as input and infer future\npaths in the same coordinates as output, designing specialized architectures\nfor each task is still necessary. 
For the other task, generality issues can\nlead to sub-optimal performances. In this paper, we propose SingularTrajectory,\na diffusion-based universal trajectory prediction framework to reduce the\nperformance gap across the five tasks. The core of SingularTrajectory is to\nunify a variety of human dynamics representations on the associated tasks. To\ndo this, we first build a Singular space to project all types of motion\npatterns from each task into one embedding space. We next propose an adaptive\nanchor working in the Singular space. Unlike traditional fixed anchor methods\nthat sometimes yield unacceptable paths, our adaptive anchor enables correct\nanchors, which are put into a wrong location, based on a traversability map.", + "We next propose an adaptive\nanchor working in the Singular space. Unlike traditional fixed anchor methods\nthat sometimes yield unacceptable paths, our adaptive anchor enables correct\nanchors, which are put into a wrong location, based on a traversability map.\nFinally, we adopt a diffusion-based predictor to further enhance the prototype\npaths using a cascaded denoising process. Our unified framework ensures the\ngenerality across various benchmark settings such as input modality, and\ntrajectory lengths. Extensive experiments on five public benchmarks demonstrate\nthat SingularTrajectory substantially outperforms existing models, highlighting\nits effectiveness in estimating general dynamics of human movements. Code is\npublicly available at https://github.com/inhwanbae/SingularTrajectory .", + "Traditional 3D content creation tools empower users to bring their\nimagination to life by giving them direct control over a scene's geometry,\nappearance, motion, and camera path. Creating computer-generated videos,\nhowever, is a tedious manual process, which can be automated by emerging\ntext-to-video diffusion models. Despite great promise, video diffusion models\nare difficult to control, hindering a user to apply their own creativity rather\nthan amplifying it. To address this challenge, we present a novel approach that\ncombines the controllability of dynamic 3D meshes with the expressivity and\neditability of emerging diffusion models. For this purpose, our approach takes\nan animated, low-fidelity rendered mesh as input and injects the ground truth\ncorrespondence information obtained from the dynamic mesh into various stages\nof a pre-trained text-to-image generation model to output high-quality and\ntemporally consistent frames. We demonstrate our approach on various examples\nwhere motion can be obtained by animating rigged assets or changing the camera\npath.", + "The fidelity of relighting is bounded by both geometry and appearance\nrepresentations. For geometry, both mesh and volumetric approaches have\ndifficulty modeling intricate structures like 3D hair geometry. For appearance,\nexisting relighting models are limited in fidelity and often too slow to render\nin real-time with high-resolution continuous environments. In this work, we\npresent Relightable Gaussian Codec Avatars, a method to build high-fidelity\nrelightable head avatars that can be animated to generate novel expressions.\nOur geometry model based on 3D Gaussians can capture 3D-consistent\nsub-millimeter details such as hair strands and pores on dynamic face\nsequences. To support diverse materials of human heads such as the eyes, skin,\nand hair in a unified manner, we present a novel relightable appearance model\nbased on learnable radiance transfer. 
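The learnable radiance transfer just mentioned can be caricatured for the diffuse term: each Gaussian stores learned transfer coefficients that are dotted with the spherical-harmonic coefficients of the environment light. The specular path with spherical Gaussians and the paper's exact parameterization are not reproduced; sizes and initialization below are assumptions.

```python
# Cartoon of diffuse relighting via learnable radiance transfer: each Gaussian's
# learned transfer vector is dotted with the environment light expressed in SH.
import torch
import torch.nn as nn

class DiffuseRadianceTransfer(nn.Module):
    def __init__(self, n_gaussians: int, sh_coeffs: int = 9):  # 9 = 3 SH bands (degree <= 2)
        super().__init__()
        # Learned per-Gaussian, per-SH-coefficient, per-RGB transfer vector.
        self.transfer = nn.Parameter(torch.randn(n_gaussians, sh_coeffs, 3) * 0.01)

    def forward(self, env_sh: torch.Tensor) -> torch.Tensor:
        # env_sh: (sh_coeffs, 3) environment light in SH; returns (n_gaussians, 3) colors.
        color = torch.einsum("nkc,kc->nc", self.transfer, env_sh)
        return torch.relu(color)                                # clamp negative radiance

# Toy usage: relight 1000 Gaussians under two different environments.
prt = DiffuseRadianceTransfer(n_gaussians=1000)
env_a, env_b = torch.randn(9, 3), torch.randn(9, 3)
print(prt(env_a).shape, prt(env_b).shape)                       # both (1000, 3)
```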
Together with global illumination-aware\nspherical harmonics for the diffuse components, we achieve real-time relighting\nwith all-frequency reflections using spherical Gaussians. This appearance model\ncan be efficiently relit under both point light and continuous illumination.", + "Together with global illumination-aware\nspherical harmonics for the diffuse components, we achieve real-time relighting\nwith all-frequency reflections using spherical Gaussians. This appearance model\ncan be efficiently relit under both point light and continuous illumination. We\nfurther improve the fidelity of eye reflections and enable explicit gaze\ncontrol by introducing relightable explicit eye models. Our method outperforms\nexisting approaches without compromising real-time performance. We also\ndemonstrate real-time relighting of avatars on a tethered consumer VR headset,\nshowcasing the efficiency and fidelity of our avatars.", + "In this paper, we explore the capability of an agent to construct a logical\nsequence of action steps, thereby assembling a strategic procedural plan. This\nplan is crucial for navigating from an initial visual observation to a target\nvisual outcome, as depicted in real-life instructional videos. Existing works\nhave attained partial success by extensively leveraging various sources of\ninformation available in the datasets, such as heavy intermediate visual\nobservations, procedural names, or natural language step-by-step instructions,\nfor features or supervision signals. However, the task remains formidable due\nto the implicit causal constraints in the sequencing of steps and the\nvariability inherent in multiple feasible plans. To tackle these intricacies\nthat previous efforts have overlooked, we propose to enhance the capabilities\nof the agent by infusing it with procedural knowledge. This knowledge, sourced\nfrom training procedure plans and structured as a directed weighted graph,\nequips the agent to better navigate the complexities of step sequencing and its\npotential variations. We coin our approach KEPP, a novel Knowledge-Enhanced\nProcedure Planning system, which harnesses a probabilistic procedural knowledge\ngraph extracted from training data, effectively acting as a comprehensive\ntextbook for the training domain.", + "We coin our approach KEPP, a novel Knowledge-Enhanced\nProcedure Planning system, which harnesses a probabilistic procedural knowledge\ngraph extracted from training data, effectively acting as a comprehensive\ntextbook for the training domain. Experimental evaluations across three\nwidely-used datasets under settings of varying complexity reveal that KEPP\nattains superior, state-of-the-art results while requiring only minimal\nsupervision.", + "Knowledge distillation (KD) has been applied to various tasks successfully,\nand mainstream methods typically boost the student model via spatial imitation\nlosses. However, the consecutive downsamplings induced in the spatial domain of\nteacher model is a type of corruption, hindering the student from analyzing\nwhat specific information needs to be imitated, which results in accuracy\ndegradation. To better understand the underlying pattern of corrupted feature\nmaps, we shift our attention to the frequency domain. During frequency\ndistillation, we encounter a new challenge: the low-frequency bands convey\ngeneral but minimal context, while the high are more informative but also\nintroduce noise. Not each pixel within the frequency bands contributes equally\nto the performance. 
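The band decomposition above can be sketched with a radial mask in the Fourier domain: teacher and student features are split into low- and high-frequency components and matched per band. The radius threshold and the plain per-band L2 are assumptions; the Frequency Prompt and position-aware relational loss introduced next are not reproduced here.

```python
# Sketch of splitting feature maps into low/high frequency bands for distillation.
# The radius threshold and per-band L2 are assumptions, not FreeKD's actual losses.
import torch

def frequency_bands(feat: torch.Tensor, radius: float = 0.25):
    """Return (low, high) frequency components of a (B, C, H, W) feature map."""
    B, C, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, H),
                            torch.linspace(-0.5, 0.5, W), indexing="ij")
    low_mask = ((yy ** 2 + xx ** 2).sqrt() <= radius).to(feat.dtype)
    low_spec, high_spec = spec * low_mask, spec * (1 - low_mask)
    to_spatial = lambda s: torch.fft.ifft2(torch.fft.ifftshift(s, dim=(-2, -1))).real
    return to_spatial(low_spec), to_spatial(high_spec)

def frequency_distill_loss(student: torch.Tensor, teacher: torch.Tensor,
                           w_low: float = 0.5, w_high: float = 1.0) -> torch.Tensor:
    s_low, s_high = frequency_bands(student)
    t_low, t_high = frequency_bands(teacher)
    return w_low * (s_low - t_low).pow(2).mean() + w_high * (s_high - t_high).pow(2).mean()

# Toy usage with random teacher/student features.
student = torch.randn(2, 16, 32, 32, requires_grad=True)
teacher = torch.randn(2, 16, 32, 32)
loss = frequency_distill_loss(student, teacher)
loss.backward()
print(loss.item() > 0)  # True
```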
To address the above problem: (1) We propose the Frequency\nPrompt plugged into the teacher model, absorbing the semantic frequency context\nduring finetuning. (2) During the distillation period, a pixel-wise frequency\nmask is generated via Frequency Prompt, to localize those pixel of interests\n(PoIs) in various frequency bands. Additionally, we employ a position-aware\nrelational frequency loss for dense prediction tasks, delivering a high-order\nspatial enhancement to the student model.", + "Additionally, we employ a position-aware\nrelational frequency loss for dense prediction tasks, delivering a high-order\nspatial enhancement to the student model. We dub our Frequency Knowledge\nDistillation method as FreeKD, which determines the optimal localization and\nextent for the frequency distillation. Extensive experiments demonstrate that\nFreeKD not only outperforms spatial-based distillation methods consistently on\ndense prediction tasks (e.g., FreeKD brings 3.8 AP gains for RepPoints-R50 on\nCOCO2017 and 4.55 mIoU gains for PSPNet-R18 on Cityscapes), but also conveys\nmore robustness to the student. Notably, we also validate the generalization of\nour approach on large-scale vision models (e.g., DINO and SAM).", + "In Visual Place Recognition (VPR) the pose of a query image is estimated by\ncomparing the image to a map of reference images with known reference poses. As\nis typical for image retrieval problems, a feature extractor maps the query and\nreference images to a feature space, where a nearest neighbor search is then\nperformed. However, till recently little attention has been given to\nquantifying the confidence that a retrieved reference image is a correct match.\nHighly certain but incorrect retrieval can lead to catastrophic failure of\nVPR-based localization pipelines. This work compares for the first time the\nmain approaches for estimating the image-matching uncertainty, including the\ntraditional retrieval-based uncertainty estimation, more recent data-driven\naleatoric uncertainty estimation, and the compute-intensive geometric\nverification. We further formulate a simple baseline method, ``SUE'', which\nunlike the other methods considers the freely-available poses of the reference\nimages in the map. Our experiments reveal that a simple L2-distance between the\nquery and reference descriptors is already a better estimate of image-matching\nuncertainty than current data-driven approaches.", + "Our experiments reveal that a simple L2-distance between the\nquery and reference descriptors is already a better estimate of image-matching\nuncertainty than current data-driven approaches. SUE outperforms the other\nefficient uncertainty estimation methods, and its uncertainty estimates\ncomplement the computationally expensive geometric verification approach.\nFuture works for uncertainty estimation in VPR should consider the baselines\ndiscussed in this work.", + "Referring Image Segmentation (RIS) is a challenging task that requires an\nalgorithm to segment objects referred by free-form language expressions.\nDespite significant progress in recent years, most state-of-the-art (SOTA)\nmethods still suffer from considerable language-image modality gap at the pixel\nand word level. These methods generally 1) rely on sentence-level language\nfeatures for language-image alignment and 2) lack explicit training supervision\nfor fine-grained visual grounding. Consequently, they exhibit weak object-level\ncorrespondence between visual and language features. 
Without well-grounded\nfeatures, prior methods struggle to understand complex expressions that require\nstrong reasoning over relationships among multiple objects, especially when\ndealing with rarely used or ambiguous clauses. To tackle this challenge, we\nintroduce a novel Mask Grounding auxiliary task that significantly improves\nvisual grounding within language features, by explicitly teaching the model to\nlearn fine-grained correspondence between masked textual tokens and their\nmatching visual objects. Mask Grounding can be directly used on prior RIS\nmethods and consistently bring improvements. Furthermore, to holistically\naddress the modality gap, we also design a cross-modal alignment loss and an\naccompanying alignment module.", + "Mask Grounding can be directly used on prior RIS\nmethods and consistently bring improvements. Furthermore, to holistically\naddress the modality gap, we also design a cross-modal alignment loss and an\naccompanying alignment module. These additions work synergistically with Mask\nGrounding. With all these techniques, our comprehensive approach culminates in\nMagNet (Mask-grounded Network), an architecture that significantly outperforms\nprior arts on three key benchmarks (RefCOCO, RefCOCO+ and G-Ref), demonstrating\nour method's effectiveness in addressing current limitations of RIS algorithms.\nOur code and pre-trained weights will be released.", + "The pursuit of accurate 3D hand pose estimation stands as a keystone for\nunderstanding human activity in the realm of egocentric vision. The majority of\nexisting estimation methods still rely on single-view images as input, leading\nto potential limitations, e.g., limited field-of-view and ambiguity in depth.\nTo address these problems, adding another camera to better capture the shape of\nhands is a practical direction. However, existing multi-view hand pose\nestimation methods suffer from two main drawbacks: 1) Requiring multi-view\nannotations for training, which are expensive. 2) During testing, the model\nbecomes inapplicable if camera parameters/layout are not the same as those used\nin training. In this paper, we propose a novel Single-to-Dual-view adaptation\n(S2DHand) solution that adapts a pre-trained single-view estimator to dual\nviews. Compared with existing multi-view training methods, 1) our adaptation\nprocess is unsupervised, eliminating the need for multi-view annotation. 2)\nMoreover, our method can handle arbitrary dual-view pairs with unknown camera\nparameters, making the model applicable to diverse camera settings.", + "Compared with existing multi-view training methods, 1) our adaptation\nprocess is unsupervised, eliminating the need for multi-view annotation. 2)\nMoreover, our method can handle arbitrary dual-view pairs with unknown camera\nparameters, making the model applicable to diverse camera settings.\nSpecifically, S2DHand is built on certain stereo constraints, including\npair-wise cross-view consensus and invariance of transformation between both\nviews. These two stereo constraints are used in a complementary manner to\ngenerate pseudo-labels, allowing reliable adaptation. Evaluation results reveal\nthat S2DHand achieves significant improvements on arbitrary camera pairs under\nboth in-dataset and cross-dataset settings, and outperforms existing adaptation\nmethods with leading performance. 
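The cross-view consensus constraint above can be illustrated in a few lines: predictions from one view are mapped into the other view's frame, and a pseudo-label is kept only for joints on which the two views agree. For clarity the relative rotation is assumed known here, whereas the actual method must cope with unknown camera parameters.

```python
# Toy illustration of pair-wise cross-view consensus for pseudo-labeling 3D hand
# joints. A known rotation R (view A -> view B) is assumed purely for clarity.
import numpy as np

def consensus_pseudo_labels(joints_a: np.ndarray, joints_b: np.ndarray,
                            R_ab: np.ndarray, thresh: float = 0.02):
    """
    joints_a, joints_b: (J, 3) root-relative predictions from the two views.
    Returns (pseudo, mask): averaged labels in view-B coordinates and a per-joint
    boolean mask marking joints where both views agree within `thresh` meters.
    """
    a_in_b = joints_a @ R_ab.T                      # bring view-A prediction into view B
    dist = np.linalg.norm(a_in_b - joints_b, axis=1)
    mask = dist < thresh
    pseudo = 0.5 * (a_in_b + joints_b)              # consensus label where mask is True
    return pseudo, mask

# Toy usage: identical poses up to a 30-degree yaw, with noise on one joint.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
joints_a = np.random.randn(21, 3) * 0.05
joints_b = joints_a @ R.T
joints_b[0] += 0.1                                  # corrupt one joint in view B
pseudo, mask = consensus_pseudo_labels(joints_a, joints_b, R)
print(mask.sum())                                   # 20 of 21 joints accepted
```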
Project page:\nhttps://github.com/MickeyLLG/S2DHand.", + "We propose a computational imaging method for time-efficient light-field\nacquisition that combines a coded aperture with an event-based camera.\nDifferent from the conventional coded-aperture imaging method, our method\napplies a sequence of coding patterns during a single exposure for an image\nframe. The parallax information, which is related to the differences in coding\npatterns, is recorded as events. The image frame and events, all of which are\nmeasured in a single exposure, are jointly used to computationally reconstruct\na light field. We also designed an algorithm pipeline for our method that is\nend-to-end trainable on the basis of deep optics and compatible with real\ncamera hardware. We experimentally showed that our method can achieve more\naccurate reconstruction than several other imaging methods with a single\nexposure. We also developed a hardware prototype with the potential to complete\nthe measurement on the camera within 22 msec and demonstrated that light fields\nfrom real 3-D scenes can be obtained with convincing visual quality. Our\nsoftware and supplementary video are available from our project website.", + "Character Animation aims to generating character videos from still images\nthrough driving signals. Currently, diffusion models have become the mainstream\nin visual generation research, owing to their robust generative capabilities.\nHowever, challenges persist in the realm of image-to-video, especially in\ncharacter animation, where temporally maintaining consistency with detailed\ninformation from character remains a formidable problem. In this paper, we\nleverage the power of diffusion models and propose a novel framework tailored\nfor character animation. To preserve consistency of intricate appearance\nfeatures from reference image, we design ReferenceNet to merge detail features\nvia spatial attention. To ensure controllability and continuity, we introduce\nan efficient pose guider to direct character's movements and employ an\neffective temporal modeling approach to ensure smooth inter-frame transitions\nbetween video frames. By expanding the training data, our approach can animate\narbitrary characters, yielding superior results in character animation compared\nto other image-to-video methods. Furthermore, we evaluate our method on\nbenchmarks for fashion video and human dance synthesis, achieving\nstate-of-the-art results.", + "Benefiting from large-scale pre-trained text-to-image (T2I) generative\nmodels, impressive progress has been achieved in customized image generation,\nwhich aims to generate user-specified concepts. Existing approaches have\nextensively focused on single-concept customization and still encounter\nchallenges when it comes to complex scenarios that involve combining multiple\nconcepts. These approaches often require retraining/fine-tuning using a few\nimages, leading to time-consuming training processes and impeding their swift\nimplementation. Furthermore, the reliance on multiple images to represent a\nsingular concept increases the difficulty of customization. To this end, we\npropose FreeCustom, a novel tuning-free method to generate customized images of\nmulti-concept composition based on reference concepts, using only one image per\nconcept as input. Specifically, we introduce a new multi-reference\nself-attention (MRSA) mechanism and a weighted mask strategy that enables the\ngenerated image to access and focus more on the reference concepts. 
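A schematic of multi-reference self-attention with a weighted mask, in the spirit of the mechanism described above: the generated image's queries attend to its own tokens plus the reference images' tokens, with the mask converted into an additive bias that keeps concept regions and suppresses backgrounds. The single-head form, the shapes, and the log-weight trick are simplifications, not the MRSA implementation.

```python
# Schematic multi-reference self-attention: queries of the generated image attend
# to its own tokens plus reference tokens, modulated by a weighted mask.
import torch
import torch.nn.functional as F

def multi_reference_self_attention(x, refs, ref_weights):
    """
    x:           (N, D)    tokens of the image being generated.
    refs:        (R, M, D) tokens of R reference images.
    ref_weights: (R, M)    per-token mask weights in (0, 1]; 1 keeps a concept region.
    """
    R, M, D = refs.shape
    k = torch.cat([x, refs.reshape(R * M, D)], dim=0)            # keys/values: own + reference tokens
    bias = torch.cat([torch.zeros(x.shape[0]),                   # no bias on own tokens
                      torch.log(ref_weights.reshape(-1))])       # weighted mask as additive log-bias
    attn = (x @ k.t()) / D ** 0.5 + bias                         # (N, N + R*M)
    return F.softmax(attn, dim=-1) @ k

# Toy usage: 2 reference images, concept region covering half of each.
x = torch.randn(16, 32)
refs = torch.randn(2, 8, 32)
weights = torch.full((2, 8), 1e-4)                               # background nearly masked out
weights[:, :4] = 1.0                                             # concept tokens kept
out = multi_reference_self_attention(x, refs, weights)
print(out.shape)                                                 # torch.Size([16, 32])
```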
In\naddition, MRSA leverages our key finding that input concepts are better\npreserved when providing images with context interactions.", + "In\naddition, MRSA leverages our key finding that input concepts are better\npreserved when providing images with context interactions. Experiments show\nthat our method's produced images are consistent with the given concepts and\nbetter aligned with the input text. Our method outperforms or performs on par\nwith other training-based methods in terms of multi-concept composition and\nsingle-concept customization, but is simpler. Codes can be found at\nhttps://github.com/aim-uofa/FreeCustom.", + "Sequence-to-sequence vision-language models are showing promise, but their\napplicability is limited by their inference latency due to their autoregressive\nway of generating predictions. We propose a parallel decoding\nsequence-to-sequence vision-language model, trained with a Query-CTC loss, that\nmarginalizes over multiple inference paths in the decoder. This allows us to\nmodel the joint distribution of tokens, rather than restricting to the conditional\ndistribution as in an autoregressive model. The resulting model, NARVL,\nachieves performance on-par with its state-of-the-art autoregressive\ncounterpart, but is faster at inference time, reducing from the linear\ncomplexity associated with the sequential generation of tokens to a paradigm of\nconstant time joint inference.", + "Recent advances in generative AI have significantly enhanced image and video\nediting, particularly in the context of text prompt control. State-of-the-art\napproaches predominantly rely on diffusion models to accomplish these tasks.\nHowever, the computational demands of diffusion-based methods are substantial,\noften necessitating large-scale paired datasets for training, and therefore\nhindering their deployment in real applications. To address these issues, this\npaper breaks down the text-based video editing task into two stages. First, we\nleverage a pre-trained text-to-image diffusion model to simultaneously edit a\nfew keyframes in a zero-shot way. Second, we introduce an efficient model\ncalled MaskINT, which is built on non-autoregressive masked generative\ntransformers and specializes in frame interpolation between the edited\nkeyframes, using the structural guidance from intermediate frames. Experimental\nresults suggest that our MaskINT achieves comparable performance with\ndiffusion-based methodologies, while significantly improving the inference time.\nThis research offers a practical solution for text-based video editing and\nshowcases the potential of non-autoregressive masked generative transformers in\nthis domain.", + "Pre-trained Vision Language Models (VLMs) have demonstrated notable progress\nin various zero-shot tasks, such as classification and retrieval. Despite their\nperformance, adaptation is essential because improving performance on new tasks\nrequires task-specific knowledge. Labels are needed for this adaptation, but\nacquiring them is typically expensive. To overcome this challenge,\nactive learning, a method of achieving high performance by obtaining labels\nfor a small number of samples from experts, has been studied. Active learning\nprimarily focuses on selecting unlabeled samples for labeling and leveraging\nthem to train models.
In this study, we pose the question, \"how can the\npre-trained VLMs be adapted under the active learning framework?\" In response\nto this inquiry, we observe that (1) simply applying a conventional active\nlearning framework to pre-trained VLMs may even degrade performance compared to\nrandom selection because of the class imbalance in labeling candidates, and (2)\nthe knowledge of VLMs can provide hints for achieving the balance before\nlabeling. Based on these observations, we devise a novel active learning\nframework for VLMs, denoted as PCB.", + "Based on these observations, we devise a novel active learning\nframework for VLMs, denoted as PCB. To assess the effectiveness of our\napproach, we conduct experiments on seven different real-world datasets, and\nthe results demonstrate that PCB surpasses conventional active learning and\nrandom sampling methods. Code will be available at\nhttps://github.com/kaist-dmlab/pcb.", + "Current metrics for text-to-image models typically rely on statistical\nmetrics, which inadequately represent the real preferences of humans. Although\nrecent work attempts to learn these preferences via human-annotated images,\nit reduces the rich tapestry of human preference to a single overall score.\nHowever, preference results vary when humans evaluate images along different\naspects. Therefore, to learn multi-dimensional human preferences, we\npropose the Multi-dimensional Preference Score (MPS), the first\nmulti-dimensional preference scoring model for the evaluation of text-to-image\nmodels. The MPS introduces a preference condition module on top of the CLIP model to\nlearn these diverse preferences. It is trained based on our Multi-dimensional\nHuman Preference (MHP) Dataset, which comprises 918,315 human preference\nchoices across four dimensions (i.e., aesthetics, semantic alignment, detail\nquality and overall assessment) on 607,541 images. The images are generated by\na wide range of the latest text-to-image models. The MPS outperforms existing\nscoring methods across 3 datasets in 4 dimensions, making it a promising\nmetric for evaluating and improving text-to-image generation.", + "Accurately detecting active objects undergoing state changes is essential for\ncomprehending human interactions and facilitating decision-making. The existing\nmethods for active object detection (AOD) primarily rely on the visual appearance\nof the objects within the input, such as changes in size, shape and relationship\nwith hands. However, these visual changes can be subtle, posing challenges,\nparticularly in scenarios with multiple distracting no-change instances of the\nsame category. We observe that the state changes are often the result of an\ninteraction being performed upon the object, and thus propose to use informed\npriors about object-related plausible interactions (including semantics and\nvisual appearance) to provide more reliable cues for AOD. Specifically, we\npropose a knowledge aggregation procedure to integrate the aforementioned\ninformed priors into oracle queries within the teacher decoder, offering more\nobject affordance commonsense to locate the active object.
To streamline the\ninference process and reduce extra knowledge inputs, we propose a knowledge\ndistillation approach that encourages the student decoder to mimic the\ndetection capabilities of the teacher decoder using the oracle query by\nreplicating its predictions and attention.", + "To streamline the\ninference process and reduce extra knowledge inputs, we propose a knowledge\ndistillation approach that encourages the student decoder to mimic the\ndetection capabilities of the teacher decoder using the oracle query by\nreplicating its predictions and attention. Our proposed framework achieves\nstate-of-the-art performance on four datasets, namely Ego4D, Epic-Kitchens,\nMECCANO, and 100DOH, which demonstrates the effectiveness of our approach in\nimproving AOD.", + "Generating human motions from textual descriptions has gained growing\nresearch interest due to its wide range of applications. However, only a few\nworks consider human-scene interactions together with text conditions, which is\ncrucial for visual and physical realism. This paper focuses on the task of\ngenerating human motions in 3D indoor scenes given text descriptions of the\nhuman-scene interactions. This task presents challenges due to the\nmulti-modality nature of text, scene, and motion, as well as the need for\nspatial reasoning. To address these challenges, we propose a new approach that\ndecomposes the complex problem into two more manageable sub-problems: (1)\nlanguage grounding of the target object and (2) object-centric motion\ngeneration. For language grounding of the target object, we leverage the power\nof large language models. For motion generation, we design an object-centric\nscene representation for the generative model to focus on the target object,\nthereby reducing the scene complexity and facilitating the modeling of the\nrelationship between human motions and the object. Experiments demonstrate the\nbetter motion quality of our approach compared to baselines and validate our\ndesign choices.", + "Audiovisual segmentation (AVS) is a challenging task that aims to segment\nvisual objects in videos according to their associated acoustic cues. With\nmultiple sound sources and background disturbances involved, establishing\nrobust correspondences between audio and visual contents poses unique\nchallenges due to (1) complex entanglement across sound sources and (2)\nfrequent changes in the occurrence of distinct sound events. Assuming sound\nevents occur independently, the multi-source semantic space can be represented\nas the Cartesian product of single-source sub-spaces. We are motivated to\ndecompose the multi-source audio semantics into single-source semantics for\nmore effective interactions with visual content. We propose a semantic\ndecomposition method based on product quantization, where the multi-source\nsemantics can be decomposed and represented by several disentangled and\nnoise-suppressed single-source semantics. Furthermore, we introduce a\nglobal-to-local quantization mechanism, which distills knowledge from stable\nglobal (clip-level) features into local (frame-level) ones, to handle frequent\nchanges in audio semantics.", + "Furthermore, we introduce a\nglobal-to-local quantization mechanism, which distills knowledge from stable\nglobal (clip-level) features into local (frame-level) ones, to handle frequent\nchanges in audio semantics. 
Extensive experiments demonstrate that our\nsemantically decomposed audio representation significantly improves AVS\nperformance, e.g., +21.2% mIoU on the challenging AVS-Semantic benchmark with\nResNet50 backbone. https://github.com/lxa9867/QSD.", + "Active recognition, which allows intelligent agents to explore observations\nfor better recognition performance, serves as a prerequisite for various\nembodied AI tasks, such as grasping, navigation and room arrangements. Given\nthe evolving environment and the multitude of object classes, it is impractical\nto include all possible classes during the training stage. In this paper, we\naim at advancing active open-vocabulary recognition, empowering embodied agents\nto actively perceive and classify arbitrary objects. However, directly adopting\nrecent open-vocabulary classification models, like Contrastive Language Image\nPretraining (CLIP), poses its unique challenges. Specifically, we observe that\nCLIP's performance is heavily affected by the viewpoint and occlusions,\ncompromising its reliability in unconstrained embodied perception scenarios.\nFurther, the sequential nature of observations in agent-environment\ninteractions necessitates an effective method for integrating features that\nmaintains discriminative strength for open-vocabulary classification. To\naddress these issues, we introduce a novel agent for active open-vocabulary\nrecognition. The proposed method leverages inter-frame and inter-concept\nsimilarities to navigate agent movements and to fuse features, without relying\non class-specific knowledge.", + "To\naddress these issues, we introduce a novel agent for active open-vocabulary\nrecognition. The proposed method leverages inter-frame and inter-concept\nsimilarities to navigate agent movements and to fuse features, without relying\non class-specific knowledge. Compared to baseline CLIP model with 29.6%\naccuracy on ShapeNet dataset, the proposed agent could achieve 53.3% accuracy\nfor open-vocabulary recognition, without any fine-tuning to the equipped CLIP\nmodel. Additional experiments conducted with the Habitat simulator further\naffirm the efficacy of our method.", + "Solving complex visual tasks such as \"Who invented the musical instrument on\nthe right?\" involves a composition of skills: understanding space, recognizing\ninstruments, and also retrieving prior knowledge. Recent work shows promise by\ndecomposing such tasks using a large language model (LLM) into an executable\nprogram that invokes specialized vision models. However, generated programs are\nerror-prone: they omit necessary steps, include spurious ones, and are unable\nto recover when the specialized models give incorrect outputs. Moreover, they\nrequire loading multiple models, incurring high latency and computation costs.\nWe propose Visual Program Distillation (VPD), an instruction tuning framework\nthat produces a vision-language model (VLM) capable of solving complex visual\ntasks with a single forward pass. VPD distills the reasoning ability of LLMs by\nusing them to sample multiple candidate programs, which are then executed and\nverified to identify a correct one. It translates each correct program into a\nlanguage description of the reasoning steps, which are then distilled into a\nVLM. 
Extensive experiments show that VPD improves the VLM's ability to count,\nunderstand spatial relations, and reason compositionally.", + "It translates each correct program into a\nlanguage description of the reasoning steps, which are then distilled into a\nVLM. Extensive experiments show that VPD improves the VLM's ability to count,\nunderstand spatial relations, and reason compositionally. Our VPD-trained\nPaLI-X outperforms all prior VLMs, achieving state-of-the-art performance\nacross complex vision tasks, including MMBench, OK-VQA, A-OKVQA, TallyQA, POPE,\nand Hateful Memes. An evaluation with human annotators also confirms that VPD\nimproves model response factuality and consistency. Finally, experiments on\ncontent moderation demonstrate that VPD is also helpful for adaptation to\nreal-world applications with limited data.", + "Semantic, instance, and panoptic segmentation of 3D point clouds have been\naddressed using task-specific models of distinct design. Thereby, the\nsimilarity of all segmentation tasks and the implicit relationship between them\nhave not been utilized effectively. This paper presents a unified, simple, and\neffective model addressing all these tasks jointly. The model, named\nOneFormer3D, performs instance and semantic segmentation consistently, using a\ngroup of learnable kernels, where each kernel is responsible for generating a\nmask for either an instance or a semantic category. These kernels are trained\nwith a transformer-based decoder with unified instance and semantic queries\npassed as an input. Such a design enables training a model end-to-end in a\nsingle run, so that it achieves top performance on all three segmentation tasks\nsimultaneously. Specifically, our OneFormer3D ranks 1st and sets a new\nstate-of-the-art (+2.1 mAP50) in the ScanNet test leaderboard.", + "Specifically, our OneFormer3D ranks 1st and sets a new\nstate-of-the-art (+2.1 mAP50) in the ScanNet test leaderboard. We also\ndemonstrate the state-of-the-art results in semantic, instance, and panoptic\nsegmentation of ScanNet (+21 PQ), ScanNet200 (+3.8 mAP50), and S3DIS (+0.8\nmIoU) datasets.", + "Human comprehension of a video stream is naturally broad: in a few instants,\nwe are able to understand what is happening, the relevance and relationship of\nobjects, and forecast what will follow in the near future, everything all at\nonce. We believe that - to effectively transfer such an holistic perception to\nintelligent machines - an important role is played by learning to correlate\nconcepts and to abstract knowledge coming from different tasks, to\nsynergistically exploit them when learning novel skills. To accomplish this, we\nseek for a unified approach to video understanding which combines shared\ntemporal modelling of human actions with minimal overhead, to support multiple\ndownstream tasks and enable cooperation when learning novel skills. We then\npropose EgoPack, a solution that creates a collection of task perspectives that\ncan be carried across downstream tasks and used as a potential source of\nadditional insights, as a backpack of skills that a robot can carry around and\nuse when needed. We demonstrate the effectiveness and efficiency of our\napproach on four Ego4D benchmarks, outperforming current state-of-the-art\nmethods.", + "The rapid advancement of generative models, facilitating the creation of\nhyper-realistic images from textual descriptions, has concurrently escalated\ncritical societal concerns such as misinformation. 
Although providing some\nmitigation, traditional fingerprinting mechanisms fall short in attributing\nresponsibility for the malicious use of synthetic images. This paper introduces\na novel approach to model fingerprinting that assigns responsibility for the\ngenerated images, thereby serving as a potential countermeasure to model\nmisuse. Our method modifies generative models based on each user's unique\ndigital fingerprint, imprinting a unique identifier onto the resultant content\nthat can be traced back to the user. This approach, incorporating fine-tuning\ninto Text-to-Image (T2I) tasks using the Stable Diffusion Model, demonstrates\nnear-perfect attribution accuracy with a minimal impact on output quality.\nThrough extensive evaluation, we show that our method outperforms baseline\nmethods with an average improvement of 11\\% in handling image post-processes.\nOur method presents a promising and novel avenue for accountable model\ndistribution and responsible use. Our code is available in\n\\url{https://github.com/kylemin/WOUAF}.", + "In-context prompting in large language models (LLMs) has become a prevalent\napproach to improve zero-shot capabilities, but this idea is less explored in\nthe vision domain. Existing visual prompting methods focus on referring\nsegmentation to segment the most relevant object, falling short of addressing\nmany generic vision tasks like open-set segmentation and detection. In this\npaper, we introduce a universal visual in-context prompting framework for both\ntasks. In particular, we build on top of an encoder-decoder architecture, and\ndevelop a versatile prompt encoder to support a variety of prompts like\nstrokes, boxes, and points. We further enhance it to take an arbitrary number\nof reference image segments as the context. Our extensive explorations show\nthat the proposed visual in-context prompting elicits extraordinary referring\nand generic segmentation capabilities to refer and detect, yielding competitive\nperformance to close-set in-domain datasets and showing promising results on\nmany open-set segmentation datasets. By joint training on COCO and SA-1B, our\nmodel achieves $57.7$ PQ on COCO and $23.2$ PQ on ADE20K. Code will be\navailable at https://github.com/UX-Decoder/DINOv.", + "We present HAAR, a new strand-based generative model for 3D human hairstyles.\nSpecifically, based on textual inputs, HAAR produces 3D hairstyles that could\nbe used as production-level assets in modern computer graphics engines. Current\nAI-based generative models take advantage of powerful 2D priors to reconstruct\n3D content in the form of point clouds, meshes, or volumetric functions.\nHowever, by using the 2D priors, they are intrinsically limited to only\nrecovering the visual parts. Highly occluded hair structures can not be\nreconstructed with those methods, and they only model the ''outer shell'',\nwhich is not ready to be used in physics-based rendering or simulation\npipelines. In contrast, we propose a first text-guided generative method that\nuses 3D hair strands as an underlying representation. Leveraging 2D visual\nquestion-answering (VQA) systems, we automatically annotate synthetic hair\nmodels that are generated from a small set of artist-created hairstyles. This\nallows us to train a latent diffusion model that operates in a common hairstyle\nUV space.", + "Leveraging 2D visual\nquestion-answering (VQA) systems, we automatically annotate synthetic hair\nmodels that are generated from a small set of artist-created hairstyles. 
This\nallows us to train a latent diffusion model that operates in a common hairstyle\nUV space. In qualitative and quantitative studies, we demonstrate the\ncapabilities of the proposed model and compare it to existing hairstyle\ngeneration approaches.", + "Despite recent advances in text-to-3D generative methods, there is a notable\nabsence of reliable evaluation metrics. Existing metrics usually focus on a\nsingle criterion each, such as how well the asset aligns with the input text.\nThese metrics lack the flexibility to generalize to different evaluation\ncriteria and might not align well with human preferences. Conducting user\npreference studies is an alternative that offers both adaptability and\nhuman-aligned results. User studies, however, can be very expensive to scale.\nThis paper presents an automatic, versatile, and human-aligned evaluation\nmetric for text-to-3D generative models. To this end, we first develop a prompt\ngenerator using GPT-4V to generate evaluation prompts, which serve as input to\ncompare text-to-3D models. We further design a method instructing GPT-4V to\ncompare two 3D assets according to user-defined criteria. Finally, we use these\npairwise comparison results to assign these models Elo ratings. Experimental\nresults suggest that our metric aligns strongly with human preference across\ndifferent evaluation criteria.", + "Neural 3D reconstruction from multi-view images has recently attracted\nincreasing attention from the community. Existing methods normally learn a\nneural field for the whole scene, while it is still under-explored how to\nreconstruct a target object indicated by users. Considering that the Segment\nAnything Model (SAM) has shown effectiveness in segmenting any 2D image, in\nthis paper, we propose NTO3D, a novel high-quality Neural Target Object 3D\n(NTO3D) reconstruction method, which leverages the benefits of both the neural\nfield and SAM. We first propose a novel strategy to lift the multi-view 2D\nsegmentation masks of SAM into a unified 3D occupancy field. The 3D occupancy\nfield is then projected into 2D space and generates new prompts for SAM.\nThis process is iterated until convergence to separate the target object from\nthe scene. After this, we lift the 2D features of the SAM encoder into a\n3D feature field in order to improve the reconstruction quality of the target\nobject.", + "This process is iterated until convergence to separate the target object from\nthe scene. After this, we lift the 2D features of the SAM encoder into a\n3D feature field in order to improve the reconstruction quality of the target\nobject. NTO3D lifts the 2D masks and features of SAM into the 3D neural field\nfor high-quality neural target object 3D reconstruction. We conduct detailed\nexperiments on several benchmark datasets to demonstrate the advantages of our\nmethod. The code will be available at: https://github.com/ucwxb/NTO3D.", + "Large Vision-Language Models (LVLMs) have demonstrated remarkable\ncapabilities in various multimodal tasks. However, their potential in the\nmedical domain remains largely unexplored. A significant challenge arises from\nthe scarcity of diverse medical images spanning various modalities and\nanatomical regions, which is essential in real-world medical applications. To\nsolve this problem, in this paper, we introduce OmniMedVQA, a novel\ncomprehensive medical Visual Question Answering (VQA) benchmark.
This benchmark\nis collected from 73 different medical datasets, including 12 different\nmodalities and covering more than 20 distinct anatomical regions. Importantly,\nall images in this benchmark are sourced from authentic medical scenarios,\nensuring alignment with the requirements of the medical field and suitability\nfor evaluating LVLMs. Through our extensive experiments, we have found that\nexisting LVLMs struggle to address these medical VQA problems effectively.\nMoreover, what surprises us is that medical-specialized LVLMs even exhibit\ninferior performance to general-domain models, calling for a more\nversatile and robust LVLM in the biomedical field.", + "Moreover, what surprises us is that medical-specialized LVLMs even exhibit\ninferior performance to general-domain models, calling for a more\nversatile and robust LVLM in the biomedical field. The evaluation results not\nonly reveal the current limitations of LVLMs in understanding real medical\nimages but also highlight our dataset's significance. Our code and dataset are\navailable at https://github.com/OpenGVLab/Multi-Modality-Arena.", + "High-resolution image generation with Generative Artificial Intelligence\n(GenAI) has immense potential but, due to the enormous capital investment\nrequired for training, it is increasingly centralised to a few large\ncorporations, and hidden behind paywalls. This paper aims to democratise\nhigh-resolution GenAI by advancing the frontier of high-resolution generation\nwhile remaining accessible to a broad audience. We demonstrate that existing\nLatent Diffusion Models (LDMs) possess untapped potential for higher-resolution\nimage generation. Our novel DemoFusion framework seamlessly extends open-source\nGenAI models, employing Progressive Upscaling, Skip Residual, and Dilated\nSampling mechanisms to achieve higher-resolution image generation. The\nprogressive nature of DemoFusion requires more passes, but the intermediate\nresults can serve as \"previews\", facilitating rapid prompt iteration.", + "We present a method to generate full-body selfies from photographs originally\ntaken at arm's length. Because self-captured photos are typically taken close\nup, they have limited field of view and exaggerated perspective that distorts\nfacial shapes. We instead seek to generate the photo someone else would take\nof you from a few feet away. Our approach takes as input four selfies of your\nface and body along with a background image, and generates a full-body selfie in a\ndesired target pose. We introduce a novel diffusion-based approach to combine\nall of this information into high-quality, well-composed photos of you with the\ndesired pose and background.", + "3D Visual Grounding (3DVG) aims at localizing 3D objects based on textual\ndescriptions. Conventional supervised methods for 3DVG often necessitate\nextensive annotations and a predefined vocabulary, which can be restrictive. To\naddress this issue, we propose a novel visual programming approach for\nzero-shot open-vocabulary 3DVG, leveraging the capabilities of large language\nmodels (LLMs). Our approach begins with a unique dialog-based method, engaging\nwith LLMs to establish a foundational understanding of zero-shot 3DVG. Building\non this, we design a visual program that consists of three types of modules,\ni.e., view-independent, view-dependent, and functional modules. These modules,\nspecifically tailored for 3D scenarios, work collaboratively to perform complex\nreasoning and inference.
Furthermore, we develop an innovative language-object\ncorrelation module to extend the scope of existing 3D object detectors into\nopen-vocabulary scenarios. Extensive experiments demonstrate that our zero-shot\napproach can outperform some supervised baselines, marking a significant stride\ntowards effective 3DVG.", + "In this paper, we tackle the problem of learning Structure-from-Motion (SfM)\nthrough the use of graph attention networks. SfM is a classic computer vision\nproblem that is solved through iterative minimization of reprojection errors,\nreferred to as Bundle Adjustment (BA), starting from a good initialization. In\norder to obtain a good enough initialization for BA, conventional methods rely\non a sequence of sub-problems (such as pairwise pose estimation, pose averaging\nor triangulation), which provide an initial solution that can then be refined\nusing BA. In this work, we replace these sub-problems by learning a model that\ntakes as input the 2D keypoints detected across multiple views, and outputs the\ncorresponding camera poses and 3D keypoint coordinates. Our model takes\nadvantage of graph neural networks to learn SfM-specific primitives, and we\nshow that it can be used for fast inference of the reconstruction for new and\nunseen sequences. The experimental results show that the proposed model\noutperforms competing learning-based methods, and challenges COLMAP while\nhaving lower runtime. Our code is available at\nhttps://github.com/lucasbrynte/gasfm/.", + "Shape and geometric patterns are essential in defining stylistic identity.\nHowever, current 3D style transfer methods predominantly focus on transferring\ncolors and textures, often overlooking geometric aspects. In this paper, we\nintroduce Geometry Transfer, a novel method that leverages geometric\ndeformation for 3D style transfer. This technique employs depth maps to extract\na style guide, subsequently applied to stylize the geometry of radiance fields.\nMoreover, we propose new techniques that utilize geometric cues from the 3D\nscene, thereby enhancing aesthetic expressiveness and more accurately\nreflecting intended styles. Our extensive experiments show that Geometry\nTransfer enables a broader and more expressive range of stylizations, thereby\nsignificantly expanding the scope of 3D style transfer.", + "We present the first approach to render highly realistic free-viewpoint\nvideos of a human actor in general apparel, from sparse multi-view recording to\ndisplay, in real-time at an unprecedented 4K resolution. At inference, our\nmethod only requires four camera views of the moving actor and the respective\n3D skeletal pose. It handles actors in wide clothing, and reproduces even\nfine-scale dynamic detail, e.g., clothing wrinkles, facial expressions, and hand\ngestures. At training time, our learning-based approach expects dense\nmulti-view video and a rigged static surface scan of the actor. Our method\ncomprises three main stages. Stage 1 is a skeleton-driven neural approach for\nhigh-quality capture of the detailed dynamic mesh geometry. Stage 2 is a novel\nsolution to create a view-dependent texture using four test-time camera views\nas input. Finally, stage 3 comprises a new image-based refinement network\nrendering the final 4K image given the output from the previous stages.
Our\napproach establishes a new benchmark for real-time rendering resolution and\nquality using sparse input camera views, unlocking possibilities for immersive\ntelepresence.", + "For computer vision, Vision Transformers (ViTs) have become one of the go-to\ndeep net architectures. Despite being inspired by Convolutional Neural Networks\n(CNNs), ViTs' output remains sensitive to small spatial shifts in the input,\ni.e., it is not shift invariant. To address this shortcoming, we introduce novel\ndata-adaptive designs for each of the modules in ViTs, such as tokenization,\nself-attention, patch merging, and positional encoding. With our proposed\nmodules, we achieve true shift-equivariance on four well-established ViTs,\nnamely, Swin, SwinV2, CvT, and MViTv2. Empirically, we evaluate the proposed\nadaptive models on image classification and semantic segmentation tasks. These\nmodels achieve competitive performance across three different datasets while\nmaintaining 100% shift consistency.", + "Spike cameras, leveraging spike-based integration sampling and high temporal\nresolution, offer distinct advantages over standard cameras. However, existing\napproaches reliant on spike cameras often assume optimal illumination, a\ncondition frequently unmet in real-world scenarios. To address this, we\nintroduce SpikeNeRF, the first work that derives a NeRF-based volumetric scene\nrepresentation from spike camera data. Our approach leverages NeRF's multi-view\nconsistency to establish robust self-supervision, effectively eliminating\nerroneous measurements and uncovering coherent structures within exceedingly\nnoisy input amidst diverse real-world illumination scenarios. The framework\ncomprises two core elements: a spike generation model incorporating an\nintegrate-and-fire neuron layer and parameters accounting for non-idealities,\nsuch as threshold variation, and a spike rendering loss capable of generalizing\nacross varying illumination conditions. We describe how to effectively optimize\nneural radiance fields to render photorealistic novel views from the novel\ncontinuous spike stream, demonstrating advantages over other vision sensors in\ncertain scenes. Empirical evaluations conducted on both real and novel\nrealistically simulated sequences affirm the efficacy of our methodology.", + "We describe how to effectively optimize\nneural radiance fields to render photorealistic novel views from the novel\ncontinuous spike stream, demonstrating advantages over other vision sensors in\ncertain scenes. Empirical evaluations conducted on both real and novel\nrealistically simulated sequences affirm the efficacy of our methodology. The\ndataset and source code are released at\nhttps://github.com/BIT-Vision/SpikeNeRF.", + "We present Egocentric Action Scene Graphs (EASGs), a new representation for\nlong-form understanding of egocentric videos. EASGs extend standard\nmanually-annotated representations of egocentric videos, such as verb-noun\naction labels, by providing a temporally evolving graph-based description of\nthe actions performed by the camera wearer, including interacted objects, their\nrelationships, and how actions unfold in time. Through a novel annotation\nprocedure, we extend the Ego4D dataset by adding manually labeled Egocentric\nAction Scene Graphs, offering a rich set of annotations designed for long-form\negocentric video understanding. We hence define the EASG generation task and\nprovide a baseline approach, establishing preliminary benchmarks.
Experiments\non two downstream tasks, egocentric action anticipation and egocentric activity\nsummarization, highlight the effectiveness of EASGs for long-form egocentric\nvideo understanding. We will release the dataset and the code to replicate\nexperiments and annotations.", + "Existing research based on deep learning has extensively explored the problem\nof daytime image dehazing. However, few studies have considered the\ncharacteristics of nighttime hazy scenes. There are two distinctions between\nnighttime and daytime haze. First, there may be multiple active colored light\nsources with lower illumination intensity in nighttime scenes, which may cause\nhaze, glow and noise with localized, coupled and frequency-inconsistent\ncharacteristics. Second, due to the domain discrepancy between simulated and\nreal-world data, unrealistic brightness may occur when applying a dehazing\nmodel trained on simulated data to real-world data. To address the above two\nissues, we propose a semi-supervised model for real-world nighttime dehazing.\nFirst, the spatial attention and frequency spectrum filtering are implemented\nas a spatial-frequency domain information interaction module to handle the\nfirst issue. Second, a pseudo-label-based retraining strategy and a local\nwindow-based brightness loss are designed for the semi-supervised training process\nto suppress haze and glow while achieving realistic brightness. Experiments on\npublic benchmarks validate the effectiveness of the proposed method and its\nsuperiority over state-of-the-art methods.", + "Second, a pseudo-label-based retraining strategy and a local\nwindow-based brightness loss are designed for the semi-supervised training process\nto suppress haze and glow while achieving realistic brightness. Experiments on\npublic benchmarks validate the effectiveness of the proposed method and its\nsuperiority over state-of-the-art methods. The source code and Supplementary\nMaterials are available at https://github.com/Xiaofeng-life/SFSNiD.", + "Data-Free Knowledge Distillation (DFKD) is a promising task for training\nhigh-performance small models for practical deployment without relying on\nthe original training data. Existing methods commonly avoid relying on private\ndata by utilizing synthetic or sampled data. However, a long-overlooked issue\nis the severe distribution shift between their substitute data and the original\ndata, which manifests as large differences in image quality and class\nproportions. This harmful shift is essentially a confounder that\ncauses significant performance bottlenecks. To tackle this issue, this paper\nproposes a novel causal inference perspective to disentangle the student\nmodels from the impact of such shifts. By designing a customized causal graph,\nwe first reveal the causalities among the variables in the DFKD task.\nSubsequently, we propose a Knowledge Distillation Causal Intervention (KDCI)\nframework based on the backdoor adjustment to de-confound the confounder.
KDCI\ncan be flexibly combined with most existing state-of-the-art baselines.\nExperiments in combination with six representative DFKD methods demonstrate the\neffectiveness of our KDCI, which clearly helps existing methods under\nalmost all settings, \textit{e.g.}, improving the baseline by up to 15.54\%\naccuracy on the CIFAR-100 dataset.", + "Video Paragraph Grounding (VPG) is an emerging task in video-language\nunderstanding, which aims at localizing multiple sentences with semantic\nrelations and temporal order from an untrimmed video. However, existing VPG\napproaches are heavily reliant on a considerable number of temporal labels that\nare laborious and time-consuming to acquire. In this work, we introduce and\nexplore Weakly-Supervised Video Paragraph Grounding (WSVPG) to eliminate the\nneed for temporal annotations. Different from previous weakly-supervised\ngrounding frameworks based on multiple instance learning or reconstruction\nlearning for two-stage candidate ranking, we propose a novel siamese learning\nframework that jointly learns the cross-modal feature alignment and temporal\ncoordinate regression without timestamp labels to achieve concise one-stage\nlocalization for WSVPG. Specifically, we devise a Siamese Grounding TRansformer\n(SiamGTR) consisting of two weight-sharing branches for learning complementary\nsupervision.", + "Specifically, we devise a Siamese Grounding TRansformer\n(SiamGTR) consisting of two weight-sharing branches for learning complementary\nsupervision. An Augmentation Branch is utilized for directly regressing the\ntemporal boundaries of a complete paragraph within a pseudo video, and an\nInference Branch is designed to capture the order-guided feature correspondence\nfor localizing multiple sentences in a normal video. We demonstrate through\nextensive experiments that our paradigm has superior practicability and\nflexibility to achieve efficient weakly-supervised or semi-supervised learning,\noutperforming state-of-the-art methods trained with the same or stronger\nsupervision.", + "Deep Neural Networks (DNNs) are known to be susceptible to adversarial\nattacks. Previous research has mainly focused on improving adversarial robustness\nin the fully supervised setting, leaving the challenging domain of zero-shot\nadversarial robustness an open question. In this work, we investigate this\ndomain by leveraging the recent advances in large vision-language models, such\nas CLIP, to introduce zero-shot adversarial robustness to DNNs. We propose\nLAAT, a Language-driven, Anchor-based Adversarial Training strategy. LAAT\nutilizes the features of a text encoder for each category as fixed anchors\n(normalized feature embeddings), which are then employed for\nadversarial training. By leveraging the semantic consistency of the text\nencoders, LAAT aims to enhance the adversarial robustness of the image model on\nnovel categories. However, naively using text encoders leads to poor results.\nThrough analysis, we identified the issue to be the high cosine similarity\nbetween text encoders. We then design an expansion algorithm and an alignment\ncross-entropy loss to alleviate the problem.
Our experimental results\ndemonstrated that LAAT significantly improves zero-shot adversarial robustness\nover state-of-the-art methods. LAAT has the potential to enhance adversarial\nrobustness by large-scale multimodal models, especially when labeled data is\nunavailable during training.", + "Diffusion model-based image restoration (IR) aims to use diffusion models to\nrecover high-quality (HQ) images from degraded images, achieving promising\nperformance. Due to the inherent property of diffusion models, most existing\nmethods need long serial sampling chains to restore HQ images step-by-step,\nresulting in expensive sampling time and high computation costs. Moreover, such\nlong sampling chains hinder understanding the relationship between inputs and\nrestoration results since it is hard to compute the gradients in the whole\nchains. In this work, we aim to rethink the diffusion model-based IR models\nthrough a different perspective, i.e., a deep equilibrium (DEQ) fixed point\nsystem, called DeqIR. Specifically, we derive an analytical solution by\nmodeling the entire sampling chain in these IR models as a joint multivariate\nfixed point system. Based on the analytical solution, we can conduct parallel\nsampling and restore HQ images without training. Furthermore, we compute fast\ngradients via DEQ inversion and found that initialization optimization can\nboost image quality and control the generation direction. Extensive experiments\non benchmarks demonstrate the effectiveness of our method on typical IR tasks\nand real-world settings.", + "Object detection with event cameras benefits from the sensor's low latency\nand high dynamic range. However, it is costly to fully label event streams for\nsupervised training due to their high temporal resolution. To reduce this cost,\nwe present LEOD, the first method for label-efficient event-based detection.\nOur approach unifies weakly- and semi-supervised object detection with a\nself-training mechanism. We first utilize a detector pre-trained on limited\nlabels to produce pseudo ground truth on unlabeled events. Then, the detector\nis re-trained with both real and generated labels. Leveraging the temporal\nconsistency of events, we run bi-directional inference and apply tracking-based\npost-processing to enhance the quality of pseudo labels. To stabilize training\nagainst label noise, we further design a soft anchor assignment strategy. We\nintroduce new experimental protocols to evaluate the task of label-efficient\nevent-based detection on Gen1 and 1Mpx datasets. LEOD consistently outperforms\nsupervised baselines across various labeling ratios.", + "To stabilize training\nagainst label noise, we further design a soft anchor assignment strategy. We\nintroduce new experimental protocols to evaluate the task of label-efficient\nevent-based detection on Gen1 and 1Mpx datasets. LEOD consistently outperforms\nsupervised baselines across various labeling ratios. For example, on Gen1, it\nimproves mAP by 8.6% and 7.8% for RVT-S trained with 1% and 2% labels. On 1Mpx,\nRVT-S with 10% labels even surpasses its fully-supervised counterpart using\n100% labels. LEOD maintains its effectiveness even when all labeled data are\navailable, reaching new state-of-the-art results. Finally, we show that our\nmethod readily scales to improve larger detectors as well. 
Code is released at\nhttps://github.com/Wuziyi616/LEOD", + "Representation learning of pathology whole-slide images (WSIs) has\nprimarily relied on weak supervision with Multiple Instance Learning (MIL).\nHowever, the slide representations resulting from this approach are highly\ntailored to specific clinical tasks, which limits their expressivity and\ngeneralization, particularly in scenarios with limited data. Instead, we\nhypothesize that morphological redundancy in tissue can be leveraged to build a\ntask-agnostic slide representation in an unsupervised fashion. To this end, we\nintroduce PANTHER, a prototype-based approach rooted in the Gaussian mixture\nmodel that summarizes the set of WSI patches into a much smaller set of\nmorphological prototypes. Specifically, each patch is assumed to have been\ngenerated from a mixture distribution, where each mixture component represents\na morphological exemplar. Utilizing the estimated mixture parameters, we then\nconstruct a compact slide representation that can be readily used for a wide\nrange of downstream tasks.", + "Specifically, each patch is assumed to have been\ngenerated from a mixture distribution, where each mixture component represents\na morphological exemplar. Utilizing the estimated mixture parameters, we then\nconstruct a compact slide representation that can be readily used for a wide\nrange of downstream tasks. By performing an extensive evaluation of PANTHER on\nsubtyping and survival tasks using 13 datasets, we show that 1) PANTHER\noutperforms or is on par with supervised MIL baselines and 2) the analysis of\nmorphological prototypes brings new qualitative and quantitative insights into\nmodel interpretability.", + "Polarization is a fundamental property of light that encodes abundant\ninformation regarding surface shape, material, illumination and viewing\ngeometry. The computer vision community has witnessed a blossoming of\npolarization-based vision applications, such as reflection removal,\nshape-from-polarization, transparent object segmentation and color constancy,\npartially due to the emergence of single-chip mono/color polarization sensors\nthat make polarization data acquisition easier than ever. However, is\npolarization-based vision vulnerable to adversarial attacks? If so, is it\npossible to realize these adversarial attacks in the physical world, without\nbeing perceived by human eyes? In this paper, we warn the community of the\nvulnerability of polarization-based vision, which can be more serious than that of\nRGB-based vision. By adapting a commercial LCD projector, we achieve locally\ncontrollable polarizing projection, which is successfully utilized to fool\nstate-of-the-art polarization-based vision algorithms for glass segmentation\nand color constancy.", + "By adapting a commercial LCD projector, we achieve locally\ncontrollable polarizing projection, which is successfully utilized to fool\nstate-of-the-art polarization-based vision algorithms for glass segmentation\nand color constancy. Compared with existing physical attacks on RGB-based\nvision, which always suffer from the trade-off between attack efficacy and\nvisibility to the human eye, adversarial attacks based on polarizing projection are\ncontact-free and visually imperceptible, since naked human eyes can rarely\nperceive the difference between viciously manipulated polarizing light and ordinary\nillumination.
This poses unprecedented risks on polarization-based vision, both\nin the monochromatic and trichromatic domain, for which due attentions should\nbe paid and counter measures be considered.", + "Recent approaches to point tracking are able to recover the trajectory of any\nscene point through a large portion of a video despite the presence of\nocclusions. They are, however, too slow in practice to track every point\nobserved in a single frame in a reasonable amount of time. This paper\nintroduces DOT, a novel, simple and efficient method for solving this problem.\nIt first extracts a small set of tracks from key regions at motion boundaries\nusing an off-the-shelf point tracking algorithm. Given source and target\nframes, DOT then computes rough initial estimates of a dense flow field and\nvisibility mask through nearest-neighbor interpolation, before refining them\nusing a learnable optical flow estimator that explicitly handles occlusions and\ncan be trained on synthetic data with ground-truth correspondences. We show\nthat DOT is significantly more accurate than current optical flow techniques,\noutperforms sophisticated \"universal\" trackers like OmniMotion, and is on par\nwith, or better than, the best point tracking algorithms like CoTracker while\nbeing at least two orders of magnitude faster. Quantitative and qualitative\nexperiments with synthetic and real videos validate the promise of the proposed\napproach.", + "Quantitative and qualitative\nexperiments with synthetic and real videos validate the promise of the proposed\napproach. Code, data, and videos showcasing the capabilities of our approach\nare available in the project webpage: https://16lemoing.github.io/dot .", + "Split Learning (SL) is a distributed learning framework renowned for its\nprivacy-preserving features and minimal computational requirements. Previous\nresearch consistently highlights the potential privacy breaches in SL systems\nby server adversaries reconstructing training data. However, these studies\noften rely on strong assumptions or compromise system utility to enhance attack\nperformance. This paper introduces a new semi-honest Data Reconstruction Attack\non SL, named Feature-Oriented Reconstruction Attack (FORA). In contrast to\nprior works, FORA relies on limited prior knowledge, specifically that the\nserver utilizes auxiliary samples from the public without knowing any client's\nprivate information. This allows FORA to conduct the attack stealthily and\nachieve robust performance. The key vulnerability exploited by FORA is the\nrevelation of the model representation preference in the smashed data output by\nvictim client. FORA constructs a substitute client through feature-level\ntransfer learning, aiming to closely mimic the victim client's representation\npreference. Leveraging this substitute client, the server trains the attack\nmodel to effectively reconstruct private data. Extensive experiments showcase\nFORA's superior performance compared to state-of-the-art methods. Furthermore,\nthe paper systematically evaluates the proposed method's applicability across\ndiverse settings and advanced defense strategies.", + "With the rapid development of face recognition (FR) systems, the privacy of\nface images on social media is facing severe challenges due to the abuse of\nunauthorized FR systems. 
Some studies utilize adversarial attack techniques to\ndefend against malicious FR systems by generating adversarial examples.\nHowever, the generated adversarial examples, i.e., the protected face images,\ntend to suffer from subpar visual quality and low transferability. In this\npaper, we propose a novel face protection approach, dubbed DiffAM, which\nleverages the powerful generative ability of diffusion models to generate\nhigh-quality protected face images with adversarial makeup transferred from\nreference images. To be specific, we first introduce a makeup removal module to\ngenerate non-makeup images utilizing a fine-tuned diffusion model with guidance\nof textual prompts in CLIP space. As the inverse process of makeup transfer,\nmakeup removal can make it easier to establish the deterministic relationship\nbetween makeup domain and non-makeup domain regardless of elaborate text\nprompts.", + "As the inverse process of makeup transfer,\nmakeup removal can make it easier to establish the deterministic relationship\nbetween makeup domain and non-makeup domain regardless of elaborate text\nprompts. Then, with this relationship, a CLIP-based makeup loss along with an\nensemble attack strategy is introduced to jointly guide the direction of\nadversarial makeup domain, achieving the generation of protected face images\nwith natural-looking makeup and high black-box transferability. Extensive\nexperiments demonstrate that DiffAM achieves higher visual quality and attack\nsuccess rates with a gain of 12.98% under black-box setting compared with the\nstate of the arts. The code will be available at\nhttps://github.com/HansSunY/DiffAM.", + "LiDAR Upsampling is a challenging task for the perception systems of robots\nand autonomous vehicles, due to the sparse and irregular structure of\nlarge-scale scene contexts. Recent works propose to solve this problem by\nconverting LiDAR data from 3D Euclidean space into an image super-resolution\nproblem in 2D image space. Although their methods can generate high-resolution\nrange images with fine-grained details, the resulting 3D point clouds often\nblur out details and predict invalid points. In this paper, we propose TULIP, a\nnew method to reconstruct high-resolution LiDAR point clouds from\nlow-resolution LiDAR input. We also follow a range image-based approach but\nspecifically modify the patch and window geometries of a Swin-Transformer-based\nnetwork to better fit the characteristics of range images. We conducted several\nexperiments on three public real-world and simulated datasets. TULIP\noutperforms state-of-the-art methods in all relevant metrics and generates\nrobust and more realistic point clouds than prior works.", + "Inspired by the success of Large Language Models in dealing with new tasks\nvia In-Context Learning (ICL) in NLP, researchers have also developed Large\nVision-Language Models (LVLMs) with ICL capabilities. However, when\nimplementing ICL using these LVLMs, researchers usually resort to the simplest\nway like random sampling to configure the in-context sequence, thus leading to\nsub-optimal results. To enhance the ICL performance, in this study, we use\nVisual Question Answering (VQA) as case study to explore diverse in-context\nconfigurations to find the powerful ones. Additionally, through observing the\nchanges of the LVLM outputs by altering the in-context sequence, we gain\ninsights into the inner properties of LVLMs, improving our understanding of\nthem. 
Specifically, to explore in-context configurations, we design diverse\nretrieval methods and employ different strategies to manipulate the retrieved\ndemonstrations. Through exhaustive experiments on three VQA datasets: VQAv2,\nVizWiz, and OK-VQA, we uncover three important inner properties of the applied\nLVLM and demonstrate which strategies can consistently improve the ICL VQA\nperformance.", + "Through exhaustive experiments on three VQA datasets: VQAv2,\nVizWiz, and OK-VQA, we uncover three important inner properties of the applied\nLVLM and demonstrate which strategies can consistently improve the ICL VQA\nperformance. Our code is provided in:\nhttps://github.com/GaryJiajia/OFv2_ICL_VQA.", + "Efficient generation of 3D digital humans is important in several industries,\nincluding virtual reality, social media, and cinematic production. 3D\ngenerative adversarial networks (GANs) have demonstrated state-of-the-art\n(SOTA) quality and diversity for generated assets. Current 3D GAN\narchitectures, however, typically rely on volume representations, which are\nslow to render, thereby hampering the GAN training and requiring\nmulti-view-inconsistent 2D upsamplers. Here, we introduce Gaussian Shell Maps\n(GSMs) as a framework that connects SOTA generator network architectures with\nemerging 3D Gaussian rendering primitives using an articulable multi\nshell--based scaffold. In this setting, a CNN generates a 3D texture stack with\nfeatures that are mapped to the shells. The latter represent inflated and\ndeflated versions of a template surface of a digital human in a canonical body\npose. Instead of rasterizing the shells directly, we sample 3D Gaussians on the\nshells whose attributes are encoded in the texture features. These Gaussians\nare efficiently and differentiably rendered.", + "Instead of rasterizing the shells directly, we sample 3D Gaussians on the\nshells whose attributes are encoded in the texture features. These Gaussians\nare efficiently and differentiably rendered. The ability to articulate the\nshells is important during GAN training and, at inference time, to deform a\nbody into arbitrary user-defined poses. Our efficient rendering scheme bypasses\nthe need for view-inconsistent upsamplers and achieves high-quality multi-view\nconsistent renderings at a native resolution of $512 \\times 512$ pixels. We\ndemonstrate that GSMs successfully generate 3D humans when trained on\nsingle-view datasets, including SHHQ and DeepFashion.", + "The task of No-Reference Image Quality Assessment (NR-IQA) is to estimate the\nquality score of an input image without additional information. NR-IQA models\nplay a crucial role in the media industry, aiding in performance evaluation and\noptimization guidance. However, these models are found to be vulnerable to\nadversarial attacks, which introduce imperceptible perturbations to input\nimages, resulting in significant changes in predicted scores. In this paper, we\npropose a defense method to improve the stability in predicted scores when\nattacked by small perturbations, thus enhancing the adversarial robustness of\nNR-IQA models. To be specific, we present theoretical evidence showing that the\nmagnitude of score changes is related to the $\\ell_1$ norm of the model's\ngradient with respect to the input image. Building upon this theoretical\nfoundation, we propose a norm regularization training strategy aimed at\nreducing the $\\ell_1$ norm of the gradient, thereby boosting the robustness of\nNR-IQA models. 
Experiments conducted on four NR-IQA baseline models demonstrate\nthe effectiveness of our strategy in reducing score changes in the presence of\nadversarial attacks.", + "Experiments conducted on four NR-IQA baseline models demonstrate\nthe effectiveness of our strategy in reducing score changes in the presence of\nadversarial attacks. To the best of our knowledge, this work marks the first\nattempt to defend against adversarial attacks on NR-IQA models. Our study\noffers valuable insights into the adversarial robustness of NR-IQA models and\nprovides a foundation for future research in this area.", + "Humans commonly work with multiple objects in daily life and can intuitively\ntransfer manipulation skills to novel objects by understanding object\nfunctional regularities. However, existing technical approaches for analyzing\nand synthesizing hand-object manipulation are mostly limited to handling a\nsingle hand and object due to the lack of data support. To address this, we\nconstruct TACO, an extensive bimanual hand-object-interaction dataset spanning\na large variety of tool-action-object compositions for daily human activities.\nTACO contains 2.5K motion sequences paired with third-person and egocentric\nviews, precise hand-object 3D meshes, and action labels. To rapidly expand the\ndata scale, we present a fully automatic data acquisition pipeline combining\nmulti-view sensing with an optical motion capture system. With the vast\nresearch fields provided by TACO, we benchmark three generalizable\nhand-object-interaction tasks: compositional action recognition, generalizable\nhand-object motion forecasting, and cooperative grasp synthesis. Extensive\nexperiments reveal new insights, challenges, and opportunities for advancing\nthe studies of generalizable hand-object motion analysis and synthesis. Our\ndata and code are available at https://taco2024.github.io.", + "While existing motion style transfer methods are effective between two\nmotions with identical content, their performance significantly diminishes when\ntransferring style between motions with different contents. This challenge lies\nin the lack of clear separation between content and style of a motion. To\ntackle this challenge, we propose a novel motion style transformer that\neffectively disentangles style from content and generates a plausible motion\nwith transferred style from a source motion. Our distinctive approach to\nachieving the goal of disentanglement is twofold: (1) a new architecture for\nmotion style transformer with `part-attentive style modulator across body\nparts' and `Siamese encoders that encode style and content features\nseparately'; (2) style disentanglement loss. Our method outperforms existing\nmethods and demonstrates exceptionally high quality, particularly in motion\npairs with different contents, without the need for heuristic post-processing.\nCodes are available at https://github.com/Boeun-Kim/MoST.", + "The quality of the prompts provided to text-to-image diffusion models\ndetermines how faithful the generated content is to the user's intent, often\nrequiring `prompt engineering'. To harness visual concepts from target images\nwithout prompt engineering, current approaches largely rely on embedding\ninversion by optimizing and then mapping them to pseudo-tokens. However,\nworking with such high-dimensional vector representations is challenging\nbecause they lack semantics and interpretability, and only allow simple vector\noperations when using them. 
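The work introduced next recovers discrete, human-readable prompts rather than pseudo-tokens. As a generic illustration of the projection primitive involved, the sketch below snaps optimized continuous prompt vectors to their nearest vocabulary embeddings; `embedding_matrix`, the cosine-similarity choice, and the hypothetical `tokenizer` are assumptions, and the actual delayed projection schedule is not modeled here.

```python
import torch
import torch.nn.functional as F

def project_to_vocab(soft_prompt, embedding_matrix):
    """Map each optimized continuous prompt vector (n_tokens, d) to the id of
    its nearest vocabulary embedding (V, d) by cosine similarity, turning
    uninterpretable pseudo-tokens into discrete, decodable token ids."""
    sp = F.normalize(soft_prompt, dim=-1)
    vocab = F.normalize(embedding_matrix, dim=-1)
    sims = sp @ vocab.t()              # (n_tokens, V) similarity scores
    return sims.argmax(dim=-1)         # nearest token id per prompt position

# token_ids = project_to_vocab(soft_prompt, embedding_matrix)
# text = tokenizer.decode(token_ids.tolist())   # tokenizer is a hypothetical stand-in
```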
Instead, this work focuses on inverting the\ndiffusion model to obtain interpretable language prompts directly. The\nchallenge of doing this lies in the fact that the resulting optimization\nproblem is fundamentally discrete and the space of prompts is exponentially\nlarge; this makes using standard optimization techniques, such as stochastic\ngradient descent, difficult. To this end, we utilize a delayed projection\nscheme to optimize for prompts representative of the vocabulary space in the\nmodel. Further, we leverage the findings that different timesteps of the\ndiffusion process cater to different levels of detail in an image.", + "To this end, we utilize a delayed projection\nscheme to optimize for prompts representative of the vocabulary space in the\nmodel. Further, we leverage the findings that different timesteps of the\ndiffusion process cater to different levels of detail in an image. The later,\nnoisy, timesteps of the forward diffusion process correspond to the semantic\ninformation, and therefore, prompt inversion in this range provides tokens\nrepresentative of the image semantics. We show that our approach can identify\nsemantically interpretable and meaningful prompts for a target image which can\nbe used to synthesize diverse images with similar content. We further\nillustrate the application of the optimized prompts in evolutionary image\ngeneration and concept removal.", + "Neural implicit fields have been a de facto standard in novel view synthesis.\nRecently, there exist some methods exploring fusing multiple modalities within\na single field, aiming to share implicit features from different modalities to\nenhance reconstruction performance. However, these modalities often exhibit\nmisaligned behaviors: optimizing for one modality, such as LiDAR, can adversely\naffect another, like camera performance, and vice versa. In this work, we\nconduct comprehensive analyses on the multimodal implicit field of LiDAR-camera\njoint synthesis, revealing the underlying issue lies in the misalignment of\ndifferent sensors. Furthermore, we introduce AlignMiF, a geometrically aligned\nmultimodal implicit field with two proposed modules: Geometry-Aware Alignment\n(GAA) and Shared Geometry Initialization (SGI). These modules effectively align\nthe coarse geometry across different modalities, significantly enhancing the\nfusion process between LiDAR and camera data. Through extensive experiments\nacross various datasets and scenes, we demonstrate the effectiveness of our\napproach in facilitating better interaction between LiDAR and camera modalities\nwithin a unified neural field.", + "Through extensive experiments\nacross various datasets and scenes, we demonstrate the effectiveness of our\napproach in facilitating better interaction between LiDAR and camera modalities\nwithin a unified neural field. Specifically, our proposed AlignMiF, achieves\nremarkable improvement over recent implicit fusion methods (+2.01 and +3.11\nimage PSNR on the KITTI-360 and Waymo datasets) and consistently surpasses\nsingle modality performance (13.8% and 14.2% reduction in LiDAR Chamfer\nDistance on the respective datasets).", + "Large generative diffusion models have revolutionized text-to-image\ngeneration and offer immense potential for conditional generation tasks such as\nimage enhancement, restoration, editing, and compositing. However, their\nwidespread adoption is hindered by the high computational cost, which limits\ntheir real-time application. 
To address this challenge, we introduce a novel\nmethod dubbed CoDi, that adapts a pre-trained latent diffusion model to accept\nadditional image conditioning inputs while significantly reducing the sampling\nsteps required to achieve high-quality results. Our method can leverage\narchitectures such as ControlNet to incorporate conditioning inputs without\ncompromising the model's prior knowledge gained during large scale\npre-training. Additionally, a conditional consistency loss enforces consistent\npredictions across diffusion steps, effectively compelling the model to\ngenerate high-quality images with conditions in a few steps. Our\nconditional-task learning and distillation approach outperforms previous\ndistillation methods, achieving a new state-of-the-art in producing\nhigh-quality images with very few steps (e.g., 1-4) across multiple tasks,\nincluding super-resolution, text-guided image editing, and depth-to-image\ngeneration.", + "CAD programs are a popular way to compactly encode shapes as a sequence of\noperations that are easy to parametrically modify. However, without sufficient\nsemantic comments and structure, such programs can be challenging to\nunderstand, let alone modify. We introduce the problem of semantic commenting\nCAD programs, wherein the goal is to segment the input program into code blocks\ncorresponding to semantically meaningful shape parts and assign a semantic\nlabel to each block. We solve the problem by combining program parsing with\nvisual-semantic analysis afforded by recent advances in foundational language\nand vision models. Specifically, by executing the input programs, we create\nshapes, which we use to generate conditional photorealistic images to make use\nof semantic annotators for such images. We then distill the information across\nthe images and link back to the original programs to semantically comment on\nthem. Additionally, we collected and annotated a benchmark dataset, CADTalk,\nconsisting of 5,288 machine-made programs and 45 human-made programs with\nground truth semantic comments.", + "We then distill the information across\nthe images and link back to the original programs to semantically comment on\nthem. Additionally, we collected and annotated a benchmark dataset, CADTalk,\nconsisting of 5,288 machine-made programs and 45 human-made programs with\nground truth semantic comments. We extensively evaluated our approach, compared\nit to a GPT-based baseline, and an open-set shape segmentation baseline, and\nreported an 83.24% accuracy on the new CADTalk dataset. Code and data:\nhttps://enigma-li.github.io/CADTalk/.", + "Collecting well-matched multimedia datasets is crucial for training\ncross-modal retrieval models. However, in real-world scenarios, massive\nmultimodal data are harvested from the Internet, which inevitably contains\nPartially Mismatched Pairs (PMPs). Undoubtedly, such semantical irrelevant data\nwill remarkably harm the cross-modal retrieval performance. Previous efforts\ntend to mitigate this problem by estimating a soft correspondence to\ndown-weight the contribution of PMPs. In this paper, we aim to address this\nchallenge from a new perspective: the potential semantic similarity among\nunpaired samples makes it possible to excavate useful knowledge from mismatched\npairs. To achieve this, we propose L2RM, a general framework based on Optimal\nTransport (OT) that learns to rematch mismatched pairs. 
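For readers unfamiliar with the transport plans mentioned above, the sketch below is a plain entropic-OT (Sinkhorn) routine that produces a plan from a given cost matrix and marginals. The random cost matrix and uniform marginals are toy placeholders; the learned cost function and the partial-OT restriction described next are not part of this sketch.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    """Entropic optimal transport: returns a plan P that approximately
    minimizes <P, cost> subject to P @ 1 = a and P.T @ 1 = b."""
    K = np.exp(-cost / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                     # scale rows to match marginal a
        v = b / (K.T @ u)                   # scale columns to match marginal b
    return u[:, None] * K * v[None, :]

# Toy usage: rematch 4 images with 4 texts given a pairwise cost matrix.
cost = np.random.rand(4, 4)
plan = sinkhorn(cost, np.full(4, 0.25), np.full(4, 0.25))
```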
In detail, L2RM aims to\ngenerate refined alignments by seeking a minimal-cost transport plan across\ndifferent modalities. To formalize the rematching idea in OT, first, we propose\na self-supervised cost function that automatically learns from explicit\nsimilarity-cost mapping relation.", + "In detail, L2RM aims to\ngenerate refined alignments by seeking a minimal-cost transport plan across\ndifferent modalities. To formalize the rematching idea in OT, first, we propose\na self-supervised cost function that automatically learns from explicit\nsimilarity-cost mapping relation. Second, we present to model a partial OT\nproblem while restricting the transport among false positives to further boost\nrefined alignments. Extensive experiments on three benchmarks demonstrate our\nL2RM significantly improves the robustness against PMPs for existing models.\nThe code is available at https://github.com/hhc1997/L2RM.", + "Self-supervised foundation models have shown great potential in computer\nvision thanks to the pre-training paradigm of masked autoencoding. Scale is a\nprimary factor influencing the performance of these foundation models. However,\nthese large foundation models often result in high computational cost. This\npaper focuses on pre-training relatively small vision transformer models that\ncould be efficiently adapted to downstream tasks. Specifically, taking\ninspiration from knowledge distillation in model compression, we propose a new\nasymmetric masked distillation (AMD) framework for pre-training relatively\nsmall models with autoencoding. The core of AMD is to devise an asymmetric\nmasking strategy, where the teacher model is enabled to see more context\ninformation with a lower masking ratio, while the student model is still\nequipped with a high masking ratio. We design customized multi-layer feature\nalignment between the teacher encoder and student encoder to regularize the\npre-training of student MAE. To demonstrate the effectiveness and versatility\nof AMD, we apply it to both ImageMAE and VideoMAE for pre-training relatively\nsmall ViT models. AMD achieved 84.6% classification accuracy on IN1K using the\nViT-B model.", + "To demonstrate the effectiveness and versatility\nof AMD, we apply it to both ImageMAE and VideoMAE for pre-training relatively\nsmall ViT models. AMD achieved 84.6% classification accuracy on IN1K using the\nViT-B model. And AMD achieves 73.3% classification accuracy using the ViT-B\nmodel on the Something-in-Something V2 dataset, a 3.7% improvement over the\noriginal ViT-B model from VideoMAE. We also transfer AMD pre-trained models to\ndownstream tasks and obtain consistent performance improvement over the\noriginal masked autoencoding. The code and models are available at\nhttps://github.com/MCG-NJU/AMD.", + "Understanding human motion from video is essential for a range of\napplications, including pose estimation, mesh recovery and action recognition.\nWhile state-of-the-art methods predominantly rely on transformer-based\narchitectures, these approaches have limitations in practical scenarios.\nTransformers are slower when sequentially predicting on a continuous stream of\nframes in real-time, and do not generalize to new frame rates. In light of\nthese constraints, we propose a novel attention-free spatiotemporal model for\nhuman motion understanding building upon recent advancements in state space\nmodels. 
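As context for the state-space backbone mentioned above, here is a textbook discrete linear SSM recurrence (x_{t+1} = A x_t + B u_t, y_t = C x_t) run as a sequential scan; the matrices and dimensions are random placeholders, not the paper's parameterization or its efficient implementation.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discrete linear state-space model over an input sequence u of
    shape (T, d_in): x_{t+1} = A x_t + B u_t, y_t = C x_t."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:                      # sequential scan over time steps
        x = A @ x + B @ u_t
        ys.append(C @ x)
    return np.stack(ys)                # (T, d_out)

# Placeholder dimensions purely for illustration.
T, d_in, d_state, d_out = 16, 8, 32, 8
y = ssm_scan(np.eye(d_state) * 0.9,
             np.random.randn(d_state, d_in) * 0.1,
             np.random.randn(d_out, d_state) * 0.1,
             np.random.randn(T, d_in))
```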
Our model not only matches the performance of transformer-based models\nin various motion understanding tasks but also brings added benefits like\nadaptability to different video frame rates and enhanced training speed when\nworking with longer sequence of keypoints. Moreover, the proposed model\nsupports both offline and real-time applications. For real-time sequential\nprediction, our model is both memory efficient and several times faster than\ntransformer-based approaches while maintaining their high accuracy.", + "It is a long-lasting goal to design an embodied system that can solve\nlong-horizon open-world tasks in human-like ways. However, existing approaches\nusually struggle with compound difficulties caused by the logic-aware\ndecomposition and context-aware execution of these tasks. To this end, we\nintroduce MP5, an open-ended multimodal embodied system built upon the\nchallenging Minecraft simulator, which can decompose feasible sub-objectives,\ndesign sophisticated situation-aware plans, and perform embodied action\ncontrol, with frequent communication with a goal-conditioned active perception\nscheme. Specifically, MP5 is developed on top of recent advances in Multimodal\nLarge Language Models (MLLMs), and the system is modulated into functional\nmodules that can be scheduled and collaborated to ultimately solve pre-defined\ncontext- and process-dependent tasks. Extensive experiments prove that MP5 can\nachieve a 22% success rate on difficult process-dependent tasks and a 91%\nsuccess rate on tasks that heavily depend on the context. Moreover, MP5\nexhibits a remarkable ability to address many open-ended tasks that are\nentirely novel.", + "Video anomaly understanding (VAU) aims to automatically comprehend unusual\noccurrences in videos, thereby enabling various applications such as traffic\nsurveillance and industrial manufacturing. While existing VAU benchmarks\nprimarily concentrate on anomaly detection and localization, our focus is on\nmore practicality, prompting us to raise the following crucial questions: \"what\nanomaly occurred?\", \"why did it happen?\", and \"how severe is this abnormal\nevent?\". In pursuit of these answers, we present a comprehensive benchmark for\nCausation Understanding of Video Anomaly (CUVA). Specifically, each instance of\nthe proposed benchmark involves three sets of human annotations to indicate the\n\"what\", \"why\" and \"how\" of an anomaly, including 1) anomaly type, start and end\ntimes, and event descriptions, 2) natural language explanations for the cause\nof an anomaly, and 3) free text reflecting the effect of the abnormality. In\naddition, we also introduce MMEval, a novel evaluation metric designed to\nbetter align with human preferences for CUVA, facilitating the measurement of\nexisting LLMs in comprehending the underlying cause and corresponding effect of\nvideo anomalies.", + "In\naddition, we also introduce MMEval, a novel evaluation metric designed to\nbetter align with human preferences for CUVA, facilitating the measurement of\nexisting LLMs in comprehending the underlying cause and corresponding effect of\nvideo anomalies. Finally, we propose a novel prompt-based method that can serve\nas a baseline approach for the challenging CUVA. We conduct extensive\nexperiments to show the superiority of our evaluation metric and the\nprompt-based approach. 
Our code and dataset are available at\nhttps://github.com/fesvhtr/CUVA.", + "3D visual grounding involves matching natural language descriptions with\ntheir corresponding objects in 3D spaces. Existing methods often face\nchallenges with accuracy in object recognition and struggle in interpreting\ncomplex linguistic queries, particularly with descriptions that involve\nmultiple anchors or are view-dependent. In response, we present the MiKASA\n(Multi-Key-Anchor Scene-Aware) Transformer. Our novel end-to-end trained model\nintegrates a self-attention-based scene-aware object encoder and an original\nmulti-key-anchor technique, enhancing object recognition accuracy and the\nunderstanding of spatial relationships. Furthermore, MiKASA improves the\nexplainability of decision-making, facilitating error diagnosis. Our model\nachieves the highest overall accuracy in the Referit3D challenge for both the\nSr3D and Nr3D datasets, particularly excelling by a large margin in categories\nthat require viewpoint-dependent descriptions.", + "The long-tailed distribution problem in medical image analysis reflects a\nhigh prevalence of common conditions and a low prevalence of rare ones, which\nposes a significant challenge in developing a unified model capable of\nidentifying rare or novel tumor categories not encountered during training. In\nthis paper, we propose a new zero-shot pan-tumor segmentation framework (ZePT)\nbased on query-disentangling and self-prompting to segment unseen tumor\ncategories beyond the training set. ZePT disentangles the object queries into\ntwo subsets and trains them in two stages. Initially, it learns a set of\nfundamental queries for organ segmentation through an object-aware feature\ngrouping strategy, which gathers organ-level visual features. Subsequently, it\nrefines the other set of advanced queries that focus on the auto-generated\nvisual prompts for unseen tumor segmentation. Moreover, we introduce\nquery-knowledge alignment at the feature level to enhance each query's\ndiscriminative representation and generalizability. Extensive experiments on\nvarious tumor segmentation tasks demonstrate the performance superiority of\nZePT, which surpasses the previous counterparts and evidence the promising\nability for zero-shot tumor segmentation in real-world settings.", + "Video moment retrieval and highlight detection are two highly valuable tasks\nin video understanding, but until recently they have been jointly studied.\nAlthough existing studies have made impressive advancement recently, they\npredominantly follow the data-driven bottom-up paradigm. Such paradigm\noverlooks task-specific and inter-task effects, resulting in poor model\nperformance. In this paper, we propose a novel task-driven top-down framework\nTaskWeave for joint moment retrieval and highlight detection. The framework\nintroduces a task-decoupled unit to capture task-specific and common\nrepresentations. To investigate the interplay between the two tasks, we propose\nan inter-task feedback mechanism, which transforms the results of one task as\nguiding masks to assist the other task. Different from existing methods, we\npresent a task-dependent joint loss function to optimize the model.\nComprehensive experiments and in-depth ablation studies on QVHighlights, TVSum,\nand Charades-STA datasets corroborate the effectiveness and flexibility of the\nproposed framework. 
Codes are available at\nhttps://github.com/EdenGabriel/TaskWeave.", + "Contrastive pretraining of image-text foundation models, such as CLIP,\ndemonstrated excellent zero-shot performance and improved robustness on a wide\nrange of downstream tasks. However, these models utilize large\ntransformer-based encoders with significant memory and latency overhead which\npose challenges for deployment on mobile devices. In this work, we introduce\nMobileCLIP -- a new family of efficient image-text models optimized for runtime\nperformance along with a novel and efficient training approach, namely\nmulti-modal reinforced training. The proposed training approach leverages\nknowledge transfer from an image captioning model and an ensemble of strong\nCLIP encoders to improve the accuracy of efficient models. Our approach avoids\ntrain-time compute overhead by storing the additional knowledge in a reinforced\ndataset. MobileCLIP sets a new state-of-the-art latency-accuracy tradeoff for\nzero-shot classification and retrieval tasks on several datasets. Our\nMobileCLIP-S2 variant is 2.3$\\times$ faster while more accurate compared to\nprevious best CLIP model based on ViT-B/16.", + "Our\nMobileCLIP-S2 variant is 2.3$\\times$ faster while more accurate compared to\nprevious best CLIP model based on ViT-B/16. We further demonstrate the\neffectiveness of our multi-modal reinforced training by training a CLIP model\nbased on ViT-B/16 image backbone and achieving +2.9% average performance\nimprovement on 38 evaluation benchmarks compared to the previous best.\nMoreover, we show that the proposed approach achieves 10$\\times$-1000$\\times$\nimproved learning efficiency when compared with non-reinforced CLIP training.\nCode and models are available at https://github.com/apple/ml-mobileclip .", + "Point-based interactive editing serves as an essential tool to complement the\ncontrollability of existing generative models. A concurrent work,\nDragDiffusion, updates the diffusion latent map in response to user inputs,\ncausing global latent map alterations. This results in imprecise preservation\nof the original content and unsuccessful editing due to gradient vanishing. In\ncontrast, we present DragNoise, offering robust and accelerated editing without\nretracing the latent map. The core rationale of DragNoise lies in utilizing the\npredicted noise output of each U-Net as a semantic editor. This approach is\ngrounded in two critical observations: firstly, the bottleneck features of\nU-Net inherently possess semantically rich features ideal for interactive\nediting; secondly, high-level semantics, established early in the denoising\nprocess, show minimal variation in subsequent stages. Leveraging these\ninsights, DragNoise edits diffusion semantics in a single denoising step and\nefficiently propagates these changes, ensuring stability and efficiency in\ndiffusion editing. Comparative experiments reveal that DragNoise achieves\nsuperior control and semantic retention, reducing the optimization time by over\n50% compared to DragDiffusion.", + "Comparative experiments reveal that DragNoise achieves\nsuperior control and semantic retention, reducing the optimization time by over\n50% compared to DragDiffusion. Our codes are available at\nhttps://github.com/haofengl/DragNoise.", + "Pseudo-label-based semi-supervised learning (SSL) algorithms trained on a\nclass-imbalanced set face two cascading challenges: 1) Classifiers tend to be\nbiased towards majority classes, and 2) Biased pseudo-labels are used for\ntraining. 
It is difficult to appropriately re-balance the classifiers in SSL\nbecause the class distribution of an unlabeled set is often unknown and could\nbe mismatched with that of a labeled set. We propose a novel class-imbalanced\nSSL algorithm called class-distribution-mismatch-aware debiasing (CDMAD). For\neach iteration of training, CDMAD first assesses the classifier's biased degree\ntowards each class by calculating the logits on an image without any patterns\n(e.g., solid color image), which can be considered irrelevant to the training\nset. CDMAD then refines biased pseudo-labels of the base SSL algorithm by\nensuring the classifier's neutrality. CDMAD uses these refined pseudo-labels\nduring the training of the base SSL algorithm to improve the quality of the\nrepresentations. In the test phase, CDMAD similarly refines biased class\npredictions on test samples.", + "CDMAD uses these refined pseudo-labels\nduring the training of the base SSL algorithm to improve the quality of the\nrepresentations. In the test phase, CDMAD similarly refines biased class\npredictions on test samples. CDMAD can be seen as an extension of post-hoc\nlogit adjustment to address a challenge of incorporating the unknown class\ndistribution of the unlabeled set for re-balancing the biased classifier under\nclass distribution mismatch. CDMAD ensures Fisher consistency for the balanced\nerror. Extensive experiments verify the effectiveness of CDMAD.", + "Despite being (pre)trained on a massive amount of data, state-of-the-art\nvideo-language alignment models are not robust to semantically-plausible\ncontrastive changes in the video captions. Our work addresses this by\nidentifying a broad spectrum of contrast misalignments, such as replacing\nentities, actions, and flipping event order, which alignment models should be\nrobust against. To this end, we introduce the VideoCon, a video-language\nalignment dataset constructed by a large language model that generates\nplausible contrast video captions and explanations for differences between\noriginal and contrast video captions. Then, a generative video-language model\nis finetuned with VideoCon to assess video-language entailment and generate\nexplanations. Our VideoCon-based alignment model significantly outperforms\ncurrent models. It exhibits a 12-point increase in AUC for the video-language\nalignment task on human-generated contrast captions. Finally, our model sets\nnew state of the art zero-shot performance in temporally-extensive\nvideo-language tasks such as text-to-video retrieval (SSv2-Temporal) and video\nquestion answering (ATP-Hard).", + "Finally, our model sets\nnew state of the art zero-shot performance in temporally-extensive\nvideo-language tasks such as text-to-video retrieval (SSv2-Temporal) and video\nquestion answering (ATP-Hard). Moreover, our model shows superior performance\non novel videos and human-crafted captions and explanations. Our code and data\nare available at https://github.com/Hritikbansal/videocon.", + "Sketch semantic segmentation is a well-explored and pivotal problem in\ncomputer vision involving the assignment of pre-defined part labels to\nindividual strokes. This paper presents ContextSeg - a simple yet highly\neffective approach to tackling this problem with two stages. In the first\nstage, to better encode the shape and positional information of strokes, we\npropose to predict an extra dense distance field in an autoencoder network to\nreinforce structural information learning. 
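A dense distance field of the kind used as the extra prediction target above can be computed directly from a rasterized stroke mask. The SciPy-based sketch below is one illustrative way to build such a target; the canvas size and the single toy stroke are assumptions, not the paper's data pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def stroke_distance_field(stroke_mask):
    """Given a binary raster where stroke pixels are 1, return for every pixel
    its Euclidean distance to the nearest stroke pixel, a dense target that
    encodes shape and position more smoothly than the sparse mask itself."""
    # distance_transform_edt measures distance to the nearest zero pixel,
    # so invert the mask: strokes become 0, background stays nonzero.
    return distance_transform_edt(1 - stroke_mask.astype(np.uint8))

canvas = np.zeros((64, 64), dtype=np.uint8)
canvas[32, 10:54] = 1                  # a single horizontal stroke
dist = stroke_distance_field(canvas)   # 0 on the stroke, growing away from it
```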
In the second stage, we treat an\nentire stroke as a single entity and label a group of strokes within the same\nsemantic part using an auto-regressive Transformer with the default attention\nmechanism. By group-based labeling, our method can fully leverage the context\ninformation when making decisions for the remaining groups of strokes. Our\nmethod achieves the best segmentation accuracy compared with state-of-the-art\napproaches on two representative datasets and has been extensively evaluated\ndemonstrating its superior performance. Additionally, we offer insights into\nsolving part imbalance in training data and the preliminary experiment on\ncross-category training, which can inspire future research in this field.", + "How do two sets of images differ? Discerning set-level differences is crucial\nfor understanding model behaviors and analyzing datasets, yet manually sifting\nthrough thousands of images is impractical. To aid in this discovery process,\nwe explore the task of automatically describing the differences between two\n$\\textbf{sets}$ of images, which we term Set Difference Captioning. This task\ntakes in image sets $D_A$ and $D_B$, and outputs a description that is more\noften true on $D_A$ than $D_B$. We outline a two-stage approach that first\nproposes candidate difference descriptions from image sets and then re-ranks\nthe candidates by checking how well they can differentiate the two sets. We\nintroduce VisDiff, which first captions the images and prompts a language model\nto propose candidate descriptions, then re-ranks these descriptions using CLIP.\nTo evaluate VisDiff, we collect VisDiffBench, a dataset with 187 paired image\nsets with ground truth difference descriptions. We apply VisDiff to various\ndomains, such as comparing datasets (e.g., ImageNet vs. ImageNetV2), comparing\nclassification models (e.g., zero-shot CLIP vs.", + "We apply VisDiff to various\ndomains, such as comparing datasets (e.g., ImageNet vs. ImageNetV2), comparing\nclassification models (e.g., zero-shot CLIP vs. supervised ResNet), summarizing\nmodel failure modes (supervised ResNet), characterizing differences between\ngenerative models (e.g., StableDiffusionV1 and V2), and discovering what makes\nimages memorable. Using VisDiff, we are able to find interesting and previously\nunknown differences in datasets and models, demonstrating its utility in\nrevealing nuanced insights.", + "Addressing biases in computer vision models is crucial for real-world AI\ndeployments. However, mitigating visual biases is challenging due to their\nunexplainable nature, often identified indirectly through visualization or\nsample statistics, which necessitates additional human supervision for\ninterpretation. To tackle this issue, we propose the Bias-to-Text (B2T)\nframework, which interprets visual biases as keywords. Specifically, we extract\ncommon keywords from the captions of mispredicted images to identify potential\nbiases in the model. We then validate these keywords by measuring their\nsimilarity to the mispredicted images using a vision-language scoring model.\nThe keyword explanation form of visual bias offers several advantages, such as\na clear group naming for bias discovery and a natural extension for debiasing\nusing these group names. Our experiments demonstrate that B2T can identify\nknown biases, such as gender bias in CelebA, background bias in Waterbirds, and\ndistribution shifts in ImageNet-R/C. 
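The keyword-extraction step described above can be illustrated with a few lines of counting; the toy stopword list and captions below are placeholders, and the subsequent validation against the images with a vision-language scoring model is assumed to happen elsewhere.

```python
from collections import Counter
import re

STOPWORDS = {"a", "an", "the", "of", "in", "on", "and", "with", "is", "are"}

def candidate_bias_keywords(mispredicted_captions, top_k=10):
    """Count content words across captions of mispredicted images; frequent
    words are candidate bias keywords to be validated afterwards."""
    counts = Counter()
    for cap in mispredicted_captions:
        words = re.findall(r"[a-z]+", cap.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_k)

captions = ["a bird standing on water", "a bird over the water", "bird near water"]
print(candidate_bias_keywords(captions, top_k=3))  # e.g. [('bird', 3), ('water', 3), ...]
```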
Additionally, B2T uncovers novel biases in\nlarger datasets, such as Dollar Street and ImageNet. For example, we discovered\na contextual bias between \"bee\" and \"flower\" in ImageNet.", + "Additionally, B2T uncovers novel biases in\nlarger datasets, such as Dollar Street and ImageNet. For example, we discovered\na contextual bias between \"bee\" and \"flower\" in ImageNet. We also highlight\nvarious applications of B2T keywords, including debiased training, CLIP\nprompting, and model comparison.", + "Context-aware emotion recognition (CAER) has recently boosted the practical\napplications of affective computing techniques in unconstrained environments.\nMainstream CAER methods invariably extract ensemble representations from\ndiverse contexts and subject-centred characteristics to perceive the target\nperson's emotional state. Despite advancements, the biggest challenge remains\ndue to context bias interference. The harmful bias forces the models to rely on\nspurious correlations between background contexts and emotion labels in\nlikelihood estimation, causing severe performance bottlenecks and confounding\nvaluable context priors. In this paper, we propose a counterfactual emotion\ninference (CLEF) framework to address the above issue. Specifically, we first\nformulate a generalized causal graph to decouple the causal relationships among\nthe variables in CAER. Following the causal graph, CLEF introduces a\nnon-invasive context branch to capture the adverse direct effect caused by the\ncontext bias. During the inference, we eliminate the direct context effect from\nthe total causal effect by comparing factual and counterfactual outcomes,\nresulting in bias mitigation and robust prediction. As a model-agnostic\nframework, CLEF can be readily integrated into existing methods, bringing\nconsistent performance gains.", + "We introduce a lightweight and accurate localization method that only\nutilizes the geometry of 2D-3D lines. Given a pre-captured 3D map, our approach\nlocalizes a panorama image, taking advantage of the holistic 360 view. The\nsystem mitigates potential privacy breaches or domain discrepancies by avoiding\ntrained or hand-crafted visual descriptors. However, as lines alone can be\nambiguous, we express distinctive yet compact spatial contexts from\nrelationships between lines, namely the dominant directions of parallel lines\nand the intersection between non-parallel lines. The resulting representations\nare efficient in processing time and memory compared to conventional visual\ndescriptor-based methods. Given the groups of dominant line directions and\ntheir intersections, we accelerate the search process to test thousands of pose\ncandidates in less than a millisecond without sacrificing accuracy. We\nempirically show that the proposed 2D-3D matching can localize panoramas for\nchallenging scenes with similar structures, dramatic domain shifts or\nillumination changes. Our fully geometric approach does not involve extensive\nparameter tuning or neural network training, making it a practical algorithm\nthat can be readily deployed in the real world.", + "Our fully geometric approach does not involve extensive\nparameter tuning or neural network training, making it a practical algorithm\nthat can be readily deployed in the real world. 
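The two geometric primitives above, intersections of non-parallel lines and dominant directions of parallel lines, are easy to sketch for the 2D case. The snippet below is a generic illustration using point-plus-direction lines and a simple angle histogram, not the authors' matching pipeline.

```python
import numpy as np

def intersect_2d(p1, d1, p2, d2, eps=1e-8):
    """Intersection of two 2D lines given as point p + t * direction d;
    returns None when the lines are (nearly) parallel."""
    A = np.stack([d1, -d2], axis=1)          # solve p1 + t*d1 = p2 + s*d2
    if abs(np.linalg.det(A)) < eps:
        return None
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

def dominant_directions(angles, n_bins=18):
    """Histogram line angles modulo pi and return bin centers sorted by count,
    a crude stand-in for clustering parallel lines into dominant directions."""
    angles = np.mod(angles, np.pi)
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argsort(-hist)]
```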
Project page including the code\nis available through this link: https://82magnolia.github.io/fgpl/.", + "Deep Neural Networks (DNNs) are widely used for visual classification tasks,\nbut their complex computation process and black-box nature hinder decision\ntransparency and interpretability. Class activation maps (CAMs) and recent\nvariants provide ways to visually explain the DNN decision-making process by\ndisplaying 'attention' heatmaps of the DNNs. Nevertheless, the CAM explanation\nonly offers relative attention information, that is, on an attention heatmap,\nwe can interpret which image region is more or less important than the others.\nHowever, these regions cannot be meaningfully compared across classes, and the\ncontribution of each region to the model's class prediction is not revealed. To\naddress these challenges that ultimately lead to better DNN Interpretation, in\nthis paper, we propose CAPE, a novel reformulation of CAM that provides a\nunified and probabilistically meaningful assessment of the contributions of\nimage regions. We quantitatively and qualitatively compare CAPE with\nstate-of-the-art CAM methods on CUB and ImageNet benchmark datasets to\ndemonstrate enhanced interpretability.", + "We quantitatively and qualitatively compare CAPE with\nstate-of-the-art CAM methods on CUB and ImageNet benchmark datasets to\ndemonstrate enhanced interpretability. We also test on a cytology imaging\ndataset depicting a challenging Chronic Myelomonocytic Leukemia (CMML)\ndiagnosis problem. Code is available at: https://github.com/AIML-MED/CAPE.", + "Despite the extensive research on training generative adversarial networks\n(GANs) with limited training data, learning to generate images from long-tailed\ntraining distributions remains fairly unexplored. In the presence of imbalanced\nmulti-class training data, GANs tend to favor classes with more samples,\nleading to the generation of low-quality and less diverse samples in tail\nclasses. In this study, we aim to improve the training of class-conditional\nGANs with long-tailed data. We propose a straightforward yet effective method\nfor knowledge sharing, allowing tail classes to borrow from the rich\ninformation from classes with more abundant training data. More concretely, we\npropose modifications to existing class-conditional GAN architectures to ensure\nthat the lower-resolution layers of the generator are trained entirely\nunconditionally while reserving class-conditional generation for the\nhigher-resolution layers. Experiments on several long-tail benchmarks and GAN\narchitectures demonstrate a significant improvement over existing methods in\nboth the diversity and fidelity of the generated images. The code is available\nat https://github.com/khorrams/utlo.", + "Current diffusion-based video editing primarily focuses on\nstructure-preserved editing by utilizing various dense correspondences to\nensure temporal consistency and motion alignment. However, these approaches are\noften ineffective when the target edit involves a shape change. To embark on\nvideo editing with shape change, we explore customized video subject swapping\nin this work, where we aim to replace the main subject in a source video with a\ntarget subject having a distinct identity and potentially different shape. 
In\ncontrast to previous methods that rely on dense correspondences, we introduce\nthe VideoSwap framework that exploits semantic point correspondences, inspired\nby our observation that only a small number of semantic points are necessary to\nalign the subject's motion trajectory and modify its shape. We also introduce\nvarious user-point interactions (\\eg, removing points and dragging points) to\naddress various semantic point correspondence. Extensive experiments\ndemonstrate state-of-the-art video subject swapping results across a variety of\nreal-world videos.", + "There has been a growing interest in the task of generating sound for silent\nvideos, primarily because of its practicality in streamlining video\npost-production. However, existing methods for video-sound generation attempt\nto directly create sound from visual representations, which can be challenging\ndue to the difficulty of aligning visual representations with audio\nrepresentations. In this paper, we present SonicVisionLM, a novel framework\naimed at generating a wide range of sound effects by leveraging vision-language\nmodels(VLMs). Instead of generating audio directly from video, we use the\ncapabilities of powerful VLMs. When provided with a silent video, our approach\nfirst identifies events within the video using a VLM to suggest possible sounds\nthat match the video content. This shift in approach transforms the challenging\ntask of aligning image and audio into more well-studied sub-problems of\naligning image-to-text and text-to-audio through the popular diffusion models.\nTo improve the quality of audio recommendations with LLMs, we have collected an\nextensive dataset that maps text descriptions to specific sound effects and\ndeveloped a time-controlled audio adapter.", + "To improve the quality of audio recommendations with LLMs, we have collected an\nextensive dataset that maps text descriptions to specific sound effects and\ndeveloped a time-controlled audio adapter. Our approach surpasses current\nstate-of-the-art methods for converting video to audio, enhancing\nsynchronization with the visuals, and improving alignment between audio and\nvideo components. Project page:\nhttps://yusiissy.github.io/SonicVisionLM.github.io/", + "A unified and versatile LiDAR segmentation model with strong robustness and\ngeneralizability is desirable for safe autonomous driving perception. This work\npresents M3Net, a one-of-a-kind framework for fulfilling multi-task,\nmulti-dataset, multi-modality LiDAR segmentation in a universal manner using\njust a single set of parameters. To better exploit data volume and diversity,\nwe first combine large-scale driving datasets acquired by different types of\nsensors from diverse scenes and then conduct alignments in three spaces, namely\ndata, feature, and label spaces, during the training. As a result, M3Net is\ncapable of taming heterogeneous data for training state-of-the-art LiDAR\nsegmentation models. Extensive experiments on twelve LiDAR segmentation\ndatasets verify our effectiveness. Notably, using a shared set of parameters,\nM3Net achieves 75.1%, 83.1%, and 72.4% mIoU scores, respectively, on the\nofficial benchmarks of SemanticKITTI, nuScenes, and Waymo Open.", + "We present DiffuScene for indoor 3D scene synthesis based on a novel scene\nconfiguration denoising diffusion model. 
It generates 3D instance properties\nstored in an unordered object set and retrieves the most similar geometry for\neach object configuration, which is characterized as a concatenation of\ndifferent attributes, including location, size, orientation, semantics, and\ngeometry features. We introduce a diffusion network to synthesize a collection\nof 3D indoor objects by denoising a set of unordered object attributes.\nUnordered parametrization simplifies and eases the joint distribution\napproximation. The shape feature diffusion facilitates natural object\nplacements, including symmetries. Our method enables many downstream\napplications, including scene completion, scene arrangement, and\ntext-conditioned scene synthesis. Experiments on the 3D-FRONT dataset show that\nour method can synthesize more physically plausible and diverse indoor scenes\nthan state-of-the-art methods. Extensive ablation studies verify the\neffectiveness of our design choice in scene diffusion models.", + "Recent Vision Transformer Compression (VTC) works mainly follow a two-stage\nscheme, where the importance score of each model unit is first evaluated or\npreset in each submodule, followed by the sparsity score evaluation according\nto the target sparsity constraint. Such a separate evaluation process induces\nthe gap between importance and sparsity score distributions, thus causing high\nsearch costs for VTC. In this work, for the first time, we investigate how to\nintegrate the evaluations of importance and sparsity scores into a single\nstage, searching the optimal subnets in an efficient manner. Specifically, we\npresent OFB, a cost-efficient approach that simultaneously evaluates both\nimportance and sparsity scores, termed Once for Both (OFB), for VTC. First, a\nbi-mask scheme is developed by entangling the importance score and the\ndifferentiable sparsity score to jointly determine the pruning potential\n(prunability) of each unit. Such a bi-mask search strategy is further used\ntogether with a proposed adaptive one-hot loss to realize the\nprogressive-and-efficient search for the most important subnet.", + "Such a bi-mask search strategy is further used\ntogether with a proposed adaptive one-hot loss to realize the\nprogressive-and-efficient search for the most important subnet. Finally,\nProgressive Masked Image Modeling (PMIM) is proposed to regularize the feature\nspace to be more representative during the search process, which may be\ndegraded by the dimension reduction. Extensive experiments demonstrate that OFB\ncan achieve superior compression performance over state-of-the-art\nsearching-based and pruning-based methods under various Vision Transformer\narchitectures, meanwhile promoting search efficiency significantly, e.g.,\ncosting one GPU search day for the compression of DeiT-S on ImageNet-1K.", + "Panoptic segmentation, combining semantic and instance segmentation, stands\nas a cutting-edge computer vision task. Despite recent progress with deep\nlearning models, the dynamic nature of real-world applications necessitates\ncontinual learning, where models adapt to new classes (plasticity) over time\nwithout forgetting old ones (catastrophic forgetting). Current continual\nsegmentation methods often rely on distillation strategies like knowledge\ndistillation and pseudo-labeling, which are effective but result in increased\ntraining complexity and computational overhead. 
In this paper, we introduce a\nnovel and efficient method for continual panoptic segmentation based on Visual\nPrompt Tuning, dubbed ECLIPSE. Our approach involves freezing the base model\nparameters and fine-tuning only a small set of prompt embeddings, addressing\nboth catastrophic forgetting and plasticity and significantly reducing the\ntrainable parameters. To mitigate inherent challenges such as error propagation\nand semantic drift in continual segmentation, we propose logit manipulation to\neffectively leverage common knowledge across the classes.", + "To mitigate inherent challenges such as error propagation\nand semantic drift in continual segmentation, we propose logit manipulation to\neffectively leverage common knowledge across the classes. Experiments on ADE20K\ncontinual panoptic segmentation benchmark demonstrate the superiority of\nECLIPSE, notably its robustness against catastrophic forgetting and its\nreasonable plasticity, achieving a new state-of-the-art. The code is available\nat https://github.com/clovaai/ECLIPSE.", + "Continual learning can empower vision-language models to continuously acquire\nnew knowledge, without the need for access to the entire historical dataset.\nHowever, mitigating the performance degradation in large-scale models is\nnon-trivial due to (i) parameter shifts throughout lifelong learning and (ii)\nsignificant computational burdens associated with full-model tuning. In this\nwork, we present a parameter-efficient continual learning framework to\nalleviate long-term forgetting in incremental learning with vision-language\nmodels. Our approach involves the dynamic expansion of a pre-trained CLIP\nmodel, through the integration of Mixture-of-Experts (MoE) adapters in response\nto new tasks. To preserve the zero-shot recognition capability of\nvision-language models, we further introduce a Distribution Discriminative\nAuto-Selector (DDAS) that automatically routes in-distribution and\nout-of-distribution inputs to the MoE Adapter and the original CLIP,\nrespectively. Through extensive experiments across various settings, our\nproposed method consistently outperforms previous state-of-the-art approaches\nwhile concurrently reducing parameter training burdens by 60%. Our code locates\nat https://github.com/JiazuoYu/MoE-Adapters4CL", + "Human matting is a foundation task in image and video processing, where human\nforeground pixels are extracted from the input. Prior works either improve the\naccuracy by additional guidance or improve the temporal consistency of a single\ninstance across frames. We propose a new framework MaGGIe, Masked Guided\nGradual Human Instance Matting, which predicts alpha mattes progressively for\neach human instances while maintaining the computational cost, precision, and\nconsistency. Our method leverages modern architectures, including transformer\nattention and sparse convolution, to output all instance mattes simultaneously\nwithout exploding memory and latency. Although keeping constant inference costs\nin the multiple-instance scenario, our framework achieves robust and versatile\nperformance on our proposed synthesized benchmarks. With the higher quality\nimage and video matting benchmarks, the novel multi-instance synthesis approach\nfrom publicly available sources is introduced to increase the generalization of\nmodels in real-world scenarios.", + "We introduce Free3D, a simple accurate method for monocular open-set novel\nview synthesis (NVS). 
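The prompt-tuning recipe used by the continual segmentation method above (freeze the base model, learn only a handful of prompt embeddings) can be sketched generically in PyTorch. The encoder interface, prompt count, and initialization below are assumptions for illustration, not the released ECLIPSE code.

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Freeze a pre-trained token encoder and learn only a few prompt
    embeddings prepended to the input tokens, the generic parameter-efficient
    recipe behind visual prompt tuning."""
    def __init__(self, base_encoder, embed_dim, num_prompts=10):
        super().__init__()
        self.base = base_encoder
        for p in self.base.parameters():      # base model stays frozen
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)

    def forward(self, tokens):                # tokens: (B, N, embed_dim)
        b = tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        return self.base(torch.cat([prompts, tokens], dim=1))

# Only the prompt embeddings appear in the optimizer, e.g.:
# optimizer = torch.optim.AdamW([model.prompts], lr=1e-3)
```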
Similar to Zero-1-to-3, we start from a pre-trained 2D\nimage generator for generalization, and fine-tune it for NVS. Compared to other\nworks that took a similar approach, we obtain significant improvements without\nresorting to an explicit 3D representation, which is slow and memory-consuming,\nand without training an additional network for 3D reconstruction. Our key\ncontribution is to improve the way the target camera pose is encoded in the\nnetwork, which we do by introducing a new ray conditioning normalization (RCN)\nlayer. The latter injects pose information in the underlying 2D image generator\nby telling each pixel its viewing direction. We further improve multi-view\nconsistency by using light-weight multi-view attention layers and by sharing\ngeneration noise between the different views. We train Free3D on the Objaverse\ndataset and demonstrate excellent generalization to new categories in new\ndatasets, including OmniObject3D and GSO. The project page is available at\nhttps://chuanxiaz.com/free3d/.", + "This paper proposes a novel direct Audio-Visual Speech to Audio-Visual Speech\nTranslation (AV2AV) framework, where the input and output of the system are\nmultimodal (i.e., audio and visual speech). With the proposed AV2AV, two key\nadvantages can be brought: 1) We can perform real-like conversations with\nindividuals worldwide in a virtual meeting by utilizing our own primary\nlanguages. In contrast to Speech-to-Speech Translation (A2A), which solely\ntranslates between audio modalities, the proposed AV2AV directly translates\nbetween audio-visual speech. This capability enhances the dialogue experience\nby presenting synchronized lip movements along with the translated speech. 2)\nWe can improve the robustness of the spoken language translation system. By\nemploying the complementary information of audio-visual speech, the system can\neffectively translate spoken language even in the presence of acoustic noise,\nshowcasing robust performance. To mitigate the problem of the absence of a\nparallel AV2AV translation dataset, we propose to train our spoken language\ntranslation system with the audio-only dataset of A2A.", + "To mitigate the problem of the absence of a\nparallel AV2AV translation dataset, we propose to train our spoken language\ntranslation system with the audio-only dataset of A2A. This is done by learning\nunified audio-visual speech representations through self-supervised learning in\nadvance to train the translation system. Moreover, we propose an AV-Renderer\nthat can generate raw audio and video in parallel. It is designed with\nzero-shot speaker modeling, thus the speaker in source audio-visual speech can\nbe maintained at the target translated audio-visual speech. The effectiveness\nof AV2AV is evaluated with extensive experiments in a many-to-many language\ntranslation setting. Demo page is available on\nhttps://choijeongsoo.github.io/av2av.", + "Semi-supervised semantic segmentation allows model to mine effective\nsupervision from unlabeled data to complement label-guided training. Recent\nresearch has primarily focused on consistency regularization techniques,\nexploring perturbation-invariant training at both the image and feature levels.\nIn this work, we proposed a novel feature-level consistency learning framework\nnamed Density-Descending Feature Perturbation (DDFP). 
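The ray conditioning described above amounts to telling each pixel its viewing direction. Below is a standard geometry sketch that computes per-pixel unit ray directions from a 3x3 intrinsics matrix and a camera-to-world pose; the pixel-center and axis conventions are assumptions, and this is not the paper's RCN layer itself.

```python
import torch

def per_pixel_ray_directions(H, W, K, cam_to_world):
    """Return an (H, W, 3) grid of unit viewing directions in world space,
    the kind of per-pixel pose signal a ray-conditioning layer can inject.
    K is a 3x3 intrinsics matrix, cam_to_world a 4x4 camera pose."""
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([u + 0.5, v + 0.5, torch.ones_like(u)], dim=-1)  # (H, W, 3)
    cam_dirs = pix @ torch.linalg.inv(K).T          # back-project with intrinsics
    world_dirs = cam_dirs @ cam_to_world[:3, :3].T  # rotate into the world frame
    return torch.nn.functional.normalize(world_dirs, dim=-1)
```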
Inspired by the\nlow-density separation assumption in semi-supervised learning, our key insight\nis that feature density can shed a light on the most promising direction for\nthe segmentation classifier to explore, which is the regions with lower\ndensity. We propose to shift features with confident predictions towards\nlower-density regions by perturbation injection. The perturbed features are\nthen supervised by the predictions on the original features, thereby compelling\nthe classifier to explore less dense regions to effectively regularize the\ndecision boundary. Central to our method is the estimation of feature density.\nTo this end, we introduce a lightweight density estimator based on normalizing\nflow, allowing for efficient capture of the feature density distribution in an\nonline manner. By extracting gradients from the density estimator, we can\ndetermine the direction towards less dense regions for each feature.", + "To this end, we introduce a lightweight density estimator based on normalizing\nflow, allowing for efficient capture of the feature density distribution in an\nonline manner. By extracting gradients from the density estimator, we can\ndetermine the direction towards less dense regions for each feature. The\nproposed DDFP outperforms other designs on feature-level perturbations and\nshows state of the art performances on both Pascal VOC and Cityscapes dataset\nunder various partition protocols. The project is available at\nhttps://github.com/Gavinwxy/DDFP.", + "Current methods for 2D and 3D object understanding struggle with severe\nocclusions in busy urban environments, partly due to the lack of large-scale\nlabeled ground-truth annotations for learning occlusion. In this work, we\nintroduce a novel framework for automatically generating a large, realistic\ndataset of dynamic objects under occlusions using freely available time-lapse\nimagery. By leveraging off-the-shelf 2D (bounding box, segmentation, keypoint)\nand 3D (pose, shape) predictions as pseudo-groundtruth, unoccluded 3D objects\nare identified automatically and composited into the background in a clip-art\nstyle, ensuring realistic appearances and physically accurate occlusion\nconfigurations. The resulting clip-art image with pseudo-groundtruth enables\nefficient training of object reconstruction methods that are robust to\nocclusions. Our method demonstrates significant improvements in both 2D and 3D\nreconstruction, particularly in scenarios with heavily occluded objects like\nvehicles and people in urban scenes.", + "Real-time multi-person pose estimation presents significant challenges in\nbalancing speed and precision. While two-stage top-down methods slow down as\nthe number of people in the image increases, existing one-stage methods often\nfail to simultaneously deliver high accuracy and real-time performance. This\npaper introduces RTMO, a one-stage pose estimation framework that seamlessly\nintegrates coordinate classification by representing keypoints using dual 1-D\nheatmaps within the YOLO architecture, achieving accuracy comparable to\ntop-down methods while maintaining high speed. We propose a dynamic coordinate\nclassifier and a tailored loss function for heatmap learning, specifically\ndesigned to address the incompatibilities between coordinate classification and\ndense prediction models. RTMO outperforms state-of-the-art one-stage pose\nestimators, achieving 1.1% higher AP on COCO while operating about 9 times\nfaster with the same backbone. 
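The coordinate classification above represents each keypoint with two 1-D heatmaps. The sketch below shows a minimal encode/decode pair (Gaussian 1-D targets, soft-argmax decoding); the sigma and grid sizes are placeholders, and the dynamic classifier and tailored loss are not modeled.

```python
import numpy as np

def encode_xy(x, y, width, height, sigma=2.0):
    """Represent one keypoint as two 1-D heatmaps (over columns and rows),
    i.e. coordinate classification instead of a dense 2-D heatmap."""
    xs = np.arange(width, dtype=np.float32)
    ys = np.arange(height, dtype=np.float32)
    hx = np.exp(-0.5 * ((xs - x) / sigma) ** 2)
    hy = np.exp(-0.5 * ((ys - y) / sigma) ** 2)
    return hx / hx.sum(), hy / hy.sum()

def decode_xy(hx, hy):
    """Recover sub-pixel coordinates from the two 1-D distributions by
    taking their expected values (soft-argmax)."""
    x = float(np.dot(hx, np.arange(len(hx))))
    y = float(np.dot(hy, np.arange(len(hy))))
    return x, y

hx, hy = encode_xy(17.3, 42.8, width=64, height=64)
print(decode_xy(hx, hy))   # approximately (17.3, 42.8)
```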
Our largest model, RTMO-l, attains 74.8% AP on\nCOCO val2017 and 141 FPS on a single V100 GPU, demonstrating its efficiency and\naccuracy. The code and models are available at\nhttps://github.com/open-mmlab/mmpose/tree/main/projects/rtmo.", + "We address the problem of generalized category discovery (GCD) that aims to\npartition a partially labeled collection of images; only a small part of the\ncollection is labeled and the total number of target classes is unknown. To\naddress this generalized image clustering problem, we revisit the mean-shift\nalgorithm, i.e., a classic, powerful technique for mode seeking, and\nincorporate it into a contrastive learning framework. The proposed method,\ndubbed Contrastive Mean-Shift (CMS) learning, trains an image encoder to\nproduce representations with better clustering properties by an iterative\nprocess of mean shift and contrastive update. Experiments demonstrate that our\nmethod, both in settings with and without the total number of clusters being\nknown, achieves state-of-the-art performance on six public GCD benchmarks\nwithout bells and whistles.", + "We introduce a new task -- language-driven video inpainting, which uses\nnatural language instructions to guide the inpainting process. This approach\novercomes the limitations of traditional video inpainting methods that depend\non manually labeled binary masks, a process often tedious and labor-intensive.\nWe present the Remove Objects from Videos by Instructions (ROVI) dataset,\ncontaining 5,650 videos and 9,091 inpainting results, to support training and\nevaluation for this task. We also propose a novel diffusion-based\nlanguage-driven video inpainting framework, the first end-to-end baseline for\nthis task, integrating Multimodal Large Language Models to understand and\nexecute complex language-based inpainting requests effectively. Our\ncomprehensive results showcase the dataset's versatility and the model's\neffectiveness in various language-instructed inpainting scenarios. We will make\ndatasets, code, and models publicly available.", + "Although diffusion models are rising as a powerful solution for blind face\nrestoration, they are criticized for two problems: 1) slow training and\ninference speed, and 2) failure in preserving identity and recovering\nfine-grained facial details. In this work, we propose WaveFace to solve the\nproblems in the frequency domain, where low- and high-frequency components\ndecomposed by wavelet transformation are considered individually to maximize\nauthenticity as well as efficiency. The diffusion model is applied to recover\nthe low-frequency component only, which presents general information of the\noriginal image but 1/16 in size. To preserve the original identity, the\ngeneration is conditioned on the low-frequency component of low-quality images\nat each denoising step. Meanwhile, high-frequency components at multiple\ndecomposition levels are handled by a unified network, which recovers complex\nfacial details in a single step. Evaluations on four benchmark datasets show\nthat: 1) WaveFace outperforms state-of-the-art methods in authenticity,\nespecially in terms of identity preservation, and 2) authentic images are\nrestored with the efficiency 10x faster than existing diffusion model-based BFR\nmethods.", + "Recent advances in 3D avatar generation have gained significant attentions.\nThese breakthroughs aim to produce more realistic animatable avatars, narrowing\nthe gap between virtual and real-world experiences. 
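Returning to the category-discovery method above that revisits mean shift: a single mean-shift update on L2-normalized features looks like the sketch below. The Gaussian kernel and bandwidth are illustrative choices, and the contrastive update that the method interleaves with this step is omitted.

```python
import numpy as np

def mean_shift_step(feats, bandwidth=0.5):
    """One mean-shift update: move every (L2-normalized) feature towards the
    kernel-weighted mean of its neighbours, sharpening cluster modes."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sq_dists = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)   # pairwise distances
    w = np.exp(-sq_dists / (2.0 * bandwidth ** 2))               # Gaussian kernel
    shifted = w @ f / w.sum(axis=1, keepdims=True)               # weighted means
    return shifted / np.linalg.norm(shifted, axis=1, keepdims=True)
```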
Most existing works\nemploy Score Distillation Sampling (SDS) loss, combined with a differentiable\nrenderer and text condition, to guide a diffusion model in generating 3D\navatars. However, SDS often generates oversmoothed results with few facial\ndetails, thereby lacking diversity compared with ancestral sampling. On the\nother hand, other works generate a 3D avatar from a single image, where the\nchallenges of unwanted lighting effects, perspective views, and inferior image\nquality make it difficult to reliably reconstruct the 3D face meshes with\naligned, complete textures. In this paper, we propose a novel 3D avatar\ngeneration approach termed UltrAvatar with enhanced fidelity of geometry and\nsuperior quality of physically based rendering (PBR) textures without unwanted\nlighting. To this end, the proposed approach presents a diffuse color\nextraction model and an authenticity guided texture diffusion model.", + "To this end, the proposed approach presents a diffuse color\nextraction model and an authenticity guided texture diffusion model. The former\nremoves the unwanted lighting effects to reveal true diffuse colors so that the\ngenerated avatars can be rendered under various lighting conditions. The latter\nfollows two gradient-based guidances for generating PBR textures to render\ndiverse face-identity features and details better aligning with 3D mesh\ngeometry. We demonstrate the effectiveness and robustness of the proposed\nmethod, outperforming the state-of-the-art methods by a large margin in the\nexperiments.", + "Visual object tracking aims to localize the target object of each frame based\non its initial appearance in the first frame. Depending on the input modality,\ntracking tasks can be divided into RGB tracking and RGB+X (e.g., RGB+N and\nRGB+D) tracking. Despite the different input modalities, the core aspect of\ntracking is temporal matching. Based on this common ground, we present a\ngeneral framework to unify various tracking tasks, termed OneTracker.\nOneTracker first performs a large-scale pre-training on an RGB tracker called\nFoundation Tracker. This pretraining phase equips the Foundation Tracker with a\nstable ability to estimate the location of the target object. Then we regard\nother modality information as a prompt and build Prompt Tracker upon Foundation\nTracker. Through freezing the Foundation Tracker and only adjusting some\nadditional trainable parameters, Prompt Tracker inherits the strong\nlocalization ability from Foundation Tracker and achieves parameter-efficient\nfinetuning on downstream RGB+X tracking tasks.", + "Then we regard\nother modality information as a prompt and build Prompt Tracker upon Foundation\nTracker. Through freezing the Foundation Tracker and only adjusting some\nadditional trainable parameters, Prompt Tracker inherits the strong\nlocalization ability from Foundation Tracker and achieves parameter-efficient\nfinetuning on downstream RGB+X tracking tasks. To evaluate the effectiveness of\nour general framework OneTracker, which consists of Foundation Tracker and\nPrompt Tracker, we conduct extensive experiments on 6 popular tracking tasks\nacross 11 benchmarks, and our OneTracker outperforms other models and achieves\nstate-of-the-art performance.", + "Embodied agents operating in complex and uncertain environments face\nconsiderable challenges. While some advanced agents handle complex manipulation\ntasks with proficiency, their success often hinges on extensive training data\nto develop their capabilities.
In contrast, humans typically rely on recalling\npast experiences and analogous situations to solve new problems. Aiming to\nemulate this human approach in robotics, we introduce the Retrieval-Augmented\nEmbodied Agent (RAEA). This innovative system equips robots with a form of\nshared memory, significantly enhancing their performance. Our approach\nintegrates a policy retriever, allowing robots to access relevant strategies\nfrom an external policy memory bank based on multi-modal inputs. Additionally,\na policy generator is employed to assimilate these strategies into the learning\nprocess, enabling robots to formulate effective responses to tasks. Extensive\ntesting of RAEA in both simulated and real-world scenarios demonstrates its\nsuperior performance over traditional methods, representing a major leap\nforward in robotic technology.", + "We present EgoTAP, a heatmap-to-3D pose lifting method for highly accurate\nstereo egocentric 3D pose estimation. Severe self-occlusion and out-of-view\nlimbs in egocentric camera views make accurate pose estimation a challenging\nproblem. To address the challenge, prior methods employ joint\nheatmaps, probabilistic 2D representations of the body pose, but heatmap-to-3D\npose conversion remains an inaccurate process. We propose a novel\nheatmap-to-3D lifting method composed of the Grid ViT Encoder and the\nPropagation Network. The Grid ViT Encoder summarizes joint heatmaps into\neffective feature embedding using self-attention. Then, the Propagation Network\nestimates the 3D pose by utilizing skeletal information to better estimate the\nposition of obscure joints. Our method significantly outperforms the previous\nstate-of-the-art qualitatively and quantitatively, as demonstrated by a 23.9\\%\nreduction of error in the MPJPE metric. Our source code is available on GitHub.", + "Our paper aims to generate diverse and realistic animal motion sequences from\ntextual descriptions, without a large-scale animal text-motion dataset. While\nthe task of text-driven human motion synthesis is already extensively studied\nand benchmarked, it remains challenging to transfer this success to other\nskeleton structures with limited data. In this work, we design a model\narchitecture that imitates Generative Pretraining Transformer (GPT), transferring\nprior knowledge learned from human data to the animal domain. We jointly train\nmotion autoencoders for both animal and human motions and at the same time\noptimize through the similarity scores among human motion encoding, animal\nmotion encoding, and text CLIP embedding. Presenting the first solution to this\nproblem, we are able to generate animal motions with high diversity and\nfidelity, quantitatively and qualitatively outperforming the results of\ntraining human motion generation baselines on animal data. Additionally, we\nintroduce AnimalML3D, the first text-animal motion dataset with 1240 animation\nsequences spanning 36 different animal identities. We hope this dataset will help\nmitigate the data scarcity problem in text-driven animal motion generation,\nproviding a new playground for the research community.", + "Text-to-image diffusion models produce high-quality images but do not offer\ncontrol over individual instances in the image.
We introduce InstanceDiffusion\nthat adds precise instance-level control to text-to-image diffusion models.\nInstanceDiffusion supports free-form language conditions per instance and\nallows flexible ways to specify instance locations such as simple single\npoints, scribbles, bounding boxes or intricate instance segmentation masks, and\ncombinations thereof. We propose three major changes to text-to-image models\nthat enable precise instance-level control. Our UniFusion block enables\ninstance-level conditions for text-to-image models, the ScaleU block improves\nimage fidelity, and our Multi-instance Sampler improves generations for\nmultiple instances. InstanceDiffusion significantly surpasses specialized\nstate-of-the-art models for each location condition. Notably, on the COCO\ndataset, we outperform previous state-of-the-art by 20.4% AP$_{50}^\\text{box}$\nfor box inputs, and 25.4% IoU for mask inputs.", + "Most models of visual attention aim at predicting either top-down or\nbottom-up control, as studied using different visual search and free-viewing\ntasks. In this paper we propose the Human Attention Transformer (HAT), a single\nmodel that predicts both forms of attention control. HAT uses a novel\ntransformer-based architecture and a simplified foveated retina that\ncollectively create a spatio-temporal awareness akin to the dynamic visual\nworking memory of humans. HAT not only establishes a new state-of-the-art in\npredicting the scanpath of fixations made during target-present and\ntarget-absent visual search and ``taskless'' free viewing, but also makes human\ngaze behavior interpretable. Unlike previous methods that rely on a coarse grid\nof fixation cells and experience information loss due to fixation\ndiscretization, HAT features a sequential dense prediction architecture and\noutputs a dense heatmap for each fixation, thus avoiding discretizing\nfixations. HAT sets a new standard in computational attention, which emphasizes\neffectiveness, generality, and interpretability.", + "HAT sets a new standard in computational attention, which emphasizes\neffectiveness, generality, and interpretability. HAT's demonstrated scope and\napplicability will likely inspire the development of new attention models that\ncan better predict human behavior in various attention-demanding scenarios.\nCode is available at https://github.com/cvlab-stonybrook/HAT.", + "Gradient-based saliency maps have been widely used to explain the decisions\nof deep neural network classifiers. However, standard gradient-based\ninterpretation maps, including the simple gradient and integrated gradient\nalgorithms, often lack desired structures such as sparsity and connectedness in\ntheir application to real-world computer vision models. A frequently used\napproach to inducing sparsity structures into gradient-based saliency maps is\nto alter the simple gradient scheme using sparsification or norm-based\nregularization. A drawback with such post-processing methods is their\nfrequently-observed significant loss in fidelity to the original simple\ngradient map. In this work, we propose to apply adversarial training as an\nin-processing scheme to train neural networks with structured simple gradient\nmaps. We show a duality relation between the regularized norms of the\nadversarial perturbations and gradient-based maps, based on which we design\nadversarial training loss functions promoting sparsity and group-sparsity\nproperties in simple gradient maps. 
We present several numerical results to\nshow the influence of our proposed norm-based adversarial training methods on\nthe standard gradient-based maps of standard neural network architectures on\nbenchmark image datasets.", + "An effective pre-training framework with universal 3D representations is\nhighly desirable for perceiving large-scale dynamic scenes. However,\nestablishing such an ideal framework that is both task-generic and\nlabel-efficient poses a challenge in unifying the representation of the same\nprimitive across diverse scenes. The current contrastive 3D pre-training\nmethods typically follow a frame-level consistency, which focuses on the 2D-3D\nrelationships in each detached image. Such inconsiderate consistency greatly\nhampers the promising path of reaching a universal pre-training framework: (1)\nThe cross-scene semantic self-conflict, i.e., the intense collision between\nprimitive segments of the same semantics from different scenes; (2) Lacking a\nglobally unified bond that pushes the cross-scene semantic consistency into 3D\nrepresentation learning. To address the above challenges, we propose a CSC\nframework that puts scene-level semantic consistency at its heart, bridging\nthe connections between similar semantic segments across various scenes. To\nachieve this goal, we combine the coherent semantic cues provided by the vision\nfoundation model and the knowledge-rich cross-scene prototypes derived from the\ncomplementary multi-modality information.", + "To\nachieve this goal, we combine the coherent semantic cues provided by the vision\nfoundation model and the knowledge-rich cross-scene prototypes derived from the\ncomplementary multi-modality information. These allow us to train a universal\n3D pre-training model that facilitates various downstream tasks with less\nfine-tuning effort. Empirically, we achieve consistent improvements over SOTA\npre-training approaches in semantic segmentation (+1.4% mIoU), object detection\n(+1.0% mAP), and panoptic segmentation (+3.0% PQ) using their task-specific 3D\nnetwork on nuScenes. Code is released at https://github.com/chenhaomingbob/CSC,\nhoping to inspire future research.", + "This paper introduces 3DFIRES, a novel system for scene-level 3D\nreconstruction from posed images. Designed to work with as few as one view,\n3DFIRES reconstructs the complete geometry of unseen scenes, including hidden\nsurfaces. With multiple view inputs, our method produces full reconstruction\nwithin all camera frustums. A key feature of our approach is the fusion of\nmulti-view information at the feature level, enabling the production of\ncoherent and comprehensive 3D reconstruction. We train our system on\nnon-watertight scans from a large-scale real scene dataset. We show it matches\nthe efficacy of single-view reconstruction methods with only one input and\nsurpasses existing techniques in both quantitative and qualitative measures for\nsparse-view 3D reconstruction.", + "Recently, diffusion-based methods, like InstructPix2Pix (IP2P), have achieved\neffective instruction-based image editing, requiring only natural language\ninstructions from the user. However, these methods often inadvertently alter\nunintended areas and struggle with multi-instruction editing, resulting in\ncompromised outcomes.
To address these issues, we introduce the Focus on Your\nInstruction (FoI), a method designed to ensure precise and harmonious editing\nacross multiple instructions without extra training or test-time optimization.\nIn the FoI, we primarily emphasize two aspects: (1) precisely extracting\nregions of interest for each instruction and (2) guiding the denoising process\nto concentrate within these regions of interest. For the first objective, we\nidentify the implicit grounding capability of IP2P from the cross-attention\nbetween instruction and image, then develop an effective mask extraction\nmethod. For the second objective, we introduce a cross attention modulation\nmodule for rough isolation of target editing regions and unrelated regions.\nAdditionally, we introduce a mask-guided disentangle sampling strategy to\nfurther ensure clear region isolation.", + "For the second objective, we introduce a cross attention modulation\nmodule for rough isolation of target editing regions and unrelated regions.\nAdditionally, we introduce a mask-guided disentangle sampling strategy to\nfurther ensure clear region isolation. Experimental results demonstrate that\nFoI surpasses existing methods in both quantitative and qualitative\nevaluations, especially excelling in multi-instruction editing task.", + "Multimodal Visual Object Tracking (VOT) has recently gained significant\nattention due to its robustness. Early research focused on fully fine-tuning\nRGB-based trackers, which was inefficient and lacked generalized representation\ndue to the scarcity of multimodal data. Therefore, recent studies have utilized\nprompt tuning to transfer pre-trained RGB-based trackers to multimodal data.\nHowever, the modality gap limits pre-trained knowledge recall, and the\ndominance of the RGB modality persists, preventing the full utilization of\ninformation from other modalities. To address these issues, we propose a novel\nsymmetric multimodal tracking framework called SDSTrack. We introduce\nlightweight adaptation for efficient fine-tuning, which directly transfers the\nfeature extraction ability from RGB to other domains with a small number of\ntrainable parameters and integrates multimodal features in a balanced,\nsymmetric manner. Furthermore, we design a complementary masked patch\ndistillation strategy to enhance the robustness of trackers in complex\nenvironments, such as extreme weather, poor imaging, and sensor failure.", + "Furthermore, we design a complementary masked patch\ndistillation strategy to enhance the robustness of trackers in complex\nenvironments, such as extreme weather, poor imaging, and sensor failure.\nExtensive experiments demonstrate that SDSTrack outperforms state-of-the-art\nmethods in various multimodal tracking scenarios, including RGB+Depth,\nRGB+Thermal, and RGB+Event tracking, and exhibits impressive results in extreme\nconditions. Our source code is available at https://github.com/hoqolo/SDSTrack.", + "Recent advancements in post-hoc and inherently interpretable methods have\nmarkedly enhanced the explanations of black box classifier models. These\nmethods operate either through post-analysis or by integrating concept learning\nduring model training. Although being effective in bridging the semantic gap\nbetween a model's latent space and human interpretation, these explanation\nmethods only partially reveal the model's decision-making process. 
The outcome\nis typically limited to high-level semantics derived from the last feature map.\nWe argue that the explanations lacking insights into the decision processes at\nlow and mid-level features are neither fully faithful nor useful. Addressing\nthis gap, we introduce the Multi-Level Concept Prototypes Classifier (MCPNet),\nan inherently interpretable model. MCPNet autonomously learns meaningful\nconcept prototypes across multiple feature map levels using Centered Kernel\nAlignment (CKA) loss and an energy-based weighted PCA mechanism, and it does so\nwithout reliance on predefined concept labels. Further, we propose a novel\nclassifier paradigm that learns and aligns multi-level concept prototype\ndistributions for classification purposes via Class-aware Concept Distribution\n(CCD) loss.", + "Further, we propose a novel\nclassifier paradigm that learns and aligns multi-level concept prototype\ndistributions for classification purposes via Class-aware Concept Distribution\n(CCD) loss. Our experiments reveal that our proposed MCPNet, while being\nadaptable to various model architectures, offers comprehensive multi-level\nexplanations while maintaining classification accuracy. Additionally, its\nconcept distribution-based classification approach shows improved\ngeneralization capabilities in few-shot classification scenarios.", + "Large Language Models (LLMs) have shown remarkable emergent abilities in\nunifying almost all (if not every) NLP tasks. In the human motion-related\nrealm, however, researchers still develop siloed models for each task. Inspired\nby InstructGPT and the generalist concept behind Gato, we introduce AvatarGPT,\nan All-in-One framework for motion understanding, planning, and generation, as well\nas other tasks such as motion in-between synthesis. AvatarGPT treats each task\nas one type of instruction fine-tuned on the shared LLM. All the tasks are\nseamlessly interconnected with language as the universal interface,\nconstituting a closed-loop within the framework. To achieve this, human motion\nsequences are first encoded as discrete tokens, which serve as the extended\nvocabulary of LLM. Then, an unsupervised pipeline to generate natural language\ndescriptions of human action sequences from in-the-wild videos is developed.\nFinally, all tasks are jointly trained. Extensive experiments show that\nAvatarGPT achieves SOTA on low-level tasks, and promising results on high-level\ntasks, demonstrating the effectiveness of our proposed All-in-One framework.", + "Finally, all tasks are jointly trained. Extensive experiments show that\nAvatarGPT achieves SOTA on low-level tasks, and promising results on high-level\ntasks, demonstrating the effectiveness of our proposed All-in-One framework.\nMoreover, for the first time, AvatarGPT enables a principled approach by\niterative traversal of the tasks within the closed-loop for unlimited\nlong-motion synthesis.", + "Recently, the proliferation of highly realistic synthetic images, facilitated\nthrough a variety of GANs and diffusion models, has significantly heightened the\nsusceptibility to misuse. While the primary focus of deepfake detection has\ntraditionally centered on the design of detection algorithms, an investigative\ninquiry into the generator architectures has remained conspicuously absent in\nrecent years. This paper contributes to this lacuna by rethinking the\narchitectures of CNN-based generators, thereby establishing a generalized\nrepresentation of synthetic artifacts.
Our findings illuminate that the\nup-sampling operator can, beyond frequency-based artifacts, produce generalized\nforgery artifacts. In particular, the local interdependence among image pixels\ncaused by upsampling operators is significantly demonstrated in synthetic\nimages generated by GAN or diffusion. Building upon this observation, we\nintroduce the concept of Neighboring Pixel Relationships(NPR) as a means to\ncapture and characterize the generalized structural artifacts stemming from\nup-sampling operations. A comprehensive analysis is conducted on an open-world\ndataset, comprising samples generated by \\tft{28 distinct generative models}.", + "A comprehensive analysis is conducted on an open-world\ndataset, comprising samples generated by \\tft{28 distinct generative models}.\nThis analysis culminates in the establishment of a novel state-of-the-art\nperformance, showcasing a remarkable \\tft{11.6\\%} improvement over existing\nmethods. The code is available at\nhttps://github.com/chuangchuangtan/NPR-DeepfakeDetection.", + "Co-speech gestures, if presented in the lively form of videos, can achieve\nsuperior visual effects in human-machine interaction. While previous works\nmostly generate structural human skeletons, resulting in the omission of\nappearance information, we focus on the direct generation of audio-driven\nco-speech gesture videos in this work. There are two main challenges: 1) A\nsuitable motion feature is needed to describe complex human movements with\ncrucial appearance information. 2) Gestures and speech exhibit inherent\ndependencies and should be temporally aligned even of arbitrary length. To\nsolve these problems, we present a novel motion-decoupled framework to generate\nco-speech gesture videos. Specifically, we first introduce a well-designed\nnonlinear TPS transformation to obtain latent motion features preserving\nessential appearance information. Then a transformer-based diffusion model is\nproposed to learn the temporal correlation between gestures and speech, and\nperforms generation in the latent motion space, followed by an optimal motion\nselection module to produce long-term coherent and consistent gesture videos.\nFor better visual perception, we further design a refinement network focusing\non missing details of certain areas.", + "For better visual perception, we further design a refinement network focusing\non missing details of certain areas. Extensive experimental results show that\nour proposed framework significantly outperforms existing approaches in both\nmotion and video-related evaluations. Our code, demos, and more resources are\navailable at https://github.com/thuhcsi/S2G-MDDiffusion.", + "Existing Blind image Super-Resolution (BSR) methods focus on estimating\neither kernel or degradation information, but have long overlooked the\nessential content details. In this paper, we propose a novel BSR approach,\nContent-aware Degradation-driven Transformer (CDFormer), to capture both\ndegradation and content representations. However, low-resolution images cannot\nprovide enough content details, and thus we introduce a diffusion-based module\n$CDFormer_{diff}$ to first learn Content Degradation Prior (CDP) in both low-\nand high-resolution images, and then approximate the real distribution given\nonly low-resolution information. Moreover, we apply an adaptive SR network\n$CDFormer_{SR}$ that effectively utilizes CDP to refine features. 
Compared to\nprevious diffusion-based SR methods, we treat the diffusion model as an\nestimator that can overcome the limitations of expensive sampling time and\nexcessive diversity. Experiments show that CDFormer can outperform existing\nmethods, establishing a new state-of-the-art performance on various benchmarks\nunder blind settings. Codes and models will be available at\n\\href{https://github.com/I2-Multimedia-Lab/CDFormer}{https://github.com/I2-Multimedia-Lab/CDFormer}.", + "Generating a 3D human model from a single reference image is challenging\nbecause it requires inferring textures and geometries in invisible views while\nmaintaining consistency with the reference image. Previous methods utilizing 3D\ngenerative models are limited by the availability of 3D training data.\nOptimization-based methods that lift text-to-image diffusion models to 3D\ngeneration often fail to preserve the texture details of the reference image,\nresulting in inconsistent appearances in different views. In this paper, we\npropose HumanRef, a 3D human generation framework from a single-view input. To\nensure the generated 3D model is photorealistic and consistent with the input\nimage, HumanRef introduces a novel method called reference-guided score\ndistillation sampling (Ref-SDS), which effectively incorporates image guidance\ninto the generation process. Furthermore, we introduce region-aware attention\nto Ref-SDS, ensuring accurate correspondence between different body regions.\nExperimental results demonstrate that HumanRef outperforms state-of-the-art\nmethods in generating 3D clothed humans with fine geometry, photorealistic\ntextures, and view-consistent appearances.", + "Large multimodal models (LMMs) have evolved from large language models (LLMs)\nto integrate multiple input modalities, such as visual inputs. This integration\naugments the capacity of LLMs for tasks requiring visual comprehension and\nreasoning. However, the extent and limitations of their enhanced abilities are\nnot fully understood, especially when it comes to real-world tasks. To address\nthis gap, we introduce GlitchBench, a novel benchmark derived from video game\nquality assurance tasks, to test and evaluate the reasoning capabilities of\nLMMs. Our benchmark is curated from a variety of unusual and glitched scenarios\nfrom video games and aims to challenge both the visual and linguistic reasoning\npowers of LMMs in detecting and interpreting out-of-the-ordinary events. We\nevaluate multiple state-of-the-art LMMs, and we show that GlitchBench presents\na new challenge for these models. Code and data are available at:\nhttps://glitchbench.github.io/", + "The goal of interactive image segmentation is to delineate specific regions\nwithin an image via visual or language prompts. Low-latency and high-quality\ninteractive segmentation with diverse prompts remain challenging for existing\nspecialist and generalist models. Specialist models, with their limited prompts\nand task-specific designs, experience high latency because the image must be\nrecomputed every time the prompt is updated, due to the joint encoding of image\nand visual prompts. Generalist models, exemplified by the Segment Anything\nModel (SAM), have recently excelled in prompt diversity and efficiency, lifting\nimage segmentation to the foundation model era. However, for high-quality\nsegmentations, SAM still lags behind state-of-the-art specialist models despite\nSAM being trained with x100 more segmentation masks. 
In this work, we delve\ndeep into the architectural differences between the two types of models. We\nobserve that dense representation and fusion of visual prompts are the key\ndesign choices contributing to the high segmentation quality of specialist\nmodels. In light of this, we reintroduce this dense design into the generalist\nmodels, to facilitate the development of generalist models with high\nsegmentation quality.", + "We\nobserve that dense representation and fusion of visual prompts are the key\ndesign choices contributing to the high segmentation quality of specialist\nmodels. In light of this, we reintroduce this dense design into the generalist\nmodels, to facilitate the development of generalist models with high\nsegmentation quality. To densely represent diverse visual prompts, we propose\nto use a dense map to capture five types: clicks, boxes, polygons, scribbles,\nand masks. Thus, we propose SegNext, a next-generation interactive segmentation\napproach offering low latency, high quality, and diverse prompt support. Our\nmethod outperforms current state-of-the-art methods on HQSeg-44K and DAVIS,\nboth quantitatively and qualitatively.", + "We propose a novel concept of dual and integrated latent topologies (DITTO in\nshort) for implicit 3D reconstruction from noisy and sparse point clouds. Most\nexisting methods predominantly focus on single latent type, such as point or\ngrid latents. In contrast, the proposed DITTO leverages both point and grid\nlatents (i.e., dual latent) to enhance their strengths, the stability of grid\nlatents and the detail-rich capability of point latents. Concretely, DITTO\nconsists of dual latent encoder and integrated implicit decoder. In the dual\nlatent encoder, a dual latent layer, which is the key module block composing\nthe encoder, refines both latents in parallel, maintaining their distinct\nshapes and enabling recursive interaction. Notably, a newly proposed dynamic\nsparse point transformer within the dual latent layer effectively refines point\nlatents. Then, the integrated implicit decoder systematically combines these\nrefined latents, achieving high-fidelity 3D reconstruction and surpassing\nprevious state-of-the-art methods on object- and scene-level datasets,\nespecially in thin and detailed structures.", + "In the realm of video object tracking, auxiliary modalities such as depth,\nthermal, or event data have emerged as valuable assets to complement the RGB\ntrackers. In practice, most existing RGB trackers learn a single set of\nparameters to use them across datasets and applications. However, a similar\nsingle-model unification for multi-modality tracking presents several\nchallenges. These challenges stem from the inherent heterogeneity of inputs --\neach with modality-specific representations, the scarcity of multi-modal\ndatasets, and the absence of all the modalities at all times. In this work, we\nintroduce Un-Track, a Unified Tracker of a single set of parameters for any\nmodality. To handle any modality, our method learns their common latent space\nthrough low-rank factorization and reconstruction techniques. More importantly,\nwe use only the RGB-X pairs to learn the common latent space. This unique\nshared representation seamlessly binds all modalities together, enabling\neffective unification and accommodating any missing modality, all within a\nsingle transformer-based architecture.", + "More importantly,\nwe use only the RGB-X pairs to learn the common latent space. 
This unique\nshared representation seamlessly binds all modalities together, enabling\neffective unification and accommodating any missing modality, all within a\nsingle transformer-based architecture. Our Un-Track achieves +8.1 absolute\nF-score gain, on the DepthTrack dataset, by introducing only +2.14 (over 21.50)\nGFLOPs with +6.6M (over 93M) parameters, through a simple yet efficient\nprompting strategy. Extensive comparisons on five benchmark datasets with\ndifferent modalities show that Un-Track surpasses both SOTA unified trackers\nand modality-specific counterparts, validating our effectiveness and\npracticality. The source code is publicly available at\nhttps://github.com/Zongwei97/UnTrack.", + "Choreographers determine what the dances look like, while cameramen determine\nthe final presentation of dances. Recently, various methods and datasets have\nshowcased the feasibility of dance synthesis. However, camera movement\nsynthesis with music and dance remains an unsolved challenging problem due to\nthe scarcity of paired data. Thus, we present DCM, a new multi-modal 3D\ndataset, which for the first time combines camera movement with dance motion\nand music audio. This dataset encompasses 108 dance sequences (3.2 hours) of\npaired dance-camera-music data from the anime community, covering 4 music\ngenres. With this dataset, we uncover that dance camera movement is\nmultifaceted and human-centric, and possesses multiple influencing factors,\nmaking dance camera synthesis a more challenging task compared to camera or\ndance synthesis alone. To overcome these difficulties, we propose\nDanceCamera3D, a transformer-based diffusion model that incorporates a novel\nbody attention loss and a condition separation strategy. For evaluation, we\ndevise new metrics measuring camera movement quality, diversity, and dancer\nfidelity.", + "To overcome these difficulties, we propose\nDanceCamera3D, a transformer-based diffusion model that incorporates a novel\nbody attention loss and a condition separation strategy. For evaluation, we\ndevise new metrics measuring camera movement quality, diversity, and dancer\nfidelity. Utilizing these metrics, we conduct extensive experiments on our DCM\ndataset, providing both quantitative and qualitative evidence showcasing the\neffectiveness of our DanceCamera3D model. Code and video demos are available at\nhttps://github.com/Carmenw1203/DanceCamera3D-Official.", + "Vision language models (VLM) have demonstrated remarkable performance across\nvarious downstream tasks. However, understanding fine-grained visual-linguistic\nconcepts, such as attributes and inter-object relationships, remains a\nsignificant challenge. While several benchmarks aim to evaluate VLMs in finer\ngranularity, their primary focus remains on the linguistic aspect, neglecting\nthe visual dimension. Here, we highlight the importance of evaluating VLMs from\nboth a textual and visual perspective. We introduce a progressive pipeline to\nsynthesize images that vary in a specific attribute while ensuring consistency\nin all other aspects. Utilizing this data engine, we carefully design a\nbenchmark, SPEC, to diagnose the comprehension of object size, position,\nexistence, and count. Subsequently, we conduct a thorough evaluation of four\nleading VLMs on SPEC. Surprisingly, their performance is close to random guess,\nrevealing significant limitations. 
With this in mind, we propose a simple yet\neffective approach to optimize VLMs in fine-grained understanding, achieving\nsignificant improvements on SPEC without compromising the zero-shot\nperformance.", + "Surprisingly, their performance is close to random guess,\nrevealing significant limitations. With this in mind, we propose a simple yet\neffective approach to optimize VLMs in fine-grained understanding, achieving\nsignificant improvements on SPEC without compromising the zero-shot\nperformance. Results on two additional fine-grained benchmarks also show\nconsistent improvements, further validating the transferability of our\napproach. Code and data are available at https://github.com/wjpoom/SPEC.", + "3D synthetic-to-real unsupervised domain adaptive segmentation is crucial to\nannotating new domains. Self-training is a competitive approach for this task,\nbut its performance is limited by different sensor sampling patterns (i.e.,\nvariations in point density) and incomplete training strategies. In this work,\nwe propose a density-guided translator (DGT), which translates point density\nbetween domains, and integrates it into a two-stage self-training pipeline\nnamed DGT-ST. First, in contrast to existing works that simultaneously conduct\ndata generation and feature/output alignment within unstable adversarial\ntraining, we employ the non-learnable DGT to bridge the domain gap at the input\nlevel. Second, to provide a well-initialized model for self-training, we\npropose a category-level adversarial network in stage one that utilizes the\nprototype to prevent negative transfer. Finally, by leveraging the designs\nabove, a domain-mixed self-training method with source-aware consistency loss\nis proposed in stage two to narrow the domain gap further.", + "Finally, by leveraging the designs\nabove, a domain-mixed self-training method with source-aware consistency loss\nis proposed in stage two to narrow the domain gap further. Experiments on two\nsynthetic-to-real segmentation tasks (SynLiDAR $\\rightarrow$ semanticKITTI and\nSynLiDAR $\\rightarrow$ semanticPOSS) demonstrate that DGT-ST outperforms\nstate-of-the-art methods, achieving 9.4$\\%$ and 4.3$\\%$ mIoU improvements,\nrespectively. Code is available at \\url{https://github.com/yuan-zm/DGT-ST}.", + "We present VIDIM, a generative model for video interpolation, which creates\nshort videos given a start and end frame. In order to achieve high fidelity and\ngenerate motions unseen in the input data, VIDIM uses cascaded diffusion models\nto first generate the target video at low resolution, and then generate the\nhigh-resolution video conditioned on the low-resolution generated video. We\ncompare VIDIM to previous state-of-the-art methods on video interpolation, and\ndemonstrate how such works fail in most settings where the underlying motion is\ncomplex, nonlinear, or ambiguous while VIDIM can easily handle such cases. We\nadditionally demonstrate how classifier-free guidance on the start and end\nframe and conditioning the super-resolution model on the original\nhigh-resolution frames without additional parameters unlocks high-fidelity\nresults. 
VIDIM is fast to sample from as it jointly denoises all the frames to\nbe generated, requires less than a billion parameters per diffusion model to\nproduce compelling results, and still enjoys scalability and improved quality\nat larger parameter counts.", + "Each photo in an image burst can be considered a sample of a complex 3D\nscene: the product of parallax, diffuse and specular materials, scene motion,\nand illuminant variation. While decomposing all of these effects from a stack\nof misaligned images is a highly ill-conditioned task, the conventional\nalign-and-merge burst pipeline takes the other extreme: blending them into a\nsingle image. In this work, we propose a versatile intermediate representation:\na two-layer alpha-composited image plus flow model constructed with neural\nspline fields -- networks trained to map input coordinates to spline control\npoints. Our method is able to, during test-time optimization, jointly fuse a\nburst image capture into one high-resolution reconstruction and decompose it\ninto transmission and obstruction layers. Then, by discarding the obstruction\nlayer, we can perform a range of tasks including seeing through occlusions,\nreflection suppression, and shadow removal. Validated on complex synthetic and\nin-the-wild captures we find that, with no post-processing steps or learned\npriors, our generalizable model is able to outperform existing dedicated\nsingle-image and multi-view obstruction removal approaches.", + "The estimation of 3D human motion from video has progressed rapidly but\ncurrent methods still have several key limitations. First, most methods\nestimate the human in camera coordinates. Second, prior work on estimating\nhumans in global coordinates often assumes a flat ground plane and produces\nfoot sliding. Third, the most accurate methods rely on computationally\nexpensive optimization pipelines, limiting their use to offline applications.\nFinally, existing video-based methods are surprisingly less accurate than\nsingle-frame methods. We address these limitations with WHAM (World-grounded\nHumans with Accurate Motion), which accurately and efficiently reconstructs 3D\nhuman motion in a global coordinate system from video. WHAM learns to lift 2D\nkeypoint sequences to 3D using motion capture data and fuses this with video\nfeatures, integrating motion context and visual information. WHAM exploits\ncamera angular velocity estimated from a SLAM method together with human motion\nto estimate the body's global trajectory. We combine this with a contact-aware\ntrajectory refinement method that lets WHAM capture human motion in diverse\nconditions, such as climbing stairs.", + "WHAM exploits\ncamera angular velocity estimated from a SLAM method together with human motion\nto estimate the body's global trajectory. We combine this with a contact-aware\ntrajectory refinement method that lets WHAM capture human motion in diverse\nconditions, such as climbing stairs. WHAM outperforms all existing 3D human\nmotion recovery methods across multiple in-the-wild benchmarks. Code will be\navailable for research purposes at http://wham.is.tue.mpg.de/", + "This paper introduces Unified Language-driven Zero-shot Domain Adaptation\n(ULDA), a novel task setting that enables a single model to adapt to diverse\ntarget domains without explicit domain-ID knowledge. 
We identify the\nconstraints in the existing language-driven zero-shot domain adaptation task,\nparticularly the requirement for domain IDs and domain-specific models, which\nmay restrict flexibility and scalability. To overcome these issues, we propose\na new framework for ULDA, consisting of Hierarchical Context Alignment (HCA),\nDomain Consistent Representation Learning (DCRL), and Text-Driven Rectifier\n(TDR). These components work synergistically to align simulated features with\ntarget text across multiple visual levels, retain semantic correlations between\ndifferent regional representations, and rectify biases between simulated and\nreal target visual features, respectively. Our extensive empirical evaluations\ndemonstrate that this framework achieves competitive performance in both\nsettings, surpassing even the model that requires domain IDs, showcasing its\nsuperiority and generalization ability. The proposed method is not only\neffective but also maintains practicality and efficiency, as it does not\nintroduce additional computational costs during inference. Our project page is\nhttps://senqiaoyang.com/project/ULDA .", + "Shape assembly composes complex shape geometries by arranging simple part\ngeometries and has wide applications in autonomous robotic assembly and CAD\nmodeling. Existing works focus on geometry reasoning and neglect the actual\nphysical assembly process of matching and fitting joints, which are the contact\nsurfaces connecting different parts. In this paper, we consider contacting\njoints for the task of multi-part assembly. A successful joint-optimized\nassembly needs to satisfy the bilateral objectives of shape structure and joint\nalignment. We propose a hierarchical graph learning approach composed of two\nlevels of graph representation learning. The part graph takes part geometries\nas input to build the desired shape structure. The joint-level graph uses part\njoint information and focuses on matching and aligning joints. The two kinds\nof information are combined to achieve the bilateral objectives. Extensive\nexperiments demonstrate that our method outperforms previous methods, achieving\nbetter shape structure and higher joint alignment accuracy.", + "Multi-modality image fusion is a technique that combines information from\ndifferent sensors or modalities, enabling the fused image to retain\ncomplementary features from each modality, such as functional highlights and\ntexture details. However, effective training of such fusion models is\nchallenging due to the scarcity of ground truth fusion data. To tackle this\nissue, we propose the Equivariant Multi-Modality imAge fusion (EMMA) paradigm\nfor end-to-end self-supervised learning. Our approach is rooted in the prior\nknowledge that natural imaging responses are equivariant to certain\ntransformations. Consequently, we introduce a novel training paradigm that\nencompasses a fusion module, a pseudo-sensing module, and an equivariant fusion\nmodule. These components enable the network training to follow the principles of\nthe natural sensing-imaging process while satisfying the equivariant imaging\nprior. Extensive experiments confirm that EMMA yields high-quality fusion\nresults for infrared-visible and medical images, concurrently facilitating\ndownstream multi-modal segmentation and detection tasks.
The code is available\nat https://github.com/Zhaozixiang1228/MMIF-EMMA.", + "We introduce One-shot Open Affordance Learning (OOAL), where a model is\ntrained with just one example per base object category, but is expected to\nidentify novel objects and affordances. While vision-language models excel at\nrecognizing novel objects and scenes, they often struggle to understand finer\nlevels of granularity such as affordances. To handle this issue, we conduct a\ncomprehensive analysis of existing foundation models, to explore their inherent\nunderstanding of affordances and assess the potential for data-limited\naffordance learning. We then propose a vision-language framework with simple\nand effective designs that boost the alignment between visual features and\naffordance text embeddings. Experiments on two affordance segmentation\nbenchmarks show that the proposed method outperforms state-of-the-art models\nwith less than 1% of the full training data, and exhibits reasonable\ngeneralization capability on unseen objects and affordances.", + "Large-scale Text-to-Image (T2I) diffusion models have revolutionized image\ngeneration over the last few years. Although owning diverse and high-quality\ngeneration capabilities, translating these abilities to fine-grained image\nediting remains challenging. In this paper, we propose DiffEditor to rectify\ntwo weaknesses in existing diffusion-based image editing: (1) in complex\nscenarios, editing results often lack editing accuracy and exhibit unexpected\nartifacts; (2) lack of flexibility to harmonize editing operations, e.g.,\nimagine new content. In our solution, we introduce image prompts in\nfine-grained image editing, cooperating with the text prompt to better describe\nthe editing content. To increase the flexibility while maintaining content\nconsistency, we locally combine stochastic differential equation (SDE) into the\nordinary differential equation (ODE) sampling. In addition, we incorporate\nregional score-based gradient guidance and a time travel strategy into the\ndiffusion sampling, further improving the editing quality.", + "To increase the flexibility while maintaining content\nconsistency, we locally combine stochastic differential equation (SDE) into the\nordinary differential equation (ODE) sampling. In addition, we incorporate\nregional score-based gradient guidance and a time travel strategy into the\ndiffusion sampling, further improving the editing quality. Extensive\nexperiments demonstrate that our method can efficiently achieve\nstate-of-the-art performance on various fine-grained image editing tasks,\nincluding editing within a single image (e.g., object moving, resizing, and\ncontent dragging) and across images (e.g., appearance replacing and object\npasting). Our source code is released at\nhttps://github.com/MC-E/DragonDiffusion.", + "Solving image and video jigsaw puzzles poses the challenging task of\nrearranging image fragments or video frames from unordered sequences to restore\nmeaningful images and video sequences. Existing approaches often hinge on\ndiscriminative models tasked with predicting either the absolute positions of\npuzzle elements or the permutation actions applied to the original data.\nUnfortunately, these methods face limitations in effectively solving puzzles\nwith a large number of elements. 
In this paper, we propose JPDVT, an innovative\napproach that harnesses diffusion transformers to address this challenge.\nSpecifically, we generate positional information for image patches or video\nframes, conditioned on their underlying visual content. This information is\nthen employed to accurately assemble the puzzle pieces in their correct\npositions, even in scenarios involving missing pieces. Our method achieves\nstate-of-the-art performance on several datasets.", + "Diffusion models have emerged as the de facto paradigm for video generation.\nHowever, their reliance on web-scale data of varied quality often yields\nresults that are visually unappealing and misaligned with the textual prompts.\nTo tackle this problem, we propose InstructVideo to instruct text-to-video\ndiffusion models with human feedback by reward fine-tuning. InstructVideo has\ntwo key ingredients: 1) To ameliorate the cost of reward fine-tuning induced by\ngenerating through the full DDIM sampling chain, we recast reward fine-tuning\nas editing. By leveraging the diffusion process to corrupt a sampled video,\nInstructVideo requires only partial inference of the DDIM sampling chain,\nreducing fine-tuning cost while improving fine-tuning efficiency. 2) To\nmitigate the absence of a dedicated video reward model for human preferences,\nwe repurpose established image reward models, e.g., HPSv2. To this end, we\npropose Segmental Video Reward, a mechanism to provide reward signals based on\nsegmental sparse sampling, and Temporally Attenuated Reward, a method that\nmitigates temporal modeling degradation during fine-tuning.", + "To this end, we\npropose Segmental Video Reward, a mechanism to provide reward signals based on\nsegmental sparse sampling, and Temporally Attenuated Reward, a method that\nmitigates temporal modeling degradation during fine-tuning. Extensive\nexperiments, both qualitative and quantitative, validate the practicality and\nefficacy of using image reward models in InstructVideo, significantly enhancing\nthe visual quality of generated videos without compromising generalization\ncapabilities. Code and models will be made publicly available.", + "Deep unfolding networks (DUN) have emerged as a popular iterative framework\nfor accelerated magnetic resonance imaging (MRI) reconstruction. However,\nconventional DUN aims to reconstruct all the missing information within the\nentire null space in each iteration. Thus it could be challenging when dealing\nwith highly ill-posed degradation, usually leading to unsatisfactory\nreconstruction. In this work, we propose a Progressive Divide-And-Conquer\n(PDAC) strategy, aiming to break down the subsampling process in the actual\nsevere degradation and thus perform reconstruction sequentially. Starting from\ndecomposing the original maximum-a-posteriori problem of accelerated MRI, we\npresent a rigorous derivation of the proposed PDAC framework, which could be\nfurther unfolded into an end-to-end trainable network. Specifically, each\niterative stage in PDAC focuses on recovering a distinct moderate degradation\naccording to the decomposition. Furthermore, as part of the PDAC iteration,\nsuch decomposition is adaptively learned as an auxiliary task through a\ndegradation predictor which provides an estimation of the decomposed sampling\nmask.", + "Specifically, each\niterative stage in PDAC focuses on recovering a distinct moderate degradation\naccording to the decomposition. 
Furthermore, as part of the PDAC iteration,\nsuch decomposition is adaptively learned as an auxiliary task through a\ndegradation predictor which provides an estimation of the decomposed sampling\nmask. Following this prediction, the sampling mask is further integrated via a\nseverity conditioning module to ensure awareness of the degradation severity at\neach stage. Extensive experiments demonstrate that our proposed method achieves\nsuperior performance on the publicly available fastMRI and Stanford2D FSE\ndatasets in both multi-coil and single-coil settings.", + "In Multiple Object Tracking, objects often exhibit non-linear motion of\nacceleration and deceleration, with irregular direction changes.\nTracking-by-detection (TBD) trackers with Kalman Filter motion prediction work\nwell in pedestrian-dominant scenarios but fall short in complex situations when\nmultiple objects perform non-linear and diverse motion simultaneously. To\ntackle the complex non-linear motion, we propose a real-time diffusion-based\nMOT approach named DiffMOT. Specifically, for the motion predictor component,\nwe propose a novel Decoupled Diffusion-based Motion Predictor (D$^2$MP). It\nmodels the entire distribution of the various motions presented by the data as a\nwhole. It also predicts an individual object's motion conditioned on the\nindividual's historical motion information. Furthermore, it optimizes the\ndiffusion process with much fewer sampling steps. As an MOT tracker, DiffMOT\nruns in real time at 22.7 FPS, and also outperforms the state-of-the-art on the\nDanceTrack and SportsMOT datasets with $62.3\\%$ and $76.2\\%$ in HOTA metrics,\nrespectively.", + "To the best of our knowledge, DiffMOT is the first to introduce a\ndiffusion probabilistic model into MOT to tackle non-linear motion\nprediction.", + "Multi-view representation learning aims to derive robust representations that\nare both view-consistent and view-specific from diverse data sources. This\npaper presents an in-depth analysis of existing approaches in this domain,\nhighlighting a commonly overlooked aspect: the redundancy between\nview-consistent and view-specific representations. To this end, we propose an\ninnovative framework for multi-view representation learning, which incorporates\na technique we term 'distilled disentangling'. Our method introduces the\nconcept of masked cross-view prediction, enabling the extraction of compact,\nhigh-quality view-consistent representations from various sources without\nincurring extra computational overhead. Additionally, we develop a distilled\ndisentangling module that efficiently filters out consistency-related\ninformation from multi-view representations, resulting in purer view-specific\nrepresentations. This approach significantly reduces redundancy between\nview-consistent and view-specific representations, enhancing the overall\nefficiency of the learning process. Our empirical evaluations reveal that\nhigher mask ratios substantially improve the quality of view-consistent\nrepresentations. Moreover, we find that reducing the dimensionality of\nview-consistent representations relative to that of view-specific\nrepresentations further refines the quality of the combined representations.", + "Our empirical evaluations reveal that\nhigher mask ratios substantially improve the quality of view-consistent\nrepresentations.
Moreover, we find that reducing the dimensionality of\nview-consistent representations relative to that of view-specific\nrepresentations further refines the quality of the combined representations.\nOur code is accessible at: https://github.com/Guanzhou-Ke/MRDD.", + "We revisit certain problems of pose estimation based on 3D--2D\ncorrespondences between features which may be points or lines. Specifically, we\naddress the two previously-studied minimal problems of estimating camera\nextrinsics from $p \\in \\{ 1, 2 \\}$ point--point correspondences and $l=3-p$\nline--line correspondences. To the best of our knowledge, all of the\npreviously-known practical solutions to these problems required computing the\nroots of degree $\\ge 4$ (univariate) polynomials when $p=2$, or degree $\\ge 8$\npolynomials when $p=1.$ We describe and implement two elementary solutions\nwhich reduce the degrees of the needed polynomials from $4$ to $2$ and from $8$\nto $4$, respectively. We show experimentally that the resulting solvers are\nnumerically stable and fast: when compared to the previous state-of-the art, we\nmay obtain nearly an order of magnitude speedup. The code is available at\n\\url{https://github.com/petrhruby97/efficient\\_absolute}", + "Automatic text-to-3D generation that combines Score Distillation Sampling\n(SDS) with the optimization of volume rendering has achieved remarkable\nprogress in synthesizing realistic 3D objects. Yet most existing text-to-3D\nmethods by SDS and volume rendering suffer from inaccurate geometry, e.g., the\nJanus issue, since it is hard to explicitly integrate 3D priors into implicit\n3D representations. Besides, it is usually time-consuming for them to generate\nelaborate 3D models with rich colors. In response, this paper proposes GSGEN, a\nnovel method that adopts Gaussian Splatting, a recent state-of-the-art\nrepresentation, to text-to-3D generation. GSGEN aims at generating high-quality\n3D objects and addressing existing shortcomings by exploiting the explicit\nnature of Gaussian Splatting that enables the incorporation of 3D prior.\nSpecifically, our method adopts a progressive optimization strategy, which\nincludes a geometry optimization stage and an appearance refinement stage.", + "GSGEN aims at generating high-quality\n3D objects and addressing existing shortcomings by exploiting the explicit\nnature of Gaussian Splatting that enables the incorporation of 3D prior.\nSpecifically, our method adopts a progressive optimization strategy, which\nincludes a geometry optimization stage and an appearance refinement stage. In\ngeometry optimization, a coarse representation is established under 3D point\ncloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a\nsensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians\nundergo an iterative appearance refinement to enrich texture details. In this\nstage, we increase the number of Gaussians by compactness-based densification\nto enhance continuity and improve fidelity. With these designs, our approach\ncan generate 3D assets with delicate details and accurate geometry. Extensive\nevaluations demonstrate the effectiveness of our method, especially for\ncapturing high-frequency components. Our code is available at\nhttps://github.com/gsgen3d/gsgen", + "Large multimodal models demonstrate remarkable generalist ability to perform\ndiverse multimodal tasks in a zero-shot manner. 
Large-scale web-based\nimage-text pairs contribute fundamentally to this success, but suffer from\nexcessive noise. Recent studies use alternative captions synthesized by\ncaptioning models and have achieved notable benchmark performance. However, our\nexperiments reveal significant Scalability Deficiency and World Knowledge Loss\nissues in models trained with synthetic captions, which have been largely\nobscured by their initial benchmark success. Upon closer examination, we\nidentify the root cause as the overly-simplified language structure and lack of\nknowledge details in existing synthetic captions. To provide higher-quality and\nmore scalable multimodal pretraining data, we propose CapsFusion, an advanced\nframework that leverages large language models to consolidate and refine\ninformation from both web-based image-text pairs and synthetic captions.", + "To provide higher-quality and\nmore scalable multimodal pretraining data, we propose CapsFusion, an advanced\nframework that leverages large language models to consolidate and refine\ninformation from both web-based image-text pairs and synthetic captions.\nExtensive experiments show that CapsFusion captions exhibit remarkable\nall-round superiority over existing captions in terms of model performance\n(e.g., 18.8 and 18.3 improvements in CIDEr score on COCO and NoCaps), sample\nefficiency (requiring 11-16 times less computation than baselines), world\nknowledge depth, and scalability. These effectiveness, efficiency and\nscalability advantages position CapsFusion as a promising candidate for future\nscaling of LMM training.", + "Fr\\'echet Video Distance (FVD), a prominent metric for evaluating video\ngeneration models, is known to conflict with human perception occasionally. In\nthis paper, we aim to explore the extent of FVD's bias toward per-frame quality\nover temporal realism and identify its sources. We first quantify the FVD's\nsensitivity to the temporal axis by decoupling the frame and motion quality and\nfind that the FVD increases only slightly with large temporal corruption. We\nthen analyze the generated videos and show that via careful sampling from a\nlarge set of generated videos that do not contain motions, one can drastically\ndecrease FVD without improving the temporal quality. Both studies suggest FVD's\nbias towards the quality of individual frames. We further observe that the bias\ncan be attributed to the features extracted from a supervised video classifier\ntrained on the content-biased dataset. We show that FVD with features extracted\nfrom the recent large-scale self-supervised video models is less biased toward\nimage quality. Finally, we revisit a few real-world examples to validate our\nhypothesis.", + "Generating dances that are both lifelike and well-aligned with music\ncontinues to be a challenging task in the cross-modal domain. This paper\nintroduces PopDanceSet, the first dataset tailored to the preferences of young\naudiences, enabling the generation of aesthetically oriented dances. And it\nsurpasses the AIST++ dataset in music genre diversity and the intricacy and\ndepth of dance movements. Moreover, the proposed POPDG model within the iDDPM\nframework enhances dance diversity and, through the Space Augmentation\nAlgorithm, strengthens spatial physical connections between human body joints,\nensuring that increased diversity does not compromise generation quality. A\nstreamlined Alignment Module is also designed to improve the temporal alignment\nbetween dance and music. 
Extensive experiments show that POPDG achieves SOTA\nresults on two datasets. Furthermore, the paper also expands on current\nevaluation metrics. The dataset and code are available at\nhttps://github.com/Luke-Luo1/POPDG.", + "Despite advancements in text-to-image generation (T2I), prior methods often\nface text-image misalignment problems such as relation confusion in generated\nimages. Existing solutions involve cross-attention manipulation for better\ncompositional understanding or integrating large language models for improved\nlayout planning. However, the inherent alignment capabilities of T2I models are\nstill inadequate. By reviewing the link between generative and discriminative\nmodeling, we posit that T2I models' discriminative abilities may reflect their\ntext-image alignment proficiency during generation. In this light, we advocate\nbolstering the discriminative abilities of T2I models to achieve more precise\ntext-to-image alignment for generation. We present a discriminative adapter\nbuilt on T2I models to probe their discriminative abilities on two\nrepresentative tasks and leverage discriminative fine-tuning to improve their\ntext-image alignment. As a bonus of the discriminative adapter, a\nself-correction mechanism can leverage discriminative gradients to better align\ngenerated images to text prompts during inference. Comprehensive evaluations\nacross three benchmark datasets, including both in-distribution and\nout-of-distribution scenarios, demonstrate our method's superior generation\nperformance.", + "Comprehensive evaluations\nacross three benchmark datasets, including both in-distribution and\nout-of-distribution scenarios, demonstrate our method's superior generation\nperformance. Meanwhile, it achieves state-of-the-art discriminative performance\non the two discriminative tasks compared to other generative models.", + "We introduce multi-slice reasoning, a new notion for single-view 3D\nreconstruction which challenges the current and prevailing belief that\nmulti-view synthesis is the most natural conduit between single-view and 3D.\nOur key observation is that object slicing is more advantageous than altering\nviews to reveal occluded structures. Specifically, slicing is more\nocclusion-revealing since it can peel through any occluders without\nobstruction. In the limit, i.e., with infinitely many slices, it is guaranteed\nto unveil all hidden object parts. We realize our idea by developing Slice3D, a\nnovel method for single-view 3D reconstruction which first predicts multi-slice\nimages from a single RGB image and then integrates the slices into a 3D model\nusing a coordinate-based transformer network for signed distance prediction.\nThe slice images can be regressed or generated, both through a U-Net based\nnetwork. For the former, we inject a learnable slice indicator code to\ndesignate each decoded image into a spatial slice location, while the slice\ngenerator is a denoising diffusion model operating on the entirety of slice\nimages stacked on the input channels.", + "For the former, we inject a learnable slice indicator code to\ndesignate each decoded image into a spatial slice location, while the slice\ngenerator is a denoising diffusion model operating on the entirety of slice\nimages stacked on the input channels. We conduct extensive evaluation against\nstate-of-the-art alternatives to demonstrate superiority of our method,\nespecially in recovering complex and severely occluded shape structures, amid\nambiguities. 
All Slice3D results were produced by networks trained on a single\nNvidia A40 GPU, with an inference time less than 20 seconds.", + "Large-scale high-resolution (HR) land-cover mapping is a vital task to survey\nthe Earth's surface and resolve many challenges facing humanity. However, it is\nstill a non-trivial task hindered by complex ground details, various landforms,\nand the scarcity of accurate training labels over a wide-span geographic area.\nIn this paper, we propose an efficient, weakly supervised framework\n(Paraformer) to guide large-scale HR land-cover mapping with easy-access\nhistorical land-cover data of low resolution (LR). Specifically, existing\nland-cover mapping approaches reveal the dominance of CNNs in preserving local\nground details but still suffer from insufficient global modeling in various\nlandforms. Therefore, we design a parallel CNN-Transformer feature extractor in\nParaformer, consisting of a downsampling-free CNN branch and a Transformer\nbranch, to jointly capture local and global contextual information. Besides,\nfacing the spatial mismatch of training data, a pseudo-label-assisted training\n(PLAT) module is adopted to reasonably refine LR labels for weakly supervised\nsemantic segmentation of HR images. Experiments on two large-scale datasets\ndemonstrate the superiority of Paraformer over other state-of-the-art methods\nfor automatically updating HR land-cover maps from LR historical labels.", + "We present GenesisTex, a novel method for synthesizing textures for 3D\ngeometries from text descriptions. GenesisTex adapts the pretrained image\ndiffusion model to texture space by texture space sampling. Specifically, we\nmaintain a latent texture map for each viewpoint, which is updated with\npredicted noise on the rendering of the corresponding viewpoint. The sampled\nlatent texture maps are then decoded into a final texture map. During the\nsampling process, we focus on both global and local consistency across multiple\nviewpoints: global consistency is achieved through the integration of style\nconsistency mechanisms within the noise prediction network, and low-level\nconsistency is achieved by dynamically aligning latent textures. Finally, we\napply reference-based inpainting and img2img on denser views for texture\nrefinement. Our approach overcomes the limitations of slow optimization in\ndistillation-based methods and instability in inpainting-based methods.\nExperiments on meshes from various sources demonstrate that our method\nsurpasses the baseline methods quantitatively and qualitatively.", + "Open-vocabulary semantic segmentation (OVS) aims to segment images of\narbitrary categories specified by class labels or captions. However, most\nprevious best-performing methods, whether pixel grouping methods or region\nrecognition methods, suffer from false matches between image features and\ncategory labels. We attribute this to the natural gap between the textual\nfeatures and visual features. In this work, we rethink how to mitigate false\nmatches from the perspective of image-to-image matching and propose a novel\nrelation-aware intra-modal matching (RIM) framework for OVS based on visual\nfoundation models. RIM achieves robust region classification by firstly\nconstructing diverse image-modal reference features and then matching them with\nregion features based on relation-aware ranking distribution. The proposed RIM\nenjoys several merits. 
First, the intra-modal reference features are better\naligned, circumventing potential ambiguities that may arise in cross-modal\nmatching. Second, the ranking-based matching process harnesses the structure\ninformation implicit in the inter-class relationships, making it more robust\nthan comparing individually.", + "First, the intra-modal reference features are better\naligned, circumventing potential ambiguities that may arise in cross-modal\nmatching. Second, the ranking-based matching process harnesses the structure\ninformation implicit in the inter-class relationships, making it more robust\nthan comparing individually. Extensive experiments on three benchmarks\ndemonstrate that RIM outperforms previous state-of-the-art methods by large\nmargins, obtaining a lead of more than 10% in mIoU on the PASCAL VOC benchmark.", + "Gait recognition stands as one of the most pivotal remote identification\ntechnologies and progressively expands across research and industry\ncommunities. However, existing gait recognition methods heavily rely on\ntask-specific upstream models driven by supervised learning to provide explicit gait\nrepresentations like silhouette sequences, which inevitably introduce expensive\nannotation costs and potential error accumulation. Escaping from this trend,\nthis work explores effective gait representations based on the all-purpose\nknowledge produced by task-agnostic Large Vision Models (LVMs) and proposes a\nsimple yet efficient gait framework, termed BigGait. Specifically, the Gait\nRepresentation Extractor (GRE) within BigGait draws upon design principles from\nestablished gait representations, effectively transforming all-purpose\nknowledge into implicit gait representations without requiring third-party\nsupervision signals. Experiments on CCPG, CASIA-B* and SUSTech1K indicate that\nBigGait significantly outperforms the previous methods in both within-domain\nand cross-domain tasks in most cases, and provides a more practical paradigm\nfor learning the next-generation gait representation.", + "Experiments on CCPG, CASIA-B* and SUSTech1K indicate that\nBigGait significantly outperforms the previous methods in both within-domain\nand cross-domain tasks in most cases, and provides a more practical paradigm\nfor learning the next-generation gait representation. Finally, we delve into\nprospective challenges and promising directions in LVMs-based gait recognition,\naiming to inspire future work in this emerging topic. The source code is\navailable at https://github.com/ShiqiYu/OpenGait.", + "Recently, the rise of query-based Transformer decoders is reshaping\ncamera-based 3D object detection. These query-based decoders are surpassing the\ntraditional dense BEV (Bird's Eye View)-based methods. However, we argue that\ndense BEV frameworks remain important due to their outstanding abilities in\ndepth estimation and object localization, depicting 3D scenes accurately and\ncomprehensively. This paper aims to address the drawbacks of the existing dense\nBEV-based 3D object detectors by introducing our proposed enhanced components,\nincluding a CRF-modulated depth estimation module enforcing object-level\nconsistencies, a long-term temporal aggregation module with extended receptive\nfields, and a two-stage object decoder combining perspective techniques with\nCRF-modulated depth embedding. These enhancements lead to a \"modernized\" dense\nBEV framework dubbed BEVNeXt.
On the nuScenes benchmark, BEVNeXt outperforms\nboth BEV-based and query-based frameworks under various settings, achieving a\nstate-of-the-art result of 64.2 NDS on the nuScenes test set. Code will be\navailable at \\url{https://github.com/woxihuanjiangguo/BEVNeXt}.", + "Misinformation is a prevalent societal issue due to its potential high risks.\nOut-of-context (OOC) misinformation, where authentic images are repurposed with\nfalse text, is one of the easiest and most effective ways to mislead audiences.\nCurrent methods focus on assessing image-text consistency but lack convincing\nexplanations for their judgments, which is essential for debunking\nmisinformation. While Multimodal Large Language Models (MLLMs) have rich\nknowledge and innate capability for visual reasoning and explanation\ngeneration, they still lack sophistication in understanding and discovering the\nsubtle crossmodal differences. In this paper, we introduce SNIFFER, a novel\nmultimodal large language model specifically engineered for OOC misinformation\ndetection and explanation. SNIFFER employs two-stage instruction tuning on\nInstructBLIP. The first stage refines the model's concept alignment of generic\nobjects with news-domain entities and the second stage leverages language-only\nGPT-4 generated OOC-specific instruction data to fine-tune the model's\ndiscriminatory powers.", + "The first stage refines the model's concept alignment of generic\nobjects with news-domain entities and the second stage leverages language-only\nGPT-4 generated OOC-specific instruction data to fine-tune the model's\ndiscriminatory powers. Enhanced by external tools and retrieval, SNIFFER not\nonly detects inconsistencies between text and image but also utilizes external\nknowledge for contextual verification. Our experiments show that SNIFFER\nsurpasses the original MLLM by over 40% and outperforms state-of-the-art\nmethods in detection accuracy. SNIFFER also provides accurate and persuasive\nexplanations as validated by quantitative and human evaluations.", + "Semantic scene completion (SSC) aims to predict complete 3D voxel occupancy\nand semantics from a single-view RGB-D image, and recent SSC methods commonly\nadopt multi-modal inputs. However, our investigation reveals two limitations:\nineffective feature learning from single modalities and overfitting to limited\ndatasets. To address these issues, this paper proposes a novel SSC framework -\nAdversarial Modality Modulation Network (AMMNet) - with a fresh perspective of\noptimizing gradient updates. The proposed AMMNet introduces two core modules: a\ncross-modal modulation enabling the interdependence of gradient flows between\nmodalities, and a customized adversarial training scheme leveraging dynamic\ngradient competition. Specifically, the cross-modal modulation adaptively\nre-calibrates the features to better excite representation potentials from each\nsingle modality. 
The adversarial training employs a minimax game of evolving\ngradients, with customized guidance to strengthen the generator's perception of\nvisual fidelity from both geometric completeness and semantic correctness.\nExtensive experimental results demonstrate that AMMNet outperforms\nstate-of-the-art SSC methods by a large margin, providing a promising direction\nfor improving the effectiveness and generalization of SSC methods.", + "Despite great improvements in semantic segmentation, challenges persist\nbecause of the lack of local/global contexts and the relationship between them.\nIn this paper, we propose Contextrast, a contrastive learning-based semantic\nsegmentation method that allows to capture local/global contexts and comprehend\ntheir relationships. Our proposed method comprises two parts: a) contextual\ncontrastive learning (CCL) and b) boundary-aware negative (BANE) sampling.\nContextual contrastive learning obtains local/global context from multi-scale\nfeature aggregation and inter/intra-relationship of features for better\ndiscrimination capabilities. Meanwhile, BANE sampling selects embedding\nfeatures along the boundaries of incorrectly predicted regions to employ them\nas harder negative samples on our contrastive learning, resolving segmentation\nissues along the boundary region by exploiting fine-grained details. We\ndemonstrate that our Contextrast substantially enhances the performance of\nsemantic segmentation networks, outperforming state-of-the-art contrastive\nlearning approaches on diverse public datasets, e.g. Cityscapes, CamVid,\nPASCAL-C, COCO-Stuff, and ADE20K, without an increase in computational cost\nduring inference.", + "Monocular 3D detection is a challenging task due to the lack of accurate 3D\ninformation. Existing approaches typically rely on geometry constraints and\ndense depth estimates to facilitate the learning, but often fail to fully\nexploit the benefits of three-dimensional feature extraction in frustum and 3D\nspace. In this paper, we propose \\textbf{OccupancyM3D}, a method of learning\noccupancy for monocular 3D detection. It directly learns occupancy in frustum\nand 3D space, leading to more discriminative and informative 3D features and\nrepresentations. Specifically, by using synchronized raw sparse LiDAR point\nclouds, we define the space status and generate voxel-based occupancy labels.\nWe formulate occupancy prediction as a simple classification problem and design\nassociated occupancy losses. Resulting occupancy estimates are employed to\nenhance original frustum/3D features. As a result, experiments on KITTI and\nWaymo open datasets demonstrate that the proposed method achieves a new state\nof the art and surpasses other methods by a significant margin. Codes and\npre-trained models will be available at\n\\url{https://github.com/SPengLiang/OccupancyM3D}.", + "Universal Domain Adaptation (UniDA) targets knowledge transfer in the\npresence of both covariate and label shifts. Recently, Source-free Universal\nDomain Adaptation (SF-UniDA) has emerged to achieve UniDA without access to\nsource data, which tends to be more practical due to data protection policies.\nThe main challenge lies in determining whether covariate-shifted samples belong\nto target-private unknown categories. Existing methods tackle this either\nthrough hand-crafted thresholding or by developing time-consuming iterative\nclustering strategies. 
In this paper, we propose a new idea of LEArning\nDecomposition (LEAD), which decouples features into source-known and -unknown\ncomponents to identify target-private data. Technically, LEAD initially\nleverages the orthogonal decomposition analysis for feature decomposition.\nThen, LEAD builds instance-level decision boundaries to adaptively identify\ntarget-private data. Extensive experiments across various UniDA scenarios have\ndemonstrated the effectiveness and superiority of LEAD. Notably, in the OPDA\nscenario on VisDA dataset, LEAD outperforms GLC by 3.5% overall H-score and\nreduces 75% time to derive pseudo-labeling decision boundaries.", + "Notably, in the OPDA\nscenario on VisDA dataset, LEAD outperforms GLC by 3.5% overall H-score and\nreduces 75% time to derive pseudo-labeling decision boundaries. Besides, LEAD\nis also appealing in that it is complementary to most existing methods. The\ncode is available at https://github.com/ispc-lab/LEAD.", + "Facial action unit (AU) intensity plays a pivotal role in quantifying\nfine-grained expression behaviors, which is an effective condition for facial\nexpression manipulation. However, publicly available datasets containing\nintensity annotations for multiple AUs remain severely limited, often featuring\na restricted number of subjects. This limitation places challenges to the AU\nintensity manipulation in images due to disentanglement issues, leading\nresearchers to resort to other large datasets with pretrained AU intensity\nestimators for pseudo labels. In addressing this constraint and fully\nleveraging manual annotations of AU intensities for precise manipulation, we\nintroduce AUEditNet. Our proposed model achieves impressive intensity\nmanipulation across 12 AUs, trained effectively with only 18 subjects.\nUtilizing a dual-branch architecture, our approach achieves comprehensive\ndisentanglement of facial attributes and identity without necessitating\nadditional loss functions or implementing with large batch sizes. This approach\noffers a potential solution to achieve desired facial attribute editing despite\nthe dataset's limited subject count. Our experiments demonstrate AUEditNet's\nsuperior accuracy in editing AU intensities, affirming its capability in\ndisentangling facial attributes and identity within a limited subject pool.", + "This approach\noffers a potential solution to achieve desired facial attribute editing despite\nthe dataset's limited subject count. Our experiments demonstrate AUEditNet's\nsuperior accuracy in editing AU intensities, affirming its capability in\ndisentangling facial attributes and identity within a limited subject pool.\nAUEditNet allows conditioning by either intensity values or target images,\neliminating the need for constructing AU combinations for specific facial\nexpression synthesis. Moreover, AU intensity estimation, as a downstream task,\nvalidates the consistency between real and edited images, confirming the\neffectiveness of our proposed AU intensity manipulation method.", + "Multimodal large language models (MLLMs) have gained significant attention\ndue to their strong multimodal understanding capability. However, existing\nworks rely heavily on modality-specific encoders, which usually differ in\narchitecture and are limited to common modalities. In this paper, we present\nOneLLM, an MLLM that aligns eight modalities to language using a unified\nframework. We achieve this through a unified multimodal encoder and a\nprogressive multimodal alignment pipeline. 
In detail, we first train an image\nprojection module to connect a vision encoder with LLM. Then, we build a\nuniversal projection module (UPM) by mixing multiple image projection modules\nand dynamic routing. Finally, we progressively align more modalities to LLM\nwith the UPM. To fully leverage the potential of OneLLM in following\ninstructions, we also curated a comprehensive multimodal instruction dataset,\nincluding 2M items from image, audio, video, point cloud, depth/normal map, IMU\nand fMRI brain activity.", + "To fully leverage the potential of OneLLM in following\ninstructions, we also curated a comprehensive multimodal instruction dataset,\nincluding 2M items from image, audio, video, point cloud, depth/normal map, IMU\nand fMRI brain activity. OneLLM is evaluated on 25 diverse benchmarks,\nencompassing tasks such as multimodal captioning, question answering and\nreasoning, where it delivers excellent performance. Code, data, model and\nonline demo are available at https://github.com/csuhan/OneLLM", + "Adversarial patch attacks present a significant threat to real-world object\ndetectors due to their practical feasibility. Existing defense methods, which\nrely on attack data or prior knowledge, struggle to effectively address a wide\nrange of adversarial patches. In this paper, we show two inherent\ncharacteristics of adversarial patches, semantic independence and spatial\nheterogeneity, independent of their appearance, shape, size, quantity, and\nlocation. Semantic independence indicates that adversarial patches operate\nautonomously within their semantic context, while spatial heterogeneity\nmanifests as distinct image quality of the patch area that differs from\nthe original clean image due to the independent generation process. Based on these\nobservations, we propose PAD, a novel adversarial patch localization and\nremoval method that does not require prior knowledge or additional training.\nPAD offers patch-agnostic defense against various adversarial patches,\ncompatible with any pre-trained object detectors. Our comprehensive digital and\nphysical experiments involving diverse patch types, such as localized noise,\nprintable, and naturalistic patches, exhibit notable improvements over\nstate-of-the-art works. Our code is available at\nhttps://github.com/Lihua-Jing/PAD.", + "Text-to-image generation has achieved astonishing results, yet precise\nspatial controllability and prompt fidelity remain highly challenging. This\nlimitation is typically addressed through cumbersome prompt engineering, scene\nlayout conditioning, or image editing techniques which often require hand-drawn\nmasks. Nonetheless, pre-existing works struggle to take advantage of the\nnatural instance-level compositionality of scenes due to the typically flat\nnature of rasterized RGB output images. Towards addressing this challenge, we\nintroduce MuLAn: a novel dataset comprising over 44K MUlti-Layer ANnotations of\nRGB images as multilayer, instance-wise RGBA decompositions, and over 100K\ninstance images. To build MuLAn, we developed a training-free pipeline which\ndecomposes a monocular RGB image into a stack of RGBA layers comprising\nbackground and isolated instances.
We achieve this through the use of\npretrained general-purpose models, and by developing three modules: image\ndecomposition for instance discovery and extraction, instance completion to\nreconstruct occluded areas, and image re-assembly.", + "We achieve this through the use of\npretrained general-purpose models, and by developing three modules: image\ndecomposition for instance discovery and extraction, instance completion to\nreconstruct occluded areas, and image re-assembly. We use our pipeline to\ncreate MuLAn-COCO and MuLAn-LAION datasets, which contain a variety of image\ndecompositions in terms of style, composition and complexity. With MuLAn, we\nprovide the first photorealistic resource providing instance decomposition and\nocclusion information for high quality images, opening up new avenues for\ntext-to-image generative AI research. With this, we aim to encourage the\ndevelopment of novel generation and editing technology, in particular\nlayer-wise solutions. MuLAn data resources are available at\nhttps://MuLAn-dataset.github.io/.", + "This paper addresses complex challenges in histopathological image analysis\nthrough three key contributions. Firstly, it introduces a fast patch selection\nmethod, FPS, for whole-slide image (WSI) analysis, significantly reducing\ncomputational cost while maintaining accuracy. Secondly, it presents PathDino,\na lightweight histopathology feature extractor with a minimal configuration of\nfive Transformer blocks and only 9 million parameters, markedly fewer than\nalternatives. Thirdly, it introduces a rotation-agnostic representation\nlearning paradigm using self-supervised learning, effectively mitigating\noverfitting. We also show that our compact model outperforms existing\nstate-of-the-art histopathology-specific vision transformers on 12 diverse\ndatasets, including both internal datasets spanning four sites (breast, liver,\nskin, and colorectal) and seven public datasets (PANDA, CAMELYON16, BRACS,\nDigestPath, Kather, PanNuke, and WSSS4LUAD). Notably, even with a training\ndataset of 6 million histopathology patches from The Cancer Genome Atlas\n(TCGA), our approach demonstrates an average 8.5% improvement in patch-level\nmajority vote performance.", + "Notably, even with a training\ndataset of 6 million histopathology patches from The Cancer Genome Atlas\n(TCGA), our approach demonstrates an average 8.5% improvement in patch-level\nmajority vote performance. These contributions provide a robust framework for\nenhancing image analysis in digital pathology, rigorously validated through\nextensive evaluation. Project Page:\nhttps://kimialabmayo.github.io/PathDino-Page/", + "In the field of deep point cloud understanding, KPConv is a unique\narchitecture that uses kernel points to locate convolutional weights in space,\ninstead of relying on Multi-Layer Perceptron (MLP) encodings. While it\ninitially achieved success, it has since been surpassed by recent MLP networks\nthat employ updated designs and training strategies. Building upon the kernel\npoint principle, we present two novel designs: KPConvD (depthwise KPConv), a\nlighter design that enables the use of deeper architectures, and KPConvX, an\ninnovative design that scales the depthwise convolutional weights of KPConvD\nwith kernel attention values. Using KPConvX with a modern architecture and\ntraining strategy, we are able to outperform current state-of-the-art\napproaches on the ScanObjectNN, Scannetv2, and S3DIS datasets. 
We validate our\ndesign choices through ablation studies and release our code and models.", + "This work aims to improve the efficiency of text-to-image diffusion models.\nWhile diffusion models use computationally expensive UNet-based denoising\noperations in every generation step, we identify that not all operations are\nequally relevant for the final output quality. In particular, we observe that\nUNet layers operating on high-res feature maps are relatively sensitive to\nsmall perturbations. In contrast, low-res feature maps influence the semantic\nlayout of the final image and can often be perturbed with no noticeable change\nin the output. Based on this observation, we propose Clockwork Diffusion, a\nmethod that periodically reuses computation from preceding denoising steps to\napproximate low-res feature maps at one or more subsequent steps. For multiple\nbaselines, and for both text-to-image generation and image editing, we\ndemonstrate that Clockwork leads to comparable or improved perceptual scores\nwith drastically reduced computational complexity. As an example, for Stable\nDiffusion v1.5 with 8 DPM++ steps we save 32% of FLOPs with negligible FID and\nCLIP change.", + "Diffusion-based models have gained significant popularity for text-to-image\ngeneration due to their exceptional image-generation capabilities. A risk with\nthese models is the potential generation of inappropriate content, such as\nbiased or harmful images. However, the underlying reasons for generating such\nundesired content from the perspective of the diffusion model's internal\nrepresentation remain unclear. Previous work interprets vectors in an\ninterpretable latent space of diffusion models as semantic concepts. However,\nexisting approaches cannot discover directions for arbitrary concepts, such as\nthose related to inappropriate concepts. In this work, we propose a novel\nself-supervised approach to find interpretable latent directions for a given\nconcept. With the discovered vectors, we further propose a simple approach to\nmitigate inappropriate generation. Extensive experiments have been conducted to\nverify the effectiveness of our mitigation approach, namely, for fair\ngeneration, safe generation, and responsible text-enhancing generation. Project\npage: \\url{https://interpretdiffusion.github.io}.", + "Reconstructing 3D clothed human involves creating a detailed geometry of\nindividuals in clothing, with applications ranging from virtual try-on, movies,\nto games. To enable practical and widespread applications, recent advances\npropose to generate a clothed human from an RGB image. However, they struggle\nto reconstruct detailed and robust avatars simultaneously. We empirically find\nthat the high-frequency (HF) and low-frequency (LF) information from a\nparametric model has the potential to enhance geometry details and improve\nrobustness to noise, respectively. Based on this, we propose HiLo, namely\nclothed human reconstruction with high- and low-frequency information, which\ncontains two components. 1) To recover detailed geometry using HF information,\nwe propose a progressive HF Signed Distance Function to enhance the detailed 3D\ngeometry of a clothed human. We analyze that our progressive learning manner\nalleviates large gradients that hinder model convergence. 2) To achieve robust\nreconstruction against inaccurate estimation of the parametric model by using\nLF information, we propose a spatial interaction implicit function. 
This\nfunction effectively exploits the complementary spatial information from a\nlow-resolution voxel grid of the parametric model.", + "2) To achieve robust\nreconstruction against inaccurate estimation of the parametric model by using\nLF information, we propose a spatial interaction implicit function. This\nfunction effectively exploits the complementary spatial information from a\nlow-resolution voxel grid of the parametric model. Experimental results\ndemonstrate that HiLo outperforms the state-of-the-art methods by 10.43% and\n9.54% in terms of Chamfer distance on the Thuman2.0 and CAPE datasets,\nrespectively. Additionally, HiLo demonstrates robustness to noise from the\nparametric model, challenging poses, and various clothing styles.", + "Customizing robotic behaviors to be aligned with diverse human preferences is\nan underexplored challenge in the field of embodied AI. In this paper, we\npresent Promptable Behaviors, a novel framework that facilitates efficient\npersonalization of robotic agents to diverse human preferences in complex\nenvironments. We use multi-objective reinforcement learning to train a single\npolicy adaptable to a broad spectrum of preferences. We introduce three\ndistinct methods to infer human preferences by leveraging different types of\ninteractions: (1) human demonstrations, (2) preference feedback on trajectory\ncomparisons, and (3) language instructions. We evaluate the proposed method in\npersonalized object-goal navigation and flee navigation tasks in ProcTHOR and\nRoboTHOR, demonstrating the ability to prompt agent behaviors to satisfy human\npreferences in various scenarios. Project page:\nhttps://promptable-behaviors.github.io", + "Learning compatible representations enables the interchangeable use of\nsemantic features as models are updated over time. This is particularly\nrelevant in search and retrieval systems where it is crucial to avoid\nreprocessing of the gallery images with the updated model. While recent\nresearch has shown promising empirical evidence, there is still a lack of\ncomprehensive theoretical understanding about learning compatible\nrepresentations. In this paper, we demonstrate that the stationary\nrepresentations learned by the $d$-Simplex fixed classifier optimally\napproximate compatibility representation according to the two inequality\nconstraints of its formal definition. This not only establishes a solid\nfoundation for future works in this line of research but also presents\nimplications that can be exploited in practical learning scenarios. An\nexemplary application is the now-standard practice of downloading and\nfine-tuning new pre-trained models. Specifically, we show the strengths and\ncritical issues of stationary representations in the case in which a model\nundergoing sequential fine-tuning is asynchronously replaced by downloading a\nbetter-performing model pre-trained elsewhere.", + "Specifically, we show the strengths and\ncritical issues of stationary representations in the case in which a model\nundergoing sequential fine-tuning is asynchronously replaced by downloading a\nbetter-performing model pre-trained elsewhere. Such a representation enables\nseamless delivery of retrieval service (i.e., no reprocessing of gallery\nimages) and offers improved performance without operational disruptions during\nmodel replacement. 
Code available at: https://github.com/miccunifi/iamcl2r.", + "We propose SceneTex, a novel method for effectively generating high-quality\nand style-consistent textures for indoor scenes using depth-to-image diffusion\npriors. Unlike previous methods that either iteratively warp 2D views onto a\nmesh surface or distillate diffusion latent features without accurate geometric\nand style cues, SceneTex formulates the texture synthesis task as an\noptimization problem in the RGB space where style and geometry consistency are\nproperly reflected. At its core, SceneTex proposes a multiresolution texture\nfield to implicitly encode the mesh appearance. We optimize the target texture\nvia a score-distillation-based objective function in respective RGB renderings.\nTo further secure the style consistency across views, we introduce a\ncross-attention decoder to predict the RGB values by cross-attending to the\npre-sampled reference locations in each instance. SceneTex enables various and\naccurate texture synthesis for 3D-FRONT scenes, demonstrating significant\nimprovements in visual quality and prompt fidelity over the prior texture\ngeneration methods.", + "Cooperative perception offers several benefits for enhancing the capabilities\nof autonomous vehicles and improving road safety. Using roadside sensors in\naddition to onboard sensors increases reliability and extends the sensor range.\nExternal sensors offer higher situational awareness for automated vehicles and\nprevent occlusions. We propose CoopDet3D, a cooperative multi-modal fusion\nmodel, and TUMTraf-V2X, a perception dataset, for the cooperative 3D object\ndetection and tracking task. Our dataset contains 2,000 labeled point clouds\nand 5,000 labeled images from five roadside and four onboard sensors. It\nincludes 30k 3D boxes with track IDs and precise GPS and IMU data. We labeled\neight categories and covered occlusion scenarios with challenging driving\nmaneuvers, like traffic violations, near-miss events, overtaking, and U-turns.\nThrough multiple experiments, we show that our CoopDet3D camera-LiDAR fusion\nmodel achieves an increase of +14.36 3D mAP compared to a vehicle camera-LiDAR\nfusion model.", + "Through multiple experiments, we show that our CoopDet3D camera-LiDAR fusion\nmodel achieves an increase of +14.36 3D mAP compared to a vehicle camera-LiDAR\nfusion model. Finally, we make our dataset, model, labeling tool, and dev-kit\npublicly available on our website:\nhttps://tum-traffic-dataset.github.io/tumtraf-v2x.", + "This paper addresses the challenge of object-centric layout generation under\nspatial constraints, seen in multiple domains including floorplan design\nprocess. The design process typically involves specifying a set of spatial\nconstraints that include object attributes like size and inter-object relations\nsuch as relative positioning. Existing works, which typically represent objects\nas single nodes, lack the granularity to accurately model complex interactions\nbetween objects. For instance, often only certain parts of an object, like a\nroom's right wall, interact with adjacent objects. To address this gap, we\nintroduce a factor graph based approach with four latent variable nodes for\neach room, and a factor node for each constraint. The factor nodes represent\ndependencies among the variables to which they are connected, effectively\ncapturing constraints that are potentially of a higher order. 
We then develop\nmessage-passing on the bipartite graph, forming a factor graph neural network\nthat is trained to produce a floorplan that aligns with the desired\nrequirements. Our approach is simple and generates layouts faithful to the user\nrequirements, demonstrated by a large improvement in IOU scores over existing\nmethods.", + "Our approach is simple and generates layouts faithful to the user\nrequirements, demonstrated by a large improvement in IOU scores over existing\nmethods. Additionally, our approach, being inferential and accurate, is\nwell-suited to the practical human-in-the-loop design process where\nspecifications evolve iteratively, offering a practical and powerful tool for\nAI-guided design.", + "Open-set supervised anomaly detection (OSAD) - a recently emerging anomaly\ndetection area - aims at utilizing a few samples of anomaly classes seen during\ntraining to detect unseen anomalies (i.e., samples from open-set anomaly\nclasses), while effectively identifying the seen anomalies. Benefiting from the\nprior knowledge illustrated by the seen anomalies, current OSAD methods can\noften largely reduce false positive errors. However, these methods are trained\nin a closed-set setting and treat the anomaly examples as from a homogeneous\ndistribution, rendering them less effective in generalizing to unseen anomalies\nthat can be drawn from any distribution. This paper proposes to learn\nheterogeneous anomaly distributions using the limited anomaly examples to\naddress this issue. To this end, we introduce a novel approach, namely Anomaly\nHeterogeneity Learning (AHL), that simulates a diverse set of heterogeneous\nanomaly distributions and then utilizes them to learn a unified heterogeneous\nabnormality model in surrogate open-set environments. Further, AHL is a generic\nframework that existing OSAD models can plug and play for enhancing their\nabnormality modeling.", + "Further, AHL is a generic\nframework that existing OSAD models can plug and play for enhancing their\nabnormality modeling. Extensive experiments on nine real-world anomaly\ndetection datasets show that AHL can 1) substantially enhance different\nstate-of-the-art OSAD models in detecting seen and unseen anomalies, and 2)\neffectively generalize to unseen anomalies in new domains. Code is available at\nhttps://github.com/mala-lab/AHL.", + "White balance (WB) algorithms in many commercial cameras assume single and\nuniform illumination, leading to undesirable results when multiple lighting\nsources with different chromaticities exist in the scene. Prior research on\nmulti-illuminant WB typically predicts illumination at the pixel level without\nfully grasping the scene's actual lighting conditions, including the number and\ncolor of light sources. This often results in unnatural outcomes lacking in\noverall consistency. To handle this problem, we present a deep white balancing\nmodel that leverages the slot attention, where each slot is in charge of\nrepresenting individual illuminants. This design enables the model to generate\nchromaticities and weight maps for individual illuminants, which are then fused\nto compose the final illumination map. Furthermore, we propose the\ncentroid-matching loss, which regulates the activation of each slot based on\nthe color range, thereby enhancing the model to separate illumination more\neffectively. 
Our method achieves the state-of-the-art performance on both\nsingle- and multi-illuminant WB benchmarks, and also offers additional\ninformation such as the number of illuminants in the scene and their\nchromaticity. This capability allows for illumination editing, an application\nnot feasible with prior methods.", + "The paradigm of pre-training and fine-tuning has laid the foundation for\ndeploying deep learning models. However, most fine-tuning methods are designed\nto meet a specific resource budget. Recently, considering diverse deployment\nscenarios with various resource budgets, stitchable neural network (SN-Net) is\nintroduced to quickly obtain numerous new networks (stitches) from the\npre-trained models (anchors) in a model family via model stitching. Although\npromising, SN-Net confronts new challenges when adapting it to new target\ndomains, including huge memory and storage requirements and a long and\nsub-optimal multistage adaptation process. In this work, we present a novel\nframework, Efficient Stitchable Task Adaptation (ESTA), to efficiently produce\na palette of fine-tuned models that adhere to diverse resource constraints.\nSpecifically, we first tailor parameter-efficient fine-tuning to share low-rank\nupdates among the stitches while maintaining independent bias terms. In this\nway, we largely reduce fine-tuning memory burdens and mitigate the interference\namong stitches that arises in task adaptation.", + "Specifically, we first tailor parameter-efficient fine-tuning to share low-rank\nupdates among the stitches while maintaining independent bias terms. In this\nway, we largely reduce fine-tuning memory burdens and mitigate the interference\namong stitches that arises in task adaptation. Furthermore, we streamline a\nsimple yet effective one-stage deployment pipeline, which estimates the\nimportant stitches to deploy with training-time gradient statistics. By\nassigning higher sampling probabilities to important stitches, we also get a\nboosted Pareto frontier. Extensive experiments on 25 downstream visual\nrecognition tasks demonstrate that our ESTA is capable of generating stitches\nwith smooth accuracy-efficiency trade-offs and surpasses the direct SN-Net\nadaptation by remarkable margins with significantly lower training time and\nfewer trainable parameters. Furthermore, we demonstrate the flexibility and\nscalability of our ESTA framework by stitching LLMs from LLaMA family,\nobtaining chatbot stitches of assorted sizes.", + "Recent advancements in dynamic neural radiance field methods have yielded\nremarkable outcomes. However, these approaches rely on the assumption of sharp\ninput images. When faced with motion blur, existing dynamic NeRF methods often\nstruggle to generate high-quality novel views. In this paper, we propose\nDyBluRF, a dynamic radiance field approach that synthesizes sharp novel views\nfrom a monocular video affected by motion blur. To account for motion blur in\ninput images, we simultaneously capture the camera trajectory and object\nDiscrete Cosine Transform (DCT) trajectories within the scene. Additionally, we\nemploy a global cross-time rendering approach to ensure consistent temporal\ncoherence across the entire scene. We curate a dataset comprising diverse\ndynamic scenes that are specifically tailored for our task. 
Experimental\nresults on our dataset demonstrate that our method outperforms existing\napproaches in generating sharp novel views from motion-blurred inputs while\nmaintaining spatial-temporal consistency of the scene.", + "Recently, high-fidelity scene reconstruction with an optimized 3D Gaussian\nsplat representation has been introduced for novel view synthesis from sparse\nimage sets. Making such representations suitable for applications like network\nstreaming and rendering on low-power devices requires significantly reduced\nmemory consumption as well as improved rendering efficiency. We propose a\ncompressed 3D Gaussian splat representation that utilizes sensitivity-aware\nvector clustering with quantization-aware training to compress directional\ncolors and Gaussian parameters. The learned codebooks have low bitrates and\nachieve a compression rate of up to $31\\times$ on real-world scenes with only\nminimal degradation of visual quality. We demonstrate that the compressed splat\nrepresentation can be efficiently rendered with hardware rasterization on\nlightweight GPUs at up to $4\\times$ higher framerates than reported via an\noptimized GPU compute pipeline. Extensive experiments across multiple datasets\ndemonstrate the robustness and rendering speed of the proposed approach.", + "We approach the challenge of addressing semi-supervised domain generalization\n(SSDG). Specifically, our aim is to obtain a model that learns\ndomain-generalizable features by leveraging a limited subset of labelled data\nalongside a substantially larger pool of unlabeled data. Existing domain\ngeneralization (DG) methods which are unable to exploit unlabeled data perform\npoorly compared to semi-supervised learning (SSL) methods under SSDG setting.\nNevertheless, SSL methods have considerable room for performance improvement\nwhen compared to fully-supervised DG training. To tackle this underexplored,\nyet highly practical problem of SSDG, we make the following core contributions.\nFirst, we propose a feature-based conformity technique that matches the\nposterior distributions from the feature space with the pseudo-label from the\nmodel's output space. Second, we develop a semantics alignment loss to learn\nsemantically-compatible representations by regularizing the semantic structure\nin the feature space. Our method is plug-and-play and can be readily integrated\nwith different SSL-based SSDG baselines without introducing any additional\nparameters. Extensive experimental results across five challenging DG\nbenchmarks with four strong SSL baselines suggest that our method provides\nconsistent and notable gains in two different SSDG settings.", + "Interactive motion synthesis is essential in creating immersive experiences\nin entertainment applications, such as video games and virtual reality.\nHowever, generating animations that are both high-quality and contextually\nresponsive remains a challenge. Traditional techniques in the game industry can\nproduce high-fidelity animations but suffer from high computational costs and\npoor scalability. Trained neural network models alleviate the memory and speed\nissues, yet fall short on generating diverse motions. Diffusion models offer\ndiverse motion synthesis with low memory usage, but require expensive reverse\ndiffusion processes. This paper introduces the Accelerated Auto-regressive\nMotion Diffusion Model (AAMDM), a novel motion synthesis framework designed to\nachieve quality, diversity, and efficiency all together. 
AAMDM integrates\nDenoising Diffusion GANs as a fast Generation Module, and an Auto-regressive\nDiffusion Model as a Polishing Module. Furthermore, AAMDM operates in a\nlower-dimensional embedded space rather than the full-dimensional pose space,\nwhich reduces the training complexity as well as further improves the\nperformance.", + "Furthermore, AAMDM operates in a\nlower-dimensional embedded space rather than the full-dimensional pose space,\nwhich reduces the training complexity as well as further improves the\nperformance. We show that AAMDM outperforms existing methods in motion quality,\ndiversity, and runtime efficiency, through comprehensive quantitative analyses\nand visual comparisons. We also demonstrate the effectiveness of each\nalgorithmic component through ablation studies.", + "Deep Text-to-Image Synthesis (TIS) models such as Stable Diffusion have\nrecently gained significant popularity for creative Text-to-image generation.\nYet, for domain-specific scenarios, tuning-free Text-guided Image Editing (TIE)\nis of greater importance for application developers, which modify objects or\nobject properties in images by manipulating feature components in attention\nlayers during the generation process. However, little is known about what\nsemantic meanings these attention layers have learned and which parts of the\nattention maps contribute to the success of image editing. In this paper, we\nconduct an in-depth probing analysis and demonstrate that cross-attention maps\nin Stable Diffusion often contain object attribution information that can\nresult in editing failures. In contrast, self-attention maps play a crucial\nrole in preserving the geometric and shape details of the source image during\nthe transformation to the target image. Our analysis offers valuable insights\ninto understanding cross and self-attention maps in diffusion models. Moreover,\nbased on our findings, we simplify popular image editing methods and propose a\nmore straightforward yet more stable and efficient tuning-free procedure that\nonly modifies self-attention maps of the specified attention layers during the\ndenoising process.", + "Moreover,\nbased on our findings, we simplify popular image editing methods and propose a\nmore straightforward yet more stable and efficient tuning-free procedure that\nonly modifies self-attention maps of the specified attention layers during the\ndenoising process. Experimental results show that our simplified method\nconsistently surpasses the performance of popular approaches on multiple\ndatasets.", + "The primary focus of Neural Representation for Videos (NeRV) is to\neffectively model its spatiotemporal consistency. However, current NeRV systems\noften face a significant issue of spatial inconsistency, leading to decreased\nperceptual quality. To address this issue, we introduce the Pyramidal Neural\nRepresentation for Videos (PNeRV), which is built on a multi-scale information\nconnection and comprises a lightweight rescaling operator, Kronecker\nFully-connected layer (KFc), and a Benign Selective Memory (BSM) mechanism. The\nKFc, inspired by the tensor decomposition of the vanilla Fully-connected layer,\nfacilitates low-cost rescaling and global correlation modeling. BSM merges\nhigh-level features with granular ones adaptively. 
Furthermore, we provide an\nanalysis based on the Universal Approximation Theory of the NeRV system and\nvalidate the effectiveness of the proposed PNeRV. We conducted comprehensive\nexperiments to demonstrate that PNeRV surpasses the performance of contemporary\nNeRV models, achieving the best results in video regression on UVG and DAVIS\nunder various metrics (PSNR, SSIM, LPIPS, and FVD).", + "Compared to vanilla NeRV,\nPNeRV achieves a +4.49 dB gain in PSNR and a 231% increase in FVD on UVG, along\nwith a +3.28 dB PSNR and 634% FVD increase on DAVIS.", + "Long-tail recognition is challenging because it requires the model to learn\ngood representations from tail categories and address imbalances across all\ncategories. In this paper, we propose a novel generative and fine-tuning\nframework, LTGC, to handle long-tail recognition by leveraging generated\ncontent. Firstly, inspired by the rich implicit knowledge in large-scale models\n(e.g., large language models, LLMs), LTGC leverages the power of these models\nto parse and reason over the original tail data to produce diverse tail-class\ncontent. We then propose several novel designs for LTGC to ensure the quality\nof the generated data and to efficiently fine-tune the model using both the\ngenerated and original data. The visualization demonstrates the effectiveness\nof the generation module in LTGC, which produces accurate and diverse tail\ndata. Additionally, the experimental results demonstrate that our LTGC\noutperforms existing state-of-the-art methods on popular long-tailed\nbenchmarks.", + "Instance segmentation is data-hungry, and as model capacity increases, data\nscale becomes crucial for improving accuracy. Most instance segmentation\ndatasets today require costly manual annotation, limiting their data scale.\nModels trained on such data are prone to overfitting on the training set,\nespecially for those rare categories. While recent works have delved into\nexploiting generative models to create synthetic datasets for data\naugmentation, these approaches do not efficiently harness the full potential of\ngenerative models.\n To address these issues, we introduce a more efficient strategy to construct\ngenerative datasets for data augmentation, termed DiverGen. Firstly, we provide\nan explanation of the role of generative data from the perspective of\ndistribution discrepancy. We investigate the impact of different data on the\ndistribution learned by the model. We argue that generative data can expand the\ndata distribution that the model can learn, thus mitigating overfitting.\nAdditionally, we find that the diversity of generative data is crucial for\nimproving model performance, and we enhance it through various strategies,\nincluding category diversity, prompt diversity, and generative model diversity.", + "Additionally, we find that the diversity of generative data is crucial for\nimproving model performance, and we enhance it through various strategies,\nincluding category diversity, prompt diversity, and generative model diversity.\nWith these strategies, we can scale the data to millions while maintaining the\ntrend of model performance improvement. On the LVIS dataset, DiverGen\nsignificantly outperforms the strong model X-Paste, achieving +1.1 box AP and\n+1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP for rare\ncategories.", + "Absolute Pose Regression (APR) methods use deep neural networks to directly\nregress camera poses from RGB images.
However, the predominant APR\narchitectures only rely on 2D operations during inference, resulting in limited\naccuracy of pose estimation due to the lack of 3D geometry constraints or\npriors. In this work, we propose a test-time refinement pipeline that leverages\nimplicit geometric constraints using a robust feature field to enhance the\nability of APR methods to use 3D information during inference. We also\nintroduce a novel Neural Feature Synthesizer (NeFeS) model, which encodes 3D\ngeometric features during training and directly renders dense novel view\nfeatures at test time to refine APR methods. To enhance the robustness of our\nmodel, we introduce a feature fusion module and a progressive training\nstrategy. Our proposed method achieves state-of-the-art single-image APR\naccuracy on indoor and outdoor datasets.", + "This study focuses on a novel task in text-to-image (T2I) generation, namely\naction customization. The objective of this task is to learn the co-existing\naction from limited data and generalize it to unseen humans or even animals.\nExperimental results show that existing subject-driven customization methods\nfail to learn the representative characteristics of actions and struggle in\ndecoupling actions from context features, including appearance. To overcome the\npreference for low-level features and the entanglement of high-level features,\nwe propose an inversion-based method Action-Disentangled Identifier (ADI) to\nlearn action-specific identifiers from the exemplar images. ADI first expands\nthe semantic conditioning space by introducing layer-wise identifier tokens,\nthereby increasing the representational richness while distributing the\ninversion across different features. Then, to block the inversion of\naction-agnostic features, ADI extracts the gradient invariance from the\nconstructed sample triples and masks the updates of irrelevant channels. To\ncomprehensively evaluate the task, we present an ActionBench that includes a\nvariety of actions, each accompanied by meticulously selected samples.", + "To\ncomprehensively evaluate the task, we present an ActionBench that includes a\nvariety of actions, each accompanied by meticulously selected samples. Both\nquantitative and qualitative results show that our ADI outperforms existing\nbaselines in action-customized T2I generation. Our project page is at\nhttps://adi-t2i.github.io/ADI.", + "We propose a framework for automatic colorization that allows for iterative\nediting and modifications. The core of our framework lies in an imagination\nmodule: by understanding the content within a grayscale image, we utilize a\npre-trained image generation model to generate multiple images that contain the\nsame content. These images serve as references for coloring, mimicking the\nprocess of human experts. As the synthesized images can be imperfect or\ndifferent from the original grayscale image, we propose a Reference Refinement\nModule to select the optimal reference composition. Unlike most previous\nend-to-end automatic colorization algorithms, our framework allows for\niterative and localized modifications of the colorization results because we\nexplicitly model the coloring samples. Extensive experiments demonstrate the\nsuperiority of our framework over existing automatic colorization algorithms in\neditability and flexibility. Project page:\nhttps://xy-cong.github.io/imagine-colorization.", + "This paper is not motivated to seek innovation within the attention\nmechanism. 
Instead, it focuses on overcoming the existing trade-offs between\naccuracy and efficiency within the context of point cloud processing,\nleveraging the power of scale. Drawing inspiration from recent advances in 3D\nlarge-scale representation learning, we recognize that model performance is\nmore influenced by scale than by intricate design. Therefore, we present Point\nTransformer V3 (PTv3), which prioritizes simplicity and efficiency over the\naccuracy of certain mechanisms that are minor to the overall performance after\nscaling, such as replacing the precise neighbor search by KNN with an efficient\nserialized neighbor mapping of point clouds organized with specific patterns.\nThis principle enables significant scaling, expanding the receptive field from\n16 to 1024 points while remaining efficient (a 3x increase in processing speed\nand a 10x improvement in memory efficiency compared with its predecessor,\nPTv2). PTv3 attains state-of-the-art results on over 20 downstream tasks that\nspan both indoor and outdoor scenarios. Further enhanced with multi-dataset\njoint training, PTv3 pushes these results to a higher level.", + "Precipitation nowcasting is an important spatio-temporal prediction task to\npredict the radar echoes sequences based on current observations, which can\nserve both meteorological science and smart city applications. Due to the\nchaotic evolution nature of the precipitation systems, it is a very challenging\nproblem. Previous studies address the problem either from the perspectives of\ndeterministic modeling or probabilistic modeling. However, their predictions\nsuffer from the blurry, high-value echoes fading away and position inaccurate\nissues. The root reason of these issues is that the chaotic evolutionary\nprecipitation systems are not appropriately modeled. Inspired by the nature of\nthe systems, we propose to decompose and model them from the perspective of\nglobal deterministic motion and local stochastic variations with residual\nmechanism. A unified and flexible framework that can equip any type of\nspatio-temporal models is proposed based on residual diffusion, which\neffectively tackles the shortcomings of previous methods. Extensive\nexperimental results on four publicly available radar datasets demonstrate the\neffectiveness and superiority of the proposed framework, compared to\nstate-of-the-art techniques. Our code is publicly available at\nhttps://github.com/DeminYu98/DiffCast.", + "Pre-training a model and then fine-tuning it on downstream tasks has\ndemonstrated significant success in the 2D image and NLP domains. However, due\nto the unordered and non-uniform density characteristics of point clouds, it is\nnon-trivial to explore the prior knowledge of point clouds and pre-train a\npoint cloud backbone. In this paper, we propose a novel pre-training method\ncalled Point cloud Diffusion pre-training (PointDif). We consider the point\ncloud pre-training task as a conditional point-to-point generation problem and\nintroduce a conditional point generator. This generator aggregates the features\nextracted by the backbone and employs them as the condition to guide the\npoint-to-point recovery from the noisy point cloud, thereby assisting the\nbackbone in capturing both local and global geometric priors as well as the\nglobal point density distribution of the object. We also present a recurrent\nuniform sampling optimization strategy, which enables the model to uniformly\nrecover from various noise levels and learn from balanced supervision. 
Our\nPointDif achieves substantial improvement across various real-world datasets\nfor diverse downstream tasks such as classification, segmentation and\ndetection.", + "We also present a recurrent\nuniform sampling optimization strategy, which enables the model to uniformly\nrecover from various noise levels and learn from balanced supervision. Our\nPointDif achieves substantial improvement across various real-world datasets\nfor diverse downstream tasks such as classification, segmentation and\ndetection. Specifically, PointDif attains 70.0% mIoU on S3DIS Area 5 for the\nsegmentation task and achieves an average improvement of 2.4% on ScanObjectNN\nfor the classification task compared to TAP. Furthermore, our pre-training\nframework can be flexibly applied to diverse point cloud backbones and bring\nconsiderable gains.", + "To satisfy the rapidly increasing demands on the large image (2K-8K)\nsuper-resolution (SR), prevailing methods follow two independent tracks: 1)\naccelerate existing networks by content-aware routing, and 2) design better\nsuper-resolution networks via token mixer refining. Despite directness, they\nencounter unavoidable defects (e.g., inflexible route or non-discriminative\nprocessing) limiting further improvements of quality-complexity trade-off. To\nerase the drawbacks, we integrate these schemes by proposing a content-aware\nmixer (CAMixer), which assigns convolution for simple contexts and additional\ndeformable window-attention for sparse textures. Specifically, the CAMixer uses\na learnable predictor to generate multiple bootstraps, including offsets for\nwindows warping, a mask for classifying windows, and convolutional attentions\nfor endowing convolution with the dynamic property, which modulates attention\nto include more useful textures self-adaptively and improves the representation\ncapability of convolution. We further introduce a global classification loss to\nimprove the accuracy of predictors. By simply stacking CAMixers, we obtain\nCAMixerSR which achieves superior performance on large-image SR, lightweight\nSR, and omnidirectional-image SR.", + "This paper explores the possibility of extending the capability of\npre-trained neural image compressors (e.g., adapting to new data or target\nbitrates) without breaking backward compatibility, the ability to decode\nbitstreams encoded by the original model. We refer to this problem as continual\nlearning of image compression. Our initial findings show that baseline\nsolutions, such as end-to-end fine-tuning, do not preserve the desired backward\ncompatibility. To tackle this, we propose a knowledge replay training strategy\nthat effectively addresses this issue. We also design a new model architecture\nthat enables more effective continual learning than existing baselines.\nExperiments are conducted for two scenarios: data-incremental learning and\nrate-incremental learning. The main conclusion of this paper is that neural\nimage compressors can be fine-tuned to achieve better performance (compared to\ntheir pre-trained version) on new data and rates without compromising backward\ncompatibility. Our code is available at\nhttps://gitlab.com/viper-purdue/continual-compression", + "The recent work Local Implicit Image Function (LIIF) and subsequent Implicit\nNeural Representation (INR) based works have achieved remarkable success in\nArbitrary-Scale Super-Resolution (ASSR) by using MLP to decode Low-Resolution\n(LR) features. 
However, these continuous image representations typically\nimplement decoding in High-Resolution (HR) High-Dimensional (HD) space, leading\nto a quadratic increase in computational cost and seriously hindering the\npractical applications of ASSR. To tackle this problem, we propose a novel\nLatent Modulated Function (LMF), which decouples the HR-HD decoding process\ninto shared latent decoding in LR-HD space and independent rendering in HR\nLow-Dimensional (LD) space, thereby realizing the first computationally optimal\nparadigm of continuous image representation. Specifically, LMF utilizes an HD\nMLP in latent space to generate latent modulations of each LR feature vector.\nThis enables a modulated LD MLP in render space to quickly adapt to any input\nfeature vector and perform rendering at arbitrary resolution.", + "Specifically, LMF utilizes an HD\nMLP in latent space to generate latent modulations of each LR feature vector.\nThis enables a modulated LD MLP in render space to quickly adapt to any input\nfeature vector and perform rendering at arbitrary resolution. Furthermore, we\nleverage the positive correlation between modulation intensity and input image\ncomplexity to design a Controllable Multi-Scale Rendering (CMSR) algorithm,\noffering the flexibility to adjust the decoding efficiency based on the\nrendering precision. Extensive experiments demonstrate that converting existing\nINR-based ASSR methods to LMF can reduce the computational cost by up to 99.9%,\naccelerate inference by up to 57 times, and save up to 76% of parameters, while\nmaintaining competitive performance. The code is available at\nhttps://github.com/HeZongyao/LMF.", + "In this work, we tackle the problem of unsupervised domain adaptation (UDA)\nfor video action recognition. Our approach, which we call UNITE, uses an image\nteacher model to adapt a video student model to the target domain. UNITE first\nemploys self-supervised pre-training to promote discriminative feature learning\non target domain videos using a teacher-guided masked distillation objective.\nWe then perform self-training on masked target data, using the video student\nmodel and image teacher model together to generate improved pseudolabels for\nunlabeled target videos. Our self-training process successfully leverages the\nstrengths of both models to achieve strong transfer performance across domains.\nWe evaluate our approach on multiple video domain adaptation benchmarks and\nobserve significant improvements over previously reported results.", + "Accurate monocular metric depth estimation (MMDE) is crucial to solving\ndownstream tasks in 3D perception and modeling. However, the remarkable\naccuracy of recent MMDE methods is confined to their training domains. These\nmethods fail to generalize to unseen domains even in the presence of moderate\ndomain gaps, which hinders their practical applicability. We propose a new\nmodel, UniDepth, capable of reconstructing metric 3D scenes from solely single\nimages across domains. Departing from the existing MMDE methods, UniDepth\ndirectly predicts metric 3D points from the input image at inference time\nwithout any additional information, striving for a universal and flexible MMDE\nsolution. In particular, UniDepth implements a self-promptable camera module\npredicting dense camera representation to condition depth features. Our model\nexploits a pseudo-spherical output representation, which disentangles camera\nand depth representations.
In addition, we propose a geometric invariance loss\nthat promotes the invariance of camera-prompted depth features.", + "Our model\nexploits a pseudo-spherical output representation, which disentangles camera\nand depth representations. In addition, we propose a geometric invariance loss\nthat promotes the invariance of camera-prompted depth features. Thorough\nevaluations on ten datasets in a zero-shot regime consistently demonstrate the\nsuperior performance of UniDepth, even when compared with methods directly\ntrained on the testing domains. Code and models are available at:\nhttps://github.com/lpiccinelli-eth/unidepth", + "Head avatars animated by visual signals have gained popularity, particularly\nin cross-driving synthesis where the driver differs from the animated\ncharacter, a challenging but highly practical approach. The recently presented\nMegaPortraits model has demonstrated state-of-the-art results in this domain.\nWe conduct a deep examination and evaluation of this model, with a particular\nfocus on its latent space for facial expression descriptors, and uncover\nseveral limitations with its ability to express intense face motions. To\naddress these limitations, we propose substantial changes in both training\npipeline and model architecture, to introduce our EMOPortraits model, where we:\n Enhance the model's capability to faithfully support intense, asymmetric face\nexpressions, setting a new state-of-the-art result in the emotion transfer\ntask, surpassing previous methods in both metrics and quality.\n Incorporate speech-driven mode to our model, achieving top-tier performance\nin audio-driven facial animation, making it possible to drive source identity\nthrough diverse modalities, including visual signal, audio, or a blend of both.\n We propose a novel multi-view video dataset featuring a wide range of intense\nand asymmetric facial expressions, filling the gap with absence of such data in\nexisting datasets.", + "Existing approaches to unsupervised video instance segmentation typically\nrely on motion estimates and experience difficulties tracking small or\ndivergent motions. We present VideoCutLER, a simple method for unsupervised\nmulti-instance video segmentation without using motion-based learning signals\nlike optical flow or training on natural videos. Our key insight is that using\nhigh-quality pseudo masks and a simple video synthesis method for model\ntraining is surprisingly sufficient to enable the resulting video model to\neffectively segment and track multiple instances across video frames. We show\nthe first competitive unsupervised learning results on the challenging\nYouTubeVIS-2019 benchmark, achieving 50.7% APvideo^50 , surpassing the previous\nstate-of-the-art by a large margin. VideoCutLER can also serve as a strong\npretrained model for supervised video instance segmentation tasks, exceeding\nDINO by 15.9% on YouTubeVIS-2019 in terms of APvideo.", + "Single-modal object re-identification (ReID) faces great challenges in\nmaintaining robustness within complex visual scenarios. In contrast,\nmulti-modal object ReID utilizes complementary information from diverse\nmodalities, showing great potentials for practical applications. However,\nprevious methods may be easily affected by irrelevant backgrounds and usually\nignore the modality gaps. To address above issues, we propose a novel learning\nframework named \\textbf{EDITOR} to select diverse tokens from vision\nTransformers for multi-modal object ReID. 
We begin with a shared vision\nTransformer to extract tokenized features from different input modalities.\nThen, we introduce a Spatial-Frequency Token Selection (SFTS) module to\nadaptively select object-centric tokens with both spatial and frequency\ninformation. Afterwards, we employ a Hierarchical Masked Aggregation (HMA)\nmodule to facilitate feature interactions within and across modalities.\nFinally, to further reduce the effect of backgrounds, we propose a Background\nConsistency Constraint (BCC) and an Object-Centric Feature Refinement (OCFR).\nThey are formulated as two new loss functions, which improve the feature\ndiscrimination with background suppression.", + "Finally, to further reduce the effect of backgrounds, we propose a Background\nConsistency Constraint (BCC) and an Object-Centric Feature Refinement (OCFR).\nThey are formulated as two new loss functions, which improve the feature\ndiscrimination with background suppression. As a result, our framework can\ngenerate more discriminative features for multi-modal object ReID. Extensive\nexperiments on three multi-modal ReID benchmarks verify the effectiveness of\nour methods. The code is available at https://github.com/924973292/EDITOR.", + "We introduce Open3DIS, a novel solution designed to tackle the problem of\nOpen-Vocabulary Instance Segmentation within 3D scenes. Objects within 3D\nenvironments exhibit diverse shapes, scales, and colors, making precise\ninstance-level identification a challenging task. Recent advancements in\nOpen-Vocabulary scene understanding have made significant strides in this area\nby employing class-agnostic 3D instance proposal networks for object\nlocalization and learning queryable features for each 3D mask. While these\nmethods produce high-quality instance proposals, they struggle with identifying\nsmall-scale and geometrically ambiguous objects. The key idea of our method is\na new module that aggregates 2D instance masks across frames and maps them to\ngeometrically coherent point cloud regions as high-quality object proposals\naddressing the above limitations. These are then combined with 3D\nclass-agnostic instance proposals to include a wide range of objects in the\nreal world. To validate our approach, we conducted experiments on three\nprominent datasets, including ScanNet200, S3DIS, and Replica, demonstrating\nsignificant performance gains in segmenting objects with diverse categories\nover the state-of-the-art approaches.", + "Accurate data association is crucial in reducing confusion, such as ID\nswitches and assignment errors, in multi-object tracking (MOT). However,\nexisting advanced methods often overlook the diversity among trajectories and\nthe ambiguity and conflicts present in motion and appearance cues, leading to\nconfusion among detections, trajectories, and associations when performing\nsimple global data association. To address this issue, we propose a simple,\nversatile, and highly interpretable data association approach called Decomposed\nData Association (DDA). DDA decomposes the traditional association problem into\nmultiple sub-problems using a series of non-learning-based modules and\nselectively addresses the confusion in each sub-problem by incorporating\ntargeted exploitation of new cues. Additionally, we introduce Occlusion-aware\nNon-Maximum Suppression (ONMS) to retain more occluded detections, thereby\nincreasing opportunities for association with trajectories and indirectly\nreducing the confusion caused by missed detections. 
Finally, based on DDA and\nONMS, we design a powerful multi-object tracker named DeconfuseTrack,\nspecifically focused on resolving confusion in MOT.", + "Finally, based on DDA and\nONMS, we design a powerful multi-object tracker named DeconfuseTrack,\nspecifically focused on resolving confusion in MOT. Extensive experiments\nconducted on the MOT17 and MOT20 datasets demonstrate that our proposed DDA and\nONMS significantly enhance the performance of several popular trackers.\nMoreover, DeconfuseTrack achieves state-of-the-art performance on the MOT17 and\nMOT20 test sets and significantly outperforms the baseline tracker ByteTrack in\nmetrics such as HOTA, IDF1, and AssA. This validates that our tracking design\neffectively reduces confusion caused by simple global association.", + "RGBT multispectral pedestrian detection has emerged as a promising solution\nfor safety-critical applications that require day/night operations. However,\nthe modality bias problem remains unsolved as multispectral pedestrian\ndetectors learn the statistical bias in datasets. Specifically, datasets in\nmultispectral pedestrian detection mainly distribute between ROTO (day) and\nRXTO (night) data; the majority of the pedestrian labels statistically co-occur\nwith their thermal features. As a result, multispectral pedestrian detectors\nshow poor generalization ability on examples beyond this statistical\ncorrelation, such as ROTX data. To address this problem, we propose a novel\nCausal Mode Multiplexer (CMM) framework that effectively learns the causalities\nbetween multispectral inputs and predictions. Moreover, we construct a new\ndataset (ROTX-MP) to evaluate modality bias in multispectral pedestrian\ndetection. ROTX-MP mainly includes ROTX examples not present in previous\ndatasets. Extensive experiments demonstrate that our proposed CMM framework\ngeneralizes well on existing datasets (KAIST, CVC-14, FLIR) and the new\nROTX-MP. We will release our new dataset to the public for future research.", + "Vectorized High-Definition (HD) map construction requires predictions of the\ncategory and point coordinates of map elements (e.g. road boundary, lane\ndivider, pedestrian crossing, etc.). State-of-the-art methods are mainly based\non point-level representation learning for regressing accurate point\ncoordinates. However, this pipeline has limitations in obtaining element-level\ninformation and handling element-level failures, e.g. erroneous element shape\nor entanglement between elements. To tackle the above issues, we propose a\nsimple yet effective HybrId framework named HIMap to sufficiently learn and\ninteract both point-level and element-level information. Concretely, we\nintroduce a hybrid representation called HIQuery to represent all map elements,\nand propose a point-element interactor to interactively extract and encode the\nhybrid information of elements, e.g. point position and element shape, into the\nHIQuery. Additionally, we present a point-element consistency constraint to\nenhance the consistency between the point-level and element-level information.\nFinally, the output point-element integrated HIQuery can be directly converted\ninto map elements' class, point coordinates, and mask.", + "point position and element shape, into the\nHIQuery.
Additionally, we present a point-element consistency constraint to\nenhance the consistency between the point-level and element-level information.\nFinally, the output point-element integrated HIQuery can be directly converted\ninto map elements' class, point coordinates, and mask. We conduct extensive\nexperiments and consistently outperform previous methods on both nuScenes and\nArgoverse2 datasets. Notably, our method achieves $77.8$ mAP on the nuScenes\ndataset, surpassing previous SOTAs by at least $8.3$ mAP.", + "In this paper, we present ShapeMatcher, a unified self-supervised learning\nframework for joint shape canonicalization, segmentation, retrieval and\ndeformation. Given a partially-observed object in an arbitrary pose, we first\ncanonicalize the object by extracting point-wise affine-invariant features,\ndisentangling the inherent structure of the object from its pose and size. These\nlearned features are then leveraged to predict semantically consistent part\nsegmentation and corresponding part centers. Next, our lightweight retrieval\nmodule aggregates the features within each part as its retrieval token and\ncompares all the tokens with source shapes from a pre-established database to\nidentify the most geometrically similar shape. Finally, we deform the retrieved\nshape in the deformation module to tightly fit the input object by harnessing\npart center guided neural cage deformation. The key insight of ShapeMatcher is\nthe simultaneous training of the four highly-associated processes:\ncanonicalization, segmentation, retrieval, and deformation, leveraging\ncross-task consistency losses for mutual supervision. Extensive experiments on\nthe synthetic datasets PartNet and ComplementMe, and the real-world dataset Scan2CAD\ndemonstrate that ShapeMatcher surpasses competitors by a large margin.", + "Post-training Sparsity (PTS) is a recently emerged avenue that pursues\nefficient network sparsity with only limited data. Existing PTS methods,\nhowever, undergo significant performance degradation compared with traditional\nmethods that retrain the sparse networks on the whole dataset, especially at\nhigh sparsity ratios. In this paper, we attempt to reconcile this disparity by\ntransposing three cardinal factors that profoundly alter the performance of\nconventional sparsity into the context of PTS. Our endeavors particularly\ncomprise (1) A base-decayed sparsity objective that promotes efficient\nknowledge transfer from the dense network to the sparse counterpart. (2) A\nreducing-regrowing search algorithm designed to ascertain the optimal sparsity\ndistribution while circumventing overfitting to the small calibration set in\nPTS. (3) The employment of dynamic sparse training predicated on the preceding\naspects, aimed at comprehensively optimizing the sparsity structure while\nensuring training stability. Our proposed framework, termed UniPTS, is\nvalidated to be much superior to existing PTS methods across extensive\nbenchmarks.", + "(3) The employment of dynamic sparse training predicated on the preceding\naspects, aimed at comprehensively optimizing the sparsity structure while\nensuring training stability. Our proposed framework, termed UniPTS, is\nvalidated to be much superior to existing PTS methods across extensive\nbenchmarks. As an illustration, it amplifies the performance of POT, a recently\nproposed recipe, from 3.9% to 68.6% when pruning ResNet-50 at a 90% sparsity\nratio on ImageNet.
We release the code of our paper at\nhttps://github.com/xjjxmu/UniPTS.", + "Recent text-to-3D methods employing diffusion models have made significant\nadvancements in 3D human generation. However, these approaches face challenges\ndue to the limitations of text-to-image diffusion models, which lack an\nunderstanding of 3D structures. Consequently, these methods struggle to achieve\nhigh-quality human generation, resulting in smooth geometry and cartoon-like\nappearances. In this paper, we propose HumanNorm, a novel approach for\nhigh-quality and realistic 3D human generation. The main idea is to enhance the\nmodel's 2D perception of 3D geometry by learning a normal-adapted diffusion\nmodel and a normal-aligned diffusion model. The normal-adapted diffusion model\ncan generate high-fidelity normal maps corresponding to user prompts with\nview-dependent and body-aware text. The normal-aligned diffusion model learns\nto generate color images aligned with the normal maps, thereby transforming\nphysical geometry details into realistic appearance. Leveraging the proposed\nnormal diffusion model, we devise a progressive geometry generation strategy\nand a multi-step Score Distillation Sampling (SDS) loss to enhance the\nperformance of 3D human generation.", + "Leveraging the proposed\nnormal diffusion model, we devise a progressive geometry generation strategy\nand a multi-step Score Distillation Sampling (SDS) loss to enhance the\nperformance of 3D human generation. Comprehensive experiments substantiate\nHumanNorm's ability to generate 3D humans with intricate geometry and realistic\nappearances. HumanNorm outperforms existing text-to-3D methods in both geometry\nand texture quality. The project page of HumanNorm is\nhttps://humannorm.github.io/.", + "This paper investigates the effective utilization of unlabeled data for\nlarge-area cross-view geo-localization (CVGL), encompassing both unsupervised\nand semi-supervised settings. Common approaches to CVGL rely on\nground-satellite image pairs and employ label-driven supervised training.\nHowever, the cost of collecting precise cross-view image pairs hinders the\ndeployment of CVGL in real-life scenarios. Without the pairs, CVGL will be more\nchallenging to handle the significant imaging and spatial gaps between ground\nand satellite images. To this end, we propose an unsupervised framework\nincluding a cross-view projection to guide the model for retrieving initial\npseudo-labels and a fast re-ranking mechanism to refine the pseudo-labels by\nleveraging the fact that ``the perfectly paired ground-satellite image is\nlocated in a unique and identical scene\". The framework exhibits competitive\nperformance compared with supervised works on three open-source benchmarks. Our\ncode and models will be released on https://github.com/liguopeng0923/UCVGL.", + "A recent trend among generalizable novel view synthesis methods is to learn a\nrendering operator acting over single camera rays. This approach is promising\nbecause it removes the need for explicit volumetric rendering, but it\neffectively treats target images as collections of independent pixels. Here, we\npropose to learn a global rendering operator acting over all camera rays\njointly. We show that the right representation to enable such rendering is a\n5-dimensional plane sweep volume consisting of the projection of the input\nimages on a set of planes facing the target camera. 
Based on this\nunderstanding, we introduce our Convolutional Global Latent Renderer (ConvGLR),\nan efficient convolutional architecture that performs the rendering operation\nglobally in a low-resolution latent space. Experiments on various datasets\nunder sparse and generalizable setups show that our approach consistently\noutperforms existing methods by significant margins.", + "Comprehensive modeling of the surrounding 3D world is key to the success of\nautonomous driving. However, existing perception tasks like object detection,\nroad structure segmentation, depth & elevation estimation, and open-set object\nlocalization each only focus on a small facet of the holistic 3D scene\nunderstanding task. This divide-and-conquer strategy simplifies the algorithm\ndevelopment procedure at the cost of losing an end-to-end unified solution to\nthe problem. In this work, we address this limitation by studying camera-based\n3D panoptic segmentation, aiming to achieve a unified occupancy representation\nfor camera-only 3D scene understanding. To achieve this, we introduce a novel\nmethod called PanoOcc, which utilizes voxel queries to aggregate spatiotemporal\ninformation from multi-frame and multi-view images in a coarse-to-fine scheme,\nintegrating feature learning and scene representation into a unified occupancy\nrepresentation. We have conducted extensive ablation studies to verify the\neffectiveness and efficiency of the proposed method. Our approach achieves new\nstate-of-the-art results for camera-based semantic segmentation and panoptic\nsegmentation on the nuScenes dataset.", + "We have conducted extensive ablation studies to verify the\neffectiveness and efficiency of the proposed method. Our approach achieves new\nstate-of-the-art results for camera-based semantic segmentation and panoptic\nsegmentation on the nuScenes dataset. Furthermore, our method can be easily\nextended to dense occupancy prediction and has shown promising performance on\nthe Occ3D benchmark. The code will be released at\nhttps://github.com/Robertwyq/PanoOcc.", + "Neural approaches have shown significant progress on camera-based\nreconstruction. But they require either a fairly dense sampling of the viewing\nsphere, or pre-training on an existing dataset, thereby limiting their\ngeneralizability. In contrast, photometric stereo (PS) approaches have shown\ngreat potential for achieving high-quality reconstruction under sparse\nviewpoints. Yet, they are impractical because they typically require tedious\nlaboratory conditions, are restricted to dark rooms, and are often multi-staged,\nmaking them subject to accumulated errors. To address these shortcomings, we\npropose an end-to-end uncalibrated multi-view PS framework for reconstructing\nhigh-resolution shapes acquired from sparse viewpoints in a real-world\nenvironment. We relax the dark room assumption, and allow a combination of\nstatic ambient lighting and dynamic near LED lighting, thereby enabling easy\ndata capture outside the lab. Experimental validation confirms that it\noutperforms existing baseline approaches in the regime of sparse viewpoints by\na large margin. This allows bringing high-accuracy 3D reconstruction from the\ndark room to the real world, while maintaining a reasonable data capture\ncomplexity.", + "Category-agnostic pose estimation (CAPE) aims to predict keypoints for\narbitrary classes given a few support images annotated with keypoints.
Existing\nmethods only rely on the features extracted at support keypoints to predict or\nrefine the keypoints on the query image, but a few support feature vectors are\nlocal and inadequate for CAPE. Considering that humans can quickly perceive\npotential keypoints of arbitrary objects, we propose a novel framework for CAPE\nbased on such potential keypoints (named meta-points). Specifically, we\nmaintain learnable embeddings to capture inherent information of various\nkeypoints, which interact with image feature maps to produce meta-points\nwithout any support. The produced meta-points could serve as meaningful\npotential keypoints for CAPE. Due to the inevitable gap between inherency and\nannotation, we finally utilize the identities and details offered by support\nkeypoints to assign and refine meta-points to the desired keypoints in the query image.\nIn addition, we propose a progressive deformable point decoder and a slacked\nregression loss for better prediction and supervision. Our novel framework not\nonly reveals the inherency of keypoints but also outperforms existing methods\nof CAPE. Comprehensive experiments and in-depth studies on the large-scale MP-100\ndataset demonstrate the effectiveness of our framework.", + "Human perception and understanding is a major domain of computer vision\nwhich, like many other vision subdomains recently, stands to gain from the use\nof large models pre-trained on large datasets. We hypothesize that the most\ncommon pre-training strategy of relying on general purpose, object-centric\nimage datasets such as ImageNet, is limited by an important domain shift. On\nthe other hand, collecting domain-specific ground truth such as 2D or 3D labels\ndoes not scale well. Therefore, we propose a pre-training approach based on\nself-supervised learning that works on human-centric data using only images.\nOur method uses pairs of images of humans: the first is partially masked and\nthe model is trained to reconstruct the masked parts given the visible ones and\na second image. It relies on both stereoscopic (cross-view) pairs, and temporal\n(cross-pose) pairs taken from videos, in order to learn priors about 3D as well\nas human motion. We pre-train a model for body-centric tasks and one for\nhand-centric tasks.", + "It relies on both stereoscopic (cross-view) pairs, and temporal\n(cross-pose) pairs taken from videos, in order to learn priors about 3D as well\nas human motion. We pre-train a model for body-centric tasks and one for\nhand-centric tasks. With a generic transformer architecture, these models\noutperform existing self-supervised pre-training methods on a wide set of\nhuman-centric downstream tasks, and obtain state-of-the-art performance, for\ninstance, when fine-tuning for model-based and model-free human mesh recovery.", + "In this paper, we target the adaptive source driven 3D scene editing task by\nproposing a CustomNeRF model that unifies a text description or a reference\nimage as the editing prompt. However, obtaining desired editing results\nthat conform to the editing prompt is nontrivial since there exist two\nsignificant challenges, including accurate editing of only foreground regions\nand multi-view consistency given a single-view reference image.
To tackle the\nfirst challenge, we propose a Local-Global Iterative Editing (LGIE) training\nscheme that alternates between foreground region editing and full-image\nediting, aimed at foreground-only manipulation while preserving the background.\nFor the second challenge, we also design a class-guided regularization that\nexploits class priors within the generation model to alleviate the\ninconsistency problem among different views in image-driven editing. Extensive\nexperiments show that our CustomNeRF produces precise editing results under\nvarious real scenes for both text- and image-driven settings.", + "In order to mimic the human few-shot learning (FSL) ability better and to\nmake FSL closer to real-world applications, this paper proposes a practical FSL\n(pFSL) setting. pFSL is based on unsupervised pretrained models (analogous to\nhuman prior knowledge) and recognizes many novel classes simultaneously.\nCompared to traditional FSL, pFSL is simpler in its formulation, easier to\nevaluate, more challenging and more practical. To cope with the rarity of\ntraining examples, this paper proposes IbM2, an instance-based max-margin\nmethod not only for the new pFSL setting, but also works well in traditional\nFSL scenarios. Based on the Gaussian Annulus Theorem, IbM2 converts random\nnoise applied to the instances into a mechanism to achieve maximum margin in\nthe many-way pFSL (or traditional FSL) recognition task. Experiments with\nvarious self-supervised pretraining methods and diverse many- or few-way FSL\ntasks show that IbM2 almost always leads to improvements compared to its\nrespective baseline methods, and in most cases the improvements are\nsignificant.", + "Experiments with\nvarious self-supervised pretraining methods and diverse many- or few-way FSL\ntasks show that IbM2 almost always leads to improvements compared to its\nrespective baseline methods, and in most cases the improvements are\nsignificant. With both the new pFSL setting and novel IbM2 method, this paper\nshows that practical few-shot learning is both viable and promising.", + "Coarse-to-fine 3D instance segmentation methods show weak performances\ncompared to recent Grouping-based, Kernel-based and Transformer-based methods.\nWe argue that this is due to two limitations: 1) Instance size overestimation\nby axis-aligned bounding box(AABB) 2) False negative error accumulation from\ninaccurate box to the refinement phase. In this work, we introduce Spherical\nMask, a novel coarse-to-fine approach based on spherical representation,\novercoming those two limitations with several benefits. Specifically, our\ncoarse detection estimates each instance with a 3D polygon using a center and\nradial distance predictions, which avoids excessive size estimation of AABB. To\ncut the error propagation in the existing coarse-to-fine approaches, we\nvirtually migrate points based on the polygon, allowing all foreground points,\nincluding false negatives, to be refined. During inference, the proposal and\npoint migration modules run in parallel and are assembled to form binary masks\nof instances. We also introduce two margin-based losses for the point migration\nto enforce corrections for the false positives/negatives and cohesion of\nforeground points, significantly improving the performance.", + "During inference, the proposal and\npoint migration modules run in parallel and are assembled to form binary masks\nof instances. 
We also introduce two margin-based losses for the point migration\nto enforce corrections for the false positives/negatives and cohesion of\nforeground points, significantly improving the performance. Experimental\nresults from three datasets, such as ScanNetV2, S3DIS, and STPLS3D, show that\nour proposed method outperforms existing works, demonstrating the effectiveness\nof the new instance representation with spherical coordinates.", + "The task of face reenactment is to transfer the head motion and facial\nexpressions from a driving video to the appearance of a source image, which may\nbe of a different person (cross-reenactment). Most existing methods are\nCNN-based and estimate optical flow from the source image to the current\ndriving frame, which is then inpainted and refined to produce the output\nanimation. We propose a transformer-based encoder for computing a set-latent\nrepresentation of the source image(s). We then predict the output color of a\nquery pixel using a transformer-based decoder, which is conditioned with\nkeypoints and a facial expression vector extracted from the driving frame.\nLatent representations of the source person are learned in a self-supervised\nmanner that factorize their appearance, head pose, and facial expressions.\nThus, they are perfectly suited for cross-reenactment. In contrast to most\nrelated work, our method naturally extends to multiple source images and can\nthus adapt to person-specific facial dynamics. We also propose data\naugmentation and regularization schemes that are necessary to prevent\noverfitting and support generalizability of the learned representations. We\nevaluated our approach in a randomized user study.", + "We also propose data\naugmentation and regularization schemes that are necessary to prevent\noverfitting and support generalizability of the learned representations. We\nevaluated our approach in a randomized user study. The results indicate\nsuperior performance compared to the state-of-the-art in terms of motion\ntransfer quality and temporal consistency.", + "Multi-task learning for dense prediction has emerged as a pivotal area in\ncomputer vision, enabling simultaneous processing of diverse yet interrelated\npixel-wise prediction tasks. However, the substantial computational demands of\nstate-of-the-art (SoTA) models often limit their widespread deployment. This\npaper addresses this challenge by introducing network binarization to compress\nresource-intensive multi-task dense predictors. Specifically, our goal is to\nsignificantly accelerate multi-task dense prediction models via Binary Neural\nNetworks (BNNs) while maintaining and even improving model performance at the\nsame time. To reach this goal, we propose a Binary Multi-task Dense Predictor,\nBi-MTDP, and several variants of Bi-MTDP, in which a multi-task dense predictor\nis constructed via specified binarized modules. Our systematical analysis of\nthis predictor reveals that performance drop from binarization is primarily\ncaused by severe information degradation. To address this issue, we introduce a\ndeep information bottleneck layer that enforces representations for downstream\ntasks satisfying Gaussian distribution in forward propagation. Moreover, we\nintroduce a knowledge distillation mechanism to correct the direction of\ninformation flow in backward propagation.", + "To address this issue, we introduce a\ndeep information bottleneck layer that enforces representations for downstream\ntasks satisfying Gaussian distribution in forward propagation. 
Moreover, we\nintroduce a knowledge distillation mechanism to correct the direction of\ninformation flow in backward propagation. Intriguingly, one variant of Bi-MTDP\noutperforms full-precision (FP) multi-task dense prediction SoTAs, ARTC\n(CNN-based) and InvPT (ViT-based). This result indicates that Bi-MTDP is not\nmerely a naive trade-off between performance and efficiency, but is rather a\nbenefit of the redundant information flow thanks to the multi-task\narchitecture. Code is available at https://github.com/42Shawn/BiMTDP.", + "The value of roadside perception, which could extend the boundaries of\nautonomous driving and traffic management, has gradually become more prominent\nand acknowledged in recent years. However, existing roadside perception\napproaches only focus on the single-infrastructure sensor system, which cannot\nrealize a comprehensive understanding of a traffic area because of the limited\nsensing range and blind spots. Toward high-quality roadside perception, we\nneed Roadside Cooperative Perception (RCooper) to achieve practical\narea-coverage roadside perception for restricted traffic areas. RCooper has its\nown domain-specific challenges, but further exploration is hindered due to the\nlack of datasets. We hence release the first real-world, large-scale RCooper\ndataset to spur research on practical roadside cooperative perception,\nincluding detection and tracking. The manually annotated dataset comprises 50k\nimages and 30k point clouds, including two representative traffic scenes (i.e.,\nintersection and corridor). The constructed benchmarks prove the effectiveness\nof roadside cooperative perception and indicate directions for further\nresearch. Codes and dataset can be accessed at:\nhttps://github.com/AIR-THU/DAIR-RCooper.", + "Synthesizing natural human motions that enable a 3D human avatar to walk and\nreach for arbitrary goals in 3D space remains an unsolved problem with many\napplications. Existing methods (data-driven or using reinforcement learning)\nare limited in terms of generalization and motion naturalness. A primary\nobstacle is the scarcity of training data that combines locomotion with goal\nreaching. To address this, we introduce WANDR, a data-driven model that takes\nan avatar's initial pose and a goal's 3D position and generates natural human\nmotions that place the end effector (wrist) on the goal location. To solve\nthis, we introduce novel intention features that drive rich goal-oriented\nmovement. Intention guides the agent to the goal, and interactively adapts the\ngeneration to novel situations without needing to define sub-goals or the\nentire motion path. Crucially, intention allows training on datasets that have\ngoal-oriented motions as well as those that do not. WANDR is a conditional\nVariational Auto-Encoder (c-VAE), which we train using the AMASS and CIRCLE\ndatasets.", + "Crucially, intention allows training on datasets that have\ngoal-oriented motions as well as those that do not. WANDR is a conditional\nVariational Auto-Encoder (c-VAE), which we train using the AMASS and CIRCLE\ndatasets. We evaluate our method extensively and demonstrate its ability to\ngenerate natural and long-term motions that reach 3D goals and generalize to\nunseen goal locations.
Our models and code are available for research purposes\nat wandr.is.tue.mpg.de.", + "Structural model pruning is a prominent approach used for reducing the\ncomputational cost of Convolutional Neural Networks (CNNs) before their\ndeployment on resource-constrained devices. Yet, the majority of proposed ideas\nrequire a pretrained model before pruning, which is costly to secure. In this\npaper, we propose a novel structural pruning approach to jointly learn the\nweights and structurally prune architectures of CNN models. The core element of\nour method is a Reinforcement Learning (RL) agent whose actions determine the\npruning ratios of the CNN model's layers, and the resulting model's accuracy\nserves as its reward. We conduct the joint training and pruning by iteratively\ntraining the model's weights and the agent's policy, and we regularize the\nmodel's weights to align with the selected structure by the agent. The evolving\nmodel's weights result in a dynamic reward function for the agent, which\nprevents using prominent episodic RL methods with stationary environment\nassumption for our purpose. We address this challenge by designing a mechanism\nto model the complex changing dynamics of the reward function and provide a\nrepresentation of it to the RL agent.", + "We address this challenge by designing a mechanism\nto model the complex changing dynamics of the reward function and provide a\nrepresentation of it to the RL agent. To do so, we take a learnable embedding\nfor each training epoch and employ a recurrent model to calculate a\nrepresentation of the changing environment. We train the recurrent model and\nembeddings using a decoder model to reconstruct observed rewards. Such a design\nempowers our agent to effectively leverage episodic observations along with the\nenvironment representations to learn a proper policy to determine performant\nsub-networks of the CNN model. Our extensive experiments on CIFAR-10 and\nImageNet using ResNets and MobileNets demonstrate the effectiveness of our\nmethod.", + "In noisy label learning, estimating noisy class posteriors plays a\nfundamental role for developing consistent classifiers, as it forms the basis\nfor estimating clean class posteriors and the transition matrix. Existing\nmethods typically learn noisy class posteriors by training a classification\nmodel with noisy labels. However, when labels are incorrect, these models may\nbe misled to overemphasize the feature parts that do not reflect the instance\ncharacteristics, resulting in significant errors in estimating noisy class\nposteriors. To address this issue, this paper proposes to augment the\nsupervised information with part-level labels, encouraging the model to focus\non and integrate richer information from various parts. Specifically, our\nmethod first partitions features into distinct parts by cropping instances,\nyielding part-level labels associated with these various parts. Subsequently,\nwe introduce a novel single-to-multiple transition matrix to model the\nrelationship between the noisy and part-level labels, which incorporates\npart-level labels into a classifier-consistent framework. 
Utilizing this\nframework with part-level labels, we can learn the noisy class posteriors more\nprecisely by guiding the model to integrate information from various parts,\nultimately improving the classification performance.", + "Utilizing this\nframework with part-level labels, we can learn the noisy class posteriors more\nprecisely by guiding the model to integrate information from various parts,\nultimately improving the classification performance. Our method is\ntheoretically sound, while experiments show that it is empirically effective in\nsynthetic and real-world noisy benchmarks.", + "Vision-Language Models (VLMs) such as CLIP are trained on large amounts of\nimage-text pairs, resulting in remarkable generalization across several data\ndistributions. However, in several cases, their expensive training and data\ncollection/curation costs do not justify the end application. This motivates a\nvendor-client paradigm, where a vendor trains a large-scale VLM and grants only\ninput-output access to clients on a pay-per-query basis in a black-box setting.\nThe client aims to minimize inference cost by distilling the VLM to a student\nmodel using the limited available task-specific data, and further deploying\nthis student model in the downstream application. While naive distillation\nlargely improves the In-Domain (ID) accuracy of the student, it fails to\ntransfer the superior out-of-distribution (OOD) generalization of the VLM\nteacher using the limited available labeled images.", + "While naive distillation\nlargely improves the In-Domain (ID) accuracy of the student, it fails to\ntransfer the superior out-of-distribution (OOD) generalization of the VLM\nteacher using the limited available labeled images. To mitigate this, we\npropose Vision-Language to Vision - Align, Distill, Predict (VL2V-ADiP), which\nfirst aligns the vision and language modalities of the teacher model with the\nvision modality of a pre-trained student model, and further distills the\naligned VLM representations to the student. This maximally retains the\npre-trained features of the student, while also incorporating the rich\nrepresentations of the VLM image encoder and the superior generalization of the\ntext embeddings. The proposed approach achieves state-of-the-art results on the\nstandard Domain Generalization benchmarks in a black-box teacher setting as\nwell as a white-box setting where the weights of the VLM are accessible.", + "Pre-trained vision-language models have shown impressive success on various\ncomputer vision tasks with their zero-shot generalizability. Recently, prompt\nlearning approaches have been explored to efficiently and effectively adapt the\nvision-language models to a variety of downstream tasks. However, most existing\nprompt learning methods suffer from task overfitting since the general\nknowledge of the pre-trained vision language models is forgotten while the\nprompts are finetuned on a small data set from a specific target task. To\naddress this issue, we propose a Prompt Meta-Regularization (ProMetaR) to\nimprove the generalizability of prompt learning for vision-language models.\nSpecifically, ProMetaR meta-learns both the regularizer and the soft prompts to\nharness the task-specific knowledge from the downstream tasks and task-agnostic\ngeneral knowledge from the vision-language models. 
Further, ProMetaR augments\nthe task to generate multiple virtual tasks to alleviate meta-overfitting.\nIn addition, we provide an analysis of how ProMetaR improves the\ngeneralizability of prompt tuning from the perspective of gradient alignment.", + "Further, ProMetaR augments\nthe task to generate multiple virtual tasks to alleviate meta-overfitting.\nIn addition, we provide an analysis of how ProMetaR improves the\ngeneralizability of prompt tuning from the perspective of gradient alignment.\nOur extensive experiments demonstrate that our ProMetaR improves the\ngeneralizability of conventional prompt learning methods under\nbase-to-base/base-to-new and domain generalization settings. The code of\nProMetaR is available at https://github.com/mlvlab/ProMetaR.", + "Vision-Language Models (VLMs), such as CLIP, exhibit strong image-text\ncomprehension abilities, facilitating advances in several downstream tasks such\nas zero-shot image classification, image-text retrieval, and text-to-image\ngeneration. However, the compositional reasoning abilities of existing VLMs\nremain subpar. The root of this limitation lies in the inadequate alignment\nbetween the images and captions in the pretraining datasets. Additionally, the\ncurrent contrastive learning objective fails to focus on fine-grained grounding\ncomponents like relations, actions, and attributes, resulting in \"bag-of-words\"\nrepresentations. We introduce a simple and effective method to improve\ncompositional reasoning in VLMs. Our method better leverages available datasets\nby refining and expanding the standard image-text contrastive learning\nframework. Our approach does not require specific annotations and does not\nincur extra parameters. When integrated with CLIP, our technique yields notable\nimprovement over state-of-the-art baselines across five vision-language\ncompositional benchmarks. We open-source our code at\nhttps://github.com/lezhang7/Enhance-FineGrained.", + "While large language models (LLMs) excel in a simulated world of texts, they\nstruggle to interact with the more realistic world without perceptions of other\nmodalities such as visual or audio signals. Although vision-language models\n(VLMs) integrate LLM modules (1) aligned with static image features, and (2)\nmay possess prior knowledge of world dynamics (as demonstrated in the text\nworld), they have not been trained in an embodied visual world and thus cannot\nalign with its dynamics. On the other hand, training an embodied agent in a\nnoisy visual world without expert guidance is often challenging and\ninefficient. In this paper, we train a VLM agent living in a visual world using\nan LLM agent excelling in a parallel text world.
Specifically, we distill the LLM's\nreflection outcomes (actions improved by analyzing mistakes) on a text world's\ntasks to finetune the VLM on the same tasks in the visual world, resulting in\nan Embodied Multi-Modal Agent (EMMA) that quickly adapts to the visual world\ndynamics.", + "Such cross-modality imitation learning between the two parallel\nworlds is achieved by a novel DAgger-DPO algorithm, enabling EMMA to generalize\nto a broad scope of new tasks without any further guidance from the LLM expert.\nExtensive evaluations on the ALFWorld benchmark's diverse tasks highlight\nEMMA's superior performance over SOTA VLM-based agents, e.g., 20%-70% improvement\nin the success rate.", + "The booming use of text-to-image generative models has raised concerns about\ntheir high risk of producing copyright-infringing content. While probabilistic\ncopyright protection methods provide a probabilistic guarantee against such\ninfringement, in this paper, we introduce Virtually Assured Amplification\nAttack (VA3), a novel online attack framework that exposes the vulnerabilities\nof these protection mechanisms. The proposed framework significantly amplifies\nthe probability of generating infringing content under sustained interactions\nwith generative models, with a non-trivial lower bound on the success probability\nof each engagement. Our theoretical and experimental results demonstrate the\neffectiveness of our approach under various scenarios. These findings highlight\nthe potential risk of implementing probabilistic copyright protection in\npractical applications of text-to-image generative models. Code is available at\nhttps://github.com/South7X/VA3.", + "Denoising probabilistic diffusion models have shown breakthrough performance\nin generating more photo-realistic images and human-level illustrations than\nprior models such as GANs. This high image-generation capability has stimulated\nthe creation of many downstream applications in various areas. However, we find\nthat this technology is actually a double-edged sword: We identify a new type\nof attack, called the Natural Denoising Diffusion (NDD) attack, based on the\nfinding that state-of-the-art deep neural network (DNN) models still hold their\npredictions even if we intentionally remove, via text prompts, their robust\nfeatures, which are essential to the human visual system (HVS). The NDD\nattack shows a significantly high capability to generate low-cost,\nmodel-agnostic, and transferable adversarial attacks by exploiting the natural\nattack capability in diffusion models. To systematically evaluate the risk of\nthe NDD attack, we perform a large-scale empirical study with our newly created\ndataset, the Natural Denoising Diffusion Attack (NDDA) dataset. We evaluate the\nnatural attack capability by answering 6 research questions.", + "To systematically evaluate the risk of\nthe NDD attack, we perform a large-scale empirical study with our newly created\ndataset, the Natural Denoising Diffusion Attack (NDDA) dataset. We evaluate the\nnatural attack capability by answering 6 research questions. Through a user\nstudy, we find that it can achieve an 88% detection rate while being stealthy\nto 93% of human subjects; we also find that the non-robust features embedded by\ndiffusion models contribute to the natural attack capability. To confirm the\nmodel-agnostic and transferable attack capability, we perform the NDD attack\nagainst the Tesla Model 3 and find that 73% of the physically printed attacks\ncan be detected as stop signs.
Our hope is that the study and dataset can help\nour community become aware of the risks in diffusion models and facilitate further\nresearch toward robust DNN models.", + "Self-supervised 3D representation learning aims to learn effective\nrepresentations from large-scale unlabeled point clouds. Most existing\napproaches adopt point discrimination as the pretext task, which assigns\nmatched points in two distinct views as positive pairs and unmatched points as\nnegative pairs. However, this approach often results in semantically identical\npoints having dissimilar representations, leading to a high number of false\nnegatives and introducing a \"semantic conflict\" problem. To address this issue,\nwe propose GroupContrast, a novel approach that combines segment grouping and\nsemantic-aware contrastive learning. Segment grouping partitions points into\nsemantically meaningful regions, which enhances semantic coherence and provides\nsemantic guidance for the subsequent contrastive representation learning.\nSemantic-aware contrastive learning augments the semantic information extracted\nfrom segment grouping and helps to alleviate the issue of \"semantic conflict\".\nWe conducted extensive experiments on multiple 3D scene understanding tasks.\nThe results demonstrate that GroupContrast learns semantically meaningful\nrepresentations and achieves promising transfer learning performance.", + "The widespread adoption of face recognition has led to increasing privacy\nconcerns, as unauthorized access to face images can expose sensitive personal\ninformation. This paper explores face image protection against viewing and\nrecovery attacks. Inspired by image compression, we propose creating a visually\nuninformative face image through feature subtraction between an original face\nand its model-produced regeneration. Recognizable identity features within the\nimage are encouraged by co-training a recognition model on its high-dimensional\nfeature representation. To enhance privacy, the high-dimensional representation\nis crafted through random channel shuffling, resulting in randomized\nrecognizable images devoid of attacker-leverageable texture details. We distill\nour methodologies into a novel privacy-preserving face recognition method,\nMinusFace. Experiments demonstrate its high recognition accuracy and effective\nprivacy protection. Its code is available at https://github.com/Tencent/TFace.", + "The self-attention mechanism is the key component of the Transformer but is often\ncriticized for its computation demands. Previous token pruning works motivate their\nmethods from the view of computation redundancy but still need to load the full\nnetwork and incur the same memory costs. This paper introduces a novel strategy\nthat simplifies vision transformers and reduces computational load through the\nselective removal of non-essential attention layers, guided by entropy\nconsiderations. We observe that, for the attention layers in the bottom\nblocks, their subsequent MLP layers, i.e., the two feed-forward layers, can elicit\nthe same entropy quantity. Meanwhile, the accompanying MLPs are under-exploited\nsince they exhibit smaller feature entropy than the MLPs in the top\nblocks. Therefore, we propose to integrate the uninformative attention layers\ninto their subsequent counterparts by degenerating them into identity mappings,\nyielding only the MLP in certain transformer blocks.
Experimental results on\nImageNet-1k show that the proposed method can remove 40% of the attention layers of\nDeiT-B, improving throughput and the memory bound without compromising performance.\nCode is available at https://github.com/sihaoevery/lambda_vit.", + "As pretrained text-to-image diffusion models become increasingly powerful,\nrecent efforts have been made to distill knowledge from these text-to-image\npretrained models for optimizing a text-guided 3D model. Most of the existing\nmethods generate a holistic 3D model from a plain text input. This can be\nproblematic when the text describes a complex scene with multiple objects,\nbecause the vectorized text embeddings are inherently unable to capture a\ncomplex description with multiple entities and relationships. Holistic 3D\nmodeling of the entire scene further prevents accurate grounding of text\nentities and concepts. To address this limitation, we propose GraphDreamer, a\nnovel framework to generate compositional 3D scenes from scene graphs, where\nobjects are represented as nodes and their interactions as edges. By exploiting\nnode and edge information in scene graphs, our method makes better use of the\npretrained text-to-image diffusion model and is able to fully disentangle\ndifferent objects without image-level supervision. To facilitate modeling of\nobject-wise relationships, we use signed distance fields as representation and\nimpose a constraint to avoid inter-penetration of objects.", + "To facilitate modeling of\nobject-wise relationships, we use signed distance fields as representation and\nimpose a constraint to avoid inter-penetration of objects. To avoid manual\nscene graph creation, we design a text prompt for ChatGPT to generate scene\ngraphs based on text inputs. We conduct both qualitative and quantitative\nexperiments to validate the effectiveness of GraphDreamer in generating\nhigh-fidelity compositional 3D scenes with disentangled object entities.", + "Generative Zero-shot learning (ZSL) learns a generator to synthesize visual\nsamples for unseen classes, which is an effective way to advance ZSL. However,\nexisting generative methods rely on the conditions of Gaussian noise and the\npredefined semantic prototype, which limit the generator to being optimized only on\nspecific seen classes rather than characterizing each visual instance,\nresulting in poor generalization (\\textit{e.g.}, overfitting to seen classes).\nTo address this issue, we propose a novel Visual-Augmented Dynamic Semantic\nprototype method (termed VADS) that boosts the generator to learn an accurate\nsemantic-visual mapping by fully incorporating visual-augmented knowledge into the\nsemantic conditions. In detail, VADS consists of two modules: (1) a Visual-aware\nDomain Knowledge Learning module (VDKL) that learns the local bias and global prior\nof the visual features (referred to as domain visual knowledge), which replace\npure Gaussian noise to provide richer prior noise information; (2) a\nVision-Oriented Semantic Updation module (VOSU) that updates the semantic prototype\naccording to the visual representations of the samples.", + "Ultimately, we\nconcatenate their output as a dynamic semantic prototype, which serves as the\ncondition of the generator.
Extensive experiments demonstrate that our VADS\nachieves superior CZSL and GZSL performance on three prominent datasets and\noutperforms other state-of-the-art methods with average gains of 6.4\\%,\n5.9\\% and 4.2\\% on SUN, CUB and AWA2, respectively.", + "Text-to-image generative models, specifically those based on diffusion models\nlike Imagen and Stable Diffusion, have made substantial advancements. Recently,\nthere has been a surge of interest in the delicate refinement of text prompts.\nUsers assign weights or alter the injection time steps of certain words in the\ntext prompts to improve the quality of generated images. However, the success\nof fine-control prompts depends on the accuracy of the text prompts and the\ncareful selection of weights and time steps, which requires significant manual\nintervention. To address this, we introduce the \\textbf{P}rompt\n\\textbf{A}uto-\\textbf{E}diting (PAE) method. Besides refining the original\nprompts for image generation, we further employ an online reinforcement\nlearning strategy to explore the weights and injection time steps of each word,\nleading to dynamic fine-control prompts. The reward function during\ntraining encourages the model to consider aesthetic score, semantic\nconsistency, and user preferences. Experimental results demonstrate that our\nproposed method effectively improves the original prompts, generating visually\nmore appealing images while maintaining semantic alignment. Code is available\nat https://github.com/Mowenyii/PAE.", + "Portable 360$^\\circ$ cameras are becoming a cheap and efficient tool to\nestablish large visual databases. By capturing omnidirectional views of a\nscene, these cameras could expedite building environment models that are\nessential for visual localization. However, such an advantage is often\noverlooked due to the lack of valuable datasets. This paper introduces a new\nbenchmark dataset, 360Loc, composed of 360$^\\circ$ images with ground truth\nposes for visual localization. We present a practical implementation of\n360$^\\circ$ mapping combining 360$^\\circ$ images with lidar data to generate\nthe ground truth 6DoF poses. 360Loc is the first dataset and benchmark that\nexplores the challenge of cross-device visual positioning, involving\n360$^\\circ$ reference frames and query frames from pinhole, ultra-wide FoV\nfisheye, and 360$^\\circ$ cameras. We propose a virtual camera approach to\ngenerate lower-FoV query frames from 360$^\\circ$ images, which ensures a fair\ncomparison of performance among different query types in visual localization\ntasks.", + "We propose a virtual camera approach to\ngenerate lower-FoV query frames from 360$^\\circ$ images, which ensures a fair\ncomparison of performance among different query types in visual localization\ntasks. We also extend this virtual camera approach to feature matching-based\nand pose regression-based methods to alleviate the performance loss caused by\nthe cross-device domain gap, and evaluate its effectiveness against\nstate-of-the-art baselines. We demonstrate that omnidirectional visual\nlocalization is more robust in challenging large-scale scenes with symmetries\nand repetitive structures. These results provide new insights into 360-camera\nmapping and omnidirectional visual localization with cross-device queries.", + "Zero-shot 3D point cloud understanding can be achieved via 2D Vision-Language\nModels (VLMs).
Existing strategies directly map Vision-Language Models from 2D\npixels of rendered or captured views to 3D points, overlooking the inherent and\nexpressible point cloud geometric structure. Geometrically similar or close\nregions can be exploited for bolstering point cloud understanding as they are\nlikely to share semantic information. To this end, we introduce the first\ntraining-free aggregation technique that leverages the point cloud's 3D\ngeometric structure to improve the quality of the transferred Vision-Language\nModels. Our approach operates iteratively, performing local-to-global\naggregation based on geometric and semantic point-level reasoning. We benchmark\nour approach on three downstream tasks, including classification, part\nsegmentation, and semantic segmentation, with a variety of datasets\nrepresenting both synthetic/real-world and indoor/outdoor scenarios. Our\napproach achieves new state-of-the-art results in all benchmarks. Code and dataset are available at\nhttps://luigiriz.github.io/geoze-website/", + "Images suffer from heavy spatial redundancy because pixels in neighboring\nregions are spatially correlated. Existing approaches strive to overcome this\nlimitation by reducing less meaningful image regions. However, current leading\nmethods rely on supervisory signals. They may compel models to preserve content\nthat aligns with labeled categories and discard content belonging to unlabeled\ncategories. This categorical inductive bias makes these methods less effective\nin real-world scenarios. To address this issue, we propose a self-supervised\nframework for image redundancy reduction called Learning to Rank Patches\n(LTRP). We observe that the image reconstruction of masked image modeling models is\nsensitive to the removal of visible patches when the masking ratio is high\n(e.g., 90\\%). Building upon this, we implement LTRP via two steps: inferring the\nsemantic density score of each patch by quantifying the variation between\nreconstructions with and without this patch, and learning to rank the patches\nwith the pseudo score. The entire process is self-supervised, thus avoiding\nthe dilemma of categorical inductive bias. We design extensive experiments\non different datasets and tasks.", + "The entire process is self-supervised, thus avoiding\nthe dilemma of categorical inductive bias. We design extensive experiments\non different datasets and tasks. The results demonstrate that LTRP outperforms\nboth supervised and other self-supervised methods due to the fair assessment of\nimage content.", + "Detecting human-object interaction (HOI) has long been limited by the amount\nof supervised data available. Recent approaches address this issue by\npre-training according to pseudo-labels, which align object regions with HOI\ntriplets parsed from image captions. However, pseudo-labeling is tricky and\nnoisy, making HOI pre-training a complex process. Therefore, we propose an\nefficient disentangled pre-training method for HOI detection (DP-HOI) to\naddress this problem. First, DP-HOI utilizes object detection and action\nrecognition datasets to pre-train the detection and interaction decoder layers,\nrespectively. Then, we arrange these decoder layers so that the pre-training\narchitecture is consistent with the downstream HOI detection task. This\nfacilitates efficient knowledge transfer.
Specifically, the detection decoder\nidentifies reliable human instances in each action recognition dataset image,\ngenerates one corresponding query, and feeds it into the interaction decoder\nfor verb classification. Next, we combine the human instance verb predictions\nin the same image and impose image-level supervision. The DP-HOI structure can\nbe easily adapted to the HOI detection task, enabling effective model parameter\ninitialization.", + "Next, we combine the human instance verb predictions\nin the same image and impose image-level supervision. The DP-HOI structure can\nbe easily adapted to the HOI detection task, enabling effective model parameter\ninitialization. Therefore, it significantly enhances the performance of\nexisting HOI detection models on a broad range of rare categories. The code and\npre-trained weight are available at https://github.com/xingaoli/DP-HOI.", + "Vision-centric perception systems for autonomous driving have gained\nconsiderable attention recently due to their cost-effectiveness and\nscalability, especially compared to LiDAR-based systems. However, these systems\noften struggle in low-light conditions, potentially compromising their\nperformance and safety. To address this, our paper introduces LightDiff, a\ndomain-tailored framework designed to enhance the low-light image quality for\nautonomous driving applications. Specifically, we employ a multi-condition\ncontrolled diffusion model. LightDiff works without any human-collected paired\ndata, leveraging a dynamic data degradation process instead. It incorporates a\nnovel multi-condition adapter that adaptively controls the input weights from\ndifferent modalities, including depth maps, RGB images, and text captions, to\neffectively illuminate dark scenes while maintaining context consistency.\nFurthermore, to align the enhanced images with the detection model's knowledge,\nLightDiff employs perception-specific scores as rewards to guide the diffusion\ntraining process through reinforcement learning. Extensive experiments on the\nnuScenes datasets demonstrate that LightDiff can significantly improve the\nperformance of several state-of-the-art 3D detectors in night-time conditions\nwhile achieving high visual quality scores, highlighting its potential to\nsafeguard autonomous driving.", + "Text-to-image diffusion models allow seamless generation of personalized\nimages from scant reference photos. Yet, these tools, in the wrong hands, can\nfabricate misleading or harmful content, endangering individuals. To address\nthis problem, existing poisoning-based approaches perturb user images in an\nimperceptible way to render them \"unlearnable\" from malicious uses. We identify\ntwo limitations of these defending approaches: i) sub-optimal due to the\nhand-crafted heuristics for solving the intractable bilevel optimization and\nii) lack of robustness against simple data transformations like Gaussian\nfiltering. To solve these challenges, we propose MetaCloak, which solves the\nbi-level poisoning problem with a meta-learning framework with an additional\ntransformation sampling process to craft transferable and robust perturbation.\nSpecifically, we employ a pool of surrogate diffusion models to craft\ntransferable and model-agnostic perturbation. 
Furthermore, by incorporating an\nadditional transformation process, we design a simple denoising-error\nmaximization loss that is sufficient for causing transformation-robust semantic\ndistortion and degradation in a personalized generation.", + "Furthermore, by incorporating an\nadditional transformation process, we design a simple denoising-error\nmaximization loss that is sufficient for causing transformation-robust semantic\ndistortion and degradation in a personalized generation. Extensive experiments\non the VGGFace2 and CelebA-HQ datasets show that MetaCloak outperforms existing\napproaches. Notably, MetaCloak can successfully fool online training services\nlike Replicate, in a black-box manner, demonstrating the effectiveness of\nMetaCloak in real-world scenarios. Our code is available at\nhttps://github.com/liuyixin-louis/MetaCloak.", + "We propose a self-supervised approach for learning physics-based subspaces\nfor real-time simulation. Existing learning-based methods construct subspaces\nby approximating pre-defined simulation data in a purely geometric way.\nHowever, this approach tends to produce high-energy configurations, leads to\nentangled latent space dimensions, and generalizes poorly beyond the training\nset. To overcome these limitations, we propose a self-supervised approach that\ndirectly minimizes the system's mechanical energy during training. We show that\nour method leads to learned subspaces that reflect physical equilibrium\nconstraints, resolve overfitting issues of previous methods, and offer\ninterpretable latent space parameters.", + "Neural fields (NeFs) have recently emerged as a versatile method for modeling\nsignals of various modalities, including images, shapes, and scenes.\nSubsequently, a number of works have explored the use of NeFs as\nrepresentations for downstream tasks, e.g. classifying an image based on the\nparameters of a NeF that has been fit to it. However, the impact of the NeF\nhyperparameters on their quality as downstream representation is scarcely\nunderstood and remains largely unexplored. This is in part caused by the large\namount of time required to fit datasets of neural fields.\n In this work, we propose a JAX-based library that leverages parallelization\nto enable fast optimization of large-scale NeF datasets, resulting in a\nsignificant speed-up. With this library, we perform a comprehensive study that\ninvestigates the effects of different hyperparameters on fitting NeFs for\ndownstream tasks. In particular, we explore the use of a shared initialization,\nthe effects of overtraining, and the expressiveness of the network\narchitectures used. Our study provides valuable insights on how to train NeFs\nand offers guidance for optimizing their effectiveness in downstream\napplications.", + "In particular, we explore the use of a shared initialization,\nthe effects of overtraining, and the expressiveness of the network\narchitectures used. Our study provides valuable insights on how to train NeFs\nand offers guidance for optimizing their effectiveness in downstream\napplications. Finally, based on the proposed library and our analysis, we\npropose Neural Field Arena, a benchmark consisting of neural field variants of\npopular vision datasets, including MNIST, CIFAR, variants of ImageNet, and\nShapeNetv2. 
Our library and the Neural Field Arena will be open-sourced to\nintroduce standardized benchmarking and promote further research on neural\nfields.", + "Multiple Object Tracking (MOT) is a critical area within computer vision,\nwith a broad spectrum of practical implementations. Current research has\nprimarily focused on the development of tracking algorithms and enhancement of\npost-processing techniques. Yet, there has been a lack of thorough examination\nconcerning the nature of tracking data itself. In this study, we pioneer an\nexploration into the distribution patterns of tracking data and identify a\npronounced long-tail distribution issue within existing MOT datasets. We note a\nsignificant imbalance in the distribution of trajectory lengths across\ndifferent pedestrians, a phenomenon we refer to as ``pedestrians trajectory\nlong-tail distribution''. Addressing this challenge, we introduce a bespoke\nstrategy designed to mitigate the effects of this skewed distribution.\nSpecifically, we propose two data augmentation strategies, Stationary\nCamera View Data Augmentation (SVA) and Dynamic Camera View Data Augmentation\n(DVA), designed for viewpoint states, and the Group Softmax (GS) module for\nRe-ID. SVA backtracks and predicts the pedestrian trajectories of tail\nclasses, and DVA uses a diffusion model to change the background of the\nscene.", + "SVA backtracks and predicts the pedestrian trajectories of tail\nclasses, and DVA uses a diffusion model to change the background of the\nscene. GS divides the pedestrians into unrelated groups and performs the softmax\noperation on each group individually. Our proposed strategies can be integrated\ninto numerous existing tracking systems, and extensive experimentation\nvalidates the efficacy of our method in reducing the influence of long-tail\ndistribution on multi-object tracking performance. The code is available at\nhttps://github.com/chen-si-jia/Trajectory-Long-tail-Distribution-for-MOT.", + "Information retrieval is an ever-evolving and crucial research domain. The\nsubstantial demand for high-quality human motion data, especially for online\nacquisition, has led to a surge in human motion research. Prior works have\nmainly concentrated on dual-modality learning, such as text and motion tasks,\nbut three-modality learning has been rarely explored. Intuitively, an\nadditional modality can enrich a model's application scenarios, and more\nimportantly, an adequate choice of the extra modality can also act as an\nintermediary and enhance the alignment between the other two disparate\nmodalities. In this work, we introduce LAVIMO (LAnguage-VIdeo-MOtion\nalignment), a novel framework for three-modality learning integrating\nhuman-centric videos as an additional modality, thereby effectively bridging\nthe gap between text and motion. Moreover, our approach leverages a specially\ndesigned attention mechanism to foster enhanced alignment and synergistic\neffects among text, video, and motion modalities.", + "Moreover, our approach leverages a specially\ndesigned attention mechanism to foster enhanced alignment and synergistic\neffects among text, video, and motion modalities.
Empirically, our results on\nthe HumanML3D and KIT-ML datasets show that LAVIMO achieves state-of-the-art\nperformance in various motion-related cross-modal retrieval tasks, including\ntext-to-motion, motion-to-text, video-to-motion and motion-to-video.", + "State-of-the-art single-view 360-degree room layout reconstruction methods\nformulate the problem as a high-level 1D (per-column) regression task. On the\nother hand, traditional low-level 2D layout segmentation is simpler to learn\nand can represent occluded regions, but it requires complex post-processing for\nthe target layout polygon and sacrifices accuracy. We present Seg2Reg to\nrender 1D layout depth regression from the 2D segmentation map in a\ndifferentiable and occlusion-aware way, marrying the merits of both sides.\nSpecifically, our model predicts floor-plan density for the input\nequirectangular 360-degree image. Formulating the 2D layout representation as a\ndensity field enables us to employ `flattened' volume rendering to form 1D\nlayout depth regression. In addition, we propose a novel 3D warping\naugmentation on layouts to improve generalization. Finally, we re-implement\nrecent room layout reconstruction methods into our codebase for benchmarking\nand explore modern backbones and training techniques to serve as strong\nbaselines. Our model significantly outperforms prior art. The code will be\nmade available upon publication.", + "Strong adversarial examples are crucial for evaluating and enhancing the\nrobustness of deep neural networks. However, the performance of popular attacks\nis usually sensitive, for instance, to minor image transformations, stemming\nfrom limited information -- typically only one input example, a handful of\nwhite-box source models, and undefined defense strategies. Hence, the crafted\nadversarial examples are prone to overfit the source model, which hampers their\ntransferability to unknown architectures. In this paper, we propose an approach\nnamed Multiple Asymptotically Normal Distribution Attacks (MultiANDA), which\nexplicitly characterizes adversarial perturbations from a learned distribution.\nSpecifically, we approximate the posterior distribution over the perturbations\nby taking advantage of the asymptotic normality property of stochastic gradient\nascent (SGA), and then employ the deep ensemble strategy as an effective proxy for\nBayesian marginalization in this process, aiming to estimate a mixture of\nGaussians that facilitates a more thorough exploration of the potential\noptimization space. The approximated posterior essentially describes the\nstationary distribution of SGA iterations, which captures the geometric\ninformation around the local optimum.", + "The approximated posterior essentially describes the\nstationary distribution of SGA iterations, which captures the geometric\ninformation around the local optimum. Thus, MultiANDA allows drawing an\nunlimited number of adversarial perturbations for each input and reliably\nmaintains transferability. Our proposed method outperforms ten\nstate-of-the-art black-box attacks on deep learning models with or without\ndefenses through extensive experiments on seven normally trained and seven\ndefense models.", + "Dataset pruning aims to construct a coreset capable of achieving performance\ncomparable to the original, full dataset.
Most existing dataset pruning methods\nrely on snapshot-based criteria to identify representative samples, often\nresulting in poor generalization across various pruning and cross-architecture\nscenarios. Recent studies have addressed this issue by expanding the scope of\ntraining dynamics considered, including factors such as forgetting events and\nprobability changes, typically using an averaging approach. However, these works\nstruggle to integrate a broader range of training dynamics without overlooking\nwell-generalized samples, which may not be sufficiently highlighted in an\naveraging manner. In this study, we propose a novel dataset pruning method\ntermed Temporal Dual-Depth Scoring (TDDS) to tackle this problem. TDDS\nutilizes a dual-depth strategy to achieve a balance between incorporating\nextensive training dynamics and identifying representative samples for dataset\npruning. In the first depth, we estimate the series of each sample's individual\ncontributions spanning the training progress, ensuring comprehensive\nintegration of training dynamics. In the second depth, we focus on the\nvariability of the sample-wise contributions identified in the first depth to\nhighlight well-generalized samples.", + "In the first depth, we estimate the series of each sample's individual\ncontributions spanning the training progress, ensuring comprehensive\nintegration of training dynamics. In the second depth, we focus on the\nvariability of the sample-wise contributions identified in the first depth to\nhighlight well-generalized samples. Extensive experiments conducted on the CIFAR\nand ImageNet datasets verify the superiority of TDDS over previous SOTA\nmethods. Specifically, on CIFAR-100, our method achieves 54.51% accuracy with\nonly 10% of the training data, surpassing random selection by 7.83% and other\ncomparison methods by at least 12.69%.", + "LiDAR semantic segmentation (LSS) is a critical task in autonomous driving\nand has achieved promising progress. However, prior LSS methods are\nconventionally investigated and evaluated on datasets within the same domain in\nclear weather. The robustness of LSS models in unseen scenes and all weather\nconditions is crucial for ensuring safety and reliability in real applications.\nTo this end, we propose UniMix, a universal method that enhances the\nadaptability and generalizability of LSS models. UniMix first leverages\nphysically valid adverse weather simulation to construct a Bridge Domain, which\nserves to bridge the domain gap between the clear weather scenes and the\nadverse weather scenes. Then, a Universal Mixing operator is defined with respect to\nspatial, intensity, and semantic distributions to create the intermediate\ndomain with mixed samples from the given domains. Integrating the proposed two\ntechniques into a teacher-student framework, UniMix efficiently mitigates the\ndomain gap and enables LSS models to learn weather-robust and domain-invariant\nrepresentations.", + "Integrating the proposed two\ntechniques into a teacher-student framework, UniMix efficiently mitigates the\ndomain gap and enables LSS models to learn weather-robust and domain-invariant\nrepresentations. We devote UniMix to two main setups: 1) unsupervised domain\nadaptation, adapting the model from the clear weather source domain to the\nadverse weather target domain; 2) domain generalization, learning a model that\ngeneralizes well to unseen scenes in adverse weather.
Extensive experiments\nvalidate the effectiveness of UniMix across different tasks and datasets, all\nachieving superior performance over state-of-the-art methods. The code will be\nreleased.", + "Composed Image Retrieval (CIR) is a task that retrieves images similar to a\nquery, based on a provided textual modification. Current techniques rely on\nsupervised learning for CIR models using labeled triplets of the reference\nimage, text, target image. These specific triplets are not as commonly\navailable as simple image-text pairs, limiting the widespread use of CIR and\nits scalability. On the other hand, zero-shot CIR can be relatively easily\ntrained with image-caption pairs without considering the image-to-image\nrelation, but this approach tends to yield lower accuracy. We propose a new\nsemi-supervised CIR approach where we search for a reference and its related\ntarget images in auxiliary data and learn our large language model-based Visual\nDelta Generator (VDG) to generate text describing the visual difference (i.e.,\nvisual delta) between the two. VDG, equipped with fluent language knowledge and\nbeing model agnostic, can generate pseudo triplets to boost the performance of\nCIR models. Our approach significantly improves the existing supervised\nlearning approaches and achieves state-of-the-art results on the CIR\nbenchmarks.", + "Concerns for the privacy of individuals captured in public imagery have led\nto privacy-preserving action recognition. Existing approaches often suffer from\nissues arising through obfuscation being applied globally and a lack of\ninterpretability. Global obfuscation hides privacy sensitive regions, but also\ncontextual regions important for action recognition. Lack of interpretability\nerodes trust in these new technologies. We highlight the limitations of current\nparadigms and propose a solution: Human selected privacy templates that yield\ninterpretability by design, an obfuscation scheme that selectively hides\nattributes and also induces temporal consistency, which is important in action\nrecognition. Our approach is architecture agnostic and directly modifies input\nimagery, while existing approaches generally require architecture training. Our\napproach offers more flexibility, as no retraining is required, and outperforms\nalternatives on three widely used datasets.", + "Existing super-resolution (SR) models primarily focus on restoring local\ntexture details, often neglecting the global semantic information within the\nscene. This oversight can lead to the omission of crucial semantic details or\nthe introduction of inaccurate textures during the recovery process. In our\nwork, we introduce the Cognitive Super-Resolution (CoSeR) framework, empowering\nSR models with the capacity to comprehend low-resolution images. We achieve\nthis by marrying image appearance and language understanding to generate a\ncognitive embedding, which not only activates prior information from large\ntext-to-image diffusion models but also facilitates the generation of\nhigh-quality reference images to optimize the SR process. To further improve\nimage fidelity, we propose a novel condition injection scheme called\n\"All-in-Attention\", consolidating all conditional information into a single\nmodule. Consequently, our method successfully restores semantically correct and\nphotorealistic details, demonstrating state-of-the-art performance across\nmultiple benchmarks. 
Code: https://github.com/VINHYU/CoSeR", + "Generalizable NeRF aims to synthesize novel views for unseen scenes. Common\npractices involve constructing variance-based cost volumes for geometry\nreconstruction and encoding 3D descriptors for decoding novel views. However,\nexisting methods show limited generalization ability in challenging conditions\ndue to inaccurate geometry, sub-optimal descriptors, and decoding strategies.\nWe address these issues point by point. First, we find the variance-based cost\nvolume exhibits failure patterns as the features of pixels corresponding to the\nsame point can be inconsistent across different views due to occlusions or\nreflections. We introduce an Adaptive Cost Aggregation (ACA) approach to\namplify the contribution of consistent pixel pairs and suppress inconsistent\nones. Unlike previous methods that solely fuse 2D features into descriptors,\nour approach introduces a Spatial-View Aggregator (SVA) to incorporate 3D\ncontext into descriptors through spatial and inter-view interaction. When\ndecoding the descriptors, we observe that the two existing decoding strategies excel\nin different areas and are complementary. A Consistency-Aware Fusion (CAF)\nstrategy is proposed to leverage the advantages of both.", + "When\ndecoding the descriptors, we observe that the two existing decoding strategies excel\nin different areas and are complementary. A Consistency-Aware Fusion (CAF)\nstrategy is proposed to leverage the advantages of both. We incorporate the\nabove ACA, SVA, and CAF into a coarse-to-fine framework, termed Geometry-aware\nReconstruction and Fusion-refined Rendering (GeFu). GeFu attains\nstate-of-the-art performance across multiple datasets. Code is available at\nhttps://github.com/TQTQliu/GeFu.", + "Inferring scene geometry from images via Structure from Motion is a\nlong-standing and fundamental problem in computer vision. While classical\napproaches and, more recently, depth map predictions only focus on the visible\nparts of a scene, the task of scene completion aims to reason about geometry\neven in occluded regions. With the popularity of neural radiance fields\n(NeRFs), implicit representations also became popular for scene completion by\npredicting so-called density fields. Unlike explicit approaches, e.g.,\nvoxel-based methods, density fields also allow for accurate depth prediction\nand novel-view synthesis via image-based rendering. In this work, we propose to\nfuse the scene reconstruction from multiple images and distill this knowledge\ninto a more accurate single-view scene reconstruction. To this end, we propose\nMulti-View Behind the Scenes (MVBTS) to fuse density fields from multiple posed\nimages, trained fully self-supervised only from image data. Using knowledge\ndistillation, we use MVBTS to train a single-view scene completion network,\ncalled KDBTS, via direct supervision. It achieves state-of-the-art performance on\noccupancy prediction, especially in occluded regions.", + "Prompt learning has emerged as a valuable technique in enhancing\nvision-language models (VLMs) such as CLIP for downstream tasks in specific\ndomains. Existing work mainly focuses on designing various learning forms of\nprompts, neglecting the potential of prompts as effective distillers for\nlearning from larger teacher models.
In this paper, we introduce an\nunsupervised domain prompt distillation framework, which aims to transfer the\nknowledge of a larger teacher model to a lightweight target model through\nprompt-driven imitation using unlabeled domain images. Specifically, our\nframework consists of two distinct stages. In the initial stage, we pre-train a\nlarge CLIP teacher model using domain (few-shot) labels. After pre-training, we\nleverage the unique decoupled-modality characteristics of CLIP by pre-computing\nand storing the text features as class vectors only once through the teacher\ntext encoder. In the subsequent stage, the stored class vectors are shared\nacross teacher and student image encoders for calculating the predicted logits.", + "In the subsequent stage, the stored class vectors are shared\nacross teacher and student image encoders for calculating the predicted logits.\nFurther, we align the logits of both the teacher and student models via KL\ndivergence, encouraging the student image encoder to generate similar\nprobability distributions to the teacher through the learnable prompts. The\nproposed prompt distillation process eliminates the reliance on labeled data,\nenabling the algorithm to leverage a vast amount of unlabeled images within the\ndomain. Finally, the well-trained student image encoders and pre-stored text\nfeatures (class vectors) are utilized for inference. To our best knowledge, we\nare the first to (1) perform unsupervised domain-specific prompt-driven\nknowledge distillation for CLIP, and (2) establish a practical pre-storing\nmechanism of text features as shared class vectors between teacher and student.\nExtensive experiments on 11 datasets demonstrate the effectiveness of our\nmethod.", + "Text-driven video generation witnesses rapid progress. However, merely using\ntext prompts is not enough to depict the desired subject appearance that\naccurately aligns with users' intents, especially for customized content\ncreation. In this paper, we study the task of video generation with image\nprompts, which provide more accurate and direct content control beyond the text\nprompts. Specifically, we propose a feed-forward framework VideoBooth, with two\ndedicated designs: 1) We propose to embed image prompts in a coarse-to-fine\nmanner. Coarse visual embeddings from image encoder provide high-level\nencodings of image prompts, while fine visual embeddings from the proposed\nattention injection module provide multi-scale and detailed encoding of image\nprompts. These two complementary embeddings can faithfully capture the desired\nappearance. 2) In the attention injection module at fine level, multi-scale\nimage prompts are fed into different cross-frame attention layers as additional\nkeys and values. This extra spatial information refines the details in the\nfirst frame and then it is propagated to the remaining frames, which maintains\ntemporal consistency.", + "2) In the attention injection module at fine level, multi-scale\nimage prompts are fed into different cross-frame attention layers as additional\nkeys and values. This extra spatial information refines the details in the\nfirst frame and then it is propagated to the remaining frames, which maintains\ntemporal consistency. Extensive experiments demonstrate that VideoBooth\nachieves state-of-the-art performance in generating customized high-quality\nvideos with subjects specified in image prompts. 
Notably, VideoBooth is a\ngeneralizable framework where a single model works for a wide range of image\nprompts in a feed-forward pass.", + "Numerous studies have demonstrated the susceptibility of deep neural networks\n(DNNs) to subtle adversarial perturbations, prompting the development of many\nadvanced adversarial defense methods aimed at mitigating adversarial attacks.\nCurrent defense strategies usually train DNNs for a specific adversarial attack\nmethod and can achieve good robustness in defense against this type of\nadversarial attack. Nevertheless, when evaluated against unfamiliar attack\nmodalities, such DNNs exhibit a pronounced deterioration in robustness.\nMeanwhile, there is a trade-off between the classification accuracy on clean\nexamples and on adversarial examples.\nMost defense methods often sacrifice the accuracy of clean examples in order to\nimprove the adversarial robustness of DNNs. To alleviate these problems and\nenhance the overall robust generalization of DNNs, we propose the Test-Time\nPixel-Level Adversarial Purification (TPAP) method. This approach is based on\nthe robust overfitting characteristic of DNNs to the fast gradient sign method\n(FGSM) on training and test datasets.", + "This approach is based on\nthe robust overfitting characteristic of DNNs to the fast gradient sign method\n(FGSM) on training and test datasets. It utilizes FGSM for adversarial\npurification, processing images to purify unknown adversarial perturbations\nfrom pixels at testing time in a \"counter changes with changelessness\" manner,\nthereby enhancing the defense capability of DNNs against various unknown\nadversarial attacks. Extensive experimental results show that our method can\neffectively improve the overall robust generalization of DNNs, notably surpassing\nprevious methods.", + "Large motion poses a critical challenge in the Video Frame Interpolation (VFI)\ntask. Existing methods are often constrained by limited receptive fields,\nresulting in sub-optimal performance when handling scenarios with large motion.\nIn this paper, we introduce a new pipeline for VFI, which can effectively\nintegrate global-level information to alleviate issues associated with large\nmotion. Specifically, we first estimate a pair of initial intermediate flows\nusing a high-resolution feature map for extracting local details. Then, we\nincorporate a sparse global matching branch to compensate for flow estimation,\nwhich consists of identifying flaws in initial flows and generating sparse flow\ncompensation with a global receptive field. Finally, we adaptively merge the\ninitial flow estimation with global flow compensation, yielding a more accurate\nintermediate flow. To evaluate the effectiveness of our method in handling\nlarge motion, we carefully curate a more challenging subset from commonly used\nbenchmarks. Our method demonstrates state-of-the-art performance on these\nVFI subsets with large motion.", + "We present SCULPT, a novel 3D generative model for clothed and textured 3D\nmeshes of humans. Specifically, we devise a deep neural network that learns to\nrepresent the geometry and appearance distribution of clothed human bodies.\nTraining such a model is challenging, as datasets of textured 3D meshes for\nhumans are limited in size and accessibility. Our key observation is that there\nexist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image\ndatasets of clothed humans, and that multiple appearances can be mapped to a single\ngeometry.
To effectively learn from the two data modalities, we propose an\nunpaired learning procedure for pose-dependent clothed and textured human\nmeshes. Specifically, we learn a pose-dependent geometry space from 3D scan\ndata. We represent this as per-vertex displacements w.r.t. the SMPL model.\nNext, we train a geometry-conditioned texture generator in an unsupervised way\nusing the 2D image data. We use intermediate activations of the learned\ngeometry model to condition our texture generator.", + "We represent this as per-vertex displacements w.r.t. the SMPL model.\nNext, we train a geometry-conditioned texture generator in an unsupervised way\nusing the 2D image data. We use intermediate activations of the learned\ngeometry model to condition our texture generator. To alleviate entanglement\nbetween pose and clothing type, and pose and clothing appearance, we condition\nboth the texture and geometry generators with attribute labels such as clothing\ntypes for the geometry, and clothing colors for the texture generator. We\nautomatically generate these conditioning labels for the 2D images using\nthe visual question answering model BLIP and CLIP. We validate our method on\nthe SCULPT dataset, and compare to state-of-the-art 3D generative models for\nclothed human bodies. Our code and data can be found at\nhttps://sculpt.is.tue.mpg.de.", + "Class-agnostic object counting aims to count all objects in an image with\nrespect to example boxes or class names, \\emph{a.k.a.} few-shot and zero-shot\ncounting. In this paper, we propose a generalized framework for both few-shot\nand zero-shot object counting based on detection. Our framework combines the\nadvantages of two foundation models without compromising their\nzero-shot capability: (\\textbf{i}) SAM to segment all possible objects as mask\nproposals, and (\\textbf{ii}) CLIP to classify proposals to obtain accurate\nobject counts. However, this strategy faces the obstacles of efficiency\noverhead and small, crowded objects that cannot be localized and\ndistinguished. To address these issues, our framework, termed PseCo, follows\nthree steps: point, segment, and count. Specifically, we first propose a\nclass-agnostic object localization to provide accurate yet minimal point prompts\nfor SAM, which consequently not only reduces computation costs but also avoids\nmissing small objects. Furthermore, we propose a generalized object\nclassification that leverages CLIP image/text embeddings as the classifier,\nfollowing a hierarchical knowledge distillation to obtain discriminative\nclassifications among hierarchical mask proposals.", + "Furthermore, we propose a generalized object\nclassification that leverages CLIP image/text embeddings as the classifier,\nfollowing a hierarchical knowledge distillation to obtain discriminative\nclassifications among hierarchical mask proposals. Extensive experimental\nresults on FSC-147, COCO, and LVIS demonstrate that PseCo achieves\nstate-of-the-art performance in both few-shot/zero-shot object\ncounting/detection. Code: https://github.com/Hzzone/PseCo", + "Conventional Unsupervised Domain Adaptation (UDA) strives to minimize\ndistribution discrepancy between domains, which neglects to harness rich\nsemantics from data and struggles to handle complex domain shifts. A promising\ntechnique is to leverage the knowledge of large-scale pre-trained\nvision-language models for more guided adaptation.
Despite some endeavors,\ncurrent methods often learn textual prompts to embed domain semantics for\nsource and target domains separately and perform classification within each\ndomain, limiting cross-domain knowledge transfer. Moreover, prompting only the\nlanguage branch lacks flexibility to adapt both modalities dynamically. To\nbridge this gap, we propose Domain-Agnostic Mutual Prompting (DAMP) to exploit\ndomain-invariant semantics by mutually aligning visual and textual embeddings.\nSpecifically, the image contextual information is utilized to prompt the\nlanguage branch in a domain-agnostic and instance-conditioned way. Meanwhile,\nvisual prompts are imposed based on the domain-agnostic textual prompt to\nelicit domain-invariant visual embeddings. These two branches of prompts are\nlearned mutually with a cross-attention module and regularized with a\nsemantic-consistency loss and an instance-discrimination contrastive loss.", + "Meanwhile,\nvisual prompts are imposed based on the domain-agnostic textual prompt to\nelicit domain-invariant visual embeddings. These two branches of prompts are\nlearned mutually with a cross-attention module and regularized with a\nsemantic-consistency loss and an instance-discrimination contrastive loss.\nExperiments on three UDA benchmarks demonstrate the superiority of DAMP over\nstate-of-the-art approaches.", + "Recent temporal LiDAR-based 3D object detectors achieve promising performance\nbased on the two-stage proposal-based approach. They generate 3D box candidates\nfrom the first-stage dense detector, followed by different temporal aggregation\nmethods. However, these approaches require per-frame objects or whole point\nclouds, posing challenges related to memory bank utilization. Moreover, point\nclouds and trajectory features are combined solely based on concatenation,\nwhich may neglect effective interactions between them. In this paper, we\npropose a point-trajectory transformer with long short-term memory for\nefficient temporal 3D object detection. To this end, we only utilize point\nclouds of current-frame objects and their historical trajectories as input to\nminimize the memory bank storage requirement. Furthermore, we introduce modules\nto encode trajectory features, focusing on long short-term and future-aware\nperspectives, and then effectively aggregate them with point cloud features. We\nconduct extensive experiments on the large-scale Waymo dataset to demonstrate\nthat our approach performs well against state-of-the-art methods. Code and\nmodels will be made publicly available at https://github.com/kuanchihhuang/PTT.", + "We investigate whether region-based representations are effective for\nrecognition. Regions were once a mainstay in recognition approaches, but pixel\nand patch-based features are now used almost exclusively. We show that recent\nclass-agnostic segmenters like SAM can be effectively combined with strong\nunsupervised representations like DINOv2 and used for a wide variety of tasks,\nincluding semantic segmentation, object-based image retrieval, and multi-image\nanalysis. Once the masks and features are extracted, these representations,\neven with linear decoders, enable competitive performance, making them well\nsuited to applications that require custom queries. 
The compactness of the\nrepresentation also makes it well-suited to video analysis and other problems\nrequiring inference across many images.", + "This paper presents GenH2R, a framework for learning generalizable\nvision-based human-to-robot (H2R) handover skills. The goal is to equip robots\nwith the ability to reliably receive objects with unseen geometry handed over\nby humans in various complex trajectories. We acquire such generalizability by\nlearning H2R handover at scale with a comprehensive solution including\nprocedural simulation assets creation, automated demonstration generation, and\neffective imitation learning. We leverage large-scale 3D model repositories,\ndexterous grasp generation methods, and curve-based 3D animation to create an\nH2R handover simulation environment named \\simabbns, surpassing the number of\nscenes in existing simulators by three orders of magnitude. We further\nintroduce a distillation-friendly demonstration generation method that\nautomatically generates a million high-quality demonstrations suitable for\nlearning. Finally, we present a 4D imitation learning method augmented by a\nfuture forecasting objective to distill demonstrations into a visuo-motor\nhandover policy. Experimental evaluations in both simulators and the real world\ndemonstrate significant improvements (at least +10\\% success rate) over\nbaselines in all cases.", + "Experimental evaluations in both simulators and the real world\ndemonstrate significant improvements (at least +10\\% success rate) over\nbaselines in all cases. The project page is https://GenH2R.github.io/.", + "Establishing dense anatomical correspondence across distinct imaging\nmodalities is a foundational yet challenging procedure for numerous medical\nimage analysis studies and image-guided radiotherapy. Existing multi-modality\nimage registration algorithms rely on statistical-based similarity measures or\nlocal structural image representations. However, the former is sensitive to\nlocally varying noise, while the latter is not discriminative enough to cope\nwith complex anatomical structures in multimodal scans, causing ambiguity in\ndetermining the anatomical correspondence across scans with different\nmodalities. In this paper, we propose a modality-agnostic structural\nrepresentation learning method, which leverages Deep Neighbourhood\nSelf-similarity (DNS) and anatomy-aware contrastive learning to learn\ndiscriminative and contrast-invariance deep structural image representations\n(DSIR) without the need for anatomical delineations or pre-aligned training\nimages. We evaluate our method on multiphase CT, abdomen MR-CT, and brain MR\nT1w-T2w registration. Comprehensive results demonstrate that our method is\nsuperior to the conventional local structural representation and\nstatistical-based similarity measures in terms of discriminability and\naccuracy.", + "Image-language models with prompt learning have shown remarkable advances in\nnumerous downstream vision tasks. Nevertheless, conventional prompt learning\nmethods overfit their training distribution and lose the generalization ability\non test distributions. To improve generalization across various distribution\nshifts, we propose any-shift prompting: a general probabilistic inference\nframework that considers the relationship between training and test\ndistributions during prompt learning. We explicitly connect training and test\ndistributions in the latent space by constructing training and test prompts in\na hierarchical architecture. 
Within this framework, the test prompt exploits\nthe distribution relationships to guide the generalization of the CLIP\nimage-language model from training to any test distribution. To effectively\nencode the distribution information and their relationships, we further\nintroduce a transformer inference network with a pseudo-shift training\nmechanism. The network generates the tailored test prompt with both training\nand test information in a feedforward pass, avoiding extra training costs at\ntest time. Extensive experiments on twenty-three datasets demonstrate the\neffectiveness of any-shift prompting on the generalization over various\ndistribution shifts.", + "We present InterHandGen, a novel framework that learns the generative prior\nof two-hand interaction. Sampling from our model yields plausible and diverse\ntwo-hand shapes in close interaction with or without an object. Our prior can\nbe incorporated into any optimization or learning methods to reduce ambiguity\nin an ill-posed setup. Our key observation is that directly modeling the joint\ndistribution of multiple instances imposes high learning complexity due to its\ncombinatorial nature. Thus, we propose to decompose the modeling of joint\ndistribution into the modeling of factored unconditional and conditional single\ninstance distribution. In particular, we introduce a diffusion model that\nlearns the single-hand distribution unconditional and conditional to another\nhand via conditioning dropout. For sampling, we combine anti-penetration and\nclassifier-free guidance to enable plausible generation. Furthermore, we\nestablish the rigorous evaluation protocol of two-hand synthesis, where our\nmethod significantly outperforms baseline generative models in terms of\nplausibility and diversity. We also demonstrate that our diffusion prior can\nboost the performance of two-hand reconstruction from monocular in-the-wild\nimages, achieving new state-of-the-art accuracy.", + "Creating high-quality and interactive virtual environments, such as games and\nsimulators, often involves complex and costly manual modeling processes. In\nthis paper, we present Video2Game, a novel approach that automatically converts\nvideos of real-world scenes into realistic and interactive game environments.\nAt the heart of our system are three core components:(i) a neural radiance\nfields (NeRF) module that effectively captures the geometry and visual\nappearance of the scene; (ii) a mesh module that distills the knowledge from\nNeRF for faster rendering; and (iii) a physics module that models the\ninteractions and physical dynamics among the objects. By following the\ncarefully designed pipeline, one can construct an interactable and actionable\ndigital replica of the real world. We benchmark our system on both indoor and\nlarge-scale outdoor scenes. We show that we can not only produce\nhighly-realistic renderings in real-time, but also build interactive games on\ntop.", + "Most diffusion models assume that the reverse process adheres to a Gaussian\ndistribution. However, this approximation has not been rigorously validated,\nespecially at singularities, where t=0 and t=1. Improperly dealing with such\nsingularities leads to an average brightness issue in applications, and limits\nthe generation of images with extreme brightness or darkness. We primarily\nfocus on tackling singularities from both theoretical and practical\nperspectives. 
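For context, the Gaussian assumption being questioned here is the standard ansatz for the reverse transition of a diffusion model, written below in generic DDPM-style notation (the paper's own parametrization may differ); the abstract examines how well this approximation behaves as t approaches the endpoints.

```latex
% Standard Gaussian ansatz for the reverse transition (generic notation,
% not necessarily the paper's own); the question is its validity near t=0 and t=1.
p_\theta\!\left(\mathbf{x}_{t-\Delta t} \mid \mathbf{x}_t\right)
  \approx \mathcal{N}\!\left(\mathbf{x}_{t-\Delta t};\;
  \mu_\theta(\mathbf{x}_t, t),\; \sigma_t^{2}\,\mathbf{I}\right)
```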
Initially, we establish the error bounds for the reverse process\napproximation, and showcase its Gaussian characteristics at singularity time\nsteps. Based on this theoretical insight, we confirm the singularity at t=1 is\nconditionally removable, while the singularity at t=0 is an inherent property. Upon these\nsignificant conclusions, we propose a novel plug-and-play method SingDiffusion\nto address the initial singular time step sampling, which not only effectively\nresolves the average brightness issue for a wide range of diffusion models\nwithout extra training efforts, but also enhances their generation capability\nin achieving notably lower FID scores.", + "We introduce MatSynth, a dataset of 4,000+ CC0 ultra-high resolution PBR\nmaterials. Materials are crucial components of virtual relightable assets,\ndefining the interaction of light at the surface of geometries. Given their\nimportance, significant research effort was dedicated to their representation,\ncreation and acquisition. However, in the past 6 years, most research in\nmaterial acquisition or generation relied either on the same unique dataset, or\non huge company-owned libraries of procedural materials. With this dataset we\npropose a significantly larger, more diverse, and higher resolution set of\nmaterials than previously publicly available. We carefully discuss the data\ncollection process and demonstrate the benefits of this dataset on material\nacquisition and generation applications. The complete data further contains\nmetadata with each material's origin, license, category, tags, creation method\nand, when available, descriptions and physical size, as well as 3M+ renderings\nof the augmented materials, in 1K, under various environment lightings. The\nMatSynth dataset is released through the project page at:\nhttps://www.gvecchio.com/matsynth.", + "Generative Adversarial Networks (GANs) significantly advanced image\ngeneration but their performance heavily depends on abundant training data. In\nscenarios with limited data, GANs often struggle with discriminator overfitting\nand unstable training. Batch Normalization (BN), despite being known for\nenhancing generalization and training stability, has rarely been used in the\ndiscriminator of Data-Efficient GANs. Our work addresses this gap by\nidentifying a critical flaw in BN: the tendency for gradient explosion during\nthe centering and scaling steps. To tackle this issue, we present CHAIN\n(lipsCHitz continuity constrAIned Normalization), which replaces the\nconventional centering step with zero-mean regularization and integrates a\nLipschitz continuity constraint in the scaling step. CHAIN further enhances GAN\ntraining by adaptively interpolating the normalized and unnormalized features,\neffectively avoiding discriminator overfitting. Our theoretical analyses firmly\nestablish CHAIN's effectiveness in reducing gradients in latent features and\nweights, improving stability and generalization in GAN training. Empirical\nevidence supports our theory.", + "Our theoretical analyses firmly\nestablish CHAIN's effectiveness in reducing gradients in latent features and\nweights, improving stability and generalization in GAN training. Empirical\nevidence supports our theory. CHAIN achieves state-of-the-art results in\ndata-limited scenarios on CIFAR-10/100, ImageNet, five low-shot and seven\nhigh-resolution few-shot image datasets.
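One way to read the normalization recipe named in the CHAIN abstract (dropping explicit centering in favour of a zero-mean penalty, bounding the scaling step as a stand-in for a Lipschitz constraint, and adaptively interpolating normalized with unnormalized features) is sketched below. The penalty weighting, the clamp value, and the sigmoid interpolation are illustrative guesses, not the published formulation.

```python
import torch
import torch.nn as nn

class ChainLikeNorm(nn.Module):
    """Schematic stand-in for a CHAIN-style discriminator normalization."""

    def __init__(self, eps: float = 1e-5, max_gain: float = 1.0):
        super().__init__()
        self.eps = eps
        self.max_gain = max_gain
        self.alpha = nn.Parameter(torch.zeros(1))  # learnable interpolation coefficient

    def forward(self, x: torch.Tensor):
        # x: (B, C, H, W) discriminator features.
        var = x.var(dim=(0, 2, 3), keepdim=True, unbiased=False)
        gain = (var + self.eps).rsqrt().clamp(max=self.max_gain)  # bounded scaling step
        x_scaled = x * gain                                       # note: no centering step
        a = torch.sigmoid(self.alpha)
        out = a * x_scaled + (1.0 - a) * x                        # adaptive interpolation
        zero_mean_penalty = x.mean(dim=(0, 2, 3)).pow(2).mean()   # add this term to the loss
        return out, zero_mean_penalty
```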
Code:\nhttps://github.com/MaxwellYaoNi/CHAIN", + "Existing tracking methods mainly focus on learning better target\nrepresentation or developing more robust prediction models to improve tracking\nperformance. While tracking performance has significantly improved, the target\nloss issue occurs frequently due to tracking failures, complete occlusion, or\nout-of-view situations. However, considerably less attention is paid to the\nself-recovery issue of tracking methods, which is crucial for practical\napplications. To this end, we propose a recoverable tracking framework,\nRTracker, that uses a tree-structured memory to dynamically associate a tracker\nand a detector to enable self-recovery ability. Specifically, we propose a\nPositive-Negative Tree-structured memory to chronologically store and maintain\npositive and negative target samples. Upon the PN tree memory, we develop\ncorresponding walking rules for determining the state of the target and define\na set of control flows to unite the tracker and the detector in different\ntracking scenarios. Our core idea is to use the support samples of positive and\nnegative target categories to establish a relative distance-based criterion for\na reliable assessment of target loss. The favorable performance in comparison\nagainst the state-of-the-art methods on numerous challenging benchmarks\ndemonstrates the effectiveness of the proposed algorithm.", + "Facial geometry and appearance capture have demonstrated tremendous success\nin 3D scanning real humans in studios. Recent works propose to democratize this\ntechnique while keeping the results high quality. However, they are still\ninconvenient for daily usage. In addition, they focus on an easier problem of\nonly capturing facial skin. This paper proposes a novel method for high-quality\nface capture, featuring an easy-to-use system and the capability to model the\ncomplete face with skin, mouth interior, hair, and eyes. We reconstruct facial\ngeometry and appearance from a single co-located smartphone flashlight sequence\ncaptured in a dim room where the flashlight is the dominant light source (e.g.\nrooms with curtains or at night). To model the complete face, we propose a\nnovel hybrid representation to effectively model both eyes and other facial\nregions, along with novel techniques to learn it from images. We apply a\ncombined lighting model to compactly represent real illuminations and exploit a\nmorphable face albedo model as a reflectance prior to disentangle diffuse and\nspecular. Experiments show that our method can capture high-quality 3D\nrelightable scans.", + "We propose a novel approach to video anomaly detection: we treat feature\nvectors extracted from videos as realizations of a random variable with a fixed\ndistribution and model this distribution with a neural network. This lets us\nestimate the likelihood of test videos and detect video anomalies by\nthresholding the likelihood estimates. We train our video anomaly detector\nusing a modification of denoising score matching, a method that injects\ntraining data with noise to facilitate modeling its distribution. To eliminate\nhyperparameter selection, we model the distribution of noisy video features\nacross a range of noise levels and introduce a regularizer that tends to align\nthe models for different levels of noise. At test time, we combine anomaly\nindications at multiple noise scales with a Gaussian mixture model. 
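As a rough illustration of the recipe above (denoising score matching at several noise levels, then combining the per-scale anomaly indications with a Gaussian mixture model), the snippet below scores feature vectors with a noise-conditioned denoiser and fits scikit-learn's GaussianMixture on the per-scale error vectors. The network, noise levels, and error measure are placeholders rather than the authors' exact choices, and the training loop is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

SIGMAS = [0.1, 0.3, 1.0]  # hypothetical noise scales

class Denoiser(nn.Module):
    """Tiny noise-conditioned MLP standing in for the shallow density model."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x, sigma):
        cond = torch.full((x.shape[0], 1), sigma, device=x.device)
        return self.net(torch.cat([x, cond], dim=1))

@torch.no_grad()
def per_scale_scores(model, feats):
    """One anomaly indication per noise scale: the denoising error of the features."""
    scores = []
    for sigma in SIGMAS:
        noisy = feats + sigma * torch.randn_like(feats)
        scores.append(((model(noisy, sigma) - feats) ** 2).mean(dim=1))
    return torch.stack(scores, dim=1).cpu().numpy()   # (N, num_scales)

model = Denoiser(dim=512)                             # training via score matching omitted
train_feats = torch.randn(1000, 512)                  # placeholder video features
gmm = GaussianMixture(n_components=4).fit(per_scale_scores(model, train_feats))

test_feats = torch.randn(8, 512)
log_lik = gmm.score_samples(per_scale_scores(model, test_feats))
anomalous = log_lik < np.percentile(log_lik, 10)      # threshold the likelihood estimates
```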
Running our\nvideo anomaly detector induces minimal delays as inference requires merely\nextracting the features and forward-propagating them through a shallow neural\nnetwork and a Gaussian mixture model. Our experiments on five popular video\nanomaly detection benchmarks demonstrate state-of-the-art performance, both in\nthe object-centric and in the frame-centric setup.", + "The landscape of deep learning research is moving towards innovative\nstrategies to harness the true potential of data. Traditionally, emphasis has\nbeen on scaling model architectures, resulting in large and complex neural\nnetworks, which can be difficult to train with limited computational resources.\nHowever, independently of the model size, data quality (i.e. amount and\nvariability) is still a major factor that affects model generalization. In this\nwork, we propose a novel technique to exploit available data through the use of\nautomatic data augmentation for the tasks of image classification and semantic\nsegmentation. We introduce the first Differentiable Augmentation Search method\n(DAS) to generate variations of images that can be processed as videos.\nCompared to previous approaches, DAS is extremely fast and flexible, allowing\nthe search on very large search spaces in less than a GPU day. Our intuition is\nthat the increased receptive field in the temporal dimension provided by DAS\ncould lead to benefits also to the spatial receptive field. More specifically,\nwe leverage DAS to guide the reshaping of the spatial receptive field by\nselecting task-dependant transformations.", + "Our intuition is\nthat the increased receptive field in the temporal dimension provided by DAS\ncould lead to benefits also to the spatial receptive field. More specifically,\nwe leverage DAS to guide the reshaping of the spatial receptive field by\nselecting task-dependant transformations. As a result, compared to standard\naugmentation alternatives, we improve in terms of accuracy on ImageNet,\nCifar10, Cifar100, Tiny-ImageNet, Pascal-VOC-2012 and CityScapes datasets when\nplugging-in our DAS over different light-weight video backbones.", + "Segment Anything Model (SAM) has achieved impressive performance in many\ncomputer vision tasks. However, as a large-scale model, the immense memory and\ncomputation costs hinder its practical deployment. In this paper, we propose a\npost-training quantization (PTQ) framework for Segment Anything Model, namely\nPTQ4SAM. First, we investigate the inherent bottleneck of SAM quantization\nattributed to the bimodal distribution in post-Key-Linear activations. We\nanalyze its characteristics from both per-tensor and per-channel perspectives,\nand propose a Bimodal Integration strategy, which utilizes a mathematically\nequivalent sign operation to transform the bimodal distribution into a\nrelatively easy-quantized normal distribution offline. Second, SAM encompasses\ndiverse attention mechanisms (i.e., self-attention and two-way\ncross-attention), resulting in substantial variations in the post-Softmax\ndistributions. Therefore, we introduce an Adaptive Granularity Quantization for\nSoftmax through searching the optimal power-of-two base, which is\nhardware-friendly.", + "Therefore, we introduce an Adaptive Granularity Quantization for\nSoftmax through searching the optimal power-of-two base, which is\nhardware-friendly. 
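The "searching the optimal power-of-two base" step for post-Softmax values can be pictured roughly as follows: try a few candidate bases, apply logarithmic quantization with each, and keep the base with the lowest reconstruction error. The candidate set, bit-width, and MSE criterion below are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def log_quantize(x: torch.Tensor, base: float, bits: int = 4) -> torch.Tensor:
    """Logarithmic quantization of values in (0, 1], e.g. post-Softmax activations."""
    levels = 2 ** bits - 1
    q = torch.round(-torch.log(x.clamp(min=1e-12)) / torch.log(torch.tensor(base)))
    q = q.clamp(0, levels)
    return torch.as_tensor(base) ** (-q)              # dequantized approximation

def search_base(softmax_vals: torch.Tensor, bits: int = 4) -> float:
    """Pick the base with the smallest quantization MSE (hypothetical candidate set)."""
    candidates = [2 ** 0.5, 2.0, 4.0, 8.0]
    errors = {b: torch.mean((log_quantize(softmax_vals, b, bits) - softmax_vals) ** 2)
              for b in candidates}
    return min(errors, key=errors.get)

attn = torch.softmax(torch.randn(2, 8, 64, 64), dim=-1)   # stand-in attention map
print("selected base:", search_base(attn))
```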
Extensive experimental results across various vision tasks\n(instance segmentation, semantic segmentation and object detection), datasets\nand model variants show the superiority of PTQ4SAM. For example, when\nquantizing SAM-L to 6-bit, we achieve lossless accuracy for instance\nsegmentation, about 0.5\\% drop with theoretical 3.9$\\times$ acceleration. The\ncode is available at \\url{https://github.com/chengtao-lv/PTQ4SAM}.", + "The remarkable success of Vision Transformers in Artificial Neural Networks\n(ANNs) has led to a growing interest in incorporating the self-attention\nmechanism and transformer-based architecture into Spiking Neural Networks\n(SNNs). While existing methods propose spiking self-attention mechanisms that\nare compatible with SNNs, they lack reasonable scaling methods, and the overall\narchitectures proposed by these methods suffer from a bottleneck in effectively\nextracting local features. To address these challenges, we propose a novel\nspiking self-attention mechanism named Dual Spike Self-Attention (DSSA) with a\nreasonable scaling method. Based on DSSA, we propose a novel spiking Vision\nTransformer architecture called SpikingResformer, which combines the\nResNet-based multi-stage architecture with our proposed DSSA to improve both\nperformance and energy efficiency while reducing parameters. Experimental\nresults show that SpikingResformer achieves higher accuracy with fewer\nparameters and lower energy consumption than other spiking Vision Transformer\ncounterparts. Notably, our SpikingResformer-L achieves 79.40% top-1 accuracy on\nImageNet with 4 time-steps, which is the state-of-the-art result in the SNN\nfield.", + "While recent Transformer-based approaches have shown impressive performances\non event-based object detection tasks, their high computational costs still\ndiminish the low power consumption advantage of event cameras. Image-based\nworks attempt to reduce these costs by introducing sparse Transformers.\nHowever, they display inadequate sparsity and adaptability when applied to\nevent-based object detection, since these approaches cannot balance the fine\ngranularity of token-level sparsification and the efficiency of window-based\nTransformers, leading to reduced performance and efficiency. Furthermore, they\nlack scene-specific sparsity optimization, resulting in information loss and a\nlower recall rate. To overcome these limitations, we propose the Scene Adaptive\nSparse Transformer (SAST). SAST enables window-token co-sparsification,\nsignificantly enhancing fault tolerance and reducing computational overhead.\nLeveraging the innovative scoring and selection modules, along with the Masked\nSparse Window Self-Attention, SAST showcases remarkable scene-aware\nadaptability: It focuses only on important objects and dynamically optimizes\nsparsity level according to scene complexity, maintaining a remarkable balance\nbetween performance and computational cost.", + "Leveraging the innovative scoring and selection modules, along with the Masked\nSparse Window Self-Attention, SAST showcases remarkable scene-aware\nadaptability: It focuses only on important objects and dynamically optimizes\nsparsity level according to scene complexity, maintaining a remarkable balance\nbetween performance and computational cost. The evaluation results show that\nSAST outperforms all other dense and sparse networks in both performance and\nefficiency on two large-scale event-based object detection datasets (1Mpx and\nGen1). 
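A toy version of the window-token co-sparsification described above might look like the following: score every token, drop windows whose aggregate score is low, and keep only the top-scoring tokens inside the surviving windows. The scoring function and keep-ratios are placeholders; the actual SAST modules are learned and scene-adaptive.

```python
import torch

def co_sparsify(tokens: torch.Tensor, scores: torch.Tensor,
                window_keep: float = 0.5, token_keep: float = 0.25):
    """tokens: (num_windows, tokens_per_window, dim); scores: (num_windows, tokens_per_window)."""
    W, T, D = tokens.shape
    # 1) Window-level selection: keep the windows with the highest aggregate score.
    win_scores = scores.mean(dim=1)
    keep_w = torch.topk(win_scores, max(1, int(W * window_keep))).indices
    tokens, scores = tokens[keep_w], scores[keep_w]
    # 2) Token-level selection inside the surviving windows.
    keep_t = torch.topk(scores, max(1, int(T * token_keep)), dim=1).indices
    sparse = torch.gather(tokens, 1, keep_t.unsqueeze(-1).expand(-1, -1, D))
    return sparse, keep_w, keep_t

x = torch.randn(16, 49, 256)               # 16 windows of 7x7 tokens (hypothetical)
s = x.norm(dim=-1)                         # placeholder importance scores
sparse_tokens, kept_windows, kept_tokens = co_sparsify(x, s)
print(sparse_tokens.shape)                 # e.g. torch.Size([8, 12, 256])
```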
Code: https://github.com/Peterande/SAST", + "Neural character models can now reconstruct detailed geometry and texture\nfrom video, but they lack explicit shadows and shading, leading to artifacts\nwhen generating novel views and poses or during relighting. It is particularly\ndifficult to include shadows as they are a global effect and the required\ncasting of secondary rays is costly. We propose a new shadow model using a\nGaussian density proxy that replaces sampling with a simple analytic formula.\nIt supports dynamic motion and is tailored for shadow computation, thereby\navoiding the affine projection approximation and sorting required by the\nclosely related Gaussian splatting. Combined with a deferred neural rendering\nmodel, our Gaussian shadows enable Lambertian shading and shadow casting with\nminimal overhead. We demonstrate improved reconstructions, with better\nseparation of albedo, shading, and shadows in challenging outdoor scenes with\ndirect sun light and hard shadows. Our method is able to optimize the light\ndirection without any input from the user. As a result, novel poses have fewer\nshadow artifacts and relighting in novel scenes is more realistic compared to\nthe state-of-the-art methods, providing new ways to pose neural characters in\nnovel environments, increasing their applicability.", + "To achieve greater accuracy, hypergraph matching algorithms require\nexponential increases in computational resources. Recent kd-tree-based\napproximate nearest neighbor (ANN) methods, despite the sparsity of their\ncompatibility tensor, still require exhaustive calculations for large-scale\ngraph matching. This work utilizes CUR tensor decomposition and introduces a\nnovel cascaded second and third-order hypergraph matching framework (CURSOR)\nfor efficient hypergraph matching. A CUR-based second-order graph matching\nalgorithm is used to provide a rough match, and then the core of CURSOR, a\nfiber-CUR-based tensor generation method, directly calculates entries of the\ncompatibility tensor by leveraging the initial second-order match result. This\nsignificantly decreases the time complexity and tensor density. A probability\nrelaxation labeling (PRL)-based matching algorithm, especially suitable for\nsparse tensors, is developed. Experiment results on large-scale synthetic\ndatasets and widely-adopted benchmark sets demonstrate the superiority of\nCURSOR over existing methods. The tensor generation method in CURSOR can be\nintegrated seamlessly into existing hypergraph matching methods to improve\ntheir performance and lower their computational costs.", + "We introduce a novel approach for adapting deep stereo networks in a\ncollaborative manner. By building over principles of federated learning, we\ndevelop a distributed framework allowing for demanding the optimization process\nto a number of clients deployed in different environments. This makes it\npossible, for a deep stereo network running on resourced-constrained devices,\nto capitalize on the adaptation process carried out by other instances of the\nsame architecture, and thus improve its accuracy in challenging environments\neven when it cannot carry out adaptation on its own. Experimental results show\nhow federated adaptation performs equivalently to on-device adaptation, and\neven better when dealing with challenging environments.", + "We introduce a novel sequential modeling approach which enables learning a\nLarge Vision Model (LVM) without making use of any linguistic data. 
To do this,\nwe define a common format, \"visual sentences\", in which we can represent raw\nimages and videos as well as annotated data sources such as semantic\nsegmentations and depth reconstructions without needing any meta-knowledge\nbeyond the pixels. Once this wide variety of visual data (comprising 420\nbillion tokens) is represented as sequences, the model can be trained to\nminimize a cross-entropy loss for next token prediction. By training across\nvarious scales of model architecture and data diversity, we provide empirical\nevidence that our models scale effectively. Many different vision tasks can be\nsolved by designing suitable visual prompts at test time.", + "Learning-based isosurface extraction methods have recently emerged as a\nrobust and efficient alternative to axiomatic techniques. However, the vast\nmajority of such approaches rely on supervised training with axiomatically\ncomputed ground truths, thus potentially inheriting biases and data artifacts\nof the corresponding axiomatic methods. Steering away from such dependencies,\nwe propose a self-supervised training scheme for the Neural Dual Contouring\nmeshing framework, resulting in our method: Self-Supervised Dual Contouring\n(SDC). Instead of optimizing predicted mesh vertices with supervised training,\nwe use two novel self-supervised loss functions that encourage the consistency\nbetween distances to the generated mesh up to the first order. Meshes\nreconstructed by SDC surpass existing data-driven methods in capturing\nintricate details while being more robust to possible irregularities in the\ninput. Furthermore, we use the same self-supervised training objective linking\ninferred mesh and input SDF, to regularize the training process of Deep\nImplicit Networks (DINs). We demonstrate that the resulting DINs produce\nhigher-quality implicit functions, ultimately leading to more accurate and\ndetail-preserving surfaces compared to prior baselines for different input\nmodalities.", + "We demonstrate that the resulting DINs produce\nhigher-quality implicit functions, ultimately leading to more accurate and\ndetail-preserving surfaces compared to prior baselines for different input\nmodalities. Finally, we demonstrate that our self-supervised losses improve\nmeshing performance in the single-view reconstruction task by enabling joint\ntraining of predicted SDF and resulting output mesh. We open-source our code at\nhttps://github.com/Sentient07/SDC", + "Generalized Referring Expression Segmentation (GRES) extends the scope of\nclassic RES to refer to multiple objects in one expression or identify the\nempty targets absent in the image. GRES poses challenges in modeling the\ncomplex spatial relationships of the instances in the image and identifying\nnon-existing referents. Multimodal Large Language Models (MLLMs) have recently\nshown tremendous progress in these complicated vision-language tasks.\nConnecting Large Language Models (LLMs) and vision models, MLLMs are proficient\nin understanding contexts with visual inputs. Among them, LISA, as a\nrepresentative, adopts a special [SEG] token to prompt a segmentation mask\ndecoder, e.g., SAM, to enable MLLMs in the RES task. However, existing\nsolutions to GRES remain unsatisfactory since current segmentation MLLMs cannot\ncorrectly handle the cases where users might reference multiple subjects in a\nsingular prompt or provide descriptions incongruent with any image target. 
In\nthis paper, we propose Generalized Segmentation Vision Assistant (GSVA) to\naddress this gap.", + "In\nthis paper, we propose Generalized Segmentation Vision Assistant (GSVA) to\naddress this gap. Specifically, GSVA reuses the [SEG] token to prompt the\nsegmentation model towards supporting multiple mask references simultaneously\nand innovatively learns to generate a [REJ] token to reject the null targets\nexplicitly. Experiments validate GSVA's efficacy in resolving the GRES issue,\nmarking a notable enhancement and setting a new record on the GRES benchmark\ngRefCOCO dataset. GSVA also proves effective across various classic referring\nsegmentation and comprehension tasks.", + "Although image super-resolution (SR) problem has experienced unprecedented\nrestoration accuracy with deep neural networks, it has yet limited versatile\napplications due to the substantial computational costs. Since different input\nimages for SR face different restoration difficulties, adapting computational\ncosts based on the input image, referred to as adaptive inference, has emerged\nas a promising solution to compress SR networks. Specifically, adapting the\nquantization bit-widths has successfully reduced the inference and memory cost\nwithout sacrificing the accuracy. However, despite the benefits of the\nresultant adaptive network, existing works rely on time-intensive\nquantization-aware training with full access to the original training pairs to\nlearn the appropriate bit allocation policies, which limits its ubiquitous\nusage. To this end, we introduce the first on-the-fly adaptive quantization\nframework that accelerates the processing time from hours to seconds. We\nformulate the bit allocation problem with only two bit mapping modules: one to\nmap the input image to the image-wise bit adaptation factor and one to obtain\nthe layer-wise adaptation factors. These bit mappings are calibrated and\nfine-tuned using only a small number of calibration images.", + "We\nformulate the bit allocation problem with only two bit mapping modules: one to\nmap the input image to the image-wise bit adaptation factor and one to obtain\nthe layer-wise adaptation factors. These bit mappings are calibrated and\nfine-tuned using only a small number of calibration images. We achieve\ncompetitive performance with the previous adaptive quantization methods, while\nthe processing time is accelerated by x2000. Codes are available at\nhttps://github.com/Cheeun/AdaBM.", + "Recently, text-guided scalable vector graphics (SVGs) synthesis has shown\npromise in domains such as iconography and sketch. However, existing\ntext-to-SVG generation methods lack editability and struggle with visual\nquality and result diversity. To address these limitations, we propose a novel\ntext-guided vector graphics synthesis method called SVGDreamer. SVGDreamer\nincorporates a semantic-driven image vectorization (SIVE) process that enables\nthe decomposition of synthesis into foreground objects and background, thereby\nenhancing editability. Specifically, the SIVE process introduces\nattention-based primitive control and an attention-mask loss function for\neffective control and manipulation of individual elements. Additionally, we\npropose a Vectorized Particle-based Score Distillation (VPSD) approach to\naddress issues of shape over-smoothing, color over-saturation, limited\ndiversity, and slow convergence of the existing text-to-SVG generation methods\nby modeling SVGs as distributions of control points and colors. 
Furthermore,\nVPSD leverages a reward model to re-weight vector particles, which improves\naesthetic appeal and accelerates convergence.", + "Furthermore,\nVPSD leverages a reward model to re-weight vector particles, which improves\naesthetic appeal and accelerates convergence. Extensive experiments are\nconducted to validate the effectiveness of SVGDreamer, demonstrating its\nsuperiority over baseline methods in terms of editability, visual quality, and\ndiversity. Project page:\n\\href{https://ximinng.github.io/SVGDreamer-project/}{https://ximinng.github.io/SVGDreamer-project/}", + "Large multimodal models (LMM) have recently shown encouraging progress with\nvisual instruction tuning. In this note, we show that the fully-connected\nvision-language cross-modal connector in LLaVA is surprisingly powerful and\ndata-efficient. With simple modifications to LLaVA, namely, using\nCLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA\ndata with simple response formatting prompts, we establish stronger baselines\nthat achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint\nuses merely 1.2M publicly available data, and finishes full training in ~1 day\non a single 8-A100 node. We hope this can make state-of-the-art LMM research\nmore accessible. Code and model will be publicly available.", + "Diffusion models have demonstrated exceptional efficacy in various generative\napplications. While existing models focus on minimizing a weighted sum of\ndenoising score matching losses for data distribution modeling, their training\nprimarily emphasizes instance-level optimization, overlooking valuable\nstructural information within each mini-batch, indicative of pair-wise\nrelationships among samples. To address this limitation, we introduce\nStructure-guided Adversarial training of Diffusion Models (SADM). In this\npioneering approach, we compel the model to learn manifold structures between\nsamples in each training batch. To ensure the model captures authentic manifold\nstructures in the data distribution, we advocate adversarial training of the\ndiffusion generator against a novel structure discriminator in a minimax game,\ndistinguishing real manifold structures from the generated ones. SADM\nsubstantially improves existing diffusion transformers (DiT) and outperforms\nexisting methods in image generation and cross-domain fine-tuning tasks across\n12 datasets, establishing a new state-of-the-art FID of 1.58 and 2.11 on\nImageNet for class-conditional image generation at resolutions of 256x256 and\n512x512, respectively.", + "We address the problem of generating realistic 3D motions of humans\ninteracting with objects in a scene. Our key idea is to create a neural\ninteraction field attached to a specific object, which outputs the distance to\nthe valid interaction manifold given a human pose as input. This interaction\nfield guides the sampling of an object-conditioned human motion diffusion\nmodel, so as to encourage plausible contacts and affordance semantics. To\nsupport interactions with scarcely available data, we propose an automated\nsynthetic data pipeline. For this, we seed a pre-trained motion model, which\nhas priors for the basics of human movement, with interaction-specific anchor\nposes extracted from limited motion capture data. 
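The guidance idea behind NIFTY, a learned field that returns a pose's distance to the valid interaction manifold and steers the diffusion sampler toward low-distance poses, can be sketched as a classifier-guidance-style update. The guidance weight, the way the field is queried, and the stand-in callables below are illustrative, not the paper's exact sampler.

```python
import torch

def guided_denoise_step(denoiser, interaction_field, x_t, t, guidance_scale=2.0):
    """One reverse-diffusion step nudged by the gradient of an interaction field.

    denoiser(x_t, t)        -> predicted clean pose (placeholder interface)
    interaction_field(pose) -> scalar distance to the valid interaction manifold
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)
    dist = interaction_field(x0_pred).sum()
    grad = torch.autograd.grad(dist, x_t)[0]
    # Move the sample against the distance gradient, i.e. toward valid contacts.
    return (x0_pred - guidance_scale * grad).detach()

# Toy usage with stand-in callables:
denoiser = lambda x, t: x * 0.9                   # pretend denoiser
field = lambda pose: (pose ** 2).sum(dim=-1)      # pretend distance field
x = torch.randn(4, 63)                            # e.g. 21 joints x 3 (hypothetical)
x = guided_denoise_step(denoiser, field, x, t=10)
```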
Using our guided diffusion\nmodel trained on generated synthetic data, we synthesize realistic motions for\nsitting and lifting with several objects, outperforming alternative approaches\nin terms of motion quality and successful action completion. We call our\nframework NIFTY: Neural Interaction Fields for Trajectory sYnthesis.", + "Language models have demonstrated impressive ability in context understanding\nand generative performance. Inspired by the recent success of language\nfoundation models, in this paper, we propose LMTraj (Language-based Multimodal\nTrajectory predictor), which recasts the trajectory prediction task into a sort\nof question-answering problem. Departing from traditional numerical regression\nmodels, which treat the trajectory coordinate sequence as continuous signals,\nwe consider them as discrete signals like text prompts. Specially, we first\ntransform an input space for the trajectory coordinate into the natural\nlanguage space. Here, the entire time-series trajectories of pedestrians are\nconverted into a text prompt, and scene images are described as text\ninformation through image captioning. The transformed numerical and image data\nare then wrapped into the question-answering template for use in a language\nmodel. Next, to guide the language model in understanding and reasoning\nhigh-level knowledge, such as scene context and social relationships between\npedestrians, we introduce an auxiliary multi-task question and answering. We\nthen train a numerical tokenizer with the prompt data.", + "Next, to guide the language model in understanding and reasoning\nhigh-level knowledge, such as scene context and social relationships between\npedestrians, we introduce an auxiliary multi-task question and answering. We\nthen train a numerical tokenizer with the prompt data. We encourage the\ntokenizer to separate the integer and decimal parts well, and leverage it to\ncapture correlations between the consecutive numbers in the language model.\nLastly, we train the language model using the numerical tokenizer and all of\nthe question-answer prompts. Here, we propose a beam-search-based most-likely\nprediction and a temperature-based multimodal prediction to implement both\ndeterministic and stochastic inferences. Applying our LMTraj, we show that the\nlanguage-based model can be a powerful pedestrian trajectory predictor, and\noutperforms existing numerical-based predictor methods. Code is publicly\navailable at https://github.com/inhwanbae/LMTrajectory .", + "Neural Architecture Search is a costly practice. The fact that a search space\ncan span a vast number of design choices with each architecture evaluation\ntaking nontrivial overhead makes it hard for an algorithm to sufficiently\nexplore candidate networks. In this paper, we propose AutoBuild, a scheme which\nlearns to align the latent embeddings of operations and architecture modules\nwith the ground-truth performance of the architectures they appear in. By doing\nso, AutoBuild is capable of assigning interpretable importance scores to\narchitecture modules, such as individual operation features and larger macro\noperation sequences such that high-performance neural networks can be\nconstructed without any need for search. 
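One simple reading of "aligning module embeddings with ground-truth performance" is a ranking objective that pushes the summed module scores of a better-performing architecture above those of a worse one, after which per-module importance can be read off directly. The encoder, score head, and margin below are placeholders, not AutoBuild's actual design.

```python
import torch
import torch.nn as nn

class ModuleScorer(nn.Module):
    """Embeds architecture modules and predicts a per-module importance score."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.score = nn.Linear(hidden, 1)

    def forward(self, module_feats):                # (num_modules, feat_dim)
        return self.score(self.embed(module_feats)).squeeze(-1)

def ranking_loss(scorer, arch_a, arch_b, acc_a, acc_b, margin=0.1):
    """Encourage the architecture with higher accuracy to receive a higher total score."""
    sa, sb = scorer(arch_a).sum(), scorer(arch_b).sum()
    sign = 1.0 if acc_a >= acc_b else -1.0
    return torch.relu(margin - sign * (sa - sb))

scorer = ModuleScorer(feat_dim=32)
a, b = torch.randn(6, 32), torch.randn(9, 32)       # two evaluated architectures
loss = ranking_loss(scorer, a, b, acc_a=0.78, acc_b=0.74)
loss.backward()
# After training, per-module scores can be used to assemble architectures
# greedily, without running a search loop.
```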
Through experiments performed on\nstate-of-the-art image classification, segmentation, and Stable Diffusion\nmodels, we show that by mining a relatively small set of evaluated\narchitectures, AutoBuild can learn to build high-quality architectures directly\nor help to reduce search space to focus on relevant areas, finding better\narchitectures that outperform both the original labeled ones and ones found by\nsearch baselines. Code available at\nhttps://github.com/Ascend-Research/AutoBuild", + "When we look around and perform complex tasks, how we see and selectively\nprocess what we see is crucial. However, the lack of this visual search\nmechanism in current multimodal LLMs (MLLMs) hinders their ability to focus on\nimportant visual details, especially when handling high-resolution and visually\ncrowded images. To address this, we introduce V*, an LLM-guided visual search\nmechanism that employs the world knowledge in LLMs for efficient visual\nquerying. When combined with an MLLM, this mechanism enhances collaborative\nreasoning, contextual understanding, and precise targeting of specific visual\nelements. This integration results in a new MLLM meta-architecture, named Show,\nsEArch, and TelL (SEAL). We further create V*Bench, a benchmark specifically\ndesigned to evaluate MLLMs in their ability to process high-resolution images\nand focus on visual details. Our study highlights the necessity of\nincorporating visual search capabilities into multimodal systems. The code is\navailable https://github.com/penghao-wu/vstar.", + "Salient object detection (SOD) and camouflaged object detection (COD) are\nrelated yet distinct binary mapping tasks. These tasks involve multiple\nmodalities, sharing commonalities and unique cues. Existing research often\nemploys intricate task-specific specialist models, potentially leading to\nredundancy and suboptimal results. We introduce VSCode, a generalist model with\nnovel 2D prompt learning, to jointly address four SOD tasks and three COD\ntasks. We utilize VST as the foundation model and introduce 2D prompts within\nthe encoder-decoder architecture to learn domain and task-specific knowledge on\ntwo separate dimensions. A prompt discrimination loss helps disentangle\npeculiarities to benefit model optimization. VSCode outperforms\nstate-of-the-art methods across six tasks on 26 datasets and exhibits zero-shot\ngeneralization to unseen tasks by combining 2D prompts, such as RGB-D COD.\nSource code has been available at https://github.com/Sssssuperior/VSCode.", + "3D editing plays a crucial role in many areas such as gaming and virtual\nreality. Traditional 3D editing methods, which rely on representations like\nmeshes and point clouds, often fall short in realistically depicting complex\nscenes. On the other hand, methods based on implicit 3D representations, like\nNeural Radiance Field (NeRF), render complex scenes effectively but suffer from\nslow processing speeds and limited control over specific scene areas. In\nresponse to these challenges, our paper presents GaussianEditor, an innovative\nand efficient 3D editing algorithm based on Gaussian Splatting (GS), a novel 3D\nrepresentation. GaussianEditor enhances precision and control in editing\nthrough our proposed Gaussian semantic tracing, which traces the editing target\nthroughout the training process. Additionally, we propose Hierarchical Gaussian\nsplatting (HGS) to achieve stabilized and fine results under stochastic\ngenerative guidance from 2D diffusion models. 
We also develop editing\nstrategies for efficient object removal and integration, a challenging task for\nexisting methods. Our comprehensive experiments demonstrate GaussianEditor's\nsuperior control, efficacy, and rapid performance, marking a significant\nadvancement in 3D editing.", + "We also develop editing\nstrategies for efficient object removal and integration, a challenging task for\nexisting methods. Our comprehensive experiments demonstrate GaussianEditor's\nsuperior control, efficacy, and rapid performance, marking a significant\nadvancement in 3D editing. Project Page:\nhttps://buaacyw.github.io/gaussian-editor/", + "We present PointInfinity, an efficient family of point cloud diffusion\nmodels. Our core idea is to use a transformer-based architecture with a\nfixed-size, resolution-invariant latent representation. This enables efficient\ntraining with low-resolution point clouds, while allowing high-resolution point\nclouds to be generated during inference. More importantly, we show that scaling\nthe test-time resolution beyond the training resolution improves the fidelity\nof generated point clouds and surfaces. We analyze this phenomenon and draw a\nlink to classifier-free guidance commonly used in diffusion models,\ndemonstrating that both allow trading off fidelity and variability during\ninference. Experiments on CO3D show that PointInfinity can efficiently generate\nhigh-resolution point clouds (up to 131k points, 31 times more than Point-E)\nwith state-of-the-art quality.", + "The field of autonomous driving increasingly demands high-quality annotated\ntraining data. In this paper, we propose Panacea, an innovative approach to\ngenerate panoramic and controllable videos in driving scenarios, capable of\nyielding an unlimited numbers of diverse, annotated samples pivotal for\nautonomous driving advancements. Panacea addresses two critical challenges:\n'Consistency' and 'Controllability.' Consistency ensures temporal and\ncross-view coherence, while Controllability ensures the alignment of generated\ncontent with corresponding annotations. Our approach integrates a novel 4D\nattention and a two-stage generation pipeline to maintain coherence,\nsupplemented by the ControlNet framework for meticulous control by the\nBird's-Eye-View (BEV) layouts. Extensive qualitative and quantitative\nevaluations of Panacea on the nuScenes dataset prove its effectiveness in\ngenerating high-quality multi-view driving-scene videos. This work notably\npropels the field of autonomous driving by effectively augmenting the training\ndataset used for advanced BEV perception techniques.", + "Multiple clustering has gained significant attention in recent years due to\nits potential to reveal multiple hidden structures of data from different\nperspectives. The advent of deep multiple clustering techniques has notably\nadvanced the performance by uncovering complex patterns and relationships\nwithin large datasets. However, a major challenge arises as users often do not\nneed all the clusterings that algorithms generate, and figuring out the one\nneeded requires a substantial understanding of each clustering result.\nTraditionally, aligning a user's brief keyword of interest with the\ncorresponding vision components was challenging, but the emergence of\nmulti-modal and large language models (LLMs) has begun to bridge this gap. In\nresponse, given unlabeled target visual data, we propose Multi-MaP, a novel\nmethod employing a multi-modal proxy learning process. 
It leverages CLIP\nencoders to extract coherent text and image embeddings, with GPT-4 integrating\nusers' interests to formulate effective textual contexts. Moreover, reference\nword constraint and concept-level constraint are designed to learn the optimal\ntext proxy according to the user's interest. Multi-MaP not only adeptly\ncaptures a user's interest via a keyword but also facilitates identifying\nrelevant clusterings.", + "Moreover, reference\nword constraint and concept-level constraint are designed to learn the optimal\ntext proxy according to the user's interest. Multi-MaP not only adeptly\ncaptures a user's interest via a keyword but also facilitates identifying\nrelevant clusterings. Our extensive experiments show that Multi-MaP\nconsistently outperforms state-of-the-art methods in all benchmark\nmulti-clustering vision tasks. Our code is available at\nhttps://github.com/Alexander-Yao/Multi-MaP.", + "The objective of text-to-image (T2I) personalization is to customize a\ndiffusion model to a user-provided reference concept, generating diverse images\nof the concept aligned with the target prompts. Conventional methods\nrepresenting the reference concepts using unique text embeddings often fail to\naccurately mimic the appearance of the reference. To address this, one solution\nmay be explicitly conditioning the reference images into the target denoising\nprocess, known as key-value replacement. However, prior works are constrained\nto local editing since they disrupt the structure path of the pre-trained T2I\nmodel. To overcome this, we propose a novel plug-in method, called\nDreamMatcher, which reformulates T2I personalization as semantic matching.\nSpecifically, DreamMatcher replaces the target values with reference values\naligned by semantic matching, while leaving the structure path unchanged to\npreserve the versatile capability of pre-trained T2I models for generating\ndiverse structures. We also introduce a semantic-consistent masking strategy to\nisolate the personalized concept from irrelevant regions introduced by the\ntarget prompts. Compatible with existing T2I models, DreamMatcher shows\nsignificant improvements in complex scenarios. Intensive analyses demonstrate\nthe effectiveness of our approach.", + "In this paper, we first assess and harness various Vision Foundation Models\n(VFMs) in the context of Domain Generalized Semantic Segmentation (DGSS).\nDriven by the motivation that Leveraging Stronger pre-trained models and Fewer\ntrainable parameters for Superior generalizability, we introduce a robust\nfine-tuning approach, namely Rein, to parameter-efficiently harness VFMs for\nDGSS. Built upon a set of trainable tokens, each linked to distinct instances,\nRein precisely refines and forwards the feature maps from each layer to the\nnext layer within the backbone. This process produces diverse refinements for\ndifferent categories within a single image. With fewer trainable parameters,\nRein efficiently fine-tunes VFMs for DGSS tasks, surprisingly surpassing full\nparameter fine-tuning. 
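The "trainable tokens that refine and forward feature maps between backbone layers" idea can be pictured as a small residual adapter in which frozen patch features attend to a handful of learnable tokens. The token count, single attention layer, and dimensions below are illustrative simplifications rather than Rein's exact module.

```python
import torch
import torch.nn as nn

class TokenRefiner(nn.Module):
    """Parameter-efficient refinement inserted between two frozen backbone layers."""
    def __init__(self, dim: int, num_tokens: int = 16, heads: int = 4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, dim) patch features from a frozen VFM layer.
        B = feats.shape[0]
        t = self.tokens.unsqueeze(0).expand(B, -1, -1)
        delta, _ = self.attn(feats, t, t)        # each patch attends to the trainable tokens
        return feats + self.proj(delta)          # residual refinement; the backbone stays frozen

frozen_feats = torch.randn(2, 196, 768)          # e.g. ViT-L patch tokens (hypothetical)
refined = TokenRefiner(dim=768)(frozen_feats)
print(refined.shape)                             # torch.Size([2, 196, 768])
```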
Extensive experiments across various settings\ndemonstrate that Rein significantly outperforms state-of-the-art methods.\nRemarkably, with just an extra 1% of trainable parameters within the frozen\nbackbone, Rein achieves a mIoU of 78.4% on the Cityscapes, without accessing\nany real urban-scene datasets.Code is available at\nhttps://github.com/w1oves/Rein.git.", + "Unlike color photography images, which are consistently encoded into RGB\nchannels, biological images encompass various modalities, where the type of\nmicroscopy and the meaning of each channel varies with each experiment.\nImportantly, the number of channels can range from one to a dozen and their\ncorrelation is often comparatively much lower than RGB, as each of them brings\nspecific information content. This aspect is largely overlooked by methods\ndesigned out of the bioimage field, and current solutions mostly focus on\nintra-channel spatial attention, often ignoring the relationship between\nchannels, yet crucial in most biological applications. Importantly, the\nvariable channel type and count prevent the projection of several experiments\nto a unified representation for large scale pre-training. In this study, we\npropose ChAda-ViT, a novel Channel Adaptive Vision Transformer architecture\nemploying an Inter-Channel Attention mechanism on images with an arbitrary\nnumber, order and type of channels. We also introduce IDRCell100k, a bioimage\ndataset with a rich set of 79 experiments covering 7 microscope modalities,\nwith a multitude of channel types, and counts varying from 1 to 10 per\nexperiment.", + "We also introduce IDRCell100k, a bioimage\ndataset with a rich set of 79 experiments covering 7 microscope modalities,\nwith a multitude of channel types, and counts varying from 1 to 10 per\nexperiment. Our architecture, trained in a self-supervised manner, outperforms\nexisting approaches in several biologically relevant downstream tasks.\nAdditionally, it can be used to bridge the gap for the first time between\nassays with different microscopes, channel numbers or types by embedding\nvarious image and experimental modalities into a unified biological image\nrepresentation. The latter should facilitate interdisciplinary studies and pave\nthe way for better adoption of deep learning in biological image-based\nanalyses. Code and Data available at https://github.com/nicoboou/chadavit.", + "The advancement of Zero-Shot Learning in the medical domain has been driven\nforward by using pre-trained models on large-scale image-text pairs, focusing\non image-text alignment. However, existing methods primarily rely on cosine\nsimilarity for alignment, which may not fully capture the complex relationship\nbetween medical images and reports. To address this gap, we introduce a novel\napproach called Cross-Attention Alignment for Radiology Zero-Shot\nClassification (CARZero). Our approach innovatively leverages cross-attention\nmechanisms to process image and report features, creating a Similarity\nRepresentation that more accurately reflects the intricate relationships in\nmedical semantics. This representation is then linearly projected to form an\nimage-text similarity matrix for cross-modality alignment. Additionally,\nrecognizing the pivotal role of prompt selection in zero-shot learning, CARZero\nincorporates a Large Language Model-based prompt alignment strategy. 
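Before the prompt-alignment details that follow, the cross-attention alignment itself can be sketched as below: text (report) embeddings query image token features, the attended output forms a similarity representation, and a linear head projects it to a scalar image-report similarity used for zero-shot classification. The dimensions and single attention layer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossAttnSimilarity(nn.Module):
    """Image-report similarity via cross-attention instead of plain cosine similarity."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, img_tokens: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        """img_tokens: (B, N, dim); text_emb: (P, dim), one embedding per prompt/report."""
        B = img_tokens.shape[0]
        q = text_emb.unsqueeze(0).expand(B, -1, -1)          # prompts query each image
        sim_repr, _ = self.attn(q, img_tokens, img_tokens)   # (B, P, dim) similarity representation
        return self.head(sim_repr).squeeze(-1)               # (B, P) image-text similarity matrix

logits = CrossAttnSimilarity()(torch.randn(4, 196, 512), torch.randn(14, 512))
probs = logits.softmax(dim=-1)    # zero-shot scores over 14 candidate findings (hypothetical)
```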
This\nstrategy standardizes diverse diagnostic expressions into a unified format for\nboth training and inference phases, overcoming the challenges of manual prompt\ndesign.", + "Additionally,\nrecognizing the pivotal role of prompt selection in zero-shot learning, CARZero\nincorporates a Large Language Model-based prompt alignment strategy. This\nstrategy standardizes diverse diagnostic expressions into a unified format for\nboth training and inference phases, overcoming the challenges of manual prompt\ndesign. Our approach is simple yet effective, demonstrating state-of-the-art\nperformance in zero-shot classification on five official chest radiograph\ndiagnostic test sets, including remarkable results on datasets with long-tail\ndistributions of rare diseases. This achievement is attributed to our new\nimage-text alignment strategy, which effectively addresses the complex\nrelationship between medical images and reports. Code and models are available\nat https://github.com/laihaoran/CARZero.", + "3D hand-object interaction data is scarce due to the hardware constraints in\nscaling up the data collection process. In this paper, we propose HOIDiffusion\nfor generating realistic and diverse 3D hand-object interaction data. Our model\nis a conditional diffusion model that takes both the 3D hand-object geometric\nstructure and text description as inputs for image synthesis. This offers a\nmore controllable and realistic synthesis as we can specify the structure and\nstyle inputs in a disentangled manner. HOIDiffusion is trained by leveraging a\ndiffusion model pre-trained on large-scale natural images and a few 3D human\ndemonstrations. Beyond controllable image synthesis, we adopt the generated 3D\ndata for learning 6D object pose estimation and show its effectiveness in\nimproving perception systems. Project page:\nhttps://mq-zhang1.github.io/HOIDiffusion", + "We present VecFusion, a new neural architecture that can generate vector\nfonts with varying topological structures and precise control point positions.\nOur approach is a cascaded diffusion model which consists of a raster diffusion\nmodel followed by a vector diffusion model. The raster model generates\nlow-resolution, rasterized fonts with auxiliary control point information,\ncapturing the global style and shape of the font, while the vector model\nsynthesizes vector fonts conditioned on the low-resolution raster fonts from\nthe first stage. To synthesize long and complex curves, our vector diffusion\nmodel uses a transformer architecture and a novel vector representation that\nenables the modeling of diverse vector geometry and the precise prediction of\ncontrol points. Our experiments show that, in contrast to previous generative\nmodels for vector graphics, our new cascaded vector diffusion model generates\nhigher quality vector fonts, with complex structures and diverse styles.", + "Generative Vision-Language Models (VLMs) are prone to generate\nplausible-sounding textual answers that, however, are not always grounded in\nthe input image. We investigate this phenomenon, usually referred to as\n\"hallucination\" and show that it stems from an excessive reliance on the\nlanguage prior. In particular, we show that as more tokens are generated, the\nreliance on the visual prompt decreases, and this behavior strongly correlates\nwith the emergence of hallucinations. To reduce hallucinations, we introduce\nMulti-Modal Mutual-Information Decoding (M3ID), a new sampling method for\nprompt amplification. 
M3ID amplifies the influence of the reference image over\nthe language prior, hence favoring the generation of tokens with higher mutual\ninformation with the visual prompt. M3ID can be applied to any pre-trained\nautoregressive VLM at inference time without necessitating further training and\nwith minimal computational overhead. If training is an option, we show that\nM3ID can be paired with Direct Preference Optimization (DPO) to improve the\nmodel's reliance on the prompt image without requiring any labels.", + "If training is an option, we show that\nM3ID can be paired with Direct Preference Optimization (DPO) to improve the\nmodel's reliance on the prompt image without requiring any labels. Our\nempirical findings show that our algorithms maintain the fluency and linguistic\ncapabilities of pre-trained VLMs while reducing hallucinations by mitigating\nvisually ungrounded answers. Specifically, for the LLaVA 13B model, M3ID and\nM3ID+DPO reduce the percentage of hallucinated objects in captioning tasks by\n25% and 28%, respectively, and improve the accuracy on VQA benchmarks such as\nPOPE by 21% and 24%.", + "We are witnessing significant breakthroughs in the technology for generating\n3D objects from text. Existing approaches either leverage large text-to-image\nmodels to optimize a 3D representation or train 3D generators on object-centric\ndatasets. Generating entire scenes, however, remains very challenging as a\nscene contains multiple 3D objects, diverse and scattered. In this work, we\nintroduce SceneWiz3D, a novel approach to synthesize high-fidelity 3D scenes\nfrom text. We marry the locality of objects with globality of scenes by\nintroducing a hybrid 3D representation: explicit for objects and implicit for\nscenes. Remarkably, an object, being represented explicitly, can be either\ngenerated from text using conventional text-to-3D approaches, or provided by\nusers. To configure the layout of the scene and automatically place objects, we\napply the Particle Swarm Optimization technique during the optimization\nprocess. Furthermore, it is difficult for certain parts of the scene (e.g.,\ncorners, occlusion) to receive multi-view supervision, leading to inferior\ngeometry. We incorporate an RGBD panorama diffusion model to mitigate it,\nresulting in high-quality geometry.", + "Furthermore, it is difficult for certain parts of the scene (e.g.,\ncorners, occlusion) to receive multi-view supervision, leading to inferior\ngeometry. We incorporate an RGBD panorama diffusion model to mitigate it,\nresulting in high-quality geometry. Extensive evaluation supports that our\napproach achieves superior quality over previous approaches, enabling the\ngeneration of detailed and view-consistent 3D scenes.", + "We propose EMAGE, a framework to generate full-body human gestures from audio\nand masked gestures, encompassing facial, local body, hands, and global\nmovements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new\nmesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with\nFLAME head parameters and further refines the modeling of head, neck, and\nfinger movements, offering a community-standardized, high-quality 3D motion\ncaptured dataset. EMAGE leverages masked body gesture priors during training to\nboost inference performance. It involves a Masked Audio Gesture Transformer,\nfacilitating joint training on audio-to-gesture generation and masked gesture\nreconstruction to effectively encode audio and body gesture hints. 
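The decoding rule described above can be approximated by re-scoring each candidate token with the gap between image-conditioned and text-only log-probabilities, a pointwise-mutual-information-style bonus for tokens grounded in the image. The weighting scheme below is illustrative, not the exact M3ID objective.

```python
import torch

def mutual_information_decode_step(cond_logits: torch.Tensor,
                                   uncond_logits: torch.Tensor,
                                   alpha: float = 1.0) -> torch.Tensor:
    """Pick the next token while amplifying evidence that comes from the image.

    cond_logits:   logits of the VLM given (image, text prefix)
    uncond_logits: logits of the same model given the text prefix only
    The score adds log p(token | image, text) - log p(token | text)
    to the image-conditioned logits before taking the argmax.
    """
    log_p_cond = torch.log_softmax(cond_logits, dim=-1)
    log_p_uncond = torch.log_softmax(uncond_logits, dim=-1)
    scores = log_p_cond + alpha * (log_p_cond - log_p_uncond)
    return scores.argmax(dim=-1)

# Toy usage with random logits standing in for the VLM's two forward passes.
next_token = mutual_information_decode_step(torch.randn(1, 32000), torch.randn(1, 32000))
```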
Encoded body\nhints from masked gestures are then separately employed to generate facial and\nbody movements. Moreover, EMAGE adaptively merges speech features from the\naudio's rhythm and content and utilizes four compositional VQ-VAEs to enhance\nthe results' fidelity and diversity.", + "Encoded body\nhints from masked gestures are then separately employed to generate facial and\nbody movements. Moreover, EMAGE adaptively merges speech features from the\naudio's rhythm and content and utilizes four compositional VQ-VAEs to enhance\nthe results' fidelity and diversity. Experiments demonstrate that EMAGE\ngenerates holistic gestures with state-of-the-art performance and is flexible\nin accepting predefined spatial-temporal gesture inputs, generating complete,\naudio-synchronized results. Our code and dataset are available\nhttps://pantomatrix.github.io/EMAGE/", + "Vision-language models (VLMs) excel in zero-shot recognition but their\nperformance varies greatly across different visual concepts. For example,\nalthough CLIP achieves impressive accuracy on ImageNet (60-80%), its\nperformance drops below 10% for more than ten concepts like night snake,\npresumably due to their limited presence in the pretraining data. However,\nmeasuring the frequency of concepts in VLMs' large-scale datasets is\nchallenging. We address this by using large language models (LLMs) to count the\nnumber of pretraining texts that contain synonyms of these concepts. Our\nanalysis confirms that popular datasets, such as LAION, exhibit a long-tailed\nconcept distribution, yielding biased performance in VLMs. We also find that\ndownstream applications of VLMs, including visual chatbots (e.g., GPT-4V) and\ntext-to-image models (e.g., Stable Diffusion), often fail to recognize or\ngenerate images of rare concepts identified by our method. To mitigate the\nimbalanced performance of zero-shot VLMs, we propose REtrieval-Augmented\nLearning (REAL).", + "To mitigate the\nimbalanced performance of zero-shot VLMs, we propose REtrieval-Augmented\nLearning (REAL). First, instead of prompting VLMs using the original class\nnames, REAL uses their most frequent synonyms found in pretraining texts. This\nsimple change already outperforms costly human-engineered and LLM-enriched\nprompts over nine benchmark datasets. Second, REAL trains a linear classifier\non a small yet balanced set of pretraining data retrieved using concept\nsynonyms. REAL surpasses the previous zero-shot SOTA, using 400x less storage\nand 10,000x less training time!", + "Since humans interact with diverse objects every day, the holistic 3D capture\nof these interactions is important to understand and model human behaviour.\nHowever, most existing methods for hand-object reconstruction from RGB either\nassume pre-scanned object templates or heavily rely on limited 3D hand-object\ndata, restricting their ability to scale and generalize to more unconstrained\ninteraction settings. To this end, we introduce HOLD -- the first\ncategory-agnostic method that reconstructs an articulated hand and object\njointly from a monocular interaction video. We develop a compositional\narticulated implicit model that can reconstruct disentangled 3D hand and object\nfrom 2D images. We also further incorporate hand-object constraints to improve\nhand-object poses and consequently the reconstruction quality. Our method does\nnot rely on 3D hand-object annotations while outperforming fully-supervised\nbaselines in both in-the-lab and challenging in-the-wild settings. 
Moreover, we\nqualitatively show its robustness in reconstructing from in-the-wild videos.\nCode: https://github.com/zc-alexfan/hold", + "Most continual segmentation methods tackle the problem as a per-pixel\nclassification task. However, such a paradigm is very challenging, and we find\nquery-based segmenters with built-in objectness have inherent advantages\ncompared with per-pixel ones, as objectness has strong transfer ability and\nforgetting resistance. Based on these findings, we propose CoMasTRe by\ndisentangling continual segmentation into two stages: forgetting-resistant\ncontinual objectness learning and well-researched continual classification.\nCoMasTRe uses a two-stage segmenter learning class-agnostic mask proposals at\nthe first stage and leaving recognition to the second stage. During continual\nlearning, a simple but effective distillation is adopted to strengthen\nobjectness. To further mitigate the forgetting of old classes, we design a\nmulti-label class distillation strategy suited for segmentation. We assess the\neffectiveness of CoMasTRe on PASCAL VOC and ADE20K. Extensive experiments show\nthat our method outperforms per-pixel and query-based methods on both datasets.\nCode will be available at https://github.com/jordangong/CoMasTRe.", + "In this paper, we propose an accurate data-free post-training quantization\nframework of diffusion models (ADP-DM) for efficient image generation.\nConventional data-free quantization methods learn shared quantization functions\nfor tensor discretization regardless of the generation timesteps, while the\nactivation distribution differs significantly across various timesteps. The\ncalibration images are acquired in random timesteps which fail to provide\nsufficient information for generalizable quantization function learning. Both\nissues cause sizable quantization errors with obvious image generation\nperformance degradation. On the contrary, we design group-wise quantization\nfunctions for activation discretization in different timesteps and sample the\noptimal timestep for informative calibration image generation, so that our\nquantized diffusion model can reduce the discretization errors with negligible\ncomputational overhead. Specifically, we partition the timesteps according to\nthe importance weights of quantization functions in different groups, which are\noptimized by differentiable search algorithms. We also select the optimal\ntimestep for calibration image generation by structural risk minimizing\nprinciple in order to enhance the generalization ability in the deployment of\nquantized diffusion model.", + "We also select the optimal\ntimestep for calibration image generation by structural risk minimizing\nprinciple in order to enhance the generalization ability in the deployment of\nquantized diffusion model. Extensive experimental results show that our method\noutperforms the state-of-the-art post-training quantization of diffusion model\nby a sizable margin with similar computational cost.", + "In the evolving landscape of computer vision, foundation models have emerged\nas pivotal tools, exhibiting exceptional adaptability to a myriad of tasks.\nAmong these, the Segment Anything Model (SAM) by Meta AI has distinguished\nitself in image segmentation. However, SAM, like its counterparts, encounters\nlimitations in specific niche applications, prompting a quest for enhancement\nstrategies that do not compromise its inherent capabilities. 
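As a rough illustration of the timestep-dependent quantization idea in the ADP-DM abstract above, the sketch below fits one plain min-max uniform quantizer per timestep group; the group names, bit width, and synthetic activations are assumptions, and the paper's differentiable group search and calibration-timestep selection are not reproduced here.

```python
import numpy as np

def make_quantizer(calib_acts, num_bits=8):
    """Fit a simple min-max uniform quantizer to a batch of calibration activations."""
    lo, hi = calib_acts.min(), calib_acts.max()
    scale = (hi - lo) / (2 ** num_bits - 1)
    def quantize(x):
        q = np.clip(np.round((x - lo) / scale), 0, 2 ** num_bits - 1)
        return q * scale + lo   # de-quantized value used at inference
    return quantize

# One quantizer per (hypothetical) timestep group, each fitted on activations
# collected at those timesteps, since activation ranges differ across timesteps.
groups = {"early_steps": np.random.randn(1024) * 3.0, "late_steps": np.random.randn(1024) * 0.5}
quantizers = {name: make_quantizer(acts) for name, acts in groups.items()}
x = np.random.randn(8)
print({name: q(x).round(3) for name, q in quantizers.items()})
```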
This paper\nintroduces ASAM, a novel methodology that amplifies SAM's performance through\nadversarial tuning. We harness the potential of natural adversarial examples,\ninspired by their successful implementation in natural language processing. By\nutilizing a stable diffusion model, we augment a subset (1%) of the SA-1B\ndataset, generating adversarial instances that are more representative of\nnatural variations rather than conventional imperceptible perturbations. Our\napproach maintains the photorealism of adversarial examples and ensures\nalignment with original mask annotations, thereby preserving the integrity of\nthe segmentation task. The fine-tuned ASAM demonstrates significant\nimprovements across a diverse range of segmentation tasks without necessitating\nadditional data or architectural modifications.", + "Our\napproach maintains the photorealism of adversarial examples and ensures\nalignment with original mask annotations, thereby preserving the integrity of\nthe segmentation task. The fine-tuned ASAM demonstrates significant\nimprovements across a diverse range of segmentation tasks without necessitating\nadditional data or architectural modifications. The results of our extensive\nevaluations confirm that ASAM establishes new benchmarks in segmentation tasks,\nthereby contributing to the advancement of foundational models in computer\nvision. Our project page is in https://asam2024.github.io/.", + "We present UniBind, a flexible and efficient approach that learns a unified\nrepresentation space for seven diverse modalities -- images, text, audio, point\ncloud, thermal, video, and event data. Existing works, eg., ImageBind, treat\nthe image as the central modality and build an image-centered representation\nspace; however, the space may be sub-optimal as it leads to an unbalanced\nrepresentation space among all modalities. Moreover, the category names are\ndirectly used to extract text embeddings for the downstream tasks, making it\nhardly possible to represent the semantics of multi-modal data. The\n'out-of-the-box' insight of our UniBind is to make the alignment center\nmodality-agnostic and further learn a unified and balanced representation\nspace, empowered by the large language models (LLMs). UniBind is superior in\nits flexible application to all CLIP-style models and delivers remarkable\nperformance boosts.", + "UniBind is superior in\nits flexible application to all CLIP-style models and delivers remarkable\nperformance boosts. To make this possible, we 1) construct a knowledge base of\ntext embeddings with the help of LLMs and multi-modal LLMs; 2) adaptively build\nLLM-augmented class-wise embedding center on top of the knowledge base and\nencoded visual embeddings; 3) align all the embeddings to the LLM-augmented\nembedding center via contrastive learning to achieve a unified and balanced\nrepresentation space. UniBind shows strong zero-shot recognition performance\ngains over prior arts by an average of 6.36%. Finally, we achieve new\nstate-of-the-art performance, eg., a 6.75% gain on ImageNet, on the multi-modal\nfine-tuning setting while reducing 90% of the learnable parameters.", + "It is common to observe performance degradation when transferring models\ntrained on some (source) datasets to target testing data due to a domain gap\nbetween them. 
Existing methods for bridging this gap, such as domain adaptation\n(DA), may require the source data on which the model was trained (often not\navailable), while others, i.e., source-free DA, require many passes through the\ntesting data. We propose an online test-time adaptation method for depth\ncompletion, the task of inferring a dense depth map from a single image and\nassociated sparse depth map, that closes the performance gap in a single pass.\nWe first present a study on how the domain shift in each data modality affects\nmodel performance. Based on our observations that the sparse depth modality\nexhibits a much smaller covariate shift than the image, we design an embedding\nmodule trained in the source domain that preserves a mapping from features\nencoding only sparse depth to those encoding image and sparse depth.", + "Based on our observations that the sparse depth modality\nexhibits a much smaller covariate shift than the image, we design an embedding\nmodule trained in the source domain that preserves a mapping from features\nencoding only sparse depth to those encoding image and sparse depth. During\ntest time, sparse depth features are projected using this map as a proxy for\nsource domain features and are used as guidance to train a set of auxiliary\nparameters (i.e., adaptation layer) to align image and sparse depth features\nfrom the target test domain to that of the source domain. We evaluate our\nmethod on indoor and outdoor scenarios and show that it improves over baselines\nby an average of 21.1%.", + "Despite the remarkable performance of score distillation in text-to-3D\ngeneration, such techniques notoriously suffer from view inconsistency issues,\nalso known as \"Janus\" artifact, where the generated objects fake each view with\nmultiple front faces. Although empirically effective methods have approached\nthis problem via score debiasing or prompt engineering, a more rigorous\nperspective to explain and tackle this problem remains elusive. In this paper,\nwe reveal that the existing score distillation-based text-to-3D generation\nframeworks degenerate to maximal likelihood seeking on each view independently\nand thus suffer from the mode collapse problem, manifesting as the Janus\nartifact in practice. To tame mode collapse, we improve score distillation by\nre-establishing the entropy term in the corresponding variational objective,\nwhich is applied to the distribution of rendered images. Maximizing the entropy\nencourages diversity among different views in generated 3D assets, thereby\nmitigating the Janus problem. Based on this new objective, we derive a new\nupdate rule for 3D score distillation, dubbed Entropic Score Distillation\n(ESD).", + "Maximizing the entropy\nencourages diversity among different views in generated 3D assets, thereby\nmitigating the Janus problem. Based on this new objective, we derive a new\nupdate rule for 3D score distillation, dubbed Entropic Score Distillation\n(ESD). We theoretically reveal that ESD can be simplified and implemented by\njust adopting the classifier-free guidance trick upon variational score\ndistillation. Although embarrassingly straightforward, our extensive\nexperiments successfully demonstrate that ESD can be an effective treatment for\nJanus artifacts in score distillation.", + "Recently, deep neural networks have achieved excellent performance on\nlow-light raw video enhancement. 
However, they often come with high\ncomputational complexity and large memory costs, which hinder their\napplications on resource-limited devices. In this paper, we explore the\nfeasibility of applying the extremely compact binary neural network (BNN) to\nlow-light raw video enhancement. Nevertheless, there are two main issues with\nbinarizing video enhancement models. One is how to fuse the temporal\ninformation to improve low-light denoising without complex modules. The other\nis how to narrow the performance gap between binary convolutions with the full\nprecision ones. To address the first issue, we introduce a spatial-temporal\nshift operation, which is easy-to-binarize and effective. The temporal shift\nefficiently aggregates the features of neighbor frames and the spatial shift\nhandles the misalignment caused by the large motion in videos. For the second\nissue, we present a distribution-aware binary convolution, which captures the\ndistribution characteristics of real-valued input and incorporates them into\nplain binary convolutions to alleviate the degradation in performance.", + "For the second\nissue, we present a distribution-aware binary convolution, which captures the\ndistribution characteristics of real-valued input and incorporates them into\nplain binary convolutions to alleviate the degradation in performance.\nExtensive quantitative and qualitative experiments have shown our\nhigh-efficiency binarized low-light raw video enhancement method can attain a\npromising performance.", + "Referring video segmentation relies on natural language expressions to\nidentify and segment objects, often emphasizing motion clues. Previous works\ntreat a sentence as a whole and directly perform identification at the\nvideo-level, mixing up static image-level cues with temporal motion cues.\nHowever, image-level features cannot well comprehend motion cues in sentences,\nand static cues are not crucial for temporal perception. In fact, static cues\ncan sometimes interfere with temporal perception by overshadowing motion cues.\nIn this work, we propose to decouple video-level referring expression\nunderstanding into static and motion perception, with a specific emphasis on\nenhancing temporal comprehension. Firstly, we introduce an\nexpression-decoupling module to make static cues and motion cues perform their\ndistinct role, alleviating the issue of sentence embeddings overlooking motion\ncues. Secondly, we propose a hierarchical motion perception module to capture\ntemporal information effectively across varying timescales. Furthermore, we\nemploy contrastive learning to distinguish the motions of visually similar\nobjects.", + "Secondly, we propose a hierarchical motion perception module to capture\ntemporal information effectively across varying timescales. Furthermore, we\nemploy contrastive learning to distinguish the motions of visually similar\nobjects. These contributions yield state-of-the-art performance across five\ndatasets, including a remarkable $\\textbf{9.2%}$ $\\mathcal{J\\&F}$ improvement\non the challenging $\\textbf{MeViS}$ dataset. Code is available at\nhttps://github.com/heshuting555/DsHmp.", + "This paper studies the human image animation task, which aims to generate a\nvideo of a certain reference identity following a particular motion sequence.\nExisting animation works typically employ the frame-warping technique to\nanimate the reference image towards the target motion. 
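The temporal half of the spatial-temporal shift described above can be illustrated with a generic channel-shift operation in the spirit of temporal shift modules; this sketch is not the paper's binarized implementation, and the tensor layout and `shift_div` fraction are assumptions.

```python
import numpy as np

def temporal_shift(feats, shift_div=8):
    """Shift a fraction of channels forward/backward in time so that later
    per-frame operations can mix information from neighbouring frames.

    feats: array of shape (T, C, H, W), features of T consecutive frames.
    """
    t, c, h, w = feats.shape
    fold = c // shift_div
    out = np.zeros_like(feats)
    out[:-1, :fold] = feats[1:, :fold]                   # one channel group looks at the next frame
    out[1:, fold:2 * fold] = feats[:-1, fold:2 * fold]   # another group looks at the previous frame
    out[:, 2 * fold:] = feats[:, 2 * fold:]              # remaining channels are left untouched
    return out

print(temporal_shift(np.random.randn(4, 16, 8, 8)).shape)
```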
Despite achieving\nreasonable results, these approaches face challenges in maintaining temporal\nconsistency throughout the animation due to the lack of temporal modeling and\npoor preservation of reference identity. In this work, we introduce\nMagicAnimate, a diffusion-based framework that aims at enhancing temporal\nconsistency, preserving reference image faithfully, and improving animation\nfidelity. To achieve this, we first develop a video diffusion model to encode\ntemporal information. Second, to maintain the appearance coherence across\nframes, we introduce a novel appearance encoder to retain the intricate details\nof the reference image. Leveraging these two innovations, we further employ a\nsimple video fusion technique to encourage smooth transitions for long video\nanimation. Empirical results demonstrate the superiority of our method over\nbaseline approaches on two benchmarks. Notably, our approach outperforms the\nstrongest baseline by over 38% in terms of video fidelity on the challenging\nTikTok dancing dataset. Code and model will be made available.", + "Few-shot model compression aims to compress a large model into a more compact\none with only a tiny training set (even without labels). Block-level pruning\nhas recently emerged as a leading technique in achieving high accuracy and low\nlatency in few-shot CNN compression. But, few-shot compression for Vision\nTransformers (ViT) remains largely unexplored, which presents a new challenge.\nIn particular, the issue of sparse compression exists in traditional CNN\nfew-shot methods, which can only produce very few compressed models of\ndifferent model sizes. This paper proposes a novel framework for few-shot ViT\ncompression named DC-ViT. Instead of dropping the entire block, DC-ViT\nselectively eliminates the attention module while retaining and reusing\nportions of the MLP module. DC-ViT enables dense compression, which outputs\nnumerous compressed models that densely populate the range of model complexity.\nDC-ViT outperforms state-of-the-art few-shot compression methods by a\nsignificant margin of 10 percentage points, along with lower latency in the\ncompression of ViT and its variants.", + "Inspired by the success of general-purpose models in NLP, recent studies\nattempt to unify different vision tasks in the same sequence format and employ\nautoregressive Transformers for sequence prediction. They apply uni-directional\nattention to capture sequential dependencies and generate task sequences\nrecursively. However, such autoregressive Transformers may not fit vision tasks\nwell, as vision task sequences usually lack the sequential dependencies\ntypically observed in natural languages. In this work, we design Masked\nAutoDecoder~(MAD), an effective multi-task vision generalist. MAD consists of\ntwo core designs. First, we develop a parallel decoding framework that\nintroduces bi-directional attention to capture contextual dependencies\ncomprehensively and decode vision task sequences in parallel. Second, we design\na masked sequence modeling approach that learns rich task contexts by masking\nand reconstructing task sequences. In this way, MAD handles all the tasks by a\nsingle network branch and a simple cross-entropy loss with minimal\ntask-specific designs. Extensive experiments demonstrate the great potential of\nMAD as a new paradigm for unifying various vision tasks.", + "In this way, MAD handles all the tasks by a\nsingle network branch and a simple cross-entropy loss with minimal\ntask-specific designs. 
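A minimal sketch of the masked sequence modeling idea behind MAD, assuming a toy token vocabulary and a generic bi-directional Transformer encoder: random positions of a task sequence are replaced by a mask token and reconstructed with a plain cross-entropy loss. It is illustrative only and does not reflect MAD's actual architecture or task tokenization.

```python
import torch
import torch.nn as nn

vocab_size, mask_id, seq_len, dim = 100, 0, 16, 64

decoder = nn.TransformerEncoder(   # bi-directional attention: no causal mask is applied
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2)
embed = nn.Embedding(vocab_size, dim)
head = nn.Linear(dim, vocab_size)

tokens = torch.randint(1, vocab_size, (2, seq_len))   # hypothetical task sequences
mask = torch.rand(tokens.shape) < 0.3                 # mask roughly 30% of positions
inputs = tokens.masked_fill(mask, mask_id)

logits = head(decoder(embed(inputs)))
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # reconstruct only masked tokens
loss.backward()
print(float(loss))
```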
Extensive experiments demonstrate the great potential of\nMAD as a new paradigm for unifying various vision tasks. MAD achieves superior\nperformance and inference efficiency compared to autoregressive counterparts\nwhile obtaining competitive accuracy with task-specific models. Code will be\nreleased.", + "Egocentric sensors such as AR/VR devices capture human-object interactions\nand offer the potential to provide task-assistance by recalling 3D locations of\nobjects of interest in the surrounding environment. This capability requires\ninstance tracking in real-world 3D scenes from egocentric videos (IT3DEgo). We\nexplore this problem by first introducing a new benchmark dataset, consisting\nof RGB and depth videos, per-frame camera pose, and instance-level annotations\nin both 2D camera and 3D world coordinates. We present an evaluation protocol\nwhich evaluates tracking performance in 3D coordinates with two settings for\nenrolling instances to track: (1) single-view online enrollment where an\ninstance is specified on-the-fly based on the human wearer's interactions. and\n(2) multi-view pre-enrollment where images of an instance to be tracked are\nstored in memory ahead of time. To address IT3DEgo, we first re-purpose methods\nfrom relevant areas, e.g., single object tracking (SOT) -- running SOT methods\nto track instances in 2D frames and lifting them to 3D using camera pose and\ndepth.", + "To address IT3DEgo, we first re-purpose methods\nfrom relevant areas, e.g., single object tracking (SOT) -- running SOT methods\nto track instances in 2D frames and lifting them to 3D using camera pose and\ndepth. We also present a simple method that leverages pretrained segmentation\nand detection models to generate proposals from RGB frames and match proposals\nwith enrolled instance images. Our experiments show that our method (with no\nfinetuning) significantly outperforms SOT-based approaches in the egocentric\nsetting. We conclude by arguing that the problem of egocentric instance\ntracking is made easier by leveraging camera pose and using a 3D allocentric\n(world) coordinate representation.", + "This paper explores the problem of Generalist Anomaly Detection (GAD), aiming\nto train one single detection model that can generalize to detect anomalies in\ndiverse datasets from different application domains without any further\ntraining on the target data. Some recent studies have shown that large\npre-trained Visual-Language Models (VLMs) like CLIP have strong generalization\ncapabilities on detecting industrial defects from various datasets, but their\nmethods rely heavily on handcrafted text prompts about defects, making them\ndifficult to generalize to anomalies in other applications, e.g., medical image\nanomalies or semantic anomalies in natural images. In this work, we propose to\ntrain a GAD model with few-shot normal images as sample prompts for AD on\ndiverse datasets on the fly. To this end, we introduce a novel approach that\nlearns an in-context residual learning model for GAD, termed InCTRL. It is\ntrained on an auxiliary dataset to discriminate anomalies from normal samples\nbased on a holistic evaluation of the residuals between query images and\nfew-shot normal sample prompts.", + "To this end, we introduce a novel approach that\nlearns an in-context residual learning model for GAD, termed InCTRL. 
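To illustrate the in-context residual idea that InCTRL builds on, here is a deliberately simplified scorer that measures how far a query feature lies from a handful of few-shot normal prompt features; InCTRL instead learns a model on such residuals, and the feature dimensionality and random vectors below are placeholders.

```python
import numpy as np

def residual_anomaly_score(query_feat, prompt_feats):
    """Score a query by its residual to a few-shot set of normal prompts.

    Larger residuals to the normal prompts imply a more anomalous query,
    regardless of which dataset the features come from.
    """
    prompt_feats = prompt_feats / np.linalg.norm(prompt_feats, axis=1, keepdims=True)
    query_feat = query_feat / np.linalg.norm(query_feat)
    residuals = query_feat - prompt_feats                     # one residual per normal prompt
    return float(np.linalg.norm(residuals, axis=1).min())     # distance to the closest normal sample

normals = np.random.randn(8, 512)    # hypothetical features of 8 few-shot normal images
print(residual_anomaly_score(np.random.randn(512), normals))
```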
It is\ntrained on an auxiliary dataset to discriminate anomalies from normal samples\nbased on a holistic evaluation of the residuals between query images and\nfew-shot normal sample prompts. Regardless of the datasets, per definition of\nanomaly, larger residuals are expected for anomalies than normal samples,\nthereby enabling InCTRL to generalize across different domains without further\ntraining. Comprehensive experiments on nine AD datasets are performed to\nestablish a GAD benchmark that encapsulate the detection of industrial defect\nanomalies, medical anomalies, and semantic anomalies in both one-vs-all and\nmulti-class setting, on which InCTRL is the best performer and significantly\noutperforms state-of-the-art competing methods. Code is available at\nhttps://github.com/mala-lab/InCTRL.", + "Computer vision models normally witness degraded performance when deployed in\nreal-world scenarios, due to unexpected changes in inputs that were not\naccounted for during training. Data augmentation is commonly used to address\nthis issue, as it aims to increase data variety and reduce the distribution gap\nbetween training and test data. However, common visual augmentations might not\nguarantee extensive robustness of computer vision models. In this paper, we\npropose Auxiliary Fourier-basis Augmentation (AFA), a complementary technique\ntargeting augmentation in the frequency domain and filling the augmentation gap\nleft by visual augmentations. We demonstrate the utility of augmentation via\nFourier-basis additive noise in a straightforward and efficient adversarial\nsetting. Our results show that AFA benefits the robustness of models against\ncommon corruptions, OOD generalization, and consistency of performance of\nmodels against increasing perturbations, with negligible deficit to the\nstandard performance of models. It can be seamlessly integrated with other\naugmentation techniques to further boost performance. Code and models can be\nfound at: https://github.com/nis-research/afa-augment", + "Adversarial examples, crafted by adding perturbations imperceptible to\nhumans, can deceive neural networks. Recent studies identify the adversarial\ntransferability across various models, \\textit{i.e.}, the cross-model attack\nability of adversarial samples. To enhance such adversarial transferability,\nexisting input transformation-based methods diversify input data with\ntransformation augmentation. However, their effectiveness is limited by the\nfinite number of available transformations. In our study, we introduce a novel\napproach named Learning to Transform (L2T). L2T increases the diversity of\ntransformed images by selecting the optimal combination of operations from a\npool of candidates, consequently improving adversarial transferability. We\nconceptualize the selection of optimal transformation combinations as a\ntrajectory optimization problem and employ a reinforcement learning strategy to\neffectively solve the problem. Comprehensive experiments on the ImageNet\ndataset, as well as practical tests with Google Vision and GPT-4V, reveal that\nL2T surpasses current methodologies in enhancing adversarial transferability,\nthereby confirming its effectiveness and practical significance. The code is\navailable at https://github.com/RongyiZhu/L2T.", + "Recently, model merging techniques have surfaced as a solution to combine\nmultiple single-talent models into a single multi-talent model. 
However,\nprevious endeavors in this field have either necessitated additional training\nor fine-tuning processes, or require that the models possess the same\npre-trained initialization. In this work, we identify a common drawback in\nprior works w.r.t. the inconsistency of unit similarity in the weight space and\nthe activation space. To address this inconsistency, we propose an innovative\nmodel merging framework, coined as merging under dual-space constraints\n(MuDSC). Specifically, instead of solely maximizing the objective of a single\nspace, we advocate for the exploration of permutation matrices situated in a\nregion with a unified high similarity in the dual space, achieved through the\nlinear combination of activation and weight similarity matrices. In order to\nenhance usability, we have also incorporated adaptations for group structure,\nincluding Multi-Head Attention and Group Normalization. Comprehensive\nexperimental comparisons demonstrate that MuDSC can significantly boost the\nperformance of merged models with various task combinations and architectures.", + "In order to\nenhance usability, we have also incorporated adaptations for group structure,\nincluding Multi-Head Attention and Group Normalization. Comprehensive\nexperimental comparisons demonstrate that MuDSC can significantly boost the\nperformance of merged models with various task combinations and architectures.\nFurthermore, the visualization of the merged model within the multi-task loss\nlandscape reveals that MuDSC enables the merged model to reside in the\noverlapping segment, featuring a unified lower loss for each task. Our code is\npublicly available at https://github.com/zju-vipa/training_free_model_merging.", + "Promptly identifying procedural errors from egocentric videos in an online\nsetting is highly challenging and valuable for detecting mistakes as soon as\nthey happen. This capability has a wide range of applications across various\nfields, such as manufacturing and healthcare. The nature of procedural mistakes\nis open-set since novel types of failures might occur, which calls for\none-class classifiers trained on correctly executed procedures. However, no\ntechnique can currently detect open-set procedural mistakes online. We propose\nPREGO, the first online one-class classification model for mistake detection in\nPRocedural EGOcentric videos. PREGO is based on an online action recognition\ncomponent to model the current action, and a symbolic reasoning module to\npredict the next actions. Mistake detection is performed by comparing the\nrecognized current action with the expected future one. We evaluate PREGO on\ntwo procedural egocentric video datasets, Assembly101 and Epic-tent, which we\nadapt for online benchmarking of procedural mistake detection to establish\nsuitable benchmarks, thus defining the Assembly101-O and Epic-tent-O datasets,\nrespectively.", + "We introduce ChatPose, a framework employing Large Language Models (LLMs) to\nunderstand and reason about 3D human poses from images or textual descriptions.\nOur work is motivated by the human ability to intuitively understand postures\nfrom a single image or a brief description, a process that intertwines image\ninterpretation, world knowledge, and an understanding of body language.\nTraditional human pose estimation and generation methods often operate in\nisolation, lacking semantic understanding and reasoning abilities. 
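A small sketch of the dual-space matching idea described in the MuDSC abstract above: a weight-space and an activation-space similarity matrix are linearly combined before solving for a unit permutation. The assignment step uses a standard Hungarian solver as a stand-in; the mixing weight `alpha` and the random similarity matrices are assumptions, not the paper's actual optimization.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def dual_space_permutation(weight_sim, act_sim, alpha=0.5):
    """Find a unit permutation that scores highly in BOTH weight and activation space.

    weight_sim, act_sim: (n, n) unit-similarity matrices between two models.
    alpha: weight of the activation-space term (assumed hyperparameter).
    """
    combined = (1 - alpha) * weight_sim + alpha * act_sim
    rows, cols = linear_sum_assignment(-combined)   # maximise total combined similarity
    perm = np.zeros_like(combined)
    perm[rows, cols] = 1.0
    return perm

n = 4
w_sim, a_sim = np.random.rand(n, n), np.random.rand(n, n)
print(dual_space_permutation(w_sim, a_sim))
```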
ChatPose\naddresses these limitations by embedding SMPL poses as distinct signal tokens\nwithin a multimodal LLM, enabling the direct generation of 3D body poses from\nboth textual and visual inputs. Leveraging the powerful capabilities of\nmultimodal LLMs, ChatPose unifies classical 3D human pose and generation tasks\nwhile offering user interactions. Additionally, ChatPose empowers LLMs to apply\ntheir extensive world knowledge in reasoning about human poses, leading to two\nadvanced tasks: speculative pose generation and reasoning about pose\nestimation. These tasks involve reasoning about humans to generate 3D poses\nfrom subtle text queries, possibly accompanied by images.", + "These tasks involve reasoning about humans to generate 3D poses\nfrom subtle text queries, possibly accompanied by images. We establish\nbenchmarks for these tasks, moving beyond traditional 3D pose generation and\nestimation methods. Our results show that ChatPose outperforms existing\nmultimodal LLMs and task-specific methods on these newly proposed tasks.\nFurthermore, ChatPose's ability to understand and generate 3D human poses based\non complex reasoning opens new directions in human pose analysis.", + "Knowledge distillation involves transferring soft labels from a teacher to a\nstudent using a shared temperature-based softmax function. However, the\nassumption of a shared temperature between teacher and student implies a\nmandatory exact match between their logits in terms of logit range and\nvariance. This side-effect limits the performance of student, considering the\ncapacity discrepancy between them and the finding that the innate logit\nrelations of teacher are sufficient for student to learn. To address this\nissue, we propose setting the temperature as the weighted standard deviation of\nlogit and performing a plug-and-play Z-score pre-process of logit\nstandardization before applying softmax and Kullback-Leibler divergence. Our\npre-process enables student to focus on essential logit relations from teacher\nrather than requiring a magnitude match, and can improve the performance of\nexisting logit-based distillation methods. We also show a typical case where\nthe conventional setting of sharing temperature between teacher and student\ncannot reliably yield the authentic distillation evaluation; nonetheless, this\nchallenge is successfully alleviated by our Z-score. We extensively evaluate\nour method for various student and teacher models on CIFAR-100 and ImageNet,\nshowing its significant superiority.", + "We extensively evaluate\nour method for various student and teacher models on CIFAR-100 and ImageNet,\nshowing its significant superiority. The vanilla knowledge distillation powered\nby our pre-process can achieve favorable performance against state-of-the-art\nmethods, and other distillation variants can obtain considerable gain with the\nassistance of our pre-process.", + "The scarcity of ground-truth labels poses one major challenge in developing\noptical flow estimation models that are both generalizable and robust. While\ncurrent methods rely on data augmentation, they have yet to fully exploit the\nrich information available in labeled video sequences. We propose OCAI, a\nmethod that supports robust frame interpolation by generating intermediate\nvideo frames alongside optical flows in between. 
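The logit-standardization pre-process described above can be sketched as follows, assuming a plain per-sample Z-score rather than the paper's weighted-standard-deviation temperature; `base_temp` and the random logits are placeholders.

```python
import torch
import torch.nn.functional as F

def zscore_kd_loss(student_logits, teacher_logits, base_temp=2.0, eps=1e-6):
    """Knowledge-distillation loss with a Z-score pre-process of the logits, so the
    student only has to match the teacher's logit relations, not their magnitude."""
    def standardize(z):
        return (z - z.mean(dim=-1, keepdim=True)) / (z.std(dim=-1, keepdim=True) + eps)
    s = standardize(student_logits) / base_temp
    t = standardize(teacher_logits) / base_temp
    return F.kl_div(F.log_softmax(s, dim=-1), F.softmax(t, dim=-1), reduction="batchmean")

# Teacher logits with a much larger range than the student's still yield a sensible loss.
print(float(zscore_kd_loss(torch.randn(4, 10), torch.randn(4, 10) * 5)))
```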
Utilizing a forward warping\napproach, OCAI employs occlusion awareness to resolve ambiguities in pixel\nvalues and fills in missing values by leveraging the forward-backward\nconsistency of optical flows. Additionally, we introduce a teacher-student\nstyle semi-supervised learning method on top of the interpolated frames. Using\na pair of unlabeled frames and the teacher model's predicted optical flow, we\ngenerate interpolated frames and flows to train a student model. The teacher's\nweights are maintained using Exponential Moving Averaging of the student. Our\nevaluations demonstrate perceptually superior interpolation quality and\nenhanced optical flow accuracy on established benchmarks such as Sintel and\nKITTI.", + "Diffusion models have recently gained prominence as a novel category\nof generative models. Despite their success, these models face a notable\ndrawback in terms of slow sampling speeds, requiring a high number of function\nevaluations (NFE) in the order of hundreds or thousands. In response, both\nlearning-free and learning-based sampling strategies have been explored to\nexpedite the sampling process. Learning-free sampling employs various ordinary\ndifferential equation (ODE) solvers based on the formulation of diffusion ODEs.\nHowever, it encounters challenges in faithfully tracking the true sampling\ntrajectory, particularly for small NFE. Conversely, learning-based sampling\nmethods, such as knowledge distillation, demand extensive additional training,\nlimiting their practical applicability. To overcome these limitations, we\nintroduce Distilled-ODE solvers (D-ODE solvers), a straightforward distillation\napproach grounded in ODE solver formulations. Our method seamlessly integrates\nthe strengths of both learning-free and learning-based sampling. D-ODE solvers\nare constructed by introducing a single parameter adjustment to existing ODE\nsolvers.", + "Our method seamlessly integrates\nthe strengths of both learning-free and learning-based sampling. D-ODE solvers\nare constructed by introducing a single parameter adjustment to existing ODE\nsolvers. Furthermore, we optimize D-ODE solvers with smaller steps using\nknowledge distillation from ODE solvers with larger steps across a batch of\nsamples. Comprehensive experiments demonstrate the superior performance of\nD-ODE solvers compared to existing ODE solvers, including DDIM, PNDM,\nDPM-Solver, DEIS, and EDM, particularly in scenarios with fewer NFE. Notably,\nour method incurs negligible computational overhead compared to previous\ndistillation techniques, facilitating straightforward and rapid integration\nwith existing samplers. Qualitative analysis reveals that D-ODE solvers not\nonly enhance image quality but also faithfully follow the target ODE\ntrajectory.", + "Deep learning has led to a dramatic leap in Single Image Super-Resolution\n(SISR) performance in recent years. While most existing work assumes a simple and fixed degradation model (e.g.,\nbicubic downsampling), research on Blind SR seeks to improve model\ngeneralization ability with unknown degradations. Recently, Kong et al.\npioneered the investigation of a more suitable training strategy for Blind SR using\nDropout. Although this method indeed brings substantial generalization\nimprovements by mitigating overfitting, we argue that Dropout simultaneously\nintroduces an undesirable side-effect that compromises the model's capacity to\nfaithfully reconstruct fine details.
We show both the theoretical and\nexperimental analyses in our paper, and furthermore, we present another easy\nyet effective training strategy that enhances the generalization ability of the\nmodel by simply modulating its first and second-order features statistics.\nExperimental results have shown that our method could serve as a model-agnostic\nregularization and outperforms Dropout on seven benchmark datasets including\nboth synthetic and real-world scenarios.", + "In this paper, we democratise 3D content creation, enabling precise\ngeneration of 3D shapes from abstract sketches while overcoming limitations\ntied to drawing skills. We introduce a novel part-level modelling and alignment\nframework that facilitates abstraction modelling and cross-modal\ncorrespondence. Leveraging the same part-level decoder, our approach seamlessly\nextends to sketch modelling by establishing correspondence between CLIPasso\nedgemaps and projected 3D part regions, eliminating the need for a dataset\npairing human sketches and 3D shapes. Additionally, our method introduces a\nseamless in-position editing process as a byproduct of cross-modal part-aligned\nmodelling. Operating in a low-dimensional implicit space, our approach\nsignificantly reduces computational demands and processing time.", + "We introduce LightIt, a method for explicit illumination control for image\ngeneration. Recent generative methods lack lighting control, which is crucial\nto numerous artistic aspects of image generation such as setting the overall\nmood or cinematic appearance. To overcome these limitations, we propose to\ncondition the generation on shading and normal maps. We model the lighting with\nsingle bounce shading, which includes cast shadows. We first train a shading\nestimation module to generate a dataset of real-world images and shading pairs.\nThen, we train a control network using the estimated shading and normals as\ninput. Our method demonstrates high-quality image generation and lighting\ncontrol in numerous scenes. Additionally, we use our generated dataset to train\nan identity-preserving relighting model, conditioned on an image and a target\nshading. Our method is the first that enables the generation of images with\ncontrollable, consistent lighting and performs on par with specialized\nrelighting state-of-the-art methods.", + "Refractive Index Tomography is the inverse problem of reconstructing the\ncontinuously-varying 3D refractive index in a scene using 2D projected image\nmeasurements. Although a purely refractive field is not directly visible, it\nbends light rays as they travel through space, thus providing a signal for\nreconstruction. The effects of such fields appear in many scientific computer\nvision settings, ranging from refraction due to transparent cells in microscopy\nto the lensing of distant galaxies caused by dark matter in astrophysics.\nReconstructing these fields is particularly difficult due to the complex\nnonlinear effects of the refractive field on observed images. Furthermore,\nwhile standard 3D reconstruction and tomography settings typically have access\nto observations of the scene from many viewpoints, many refractive index\ntomography problem settings only have access to images observed from a single\nviewpoint. 
We introduce a method that leverages prior knowledge of light\nsources scattered throughout the refractive medium to help disambiguate the\nsingle-view refractive index tomography problem.", + "We introduce a method that leverages prior knowledge of light\nsources scattered throughout the refractive medium to help disambiguate the\nsingle-view refractive index tomography problem. We differentiably trace curved\nrays through a neural field representation of the refractive field, and\noptimize its parameters to best reproduce the observed image. We demonstrate\nthe efficacy of our approach by reconstructing simulated refractive fields,\nanalyze the effects of light source distribution on the recovered field, and\ntest our method on a simulated dark matter mapping problem where we\nsuccessfully recover the 3D refractive field caused by a realistic dark matter\ndistribution.", + "In this paper, we present RStab, a novel framework for video stabilization\nthat integrates 3D multi-frame fusion through volume rendering. Departing from\nconventional methods, we introduce a 3D multi-frame perspective to generate\nstabilized images, addressing the challenge of full-frame generation while\npreserving structure. The core of our RStab framework lies in Stabilized Rendering\n(SR), a volume rendering module that extends beyond image fusion by\nincorporating feature fusion, fusing multi-frame information in 3D space. Specifically, SR involves warping features and colors\nfrom multiple frames by projection, fusing them into descriptors to render the\nstabilized image. However, the precision of warped information depends on the\nprojection accuracy, a factor significantly influenced by dynamic regions. In\nresponse, we introduce the Adaptive Ray Range (ARR) module to integrate depth\npriors, adaptively defining the sampling range for the projection process.\nAdditionally, we propose Color Correction (CC), which assists the geometric constraints\nwith optical flow for accurate color aggregation.", + "In\nresponse, we introduce the Adaptive Ray Range (ARR) module to integrate depth\npriors, adaptively defining the sampling range for the projection process.\nAdditionally, we propose Color Correction (CC), which assists the geometric constraints\nwith optical flow for accurate color aggregation. Thanks to the three modules,\nour RStab demonstrates superior performance compared with previous stabilizers\nin terms of field of view (FOV), image quality, and video stability across various\ndatasets.", + "Rotation invariance is an important requirement for point shape analysis. To\nachieve this, current state-of-the-art methods attempt to construct the local\nrotation-invariant representation through learning or defining the local\nreference frame (LRF). Although efficient, these LRF-based methods suffer from\nperturbation of local geometric relations, resulting in suboptimal local\nrotation invariance. To alleviate this issue, we propose a Local-consistent\nTransformation (LocoTrans) learning strategy. Specifically, we first construct\nthe local-consistent reference frame (LCRF) by considering the symmetry of the\ntwo axes in LRF. In comparison with previous LRFs, our LCRF is able to preserve\nlocal geometric relationships better through performing local-consistent\ntransformation. However, as the consistency only exists in local regions, the\nrelative pose information is still lost in the intermediate layers of the\nnetwork.
We mitigate such a relative pose issue by developing a relative pose\nrecovery (RPR) module. RPR aims to restore the relative pose between adjacent\ntransformed patches.", + "We mitigate such a relative pose issue by developing a relative pose\nrecovery (RPR) module. RPR aims to restore the relative pose between adjacent\ntransformed patches. Equipped with LCRF and RPR, our LocoTrans is capable of\nlearning local-consistent transformation and preserving local geometry, which\nbenefits rotation invariance learning. Competitive performance under arbitrary\nrotations on both shape classification and part segmentation tasks and\nablations can demonstrate the effectiveness of our method. Code will be\navailable publicly at https://github.com/wdttt/LocoTrans.", + "Despite significant progress in the field, it is still challenging to create\npersonalized visual representations that align closely with the desires and\npreferences of individual users. This process requires users to articulate\ntheir ideas in words that are both comprehensible to the models and accurately\ncapture their vision, posing difficulties for many users. In this paper, we\ntackle this challenge by leveraging historical user interactions with the\nsystem to enhance user prompts. We propose a novel approach that involves\nrewriting user prompts based on a newly collected large-scale text-to-image\ndataset with over 300k prompts from 3115 users. Our rewriting model enhances\nthe expressiveness and alignment of user prompts with their intended visual\noutputs. Experimental results demonstrate the superiority of our methods over\nbaseline approaches, as evidenced in our new offline evaluation method and\nonline tests. Our code and dataset are available at\nhttps://github.com/zzjchen/Tailored-Visions.", + "We introduce Deformable Convolution v4 (DCNv4), a highly efficient and\neffective operator designed for a broad spectrum of vision applications. DCNv4\naddresses the limitations of its predecessor, DCNv3, with two key enhancements:\n1. removing softmax normalization in spatial aggregation to enhance its dynamic\nproperty and expressive power and 2. optimizing memory access to minimize\nredundant operations for speedup. These improvements result in a significantly\nfaster convergence compared to DCNv3 and a substantial increase in processing\nspeed, with DCNv4 achieving more than three times the forward speed. DCNv4\ndemonstrates exceptional performance across various tasks, including image\nclassification, instance and semantic segmentation, and notably, image\ngeneration. When integrated into generative models like U-Net in the latent\ndiffusion model, DCNv4 outperforms its baseline, underscoring its possibility\nto enhance generative models. In practical applications, replacing DCNv3 with\nDCNv4 in the InternImage model to create FlashInternImage results in up to 80%\nspeed increase and further performance improvement without further\nmodifications.", + "In practical applications, replacing DCNv3 with\nDCNv4 in the InternImage model to create FlashInternImage results in up to 80%\nspeed increase and further performance improvement without further\nmodifications. The advancements in speed and efficiency of DCNv4, combined with\nits robust performance across diverse vision tasks, show its potential as a\nfoundational building block for future vision models.", + "Understanding and reasoning about spatial relationships is a fundamental\ncapability for Visual Question Answering (VQA) and robotics. 
While Vision\nLanguage Models (VLM) have demonstrated remarkable performance in certain VQA\nbenchmarks, they still lack capabilities in 3D spatial reasoning, such as\nrecognizing quantitative relationships of physical objects like distances or\nsize differences. We hypothesize that VLMs' limited spatial reasoning\ncapability is due to the lack of 3D spatial knowledge in training data and aim\nto solve this problem by training VLMs with Internet-scale spatial reasoning\ndata. To this end, we present a system to facilitate this approach. We first\ndevelop an automatic 3D spatial VQA data generation framework that scales up to\n2 billion VQA examples on 10 million real-world images. We then investigate\nvarious factors in the training recipe, including data quality, training\npipeline, and VLM architecture. Our work features the first internet-scale 3D\nspatial reasoning dataset in metric space. By training a VLM on such data, we\nsignificantly enhance its ability on both qualitative and quantitative spatial\nVQA.", + "Our work features the first internet-scale 3D\nspatial reasoning dataset in metric space. By training a VLM on such data, we\nsignificantly enhance its ability on both qualitative and quantitative spatial\nVQA. Finally, we demonstrate that this VLM unlocks novel downstream\napplications in chain-of-thought spatial reasoning and robotics due to its\nquantitative estimation capability. Project website:\nhttps://spatial-vlm.github.io/", + "We present InstructDiffusion, a unifying and generic framework for aligning\ncomputer vision tasks with human instructions. Unlike existing approaches that\nintegrate prior knowledge and pre-define the output space (e.g., categories and\ncoordinates) for each vision task, we cast diverse vision tasks into a\nhuman-intuitive image-manipulating process whose output space is a flexible and\ninteractive pixel space. Concretely, the model is built upon the diffusion\nprocess and is trained to predict pixels according to user instructions, such\nas encircling the man's left shoulder in red or applying a blue mask to the\nleft car. InstructDiffusion could handle a variety of vision tasks, including\nunderstanding tasks (such as segmentation and keypoint detection) and\ngenerative tasks (such as editing and enhancement). It even exhibits the\nability to handle unseen tasks and outperforms prior methods on novel datasets.\nThis represents a significant step towards a generalist modeling interface for\nvision tasks, advancing artificial general intelligence in the field of\ncomputer vision.", + "Customized generation using diffusion models has made impressive progress in\nimage generation, but remains unsatisfactory in the challenging video\ngeneration task, as it requires the controllability of both subjects and\nmotions. To that end, we present DreamVideo, a novel approach to generating\npersonalized videos from a few static images of the desired subject and a few\nvideos of target motion. DreamVideo decouples this task into two stages,\nsubject learning and motion learning, by leveraging a pre-trained video\ndiffusion model. The subject learning aims to accurately capture the fine\nappearance of the subject from provided images, which is achieved by combining\ntextual inversion and fine-tuning of our carefully designed identity adapter.\nIn motion learning, we architect a motion adapter and fine-tune it on the given\nvideos to effectively model the target motion pattern. 
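As a hedged illustration of what a lightweight module like DreamVideo's motion adapter might look like, here is a generic zero-initialized bottleneck adapter added residually on top of frozen backbone features; the hidden width, feature shape, and placement are assumptions, and the actual adapter design may differ.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """A lightweight residual adapter: only these few parameters would be fine-tuned
    on the user's motion videos while the large pre-trained backbone stays frozen."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.down, self.up, self.act = nn.Linear(dim, hidden), nn.Linear(hidden, dim), nn.GELU()
        nn.init.zeros_(self.up.weight)   # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

frozen_features = torch.randn(2, 16, 320)   # hypothetical temporal-layer features
print(BottleneckAdapter(320)(frozen_features).shape)
```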
Combining these two\nlightweight and efficient adapters allows for flexible customization of any\nsubject with any motion. Extensive experimental results demonstrate the\nsuperior performance of our DreamVideo over the state-of-the-art methods for\ncustomized video generation. Our project page is at\nhttps://dreamvideo-t2v.github.io.", + "Reconstructing outdoor 3D scenes from temporal observations is a challenge\nthat recent work on neural fields has offered a new avenue for. However,\nexisting methods that recover scene properties, such as geometry, appearance,\nor radiance, solely from RGB captures often fail when handling poorly-lit or\ntexture-deficient regions. Similarly, recovering scenes with scanning LiDAR\nsensors is also difficult due to their low angular sampling rate which makes\nrecovering expansive real-world scenes difficult. Tackling these gaps, we\nintroduce Gated Fields - a neural scene reconstruction method that utilizes\nactive gated video sequences. To this end, we propose a neural rendering\napproach that seamlessly incorporates time-gated capture and illumination. Our\nmethod exploits the intrinsic depth cues in the gated videos, achieving precise\nand dense geometry reconstruction irrespective of ambient illumination\nconditions. We validate the method across day and night scenarios and find that\nGated Fields compares favorably to RGB and LiDAR reconstruction methods. Our\ncode and datasets are available at https://light.princeton.edu/gatedfields/.", + "The inherent noisy and sparse characteristics of radar data pose challenges\nin finding effective representations for 3D object detection. In this paper, we\npropose RadarDistill, a novel knowledge distillation (KD) method, which can\nimprove the representation of radar data by leveraging LiDAR data. RadarDistill\nsuccessfully transfers desirable characteristics of LiDAR features into radar\nfeatures using three key components: Cross-Modality Alignment (CMA),\nActivation-based Feature Distillation (AFD), and Proposal-based Feature\nDistillation (PFD). CMA enhances the density of radar features by employing\nmultiple layers of dilation operations, effectively addressing the challenge of\ninefficient knowledge transfer from LiDAR to radar. AFD selectively transfers\nknowledge based on regions of the LiDAR features, with a specific focus on\nareas where activation intensity exceeds a predefined threshold. PFD similarly\nguides the radar network to selectively mimic features from the LiDAR network\nwithin the object proposals.", + "AFD selectively transfers\nknowledge based on regions of the LiDAR features, with a specific focus on\nareas where activation intensity exceeds a predefined threshold. PFD similarly\nguides the radar network to selectively mimic features from the LiDAR network\nwithin the object proposals. Our comparative analyses conducted on the nuScenes\ndatasets demonstrate that RadarDistill achieves state-of-the-art (SOTA)\nperformance for radar-only object detection task, recording 20.5% in mAP and\n43.7% in NDS. Also, RadarDistill significantly improves the performance of the\ncamera-radar fusion model.", + "Parameter-efficient transfer learning (PETL), i.e., fine-tuning a small\nportion of parameters, is an effective strategy for adapting pre-trained models\nto downstream domains. To further reduce the memory demand, recent PETL works\nfocus on the more valuable memory-efficient characteristic. 
In this paper, we\nargue that the scalability, adaptability, and generalizability of\nstate-of-the-art methods are hindered by structural dependency and pertinency\non specific pre-trained backbones. To this end, we propose a new\nmemory-efficient PETL strategy, Universal Parallel Tuning (UniPT), to mitigate\nthese weaknesses. Specifically, we facilitate the transfer process via a\nlightweight and learnable parallel network, which consists of: 1) A parallel\ninteraction module that decouples the sequential connections and processes the\nintermediate activations detachedly from the pre-trained network. 2) A\nconfidence aggregation module that learns optimal strategies adaptively for\nintegrating cross-layer features.", + "2) A\nconfidence aggregation module that learns optimal strategies adaptively for\nintegrating cross-layer features. We evaluate UniPT with different backbones\n(e.g., T5, VSE$\\infty$, CLIP4Clip, Clip-ViL, and MDETR) on various\nvision-and-language and pure NLP tasks. Extensive ablations on 18 datasets have\nvalidated that UniPT can not only dramatically reduce memory consumption and\noutperform the best competitor, but also achieve competitive performance over\nother plain PETL methods with lower training memory overhead. Our code is\npublicly available at: https://github.com/Paranioar/UniPT.", + "Composed video retrieval (CoVR) is a challenging problem in computer vision\nwhich has recently highlighted the integration of modification text with visual\nqueries for more sophisticated video search in large databases. Existing works\npredominantly rely on visual queries combined with modification text to\ndistinguish relevant videos. However, such a strategy struggles to fully\npreserve the rich query-specific context in retrieved target videos and only\nrepresents the target video using visual embedding. We introduce a novel CoVR\nframework that leverages detailed language descriptions to explicitly encode\nquery-specific contextual information and learns discriminative embeddings of\nvision only, text only and vision-text for better alignment to accurately\nretrieve matched target videos. Our proposed framework can be flexibly employed\nfor both composed video (CoVR) and image (CoIR) retrieval tasks. Experiments on\nthree datasets show that our approach obtains state-of-the-art performance for\nboth CovR and zero-shot CoIR tasks, achieving gains as high as around 7% in\nterms of recall@K=1 score. Our code, models, detailed language descriptions for\nWebViD-CoVR dataset are available at\n\\url{https://github.com/OmkarThawakar/composed-video-retrieval}", + "Using reinforcement learning with human feedback (RLHF) has shown significant\npromise in fine-tuning diffusion models. Previous methods start by training a\nreward model that aligns with human preferences, then leverage RL techniques to\nfine-tune the underlying models. However, crafting an efficient reward model\ndemands extensive datasets, optimal architecture, and manual hyperparameter\ntuning, making the process both time and cost-intensive. The direct preference\noptimization (DPO) method, effective in fine-tuning large language models,\neliminates the necessity for a reward model. However, the extensive GPU memory\nrequirement of the diffusion model's denoising process hinders the direct\napplication of the DPO method. To address this issue, we introduce the Direct\nPreference for Denoising Diffusion Policy Optimization (D3PO) method to\ndirectly fine-tune diffusion models. 
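For reference, the standard DPO objective that D3PO builds on can be written compactly as below; this is the generic formulation on a (preferred, dispreferred) pair, not D3PO's per-denoising-step adaptation, and the scalar log-likelihoods are toy values.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct-preference loss on a (preferred, dispreferred) sample pair.

    logp_*     : log-likelihoods under the model being fine-tuned
    ref_logp_* : log-likelihoods under the frozen reference model
    beta       : strength of the implicit KL regularisation toward the reference
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

print(float(dpo_loss(torch.tensor([-4.0]), torch.tensor([-5.0]),
                     torch.tensor([-4.2]), torch.tensor([-4.8]))))
```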
The theoretical analysis demonstrates that\nalthough D3PO omits training a reward model, it effectively functions as the\noptimal reward model trained using human feedback data to guide the learning\nprocess. This approach requires no training of a reward model, proving to be\nmore direct, cost-effective, and minimizing computational overhead.", + "This approach requires no training of a reward model, proving to be\nmore direct, cost-effective, and minimizing computational overhead. In\nexperiments, our method uses the relative scale of objectives as a proxy for\nhuman preference, delivering comparable results to methods using ground-truth\nrewards. Moreover, D3PO demonstrates the ability to reduce image distortion\nrates and generate safer images, overcoming challenges lacking robust reward\nmodels. Our code is publicly available at https://github.com/yk7333/D3PO.", + "Despite the commercial abundance of UAVs, aerial data acquisition remains\nchallenging, and the existing Asia and North America-centric open-source UAV\ndatasets are small-scale or low-resolution and lack diversity in scene\ncontextuality. Additionally, the color content of the scenes, solar-zenith\nangle, and population density of different geographies influence the data\ndiversity. These two factors conjointly render suboptimal aerial-visual\nperception of the deep neural network (DNN) models trained primarily on the\nground-view data, including the open-world foundational models.\n To pave the way for a transformative era of aerial detection, we present\nMultiview Aerial Visual RECognition or MAVREC, a video dataset where we record\nsynchronized scenes from different perspectives -- ground camera and\ndrone-mounted camera. MAVREC consists of around 2.5 hours of industry-standard\n2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million\nannotated bounding boxes. This makes MAVREC the largest ground and aerial-view\ndataset, and the fourth largest among all drone-based datasets across all\nmodalities and tasks.", + "This makes MAVREC the largest ground and aerial-view\ndataset, and the fourth largest among all drone-based datasets across all\nmodalities and tasks. Through our extensive benchmarking on MAVREC, we\nrecognize that augmenting object detectors with ground-view images from the\ncorresponding geographical location is a superior pre-training strategy for\naerial detection. Building on this strategy, we benchmark MAVREC with a\ncurriculum-based semi-supervised object detection approach that leverages\nlabeled (ground and aerial) and unlabeled (only aerial) images to enhance the\naerial detection. We publicly release the MAVREC dataset:\nhttps://mavrec.github.io.", + "We present a new multi-modal face image generation method that converts a\ntext prompt and a visual input, such as a semantic mask or scribble map, into a\nphoto-realistic face image. To do this, we combine the strengths of Generative\nAdversarial networks (GANs) and diffusion models (DMs) by employing the\nmulti-modal features in the DM into the latent space of the pre-trained GANs.\nWe present a simple mapping and a style modulation network to link two models\nand convert meaningful representations in feature maps and attention maps into\nlatent codes. With GAN inversion, the estimated latent codes can be used to\ngenerate 2D or 3D-aware facial images. We further present a multi-step training\nstrategy that reflects textual and structural representations into the\ngenerated image. 
Our proposed network produces realistic 2D, multi-view, and\nstylized face images, which align well with the inputs. We validate our method by\nusing pre-trained 2D and 3D GANs, and our results outperform existing methods.\nOur project page is available at\nhttps://github.com/1211sh/Diffusion-driven_GAN-Inversion/.", + "The popularity of large-scale pre-training has promoted the development of\nmedical foundation models. However, some studies have shown that although\nfoundation models exhibit strong general feature extraction capabilities, their\nperformance on specific tasks is still inferior to task-specific methods. In\nthis paper, we explore a new perspective called ``Knowledge Decomposition'' to\nimprove the performance on specific medical tasks, which deconstructs the\nfoundation model into multiple lightweight expert models, each dedicated to a\nparticular task, with the goal of improving specialization while concurrently\nmitigating resource expenditure. To accomplish the above objective, we design a\nnovel framework named Low-Rank Knowledge Decomposition (LoRKD), which\nexplicitly separates gradients by incorporating low-rank expert modules and the\nefficient knowledge separation convolution. Extensive experimental results\ndemonstrate that the decomposed models achieve strong performance and\ntransferability, even surpassing the original foundation models.", + "Ensuring the legal usage of deep models is crucial to promoting trustworthy,\naccountable, and responsible artificial intelligence innovation. Current\npassport-based methods that obfuscate model functionality for license-to-use\nand ownership verifications suffer from capacity and quality constraints, as\nthey require retraining the owner model for new users. They are also vulnerable\nto advanced Expanded Residual Block ambiguity attacks. We propose\nSteganographic Passport, which uses an invertible steganographic network to\ndecouple license-to-use from ownership verification by hiding the user's\nidentity images into the owner-side passport and recovering them from their\nrespective user-side passports. An irreversible and collision-resistant hash\nfunction is used to prevent the owner-side passport from being exposed through\nthe derived user-side passports and to increase the uniqueness of the model signature. To\nsafeguard both the passport and the model's weights against advanced ambiguity\nattacks, an activation-level obfuscation is proposed for the verification\nbranch of the owner's model. By jointly training the verification and\ndeployment branches, their weights become tightly coupled.", + "To\nsafeguard both the passport and the model's weights against advanced ambiguity\nattacks, an activation-level obfuscation is proposed for the verification\nbranch of the owner's model. By jointly training the verification and\ndeployment branches, their weights become tightly coupled. The proposed method\nsupports agile licensing of deep models by providing a strong ownership proof\nand license accountability without requiring a separate model retraining for\nthe admission of every new user. Experimental results show that our\nSteganographic Passport outperforms other passport-based deep model protection\nmethods in robustness against various known attacks.", + "We present En3D, an enhanced generative scheme for sculpting high-quality 3D\nhuman avatars.
Unlike previous works that rely on scarce 3D datasets or limited\n2D collections with imbalanced viewing angles and imprecise pose priors, our\napproach aims to develop a zero-shot 3D generative scheme capable of producing\nvisually realistic, geometrically accurate and content-wise diverse 3D humans\nwithout relying on pre-existing 3D or 2D assets. To address this challenge, we\nintroduce a meticulously crafted workflow that implements accurate physical\nmodeling to learn the enhanced 3D generative model from synthetic 2D data.\nDuring inference, we integrate optimization modules to bridge the gap between\nrealistic appearances and coarse 3D shapes.", + "To address this challenge, we\nintroduce a meticulously crafted workflow that implements accurate physical\nmodeling to learn the enhanced 3D generative model from synthetic 2D data.\nDuring inference, we integrate optimization modules to bridge the gap between\nrealistic appearances and coarse 3D shapes. Specifically, En3D comprises three\nmodules: a 3D generator that accurately models generalizable 3D humans with\nrealistic appearance from synthesized balanced, diverse, and structured human\nimages; a geometry sculptor that enhances shape quality using multi-view normal\nconstraints for intricate human anatomy; and a texturing module that\ndisentangles explicit texture maps with fidelity and editability, leveraging\nsemantical UV partitioning and a differentiable rasterizer. Experimental\nresults show that our approach significantly outperforms prior works in terms\nof image quality, geometry accuracy and content diversity. We also showcase the\napplicability of our generated avatars for animation and editing, as well as\nthe scalability of our approach for content-style free adaptation.", + "Depth completion is a vital task for autonomous driving, as it involves\nreconstructing the precise 3D geometry of a scene from sparse and noisy depth\nmeasurements. However, most existing methods either rely only on 2D depth\nrepresentations or directly incorporate raw 3D point clouds for compensation,\nwhich are still insufficient to capture the fine-grained 3D geometry of the\nscene. To address this challenge, we introduce Tri-Perspective view\nDecomposition (TPVD), a novel framework that can explicitly model 3D geometry.\nIn particular, (1) TPVD ingeniously decomposes the original point cloud into\nthree 2D views, one of which corresponds to the sparse depth input. (2) We\ndesign TPV Fusion to update the 2D TPV features through recurrent 2D-3D-2D\naggregation, where a Distance-Aware Spherical Convolution (DASC) is applied.\n(3) By adaptively choosing TPV affinitive neighbors, the newly proposed\nGeometric Spatial Propagation Network (GSPN) further improves the geometric\nconsistency.", + "(3) By adaptively choosing TPV affinitive neighbors, the newly proposed\nGeometric Spatial Propagation Network (GSPN) further improves the geometric\nconsistency. As a result, our TPVD outperforms existing methods on KITTI,\nNYUv2, and SUN RGBD. Furthermore, we build a novel depth completion dataset\nnamed TOFDC, which is acquired by the time-of-flight (TOF) sensor and the color\ncamera on smartphones. Project page:\nhttps://yanzq95.github.io/projectpage/TOFDC/index.html", + "Adversarial training is extensively utilized to improve the adversarial\nrobustness of deep neural networks. Yet, mitigating the degradation of standard\ngeneralization performance in adversarial-trained models remains an open\nproblem. 
This paper attempts to resolve this issue through the lens of model\ncomplexity. First, we leverage the Fisher-Rao norm, a geometrically invariant\nmetric for model complexity, to establish the non-trivial bounds of the\nCross-Entropy Loss-based Rademacher complexity for a ReLU-activated Multi-Layer\nPerceptron. Then we generalize a complexity-related variable, which is\nsensitive to the changes in model width and the trade-off factors in\nadversarial training. Moreover, intensive empirical evidence validates that\nthis variable highly correlates with the generalization gap of Cross-Entropy\nloss between adversarial-trained and standard-trained models, especially during\nthe initial and final phases of the training process. Building upon this\nobservation, we propose a novel regularization framework, called Logit-Oriented\nAdversarial Training (LOAT), which can mitigate the trade-off between\nrobustness and accuracy while imposing only a negligible increase in\ncomputational overhead.", + "Building upon this\nobservation, we propose a novel regularization framework, called Logit-Oriented\nAdversarial Training (LOAT), which can mitigate the trade-off between\nrobustness and accuracy while imposing only a negligible increase in\ncomputational overhead. Our extensive experiments demonstrate that the proposed\nregularization strategy can boost the performance of the prevalent adversarial\ntraining algorithms, including PGD-AT, TRADES, TRADES (LSE), MART, and DM-AT,\nacross various network architectures. Our code will be available at\nhttps://github.com/TrustAI/LOAT.", + "To synthesize high-fidelity samples, diffusion models typically require\nauxiliary data to guide the generation process. However, it is impractical to\nprocure the painstaking patch-level annotations required in specialized\ndomains like histopathology and satellite imagery; annotation is often performed by\ndomain experts and involves hundreds of millions of patches. Modern-day\nself-supervised learning (SSL) representations encode rich semantic and visual\ninformation. In this paper, we posit that such representations are expressive\nenough to act as proxies for fine-grained human labels. We introduce a novel\napproach that trains diffusion models conditioned on embeddings from SSL. Our\ndiffusion models successfully project these features back to high-quality\nhistopathology and remote sensing images. In addition, we construct larger\nimages by assembling spatially consistent patches inferred from SSL embeddings,\npreserving long-range dependencies. Augmenting real data by generating\nvariations of real images improves downstream classifier accuracy for\npatch-level and larger, image-scale classification tasks. Our models are\neffective even on datasets not encountered during training, demonstrating their\nrobustness and generalizability. Generating images from learned embeddings is\nagnostic to the source of the embeddings.", + "Our models are\neffective even on datasets not encountered during training, demonstrating their\nrobustness and generalizability. Generating images from learned embeddings is\nagnostic to the source of the embeddings. The SSL embeddings used to generate a\nlarge image can either be extracted from a reference image, or sampled from an\nauxiliary model conditioned on any related modality (e.g. class labels, text,\ngenomic data).
As proof of concept, we introduce the text-to-large image\nsynthesis paradigm where we successfully synthesize large pathology and\nsatellite images out of text descriptions.", + "Existing text-to-image (T2I) diffusion models usually struggle in\ninterpreting complex prompts, especially those with quantity, object-attribute\nbinding, and multi-subject descriptions. In this work, we introduce a semantic\npanel as the middleware in decoding texts to images, supporting the generator\nto better follow instructions. The panel is obtained through arranging the\nvisual concepts parsed from the input text by the aid of large language models,\nand then injected into the denoising network as a detailed control signal to\ncomplement the text condition. To facilitate text-to-panel learning, we come up\nwith a carefully designed semantic formatting protocol, accompanied by a\nfully-automatic data preparation pipeline. Thanks to such a design, our\napproach, which we call Ranni, manages to enhance a pre-trained T2I generator\nregarding its textual controllability. More importantly, the introduction of\nthe generative middleware brings a more convenient form of interaction (i.e.,\ndirectly adjusting the elements in the panel or using language instructions)\nand further allows users to finely customize their generation, based on which\nwe develop a practical system and showcase its potential in continuous\ngeneration and chatting-based editing.", + "Our project page is at\nhttps://ranni-t2i.github.io/Ranni.", + "We propose a novel contrastive learning framework to effectively address the\nchallenges of data heterogeneity in federated learning. We first analyze the\ninconsistency of gradient updates across clients during local training and\nestablish its dependence on the distribution of feature representations,\nleading to the derivation of the supervised contrastive learning (SCL)\nobjective to mitigate local deviations. In addition, we show that a na\\\"ive\nadoption of SCL in federated learning leads to representation collapse,\nresulting in slow convergence and limited performance gains. To address this\nissue, we introduce a relaxed contrastive learning loss that imposes a\ndivergence penalty on excessively similar sample pairs within each class. This\nstrategy prevents collapsed representations and enhances feature\ntransferability, facilitating collaborative training and leading to significant\nperformance improvements. Our framework outperforms all existing federated\nlearning approaches by huge margins on the standard benchmarks through\nextensive experimental results.", + "We present a novel method for efficiently producing semi-dense matches across\nimages. Previous detector-free matcher LoFTR has shown remarkable matching\ncapability in handling large-viewpoint change and texture-poor scenarios but\nsuffers from low efficiency. We revisit its design choices and derive multiple\nimprovements for both efficiency and accuracy. One key observation is that\nperforming the transformer over the entire feature map is redundant due to\nshared local information, therefore we propose an aggregated attention\nmechanism with adaptive token selection for efficiency. Furthermore, we find\nspatial variance exists in LoFTR's fine correlation module, which is adverse to\nmatching accuracy. A novel two-stage correlation layer is proposed to achieve\naccurate subpixel correspondences for accuracy improvement. 
Our efficiency-optimized\nmodel is $\\sim 2.5\\times$ faster than LoFTR and can even surpass the\nstate-of-the-art efficient sparse matching pipeline SuperPoint + LightGlue.\nMoreover, extensive experiments show that our method can achieve higher\naccuracy compared with competitive semi-dense matchers, with considerable\nefficiency benefits. This opens up exciting prospects for large-scale or\nlatency-sensitive applications such as image retrieval and 3D reconstruction.", + "Moreover, extensive experiments show that our method can achieve higher\naccuracy compared with competitive semi-dense matchers, with considerable\nefficiency benefits. This opens up exciting prospects for large-scale or\nlatency-sensitive applications such as image retrieval and 3D reconstruction.\nProject page: https://zju3dv.github.io/efficientloftr.", + "Large-scale pre-trained vision-language models like CLIP have demonstrated\nimpressive performance across various tasks, and exhibit remarkable zero-shot\ngeneralization capability, while they are also vulnerable to imperceptible\nadversarial examples. Existing works typically employ adversarial training\n(fine-tuning) as a defense method against adversarial examples. However, direct\napplication to the CLIP model may result in overfitting, compromising the\nmodel's capacity for generalization. In this paper, we propose the Pre-trained\nModel Guided Adversarial Fine-Tuning (PMG-AFT) method, which leverages\nsupervision from the original pre-trained model by carefully designing an\nauxiliary branch, to enhance the model's zero-shot adversarial robustness.\nSpecifically, PMG-AFT minimizes the distance between the features of\nadversarial examples in the target model and those in the pre-trained model,\naiming to preserve the generalization features already captured by the\npre-trained model. Extensive experiments on 15 zero-shot datasets demonstrate\nthat PMG-AFT significantly outperforms the state-of-the-art method, improving\nthe top-1 robust accuracy by an average of 4.99%.", + "Extensive experiments on 15 zero-shot datasets demonstrate\nthat PMG-AFT significantly outperforms the state-of-the-art method, improving\nthe top-1 robust accuracy by an average of 4.99%. Furthermore, our approach\nconsistently improves clean accuracy by an average of 8.72%. Our code is\navailable at\nhttps://github.com/serendipity1122/Pre-trained-Model-Guided-Fine-Tuning-for-Zero-Shot-Adversarial-Robustness.", + "Creating high-quality materials in computer graphics is a challenging and\ntime-consuming task, which requires great expertise. To simplify this process,\nwe introduce MatFuse, a unified approach that harnesses the generative power of\ndiffusion models for the creation and editing of 3D materials. Our method\nintegrates multiple sources of conditioning, including color palettes,\nsketches, text, and pictures, enhancing creative possibilities and granting\nfine-grained control over material synthesis. Additionally, MatFuse enables\nmap-level material editing capabilities through latent manipulation by means of\na multi-encoder compression model which learns a disentangled latent\nrepresentation for each map. We demonstrate the effectiveness of MatFuse under\nmultiple conditioning settings and explore the potential of material editing.\nFinally, we assess the quality of the generated materials both quantitatively\nin terms of CLIP-IQA and FID scores and qualitatively by conducting a user\nstudy.
Source code for training MatFuse and supplemental materials are publicly\navailable at https://gvecchio.com/matfuse.", + "Capturing and re-animating the 3D structure of articulated objects present\nsignificant barriers. On one hand, methods requiring extensively calibrated\nmulti-view setups are prohibitively complex and resource-intensive, limiting\ntheir practical applicability. On the other hand, while single-camera Neural\nRadiance Fields (NeRFs) offer a more streamlined approach, they have excessive\ntraining and rendering costs. 3D Gaussian Splatting would be a suitable\nalternative but for two reasons: first, existing methods for 3D dynamic\nGaussians require synchronized multi-view cameras, and second, they lack\ncontrollability in dynamic scenarios. We present CoGS, a method for\nControllable Gaussian Splatting, which enables the direct manipulation of scene\nelements, offering real-time control of dynamic scenes without the prerequisite\nof pre-computing control signals. We evaluated CoGS using both synthetic and\nreal-world datasets that include dynamic objects differing in degree of\ndifficulty. In our evaluations, CoGS consistently outperformed existing dynamic\nand controllable neural representations in terms of visual fidelity.", + "Finding correspondences between 3D shapes is an important and long-standing\nproblem in computer vision, graphics and beyond. A prominent challenge is the\npartial-to-partial shape matching setting, which occurs when the shapes to\nmatch are only observed incompletely (e.g. from 3D scanning). Although\npartial-to-partial matching is a highly relevant setting in practice, it is\nrarely explored. Our work bridges the gap between existing (rather artificial)\n3D full shape matching and partial-to-partial real-world settings by exploiting\ngeometric consistency as a strong constraint. We demonstrate that it is indeed\npossible to solve this challenging problem in a variety of settings. For the\nfirst time, we achieve geometric consistency for partial-to-partial matching,\nwhich is realized by a novel integer non-linear program formalism building on\ntriangle product spaces, along with a new pruning algorithm based on linear\ninteger programming. Further, we generate a new inter-class dataset for\npartial-to-partial shape-matching. We show that our method outperforms current\nSOTA methods on both an established intra-class dataset and our novel\ninter-class dataset.", + "Over the past year, a large body of multimodal research has emerged around\nzero-shot evaluation using GPT descriptors. These studies boost the zero-shot\naccuracy of pretrained VL models with an ensemble of label-specific text\ngenerated by GPT. A recent study, WaffleCLIP, demonstrated that similar\nzero-shot accuracy can be achieved with an ensemble of random descriptors.\nHowever, both zero-shot methods are un-trainable and consequently sub-optimal\nwhen some few-shot out-of-distribution (OOD) training data is available.\nInspired by these prior works, we present two more flexible methods called\ndescriptor and word soups, which do not require an LLM at test time and can\nleverage training data to increase OOD target accuracy. Descriptor soup\ngreedily selects a small set of textual descriptors using generic few-shot\ntraining data, then calculates robust class embeddings using the selected\ndescriptors.
Word soup greedily assembles a chain of words in a similar manner.\nCompared to existing few-shot soft prompt tuning methods, word soup requires\nfewer parameters by construction and less GPU memory, since it does not require\nbackpropagation.", + "Word soup greedily assembles a chain of words in a similar manner.\nCompared to existing few-shot soft prompt tuning methods, word soup requires\nfewer parameters by construction and less GPU memory, since it does not require\nbackpropagation. Both soups outperform current published few-shot methods, even\nwhen combined with SoTA zero-shot methods, on cross-dataset and domain\ngeneralization benchmarks. Compared with SoTA prompt and descriptor ensembling\nmethods, such as ProDA and WaffleCLIP, word soup achieves higher OOD accuracy\nwith fewer ensemble members. Please checkout our code:\ngithub.com/Chris210634/word_soups", + "Ethical concerns surrounding copyright protection and inappropriate content\ngeneration pose challenges for the practical implementation of diffusion\nmodels. One effective solution involves watermarking the generated images.\nHowever, existing methods often compromise the model performance or require\nadditional training, which is undesirable for operators and users. To address\nthis issue, we propose Gaussian Shading, a diffusion model watermarking\ntechnique that is both performance-lossless and training-free, while serving\nthe dual purpose of copyright protection and tracing of offending content. Our\nwatermark embedding is free of model parameter modifications and thus is\nplug-and-play. We map the watermark to latent representations following a\nstandard Gaussian distribution, which is indistinguishable from latent\nrepresentations obtained from the non-watermarked diffusion model. Therefore we\ncan achieve watermark embedding with lossless performance, for which we also\nprovide theoretical proof. Furthermore, since the watermark is intricately\nlinked with image semantics, it exhibits resilience to lossy processing and\nerasure attempts. The watermark can be extracted by Denoising Diffusion\nImplicit Models (DDIM) inversion and inverse sampling.", + "Furthermore, since the watermark is intricately\nlinked with image semantics, it exhibits resilience to lossy processing and\nerasure attempts. The watermark can be extracted by Denoising Diffusion\nImplicit Models (DDIM) inversion and inverse sampling. We evaluate Gaussian\nShading on multiple versions of Stable Diffusion, and the results demonstrate\nthat Gaussian Shading not only is performance-lossless but also outperforms\nexisting methods in terms of robustness.", + "Real-world objects and environments are predominantly composed of edge\nfeatures, including straight lines and curves. Such edges are crucial elements\nfor various applications, such as CAD modeling, surface meshing, lane mapping,\netc. However, existing traditional methods only prioritize lines over curves\nfor simplicity in geometric modeling. To this end, we introduce EMAP, a new\nmethod for learning 3D edge representations with a focus on both lines and\ncurves. Our method implicitly encodes 3D edge distance and direction in\nUnsigned Distance Functions (UDF) from multi-view edge maps. On top of this\nneural representation, we propose an edge extraction algorithm that robustly\nabstracts parametric 3D edges from the inferred edge points and their\ndirections. Comprehensive evaluations demonstrate that our method achieves\nbetter 3D edge reconstruction on multiple challenging datasets. 
We further show\nthat our learned UDF field enhances neural surface reconstruction by capturing\nmore details.", + "Document image restoration is a crucial aspect of Document AI systems, as the\nquality of document images significantly influences the overall performance.\nPrevailing methods address distinct restoration tasks independently, leading to\nintricate systems and the incapability to harness the potential synergies of\nmulti-task learning. To overcome this challenge, we propose DocRes, a\ngeneralist model that unifies five document image restoration tasks including\ndewarping, deshadowing, appearance enhancement, deblurring, and binarization.\nTo instruct DocRes to perform various restoration tasks, we propose a novel\nvisual prompt approach called Dynamic Task-Specific Prompt (DTSPrompt). The\nDTSPrompt for different tasks comprises distinct prior features, which are\nadditional characteristics extracted from the input image. Beyond its role as a\ncue for task-specific execution, DTSPrompt can also serve as supplementary\ninformation to enhance the model's performance. Moreover, DTSPrompt is more\nflexible than prior visual prompt approaches as it can be seamlessly applied\nand adapted to inputs with high and variable resolutions. Experimental results\ndemonstrate that DocRes achieves competitive or superior performance compared\nto existing state-of-the-art task-specific models.", + "Moreover, DTSPrompt is more\nflexible than prior visual prompt approaches as it can be seamlessly applied\nand adapted to inputs with high and variable resolutions. Experimental results\ndemonstrate that DocRes achieves competitive or superior performance compared\nto existing state-of-the-art task-specific models. This underscores the\npotential of DocRes across a broader spectrum of document image restoration\ntasks. The source code is publicly available at\nhttps://github.com/ZZZHANG-jx/DocRes", + "In Multimodal Large Language Models (MLLMs), a visual projector plays a\ncrucial role in bridging pre-trained vision encoders with LLMs, enabling\nprofound visual understanding while harnessing the LLMs' robust capabilities.\nDespite the importance of the visual projector, it has been relatively less\nexplored. In this study, we first identify two essential projector properties:\n(i) flexibility in managing the number of visual tokens, crucial for MLLMs'\noverall efficiency, and (ii) preservation of local context from visual\nfeatures, vital for spatial understanding. Based on these findings, we propose\na novel projector design that is both flexible and locality-enhanced,\neffectively satisfying the two desirable properties. Additionally, we present\ncomprehensive strategies to effectively utilize multiple and multifaceted\ninstruction datasets. Through extensive experiments, we examine the impact of\nindividual design choices. Finally, our proposed MLLM, Honeybee, remarkably\noutperforms previous state-of-the-art methods across various benchmarks,\nincluding MME, MMBench, SEED-Bench, and LLaVA-Bench, achieving significantly\nhigher efficiency. Code and models are available at\nhttps://github.com/kakaobrain/honeybee.", + "Recent progress in single-image 3D generation highlights the importance of\nmulti-view coherency, leveraging 3D priors from large-scale diffusion models\npretrained on Internet-scale images. 
However, the aspect of novel-view\ndiversity remains underexplored within the research landscape due to the\nambiguity in converting a 2D image into 3D content, where numerous potential\nshapes can emerge. Here, we aim to fill this research gap by simultaneously\npursuing both consistency and diversity. Yet, striking a balance between\nthese two aspects poses a considerable challenge due to their inherent\ntrade-offs. This work introduces HarmonyView, a simple yet effective diffusion\nsampling technique adept at decomposing two intricate aspects in single-image\n3D generation: consistency and diversity. This approach paves the way for a\nmore nuanced exploration of the two critical dimensions within the sampling\nprocess. Moreover, we propose a new evaluation metric based on CLIP image and\ntext encoders to comprehensively assess the diversity of the generated views,\nwhich closely aligns with human evaluators' judgments. In experiments,\nHarmonyView achieves a harmonious balance, demonstrating a win-win scenario in\nboth consistency and diversity.", + "In this paper, we introduce VoteCut, an innovative method for unsupervised\nobject discovery that leverages feature representations from multiple\nself-supervised models. VoteCut employs normalized-cut based graph\npartitioning, clustering and a pixel voting approach. Additionally, we present\nCuVLER (Cut-Vote-and-LEaRn), a zero-shot model trained using pseudo-labels\ngenerated by VoteCut and a novel soft target loss to refine segmentation\naccuracy. Through rigorous evaluations across multiple datasets and several\nunsupervised setups, our methods demonstrate significant improvements in\ncomparison to previous state-of-the-art models. Our ablation studies further\nhighlight the contributions of each component, revealing the robustness and\nefficacy of our approach. Collectively, VoteCut and CuVLER pave the way for\nfuture advancements in image segmentation.", + "Traditional unsupervised optical flow methods are vulnerable to occlusions\nand motion boundaries due to a lack of object-level information. Therefore, we\npropose UnSAMFlow, an unsupervised flow network that also leverages object\ninformation from the latest foundation model Segment Anything Model (SAM). We\nfirst include a self-supervised semantic augmentation module tailored to SAM\nmasks. We also analyze the poor gradient landscapes of traditional smoothness\nlosses and propose a new smoothness definition based on homography instead. A\nsimple yet effective mask feature module has also been added to further\naggregate features on the object level. With all these adaptations, our method\nproduces clear optical flow estimates with sharp boundaries around objects,\noutperforming state-of-the-art methods on both the KITTI and Sintel datasets.\nOur method also generalizes well across domains and runs very efficiently.", + "Dataset distillation has emerged as a promising approach in deep learning,\nenabling efficient training with small synthetic datasets derived from larger\nreal ones. In particular, distribution matching-based distillation methods\nattract attention thanks to their effectiveness and low computational cost.\nHowever, these methods face two primary limitations: the dispersed feature\ndistribution within the same class in synthetic datasets, reducing class\ndiscrimination, and an exclusive focus on mean feature consistency, lacking\nprecision and comprehensiveness.
To address these challenges, we introduce two\nnovel constraints: a class centralization constraint and a covariance matching\nconstraint. The class centralization constraint aims to enhance class\ndiscrimination by more closely clustering samples within classes. The\ncovariance matching constraint seeks to achieve more accurate feature\ndistribution matching between real and synthetic datasets through local feature\ncovariance matrices, particularly beneficial when sample sizes are much smaller\nthan the number of features.", + "The\ncovariance matching constraint seeks to achieve more accurate feature\ndistribution matching between real and synthetic datasets through local feature\ncovariance matrices, particularly beneficial when sample sizes are much smaller\nthan the number of features. Experiments demonstrate notable improvements with\nthese constraints, yielding performance boosts of up to 6.6% on CIFAR10, 2.9%\non SVHN, 2.5% on CIFAR100, and 2.5% on TinyImageNet, compared to relevant\nstate-of-the-art methods. In addition, our method maintains robust\nperformance in cross-architecture settings, with a maximum performance drop of\n1.7% on four architectures. Code is available at\nhttps://github.com/VincenDen/IID.", + "Scaling up model and data size has been quite successful for the evolution of\nLLMs. However, the scaling law for diffusion-based text-to-image (T2I)\nmodels is not fully explored. It is also unclear how to efficiently scale the\nmodel for better performance at reduced cost. The different training settings\nand expensive training cost make a fair model comparison extremely difficult.\nIn this work, we empirically study the scaling properties of diffusion-based\nT2I models by performing extensive and rigorous ablations on scaling both the\ndenoising backbone and the training set, including training scaled UNet and\nTransformer variants ranging from 0.4B to 4B parameters on datasets of up to 600M\nimages. For model scaling, we find that the location and amount of cross-attention\ndistinguish the performance of existing UNet designs, and that increasing the number of\ntransformer blocks is more parameter-efficient for improving text-image\nalignment than increasing channel numbers. We then identify an efficient UNet\nvariant, which is 45% smaller and 28% faster than SDXL's UNet. On the data\nscaling side, we show that the quality and diversity of the training set matter\nmore than dataset size alone.", + "We then identify an efficient UNet\nvariant, which is 45% smaller and 28% faster than SDXL's UNet. On the data\nscaling side, we show that the quality and diversity of the training set matter\nmore than dataset size alone. Increasing caption density and diversity\nimproves text-image alignment performance and the learning efficiency. Finally,\nwe provide scaling functions to predict the text-image alignment performance as\nfunctions of the scale of model size, compute and dataset size.", + "The limited scale of current 3D shape datasets hinders the advancements in 3D\nshape understanding, and motivates multi-modal learning approaches which\ntransfer learned knowledge from data-abundant 2D image and language modalities\nto 3D shapes. However, even though the image and language representations have\nbeen aligned by cross-modal models like CLIP, we find that the image modality\nfails to contribute as much as the language in existing multi-modal 3D\nrepresentation learning methods. This is attributed to the domain shift in the\n2D images and the distinct focus of each modality.
To more effectively leverage\nboth modalities in the pre-training, we introduce TriAdapter Multi-Modal\nLearning (TAMM) -- a novel two-stage learning approach based on three\nsynergistic adapters. First, our CLIP Image Adapter mitigates the domain gap\nbetween 3D-rendered images and natural images, by adapting the visual\nrepresentations of CLIP for synthetic image-text pairs.", + "First, our CLIP Image Adapter mitigates the domain gap\nbetween 3D-rendered images and natural images, by adapting the visual\nrepresentations of CLIP for synthetic image-text pairs. Subsequently, our Dual\nAdapters decouple the 3D shape representation space into two complementary\nsub-spaces: one focusing on visual attributes and the other for semantic\nunderstanding, which ensure a more comprehensive and effective multi-modal\npre-training. Extensive experiments demonstrate that TAMM consistently enhances\n3D representations for a wide range of 3D encoder architectures, pre-training\ndatasets, and downstream tasks. Notably, we boost the zero-shot classification\naccuracy on Objaverse-LVIS from 46.8\\% to 50.7\\%, and improve the 5-way 10-shot\nlinear probing classification accuracy on ModelNet40 from 96.1\\% to 99.0\\%.\nProject page: https://alanzhangcs.github.io/tamm-page.", + "We present, GauHuman, a 3D human model with Gaussian Splatting for both fast\ntraining (1 ~ 2 minutes) and real-time rendering (up to 189 FPS), compared with\nexisting NeRF-based implicit representation modelling frameworks demanding\nhours of training and seconds of rendering per frame. Specifically, GauHuman\nencodes Gaussian Splatting in the canonical space and transforms 3D Gaussians\nfrom canonical space to posed space with linear blend skinning (LBS), in which\neffective pose and LBS refinement modules are designed to learn fine details of\n3D humans under negligible computational cost. Moreover, to enable fast\noptimization of GauHuman, we initialize and prune 3D Gaussians with 3D human\nprior, while splitting/cloning via KL divergence guidance, along with a novel\nmerge operation for further speeding up. Extensive experiments on ZJU_Mocap and\nMonoCap datasets demonstrate that GauHuman achieves state-of-the-art\nperformance quantitatively and qualitatively with fast training and real-time\nrendering speed. Notably, without sacrificing rendering quality, GauHuman can\nfast model the 3D human performer with ~13k 3D Gaussians.", + "Traditional approaches in physics-based motion generation, centered around\nimitation learning and reward shaping, often struggle to adapt to new\nscenarios. To tackle this limitation, we propose AnySkill, a novel hierarchical\nmethod that learns physically plausible interactions following open-vocabulary\ninstructions. Our approach begins by developing a set of atomic actions via a\nlow-level controller trained via imitation learning. Upon receiving an\nopen-vocabulary textual instruction, AnySkill employs a high-level policy that\nselects and integrates these atomic actions to maximize the CLIP similarity\nbetween the agent's rendered images and the text. An important feature of our\nmethod is the use of image-based rewards for the high-level policy, which\nallows the agent to learn interactions with objects without manual reward\nengineering. 
We demonstrate AnySkill's capability to generate realistic and\nnatural motion sequences in response to unseen instructions of varying lengths,\nmarking it the first method capable of open-vocabulary physical skill learning\nfor interactive humanoid agents.", + "Scene Graph Generation (SGG) is a challenging task of detecting objects and\npredicting relationships between objects. After DETR was developed, one-stage\nSGG models based on a one-stage object detector have been actively studied.\nHowever, complex modeling is used to predict the relationship between objects,\nand the inherent relationship between object queries learned in the multi-head\nself-attention of the object detector has been neglected. We propose a\nlightweight one-stage SGG model that extracts the relation graph from the\nvarious relationships learned in the multi-head self-attention layers of the\nDETR decoder. By fully utilizing the self-attention by-products, the relation\ngraph can be extracted effectively with a shallow relation extraction head.\nConsidering the dependency of the relation extraction task on the object\ndetection task, we propose a novel relation smoothing technique that adjusts\nthe relation label adaptively according to the quality of the detected objects.\nBy the relation smoothing, the model is trained according to the continuous\ncurriculum that focuses on object detection task at the beginning of training\nand performs multi-task learning as the object detection performance gradually\nimproves.", + "By the relation smoothing, the model is trained according to the continuous\ncurriculum that focuses on object detection task at the beginning of training\nand performs multi-task learning as the object detection performance gradually\nimproves. Furthermore, we propose a connectivity prediction task that predicts\nwhether a relation exists between object pairs as an auxiliary task of the\nrelation extraction. We demonstrate the effectiveness and efficiency of our\nmethod for the Visual Genome and Open Image V6 datasets. Our code is publicly\navailable at https://github.com/naver-ai/egtr.", + "Recent advances in generative models trained on large-scale datasets have\nmade it possible to synthesize high-quality samples across various domains.\nMoreover, the emergence of strong inversion networks enables not only a\nreconstruction of real-world images but also the modification of attributes\nthrough various editing methods. However, in certain domains related to privacy\nissues, e.g., human faces, advanced generative models along with strong\ninversion methods can lead to potential misuses. In this paper, we propose an\nessential yet under-explored task called generative identity unlearning, which\nsteers the model not to generate an image of a specific identity. In the\ngenerative identity unlearning, we target the following objectives: (i)\npreventing the generation of images with a certain identity, and (ii)\npreserving the overall quality of the generative model. To satisfy these goals,\nwe propose a novel framework, Generative Unlearning for Any Identity (GUIDE),\nwhich prevents the reconstruction of a specific identity by unlearning the\ngenerator with only a single image.", + "To satisfy these goals,\nwe propose a novel framework, Generative Unlearning for Any Identity (GUIDE),\nwhich prevents the reconstruction of a specific identity by unlearning the\ngenerator with only a single image. 
GUIDE consists of two parts: (i) finding a\ntarget point for optimization that un-identifies the source latent code and\n(ii) novel loss functions that facilitate the unlearning procedure while less\naffecting the learned distribution. Our extensive experiments demonstrate that\nour proposed method achieves state-of-the-art performance in the generative\nmachine unlearning task. The code is available at\nhttps://github.com/KHU-AGI/GUIDE.", + "Compositional Zero-Shot Learning (CZSL) aims to recognize unseen\nattribute-object pairs based on a limited set of observed examples. Current\nCZSL methodologies, despite their advancements, tend to neglect the distinct\nspecificity levels present in attributes. For instance, given images of sliced\nstrawberries, they may fail to prioritize `Sliced-Strawberry' over a generic\n`Red-Strawberry', despite the former being more informative. They also suffer\nfrom ballooning search space when shifting from Close-World (CW) to Open-World\n(OW) CZSL. To address the issues, we introduce the Context-based and\nDiversity-driven Specificity learning framework for CZSL (CDS-CZSL). Our\nframework evaluates the specificity of attributes by considering the diversity\nof objects they apply to and their related context. This novel approach allows\nfor more accurate predictions by emphasizing specific attribute-object pairs\nand improves composition filtering in OW-CZSL. We conduct experiments in both\nCW and OW scenarios, and our model achieves state-of-the-art results across\nthree datasets.", + "Diffusion models have transformed the image-to-image (I2I) synthesis and are\nnow permeating into videos. However, the advancement of video-to-video (V2V)\nsynthesis has been hampered by the challenge of maintaining temporal\nconsistency across video frames. This paper proposes a consistent V2V synthesis\nframework by jointly leveraging spatial conditions and temporal optical flow\nclues within the source video. Contrary to prior methods that strictly adhere\nto optical flow, our approach harnesses its benefits while handling the\nimperfection in flow estimation. We encode the optical flow via warping from\nthe first frame and serve it as a supplementary reference in the diffusion\nmodel. This enables our model for video synthesis by editing the first frame\nwith any prevalent I2I models and then propagating edits to successive frames.\nOur V2V model, FlowVid, demonstrates remarkable properties: (1) Flexibility:\nFlowVid works seamlessly with existing I2I models, facilitating various\nmodifications, including stylization, object swaps, and local edits.", + "Our V2V model, FlowVid, demonstrates remarkable properties: (1) Flexibility:\nFlowVid works seamlessly with existing I2I models, facilitating various\nmodifications, including stylization, object swaps, and local edits. (2)\nEfficiency: Generation of a 4-second video with 30 FPS and 512x512 resolution\ntakes only 1.5 minutes, which is 3.1x, 7.2x, and 10.5x faster than CoDeF,\nRerender, and TokenFlow, respectively. (3) High-quality: In user studies, our\nFlowVid is preferred 45.7% of the time, outperforming CoDeF (3.5%), Rerender\n(10.2%), and TokenFlow (40.4%).", + "We propose a method that can generate cinemagraphs automatically from a still\nlandscape image using a pre-trained StyleGAN. Inspired by the success of recent\nunconditional video generation, we leverage a powerful pre-trained image\ngenerator to synthesize high-quality cinemagraphs. 
Unlike previous approaches\nthat mainly utilize the latent space of a pre-trained StyleGAN, our approach\nutilizes its deep feature space for both GAN inversion and cinemagraph\ngeneration. Specifically, we propose multi-scale deep feature warping (MSDFW),\nwhich warps the intermediate features of a pre-trained StyleGAN at different\nresolutions. By using MSDFW, the generated cinemagraphs are of high resolution\nand exhibit plausible looping animation. We demonstrate the superiority of our\nmethod through user studies and quantitative comparisons with state-of-the-art\ncinemagraph generation methods and a video generation method that uses a\npre-trained StyleGAN.", + "Multi-domain generalization (mDG) universally aims to minimize the\ndiscrepancy between training and testing distributions to enhance\nmarginal-to-label distribution mapping. However, the existing mDG literature lacks\na general learning objective paradigm and often imposes constraints on static\ntarget marginal distributions. In this paper, we propose to leverage a\n$Y$-mapping to relax the constraint. We rethink the learning objective for mDG\nand design a new \\textbf{general learning objective} to interpret and analyze\nmost existing mDG wisdom. This general objective is bifurcated into two\nsynergistic aims: learning domain-independent conditional features and\nmaximizing a posterior. Explorations also extend to two effective\nregularization terms that incorporate prior information and suppress invalid\ncausality, alleviating the issues that come with relaxed constraints. We\ntheoretically contribute an upper bound for the domain alignment of\ndomain-independent conditional features, disclosing that many previous mDG\nendeavors actually \\textbf{only partially optimize the objective} and thus lead to\nlimited performance.", + "We\ntheoretically contribute an upper bound for the domain alignment of\ndomain-independent conditional features, disclosing that many previous mDG\nendeavors actually \\textbf{only partially optimize the objective} and thus lead to\nlimited performance. As such, our study distills a general learning objective\ninto four practical components, providing a general, robust, and flexible\nmechanism to handle complex domain shifts. Extensive empirical results indicate\nthat the proposed objective with $Y$-mapping leads to substantially better mDG\nperformance in various downstream tasks, including regression, segmentation,\nand classification.", + "While replacing Gaussian decoders with a conditional diffusion model enhances\nthe perceptual quality of reconstructions in neural image compression, its\nlack of inductive bias for image data restricts its ability to achieve\nstate-of-the-art perceptual levels. To address this limitation, we adopt a\nnon-isotropic diffusion model at the decoder side. This model imposes an\ninductive bias aimed at distinguishing between frequency contents, thereby\nfacilitating the generation of high-quality images. Moreover, our framework is\nequipped with a novel entropy model that accurately models the probability\ndistribution of the latent representation by exploiting spatio-channel correlations\nin latent space, while accelerating the entropy decoding step. This\nchannel-wise entropy model leverages both local and global spatial contexts\nwithin each channel chunk. The global spatial context is built upon the\nTransformer, which is specifically designed for image compression tasks.
The\ndesigned Transformer employs a Laplacian-shaped positional encoding, the\nlearnable parameters of which are adaptively adjusted for each channel cluster.\nOur experiments demonstrate that our proposed framework yields better\nperceptual quality compared to cutting-edge generative-based codecs, and the\nproposed entropy model contributes to notable bitrate savings.", + "Recently, diffusion models (DM) have been applied in magnetic resonance\nimaging (MRI) super-resolution (SR) reconstruction, exhibiting impressive\nperformance, especially with regard to detailed reconstruction. However, the\ncurrent DM-based SR reconstruction methods still face the following issues: (1)\nThey require a large number of iterations to reconstruct the final image, which\nis inefficient and consumes a significant amount of computational resources.\n(2) The results reconstructed by these methods are often misaligned with the\nreal high-resolution images, leading to remarkable distortion in the\nreconstructed MR images. To address the aforementioned issues, we propose an\nefficient diffusion model for multi-contrast MRI SR, named as DiffMSR.\nSpecifically, we apply DM in a highly compact low-dimensional latent space to\ngenerate prior knowledge with high-frequency detail information. The highly\ncompact latent space ensures that DM requires only a few simple iterations to\nproduce accurate prior knowledge. In addition, we design the Prior-Guide Large\nWindow Transformer (PLWformer) as the decoder for DM, which can extend the\nreceptive field while fully utilizing the prior knowledge generated by DM to\nensure that the reconstructed MR image remains undistorted.", + "In addition, we design the Prior-Guide Large\nWindow Transformer (PLWformer) as the decoder for DM, which can extend the\nreceptive field while fully utilizing the prior knowledge generated by DM to\nensure that the reconstructed MR image remains undistorted. Extensive\nexperiments on public and clinical datasets demonstrate that our DiffMSR\noutperforms state-of-the-art methods.", + "Object detection in remote sensing images (RSIs) often suffers from several\nincreasing challenges, including the large variation in object scales and the\ndiverse-ranging context. Prior methods tried to address these challenges by\nexpanding the spatial receptive field of the backbone, either through\nlarge-kernel convolution or dilated convolution. However, the former typically\nintroduces considerable background noise, while the latter risks generating\noverly sparse feature representations. In this paper, we introduce the Poly\nKernel Inception Network (PKINet) to handle the above challenges. PKINet\nemploys multi-scale convolution kernels without dilation to extract object\nfeatures of varying scales and capture local context. In addition, a Context\nAnchor Attention (CAA) module is introduced in parallel to capture long-range\ncontextual information. These two components work jointly to advance the\nperformance of PKINet on four challenging remote sensing detection benchmarks,\nnamely DOTA-v1.0, DOTA-v1.5, HRSC2016, and DIOR-R.", + "Vision Transformer (ViT) has gained increasing attention in the computer\nvision community in recent years. However, the core component of ViT,\nSelf-Attention, lacks explicit spatial priors and bears a quadratic\ncomputational complexity, thereby constraining the applicability of ViT. 
To\nalleviate these issues, we draw inspiration from the recent Retentive Network\n(RetNet) in the field of NLP, and propose RMT, a strong vision backbone with\nexplicit spatial prior for general purposes. Specifically, we extend the\nRetNet's temporal decay mechanism to the spatial domain, and propose a spatial\ndecay matrix based on the Manhattan distance to introduce the explicit spatial\nprior to Self-Attention. Additionally, an attention decomposition form that\nadeptly adapts to explicit spatial prior is proposed, aiming to reduce the\ncomputational burden of modeling global information without disrupting the\nspatial decay matrix. Based on the spatial decay matrix and the attention\ndecomposition form, we can flexibly integrate explicit spatial prior into the\nvision backbone with linear complexity. Extensive experiments demonstrate that\nRMT exhibits exceptional performance across various vision tasks.", + "Based on the spatial decay matrix and the attention\ndecomposition form, we can flexibly integrate explicit spatial prior into the\nvision backbone with linear complexity. Extensive experiments demonstrate that\nRMT exhibits exceptional performance across various vision tasks. Specifically,\nwithout extra training data, RMT achieves **84.8%** and **86.1%** top-1 acc on\nImageNet-1k with **27M/4.5GFLOPs** and **96M/18.2GFLOPs**. For downstream\ntasks, RMT achieves **54.5** box AP and **47.2** mask AP on the COCO detection\ntask, and **52.8** mIoU on the ADE20K semantic segmentation task. Code is\navailable at https://github.com/qhfan/RMT", + "We propose to improve transformers of a specific modality with irrelevant\ndata from other modalities, e.g., improve an ImageNet model with audio or point\ncloud datasets. We would like to highlight that the data samples of the target\nmodality are irrelevant to the other modalities, which distinguishes our method\nfrom other works utilizing paired (e.g., CLIP) or interleaved data of different\nmodalities. We propose a methodology named Multimodal Pathway - given a target\nmodality and a transformer designed for it, we use an auxiliary transformer\ntrained with data of another modality and construct pathways to connect\ncomponents of the two models so that data of the target modality can be\nprocessed by both models. In this way, we utilize the universal\nsequence-to-sequence modeling abilities of transformers obtained from two\nmodalities. As a concrete implementation, we use a modality-specific tokenizer\nand task-specific head as usual but utilize the transformer blocks of the\nauxiliary model via a proposed method named Cross-Modal Re-parameterization,\nwhich exploits the auxiliary weights without any inference costs.", + "As a concrete implementation, we use a modality-specific tokenizer\nand task-specific head as usual but utilize the transformer blocks of the\nauxiliary model via a proposed method named Cross-Modal Re-parameterization,\nwhich exploits the auxiliary weights without any inference costs. On the image,\npoint cloud, video, and audio recognition tasks, we observe significant and\nconsistent performance improvements with irrelevant data from other modalities.\nThe code and models are available at https://github.com/AILab-CVC/M2PT.", + "The core of video understanding tasks, such as recognition, captioning, and\ntracking, is to automatically detect objects or actions in a video and analyze\ntheir temporal evolution. 
Despite sharing a common goal, different tasks often\nrely on distinct model architectures and annotation formats. In contrast,\nnatural language processing benefits from a unified output space, i.e., text\nsequences, which simplifies the training of powerful foundational language\nmodels, such as GPT-3, with extensive training corpora. Inspired by this, we\nseek to unify the output space of video understanding tasks by using languages\nas labels and additionally introducing time and box tokens. In this way, a\nvariety of video tasks could be formulated as video-grounded token generation.\nThis enables us to address various types of video tasks, including\nclassification (such as action recognition), captioning (covering clip\ncaptioning, video question answering, and dense video captioning), and\nlocalization tasks (such as visual object tracking) within a fully shared\nencoder-decoder architecture, following a generative framework.", + "Through\ncomprehensive experiments, we demonstrate such a simple and straightforward\nidea is quite effective and can achieve state-of-the-art or competitive results\non seven video benchmarks, providing a novel perspective for more universal\nvideo understanding. Code is available at https://github.com/wangjk666/OmniVid.", + "3D visual grounding is a challenging task that often requires direct and\ndense supervision, notably the semantic label for each object in the scene. In\nthis paper, we instead study the naturally supervised setting that learns from\nonly 3D scene and QA pairs, where prior works underperform. We propose the\nLanguage-Regularized Concept Learner (LARC), which uses constraints from\nlanguage as regularization to significantly improve the accuracy of\nneuro-symbolic concept learners in the naturally supervised setting. Our\napproach is based on two core insights: the first is that language constraints\n(e.g., a word's relation to another) can serve as effective regularization for\nstructured representations in neuro-symbolic models; the second is that we can\nquery large language models to distill such constraints from language\nproperties. We show that LARC improves performance of prior works in naturally\nsupervised 3D visual grounding, and demonstrates a wide range of 3D visual\nreasoning capabilities-from zero-shot composition, to data efficiency and\ntransferability. Our method represents a promising step towards regularizing\nstructured visual reasoning frameworks with language-based priors, for learning\nin settings without dense supervision.", + "While neural rendering has led to impressive advances in scene reconstruction\nand novel view synthesis, it relies heavily on accurately pre-computed camera\nposes. To relax this constraint, multiple efforts have been made to train\nNeural Radiance Fields (NeRFs) without pre-processed camera poses. However, the\nimplicit representations of NeRFs provide extra challenges to optimize the 3D\nstructure and camera poses at the same time. On the other hand, the recently\nproposed 3D Gaussian Splatting provides new opportunities given its explicit\npoint cloud representations. This paper leverages both the explicit geometric\nrepresentation and the continuity of the input video stream to perform novel\nview synthesis without any SfM preprocessing. We process the input frames in a\nsequential manner and progressively grow the 3D Gaussians set by taking one\ninput frame at a time, without the need to pre-compute the camera poses. 
Our\nmethod significantly improves over previous approaches in view synthesis and\ncamera pose estimation under large motion changes. Our project page is\nhttps://oasisyang.github.io/colmap-free-3dgs", + "Most image-to-image translation models postulate that a unique correspondence\nexists between the semantic classes of the source and target domains. However,\nthis assumption does not always hold in real-world scenarios due to divergent\ndistributions, different class sets, and asymmetrical information\nrepresentation. As conventional GANs attempt to generate images that match the\ndistribution of the target domain, they may hallucinate spurious instances of\nclasses absent from the source domain, thereby diminishing the usefulness and\nreliability of translated images. CycleGAN-based methods are also known to hide\nthe mismatched information in the generated images to bypass cycle consistency\nobjectives, a process known as steganography. In response to the challenge of\nnon-bijective image translation, we introduce StegoGAN, a novel model that\nleverages steganography to prevent spurious features in generated images. Our\napproach enhances the semantic consistency of the translated images without\nrequiring additional postprocessing or supervision. Our experimental\nevaluations demonstrate that StegoGAN outperforms existing GAN-based models\nacross various non-bijective image-to-image translation tasks, both\nqualitatively and quantitatively.", + "Our experimental\nevaluations demonstrate that StegoGAN outperforms existing GAN-based models\nacross various non-bijective image-to-image translation tasks, both\nqualitatively and quantitatively. Our code and pretrained models are accessible\nat https://github.com/sian-wusidi/StegoGAN.", + "Studying backdoor attacks is valuable for model copyright protection and\nenhancing defenses. While existing backdoor attacks have successfully infected\nmultimodal contrastive learning models such as CLIP, they can be easily\ncountered by specialized backdoor defenses for MCL models. This paper reveals\nthe threats in this practical scenario that backdoor attacks can remain\neffective even after defenses and introduces the \\emph{\\toolns} attack, which\nis resistant to backdoor detection and model fine-tuning defenses. To achieve\nthis, we draw motivations from the perspective of the Bayesian rule and propose\na dual-embedding guided framework for backdoor attacks. Specifically, we ensure\nthat visual trigger patterns approximate the textual target semantics in the\nembedding space, making it challenging to detect the subtle parameter\nvariations induced by backdoor learning on such natural trigger patterns.\nAdditionally, we optimize the visual trigger patterns to align the poisoned\nsamples with target vision features in order to hinder the backdoor unlearning\nthrough clean fine-tuning.", + "Additionally, we optimize the visual trigger patterns to align the poisoned\nsamples with target vision features in order to hinder the backdoor unlearning\nthrough clean fine-tuning. Extensive experiments demonstrate that our attack\nsignificantly outperforms state-of-the-art baselines (+45.3% ASR) in the\npresence of SoTA backdoor defenses, rendering these mitigation and detection\nstrategies virtually ineffective. Furthermore, our approach effectively attacks\nsome more rigorous scenarios like downstream tasks. 
We believe that this paper\nraises awareness regarding the potential threats associated with the practical\napplication of multimodal contrastive learning and encourages the development\nof more robust defense mechanisms.", + "This paper introduces a novel human pose estimation approach using sparse\ninertial sensors, addressing the shortcomings of previous methods reliant on\nsynthetic data. It leverages a diverse array of real inertial motion capture\ndata from different skeleton formats to improve motion diversity and model\ngeneralization. This method features two innovative components: a\npseudo-velocity regression model for dynamic motion capture with inertial\nsensors, and a part-based model dividing the body and sensor data into three\nregions, each focusing on their unique characteristics. The approach\ndemonstrates superior performance over state-of-the-art models across five\npublic datasets, notably reducing pose error by 19\\% on the DIP-IMU dataset,\nthus representing a significant improvement in inertial sensor-based human pose\nestimation. Our codes are available at {\\url{https://github.com/dx118/dynaip}}.", + "Given two images, we can estimate the relative camera pose between them by\nestablishing image-to-image correspondences. Usually, correspondences are\n2D-to-2D and the pose we estimate is defined only up to scale. Some\napplications, aiming at instant augmented reality anywhere, require\nscale-metric pose estimates, and hence, they rely on external depth estimators\nto recover the scale. We present MicKey, a keypoint matching pipeline that is\nable to predict metric correspondences in 3D camera space. By learning to match\n3D coordinates across images, we are able to infer the metric relative pose\nwithout depth measurements. Depth measurements are also not required for\ntraining, nor are scene reconstructions or image overlap information. MicKey is\nsupervised only by pairs of images and their relative poses. MicKey achieves\nstate-of-the-art performance on the Map-Free Relocalisation benchmark while\nrequiring less supervision than competing approaches.", + "We propose a simple strategy for masking image patches during visual-language\ncontrastive learning that improves the quality of the learned representations\nand the training speed. During each iteration of training, we randomly mask\nclusters of visually similar image patches, as measured by their raw pixel\nintensities. This provides an extra learning signal, beyond the contrastive\ntraining itself, since it forces a model to predict words for masked visual\nstructures solely from context. It also speeds up training by reducing the\namount of data used in each image. We evaluate the effectiveness of our model\nby pre-training on a number of benchmarks, finding that it outperforms other\nmasking strategies, such as FLIP, on the quality of the learned representation.", + "Interactive Segmentation (IS) segments specific objects or parts in the image\naccording to user input. Current IS pipelines fall into two categories:\nsingle-granularity output and multi-granularity output. The latter aims to\nalleviate the spatial ambiguity present in the former. However, the\nmulti-granularity output pipeline suffers from limited interaction flexibility\nand produces redundant results. In this work, we introduce\nGranularity-Controllable Interactive Segmentation (GraCo), a novel approach\nthat allows precise control of prediction granularity by introducing additional\nparameters to input. 
This enhances the customization of the interactive system\nand eliminates redundancy while resolving ambiguity. Nevertheless, the\nexorbitant cost of annotating multi-granularity masks and the lack of available\ndatasets with granularity annotations make it difficult for models to acquire\nthe necessary guidance to control output granularity. To address this problem,\nwe design an any-granularity mask generator that exploits the semantic property\nof the pre-trained IS model to automatically generate abundant mask-granularity\npairs without requiring additional manual annotation. Based on these pairs, we\npropose a granularity-controllable learning strategy that efficiently imparts\nthe granularity controllability to the IS model.", + "Based on these pairs, we\npropose a granularity-controllable learning strategy that efficiently imparts\nthe granularity controllability to the IS model. Extensive experiments on\nintricate scenarios at object and part levels demonstrate that our GraCo has\nsignificant advantages over previous methods. This highlights the potential of\nGraCo to be a flexible annotation tool, capable of adapting to diverse\nsegmentation scenarios. The project page: https://zhao-yian.github.io/GraCo.", + "While Transformers have rapidly gained popularity in various computer vision\napplications, post-hoc explanations of their internal mechanisms remain largely\nunexplored. Vision Transformers extract visual information by representing\nimage regions as transformed tokens and integrating them via attention weights.\nHowever, existing post-hoc explanation methods merely consider these attention\nweights, neglecting crucial information from the transformed tokens, which\nfails to accurately illustrate the rationales behind the models' predictions.\nTo incorporate the influence of token transformation into interpretation, we\npropose TokenTM, a novel post-hoc explanation method that utilizes our\nintroduced measurement of token transformation effects. Specifically, we\nquantify token transformation effects by measuring changes in token lengths and\ncorrelations in their directions pre- and post-transformation. Moreover, we\ndevelop initialization and aggregation rules to integrate both attention\nweights and token transformation effects across all layers, capturing holistic\ntoken contributions throughout the model. Experimental results on segmentation\nand perturbation tests demonstrate the superiority of our proposed TokenTM\ncompared to state-of-the-art Vision Transformer explanation methods.", + "We propose a new method for cloth digitalization. Deviating from existing\nmethods which learn from data captured under relatively casual settings, we\npropose to learn from data captured in strictly tested measuring protocols, and\nfind plausible physical parameters of the cloths. However, such data is\ncurrently absent, so we first propose a new dataset with accurate cloth\nmeasurements. Further, the data size is considerably smaller than the ones in\ncurrent deep learning, due to the nature of the data capture process. To learn\nfrom small data, we propose a new Bayesian differentiable cloth model to\nestimate the complex material heterogeneity of real cloths. It can provide\nhighly accurate digitalization from very limited data samples. Through\nexhaustive evaluation and comparison, we show our method is accurate in cloth\ndigitalization, efficient in learning from limited data samples, and general in\ncapturing material variations. 
Code and data are available at\nhttps://github.com/realcrane/Bayesian-Differentiable-Physics-for-Cloth-Digitalization", + "Vision-centric 3D environment understanding is both vital and challenging for\nautonomous driving systems. Recently, object-free methods have attracted\nconsiderable attention. Such methods perceive the world by predicting the\nsemantics of discrete voxel grids but fail to construct continuous and accurate\nobstacle surfaces. To this end, in this paper, we propose SurroundSDF to\nimplicitly predict the signed distance field (SDF) and semantic field for the\ncontinuous perception from surround images. Specifically, we introduce a\nquery-based approach and utilize SDF constrained by the Eikonal formulation to\naccurately describe the surfaces of obstacles. Furthermore, considering the\nabsence of precise SDF ground truth, we propose a novel weakly supervised\nparadigm for SDF, referred to as the Sandwich Eikonal formulation, which\nemphasizes applying correct and dense constraints on both sides of the surface,\nthereby enhancing the perceptual accuracy of the surface. Experiments suggest\nthat our method achieves SOTA for both occupancy prediction and 3D scene\nreconstruction tasks on the nuScenes dataset.", + "With the remarkable advent of text-to-image diffusion models, image editing\nmethods have become more diverse and continue to evolve. A promising recent\napproach in this realm is Delta Denoising Score (DDS), an image editing\ntechnique based on the Score Distillation Sampling (SDS) framework that leverages\nthe rich generative prior of text-to-image diffusion models. However, relying\nsolely on the difference between scoring functions is insufficient for\npreserving specific structural elements from the original image, a crucial\naspect of image editing. To address this, here we present an embarrassingly\nsimple yet very powerful modification of DDS, called Contrastive Denoising\nScore (CDS), for latent diffusion models (LDM). Inspired by the similarities\nand differences between DDS and the contrastive learning for unpaired\nimage-to-image translation (CUT), we introduce a straightforward approach using\nCUT loss within the DDS framework. Rather than employing auxiliary networks as\nin the original CUT approach, we leverage the intermediate features of LDM,\nspecifically those from the self-attention layers, which possess rich spatial\ninformation.", + "Rather than employing auxiliary networks as\nin the original CUT approach, we leverage the intermediate features of LDM,\nspecifically those from the self-attention layers, which possess rich spatial\ninformation. Our approach enables zero-shot image-to-image translation and\nneural radiance field (NeRF) editing, achieving structural correspondence\nbetween the input and output while maintaining content controllability.\nQualitative results and comparisons demonstrate the effectiveness of our\nproposed method. Project page: https://hyelinnam.github.io/CDS/", + "Self-supervised feature reconstruction methods have shown promising advances\nin industrial image anomaly detection and localization. Despite this progress,\nthese methods still face challenges in synthesizing realistic and diverse\nanomaly samples, as well as addressing the feature redundancy and pre-training\nbias of pre-trained features. In this work, we introduce RealNet, a feature\nreconstruction network with realistic synthetic anomaly and adaptive feature\nselection.
It incorporates three key innovations: First, we propose\nStrength-controllable Diffusion Anomaly Synthesis (SDAS), a diffusion\nprocess-based synthesis strategy capable of generating samples with varying\nanomaly strengths that mimic the distribution of real anomalous samples.\nSecond, we develop Anomaly-aware Features Selection (AFS), a method for\nselecting representative and discriminative pre-trained feature subsets to\nimprove anomaly detection performance while controlling computational costs.\nThird, we introduce Reconstruction Residuals Selection (RRS), a strategy that\nadaptively selects discriminative residuals for comprehensive identification of\nanomalous regions across multiple levels of granularity. We assess RealNet on\nfour benchmark datasets, and our results demonstrate significant improvements\nin both Image AUROC and Pixel AUROC compared to the current state-of-the-art\nmethods.", + "We assess RealNet on\nfour benchmark datasets, and our results demonstrate significant improvements\nin both Image AUROC and Pixel AUROC compared to the current state-of-the-art\nmethods. The code, data, and models are available at\nhttps://github.com/cnulab/RealNet.", + "Multimodal Large Language Models (MLLMs) leverage Large Language Models as a\ncognitive framework for diverse visual-language tasks. Recent efforts have been\nmade to equip MLLMs with visual perceiving and grounding capabilities. However,\nthere still remains a gap in providing fine-grained pixel-level perceptions and\nextending interactions beyond text-specific inputs. In this work, we propose\n{\\bf{AnyRef}}, a general MLLM model that can generate pixel-wise object\nperceptions and natural language descriptions from multi-modality references,\nsuch as texts, boxes, images, or audio. This innovation empowers users with\ngreater flexibility to engage with the model beyond textual and regional\nprompts, without modality-specific designs. Through our proposed refocusing\nmechanism, the generated grounding output is guided to better focus on the\nreferenced object, implicitly incorporating additional pixel-level supervision.\nThis simple modification utilizes attention scores generated during the\ninference of the LLM, eliminating the need for extra computations while exhibiting\nperformance enhancements in both grounding masks and referring expressions.\nWith only publicly available training data, our model achieves state-of-the-art\nresults across multiple benchmarks, including diverse modality referring\nsegmentation and region-level referring expression generation.", + "Recovering dense and long-range pixel motion in videos is a challenging\nproblem. Part of the difficulty arises from the 3D-to-2D projection process,\nleading to occlusions and discontinuities in the 2D motion domain. While 2D\nmotion can be intricate, we posit that the underlying 3D motion can often be\nsimple and low-dimensional. In this work, we propose to estimate point\ntrajectories in 3D space to mitigate the issues caused by image projection. Our\nmethod, named SpatialTracker, lifts 2D pixels to 3D using monocular depth\nestimators, represents the 3D content of each frame efficiently using a\ntriplane representation, and performs iterative updates using a transformer to\nestimate 3D trajectories. Tracking in 3D allows us to leverage\nas-rigid-as-possible (ARAP) constraints while simultaneously learning a\nrigidity embedding that clusters pixels into different rigid parts.
Extensive\nevaluation shows that our approach achieves state-of-the-art tracking\nperformance both qualitatively and quantitatively, particularly in challenging\nscenarios such as out-of-plane rotation.", + "Autonomous driving (AD) has made significant strides in recent years.\nHowever, existing frameworks struggle to interpret and execute spontaneous user\ninstructions, such as \"overtake the car ahead.\" Large Language Models (LLMs)\nhave demonstrated impressive reasoning capabilities showing potential to bridge\nthis gap. In this paper, we present LaMPilot, a novel framework that integrates\nLLMs into AD systems, enabling them to follow user instructions by generating\ncode that leverages established functional primitives. We also introduce\nLaMPilot-Bench, the first benchmark dataset specifically designed to\nquantitatively evaluate the efficacy of language model programs in AD. Adopting\nthe LaMPilot framework, we conduct extensive experiments to assess the\nperformance of off-the-shelf LLMs on LaMPilot-Bench. Our results demonstrate\nthe potential of LLMs in handling diverse driving scenarios and following user\ninstructions in driving. To facilitate further research in this area, we\nrelease our code and data at https://github.com/PurdueDigitalTwin/LaMPilot.", + "Test-time adaptation (TTA) has emerged as a promising solution to address\nperformance decay due to unforeseen distribution shifts between training and\ntest data. While recent TTA methods excel in adapting to test data variations,\nsuch adaptability exposes a model to vulnerability against malicious examples,\nan aspect that has received limited attention. Previous studies have uncovered\nsecurity vulnerabilities within TTA even when a small proportion of the test\nbatch is maliciously manipulated. In response to the emerging threat, we\npropose median batch normalization (MedBN), leveraging the robustness of the\nmedian for statistics estimation within the batch normalization layer during\ntest-time inference. Our method is algorithm-agnostic, thus allowing seamless\nintegration with existing TTA frameworks. Our experimental results on benchmark\ndatasets, including CIFAR10-C, CIFAR100-C and ImageNet-C, consistently\ndemonstrate that MedBN outperforms existing approaches in maintaining robust\nperformance across different attack scenarios, encompassing both instant and\ncumulative attacks. Through extensive experiments, we show that our approach\nsustains the performance even in the absence of attacks, achieving a practical\nbalance between robustness and performance.", + "Recent dataset deduplication techniques have demonstrated that content-aware\ndataset pruning can dramatically reduce the cost of training Vision-Language\nPretrained (VLP) models without significant performance losses compared to\ntraining on the original dataset. These results have been based on pruning\ncommonly used image-caption datasets collected from the web -- datasets that\nare known to harbor harmful social biases that may then be codified in trained\nmodels. In this work, we evaluate how deduplication affects the prevalence of\nthese biases in the resulting trained models and introduce an easy-to-implement\nmodification to the recent SemDeDup algorithm that can reduce the negative\neffects that we observe. 
When examining CLIP-style models trained on\ndeduplicated variants of LAION-400M, we find our proposed FairDeDup algorithm\nconsistently leads to improved fairness metrics over SemDeDup on the FairFace\nand FACET datasets while maintaining zero-shot performance on CLIP benchmarks.", + "Multimodal large language models (MLLMs) have recently achieved impressive\ngeneral-purpose vision-language capabilities through visual instruction tuning.\nHowever, current MLLMs primarily focus on image-level or box-level\nunderstanding, falling short in achieving fine-grained vision-language\nalignment at the pixel level. Besides, the lack of mask-based instruction data\nlimits their advancements. In this paper, we propose Osprey, a mask-text\ninstruction tuning approach, to extend MLLMs by incorporating fine-grained mask\nregions into language instruction, aiming at achieving pixel-wise visual\nunderstanding. To achieve this goal, we first meticulously curate a mask-based\nregion-text dataset with 724K samples, and then design a vision-language model\nby injecting pixel-level representation into LLM. Specifically, Osprey adopts a\nconvolutional CLIP backbone as the vision encoder and employs a mask-aware\nvisual extractor to extract precise visual mask features from high-resolution\ninput. Experimental results demonstrate Osprey's superiority in various region\nunderstanding tasks, showcasing its new capability for pixel-level instruction\ntuning.", + "Experimental results demonstrate Osprey's superiority in various region\nunderstanding tasks, showcasing its new capability for pixel-level instruction\ntuning. In particular, Osprey can be integrated with Segment Anything Model\n(SAM) seamlessly to obtain multi-granularity semantics. The source code,\ndataset and demo can be found at https://github.com/CircleRadon/Osprey.", + "Generalizability in deep neural networks plays a pivotal role in medical\nimage segmentation. However, deep learning-based medical image analyses tend to\noverlook the importance of frequency variance, which is a critical element for\nachieving a model that is both modality-agnostic and domain-generalizable.\nAdditionally, various models fail to account for the potential information loss\nthat can arise from multi-task learning under deep supervision, a factor that\ncan impair the model's representation ability. To address these challenges, we\npropose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical\nimage segmentation, which comprises two key components: a Multi-Frequency in\nMulti-Scale Attention (MFMSA) block and an Ensemble Sub-Decoding Module (E-SDM).\nThe MFMSA block refines the process of spatial feature extraction, particularly\nin capturing boundary features, by incorporating multi-frequency and\nmulti-scale features, thereby offering informative cues for tissue outline and\nanatomical structures. Moreover, we propose E-SDM to mitigate information loss\nin multi-task learning with deep supervision, especially during substantial\nupsampling from low resolution.
This affirms MADGNet as a robust solution for medical\nimage segmentation that excels in diverse imaging scenarios. Our MADGNet code\nis available in GitHub Link.", + "Even when using large multi-modal foundation models, few-shot learning is\nstill challenging -- if there is no proper inductive bias, it is nearly\nimpossible to keep the nuanced class attributes while removing the visually\nprominent attributes that spuriously correlate with class labels. To this end,\nwe find an inductive bias that the time-steps of a Diffusion Model (DM) can\nisolate the nuanced class attributes, i.e., as the forward diffusion adds noise\nto an image at each time-step, nuanced attributes are usually lost at an\nearlier time-step than the spurious attributes that are visually prominent.\nBuilding on this, we propose Time-step Few-shot (TiF) learner. We train\nclass-specific low-rank adapters for a text-conditioned DM to make up for the\nlost attributes, such that images can be accurately reconstructed from their\nnoisy ones given a prompt. Hence, at a small time-step, the adapter and prompt\nare essentially a parameterization of only the nuanced class attributes. For a\ntest image, we can use the parameterization to only extract the nuanced class\nattributes for classification.", + "Hence, at a small time-step, the adapter and prompt\nare essentially a parameterization of only the nuanced class attributes. For a\ntest image, we can use the parameterization to only extract the nuanced class\nattributes for classification. TiF learner significantly outperforms OpenCLIP\nand its adapters on a variety of fine-grained and customized few-shot learning\ntasks. Codes are in https://github.com/yue-zhongqi/tif.", + "Despite the progress of learning-based methods for 6D object pose estimation,\nthe trade-off between accuracy and scalability for novel objects still exists.\nSpecifically, previous methods for novel objects do not make good use of the\ntarget object's 3D shape information since they focus on generalization by\nprocessing the shape indirectly, making them less effective. We present\nGenFlow, an approach that enables both accuracy and generalization to novel\nobjects with the guidance of the target object's shape. Our method predicts\noptical flow between the rendered image and the observed image and refines the\n6D pose iteratively. It boosts the performance by a constraint of the 3D shape\nand the generalizable geometric knowledge learned from an end-to-end\ndifferentiable system. We further improve our model by designing a cascade\nnetwork architecture to exploit the multi-scale correlations and coarse-to-fine\nrefinement. GenFlow ranked first on the unseen object pose estimation\nbenchmarks in both the RGB and RGB-D cases. It also achieves performance\ncompetitive with existing state-of-the-art methods for the seen object pose\nestimation without any fine-tuning.", + "Few-Shot Class-Incremental Learning (FSCIL) introduces a paradigm in which\nthe problem space expands with limited data. FSCIL methods inherently face the\nchallenge of catastrophic forgetting as data arrives incrementally, making\nmodels susceptible to overwriting previously acquired knowledge. Moreover,\ngiven the scarcity of labeled samples available at any given time, models may\nbe prone to overfitting and find it challenging to strike a balance between\nextensive pretraining and the limited incremental data. 
To address these\nchallenges, we propose the OrCo framework built on two core principles:\nfeatures' orthogonality in the representation space, and contrastive learning.\nIn particular, we improve the generalization of the embedding space by\nemploying a combination of supervised and self-supervised contrastive losses\nduring the pretraining phase. Additionally, we introduce OrCo loss to address\nchallenges arising from data limitations during incremental sessions. Through\nfeature space perturbations and orthogonality between classes, the OrCo loss\nmaximizes margins and reserves space for the following incremental data. This,\nin turn, ensures the accommodation of incoming classes in the feature space\nwithout compromising previously acquired knowledge.", + "Through\nfeature space perturbations and orthogonality between classes, the OrCo loss\nmaximizes margins and reserves space for the following incremental data. This,\nin turn, ensures the accommodation of incoming classes in the feature space\nwithout compromising previously acquired knowledge. Our experimental results\nshowcase state-of-the-art performance across three benchmark datasets,\nincluding mini-ImageNet, CIFAR100, and CUB datasets. Code is available at\nhttps://github.com/noorahmedds/OrCo", + "As recent advances in mobile camera technology have enabled the capability to\ncapture high-resolution images, such as 4K images, the demand for an efficient\ndeblurring model handling large motion has increased. In this paper, we\ndiscover that the image residual errors, i.e., blur-sharp pixel differences,\ncan be grouped into some categories according to their motion blur type and how\ncomplex their neighboring pixels are. Inspired by this, we decompose the\ndeblurring (regression) task into blur pixel discretization (pixel-level blur\nclassification) and discrete-to-continuous conversion (regression with blur\nclass map) tasks. Specifically, we generate the discretized image residual\nerrors by identifying the blur pixels and then transform them to a continuous\nform, which is computationally more efficient than naively solving the original\nregression problem with continuous values. Here, we found that the\ndiscretization result, i.e., blur segmentation map, remarkably exhibits visual\nsimilarity with the image residual errors. As a result, our efficient model\nshows comparable performance to state-of-the-art methods in realistic\nbenchmarks, while our method is up to 10 times computationally more efficient.", + "Visual Instruction Tuning represents a novel learning paradigm involving the\nfine-tuning of pre-trained language models using task-specific instructions.\nThis paradigm shows promising zero-shot results in various natural language\nprocessing tasks but is still unexplored in vision emotion understanding. In\nthis work, we focus on enhancing the model's proficiency in understanding and\nadhering to instructions related to emotional contexts. Initially, we identify\nkey visual clues critical to visual emotion recognition. Subsequently, we\nintroduce a novel GPT-assisted pipeline for generating emotion visual\ninstruction data, effectively addressing the scarcity of annotated instruction\ndata in this domain. Expanding on the groundwork established by InstructBLIP,\nour proposed EmoVIT architecture incorporates emotion-specific instruction\ndata, leveraging the powerful capabilities of Large Language Models to enhance\nperformance. 
Through extensive experiments, our model showcases its proficiency\nin emotion classification, adeptness in affective reasoning, and competence in\ncomprehending humor. The comparative analysis provides a robust benchmark for\nEmotion Visual Instruction Tuning in the era of LLMs, providing valuable\ninsights and opening avenues for future exploration in this domain. Our code is\navailable at \\url{https://github.com/aimmemotion/EmoVIT}.", + "While recent supervised methods for reference-based object counting continue\nto improve the performance on benchmark datasets, they have to rely on small\ndatasets due to the cost associated with manually annotating dozens of objects\nin images. We propose UnCounTR, a model that can learn this task without\nrequiring any manual annotations. To this end, we construct \"Self-Collages\",\nimages with various pasted objects as training samples, that provide a rich\nlearning signal covering arbitrary object types and counts. Our method builds\non existing unsupervised representations and segmentation techniques to\nsuccessfully demonstrate for the first time the ability of reference-based\ncounting without manual supervision. Our experiments show that our method not\nonly outperforms simple baselines and generic models such as FasterRCNN and\nDETR, but also matches the performance of supervised counting models in some\ndomains.", + "With recent text-to-image models, anyone can generate deceptively realistic\nimages with arbitrary contents, fueling the growing threat of visual\ndisinformation. A key enabler for generating high-resolution images with low\ncomputational cost has been the development of latent diffusion models (LDMs).\nIn contrast to conventional diffusion models, LDMs perform the denoising\nprocess in the low-dimensional latent space of a pre-trained autoencoder (AE)\ninstead of the high-dimensional image space. Despite their relevance, the\nforensic analysis of LDMs is still in its infancy. In this work we propose\nAEROBLADE, a novel detection method which exploits an inherent component of\nLDMs: the AE used to transform images between image and latent space. We find\nthat generated images can be more accurately reconstructed by the AE than real\nimages, allowing for a simple detection approach based on the reconstruction\nerror. Most importantly, our method is easy to implement and does not require\nany training, yet nearly matches the performance of detectors that rely on\nextensive training. We empirically demonstrate that AEROBLADE is effective\nagainst state-of-the-art LDMs, including Stable Diffusion and Midjourney.", + "Most importantly, our method is easy to implement and does not require\nany training, yet nearly matches the performance of detectors that rely on\nextensive training. We empirically demonstrate that AEROBLADE is effective\nagainst state-of-the-art LDMs, including Stable Diffusion and Midjourney.\nBeyond detection, our approach allows for the qualitative analysis of images,\nwhich can be leveraged for identifying inpainted regions. We release our code\nand data at https://github.com/jonasricker/aeroblade .", + "Logit knowledge distillation attracts increasing attention due to its\npracticality in recent studies. However, it often suffers inferior performance\ncompared to the feature knowledge distillation. In this paper, we argue that\nexisting logit-based methods may be sub-optimal since they only leverage the\nglobal logit output that couples multiple semantic knowledge. 
This may transfer\nambiguous knowledge to the student and mislead its learning. To this end, we\npropose a simple but effective method, i.e., Scale Decoupled Distillation\n(SDD), for logit knowledge distillation. SDD decouples the global logit output\ninto multiple local logit outputs and establishes distillation pipelines for\nthem. This helps the student to mine and inherit fine-grained and unambiguous\nlogit knowledge. Moreover, the decoupled knowledge can be further divided into\nconsistent and complementary logit knowledge that transfers the semantic\ninformation and sample ambiguity, respectively. By increasing the weight of\ncomplementary parts, SDD can guide the student to focus more on ambiguous\nsamples, improving its discrimination ability.", + "Moreover, the decoupled knowledge can be further divided into\nconsistent and complementary logit knowledge that transfers the semantic\ninformation and sample ambiguity, respectively. By increasing the weight of\ncomplementary parts, SDD can guide the student to focus more on ambiguous\nsamples, improving its discrimination ability. Extensive experiments on several\nbenchmark datasets demonstrate the effectiveness of SDD for wide\nteacher-student pairs, especially in the fine-grained classification task. Code\nis available at: https://github.com/shicaiwei123/SDD-CVPR2024", + "We present NARUTO, a neural active reconstruction system that combines a\nhybrid neural representation with uncertainty learning, enabling high-fidelity\nsurface reconstruction. Our approach leverages a multi-resolution hash-grid as\nthe mapping backbone, chosen for its exceptional convergence speed and capacity\nto capture high-frequency local features.The centerpiece of our work is the\nincorporation of an uncertainty learning module that dynamically quantifies\nreconstruction uncertainty while actively reconstructing the environment. By\nharnessing learned uncertainty, we propose a novel uncertainty aggregation\nstrategy for goal searching and efficient path planning. Our system\nautonomously explores by targeting uncertain observations and reconstructs\nenvironments with remarkable completeness and fidelity. We also demonstrate the\nutility of this uncertainty-aware approach by enhancing SOTA neural SLAM\nsystems through an active ray sampling strategy. Extensive evaluations of\nNARUTO in various environments, using an indoor scene simulator, confirm its\nsuperior performance and state-of-the-art status in active reconstruction, as\nevidenced by its impressive results on benchmark datasets like Replica and\nMP3D.", + "Computer-Aided Design (CAD) model reconstruction from point clouds is an\nimportant problem at the intersection of computer vision, graphics, and machine\nlearning; it saves the designer significant time when iterating on in-the-wild\nobjects. Recent advancements in this direction achieve relatively reliable\nsemantic segmentation but still struggle to produce an adequate topology of the\nCAD model. In this work, we analyze the current state of the art for that\nill-posed task and identify shortcomings of existing methods. We propose a\nhybrid analytic-neural reconstruction scheme that bridges the gap between\nsegmented point clouds and structured CAD models and can be readily combined\nwith different segmentation backbones. Moreover, to power the surface fitting\nstage, we propose a novel implicit neural representation of freeform surfaces,\ndriving up the performance of our overall CAD reconstruction scheme. 
We\nextensively evaluate our method on the popular ABC benchmark of CAD models and\nset a new state-of-the-art for that dataset. Project page:\nhttps://www.obukhov.ai/point2cad.", + "We propose an unsupervised method for parsing large 3D scans of real-world\nscenes with easily-interpretable shapes. This work aims to provide a practical\ntool for analyzing 3D scenes in the context of aerial surveying and mapping,\nwithout the need for user annotations. Our approach is based on a probabilistic\nreconstruction model that decomposes an input 3D point cloud into a small set\nof learned prototypical 3D shapes. The resulting reconstruction is visually\ninterpretable and can be used to perform unsupervised instance and low-shot\nsemantic segmentation of complex scenes. We demonstrate the usefulness of our\nmodel on a novel dataset of seven large aerial LiDAR scans from diverse\nreal-world scenarios. Our approach outperforms state-of-the-art unsupervised\nmethods in terms of decomposition accuracy while remaining visually\ninterpretable. Our code and dataset are available at\nhttps://romainloiseau.fr/learnable-earth-parser/", + "We propose NeRFiller, an approach that completes missing portions of a 3D\ncapture via generative 3D inpainting using off-the-shelf 2D visual generative\nmodels. Often parts of a captured 3D scene or object are missing due to mesh\nreconstruction failures or a lack of observations (e.g., contact regions, such\nas the bottom of objects, or hard-to-reach areas). We approach this challenging\n3D inpainting problem by leveraging a 2D inpainting diffusion model. We\nidentify a surprising behavior of these models, where they generate more 3D\nconsistent inpaints when images form a 2$\\times$2 grid, and show how to\ngeneralize this behavior to more than four images. We then present an iterative\nframework to distill these inpainted regions into a single consistent 3D scene.\nIn contrast to related works, we focus on completing scenes rather than\ndeleting foreground objects, and our approach does not require tight 2D object\nmasks or text.", + "We then present an iterative\nframework to distill these inpainted regions into a single consistent 3D scene.\nIn contrast to related works, we focus on completing scenes rather than\ndeleting foreground objects, and our approach does not require tight 2D object\nmasks or text. We compare our approach to relevant baselines adapted to our\nsetting on a variety of scenes, where NeRFiller creates the most 3D consistent\nand plausible scene completions. Our project page is at\nhttps://ethanweber.me/nerfiller.", + "The burgeoning field of Multimodal Large Language Models (MLLMs) has\nexhibited remarkable performance in diverse tasks such as captioning,\ncommonsense reasoning, and visual scene understanding. However, the deployment\nof these large-scale MLLMs on client devices is hindered by their extensive\nmodel parameters, leading to a notable decline in generalization capabilities\nwhen these models are compressed for device deployment. Addressing this\nchallenge, we introduce a Cloud-Device Collaborative Continual Adaptation\nframework, designed to enhance the performance of compressed, device-deployed\nMLLMs by leveraging the robust capabilities of cloud-based, larger-scale MLLMs.\nOur framework is structured into three key components: a device-to-cloud uplink\nfor efficient data transmission, cloud-based knowledge adaptation, and an\noptimized cloud-to-device downlink for model deployment.
In the uplink phase,\nwe employ an Uncertainty-guided Token Sampling (UTS) strategy to effectively\nfilter out-of-distribution tokens, thereby reducing transmission costs and\nimproving training efficiency. On the cloud side, we propose an Adapter-based\nKnowledge Distillation (AKD) method to transfer refined knowledge from\nlarge-scale to compressed, pocket-size MLLMs.", + "On the cloud side, we propose an Adapter-based\nKnowledge Distillation (AKD) method to transfer refined knowledge from\nlarge-scale to compressed, pocket-size MLLMs. Furthermore, we propose a Dynamic\nWeight update Compression (DWC) strategy for the downlink, which adaptively\nselects and quantizes updated weight parameters, enhancing transmission\nefficiency and reducing the representational disparity between cloud and device\nmodels. Extensive experiments on several multimodal benchmarks demonstrate the\nsuperiority of our proposed framework over prior Knowledge Distillation and\ndevice-cloud collaboration methods. Notably, we also validate the feasibility\nof our approach in real-world experiments.", + "Source-Free Domain Adaptation (SFDA) aims to adapt a source model for a\ntarget domain, with only access to unlabeled target training data and the\nsource model pre-trained on a supervised source domain. Relying on pseudo\nlabeling and/or auxiliary supervision, conventional methods are inevitably\nerror-prone. To mitigate this limitation, in this work we for the first time\nexplore the potentials of off-the-shelf vision-language (ViL) multimodal models\n(e.g., CLIP) with rich whilst heterogeneous knowledge. We find that directly\napplying the ViL model to the target domain in a zero-shot fashion is\nunsatisfactory, as it is not specialized for this particular task but largely\ngeneric. To make it task specific, we propose a novel Distilling multimodal\nFoundation model (DIFO) approach. Specifically, DIFO alternates between two steps\nduring adaptation: (i) Customizing the ViL model by maximizing the mutual\ninformation with the target model in a prompt learning manner, (ii) Distilling\nthe knowledge of this customized ViL model to the target model.", + "Specifically, DIFO alternates between two steps\nduring adaptation: (i) Customizing the ViL model by maximizing the mutual\ninformation with the target model in a prompt learning manner, (ii) Distilling\nthe knowledge of this customized ViL model to the target model. For more\nfine-grained and reliable distillation, we further introduce two effective\nregularization terms, namely most-likely category encouragement and predictive\nconsistency. Extensive experiments show that DIFO significantly outperforms the\nstate-of-the-art alternatives. Code is here", + "An efficient and effective decoding mechanism is crucial in medical image\nsegmentation, especially in scenarios with limited computational resources.\nHowever, these decoding mechanisms usually come with high computational costs.\nTo address this concern, we introduce EMCAD, a new efficient multi-scale\nconvolutional attention decoder, designed to optimize both performance and\ncomputational efficiency. EMCAD leverages a unique multi-scale depth-wise\nconvolution block, significantly enhancing feature maps through multi-scale\nconvolutions. EMCAD also employs channel, spatial, and grouped (large-kernel)\ngated attention mechanisms, which are highly effective at capturing intricate\nspatial relationships while focusing on salient regions.
By employing group and\ndepth-wise convolution, EMCAD is very efficient and scales well (e.g., only\n1.91M parameters and 0.381G FLOPs are needed when using a standard encoder).\nOur rigorous evaluations across 12 datasets that belong to six medical image\nsegmentation tasks reveal that EMCAD achieves state-of-the-art (SOTA)\nperformance with 79.4% and 80.3% reduction in #Params and #FLOPs, respectively.", + "Our rigorous evaluations across 12 datasets that belong to six medical image\nsegmentation tasks reveal that EMCAD achieves state-of-the-art (SOTA)\nperformance with 79.4% and 80.3% reduction in #Params and #FLOPs, respectively.\nMoreover, EMCAD's adaptability to different encoders and versatility across\nsegmentation tasks further establish EMCAD as a promising tool, advancing the\nfield towards more efficient and accurate medical image analysis. Our\nimplementation is available at https://github.com/SLDGroup/EMCAD.", + "The ideal form of Visual Question Answering requires understanding, grounding\nand reasoning in the joint space of vision and language and serves as a proxy\nfor the AI task of scene understanding. However, most existing VQA benchmarks\nare limited to just picking the answer from a pre-defined set of options and\nlack attention to text. We present a new challenge with a dataset that contains\n23,781 questions based on 10124 image-text pairs. Specifically, the task\nrequires the model to align multimedia representations of the same entity to\nimplement multi-hop reasoning between image and text and finally use natural\nlanguage to answer the question. The aim of this challenge is to develop and\nbenchmark models that are capable of multimedia entity alignment, multi-step\nreasoning and open-ended answer generation.", + "Most domain adaptation (DA) methods are based on either convolutional\nneural networks (CNNs) or vision transformers (ViTs). They align the\ndistribution differences between domains as encoders without considering their\nunique characteristics. For instance, ViT excels in accuracy due to its\nsuperior ability to capture global representations, while CNN has an advantage\nin capturing local representations. This fact has led us to design a hybrid\nmethod to fully take advantage of both ViT and CNN, called Explicitly\nClass-specific Boundaries (ECB). ECB learns CNN on ViT to combine their\ndistinct strengths. In particular, we leverage ViT's properties to explicitly\nfind class-specific decision boundaries by maximizing the discrepancy between\nthe outputs of the two classifiers to detect target samples far from the source\nsupport. In contrast, the CNN encoder clusters target features based on the\npreviously defined class-specific boundaries by minimizing the discrepancy\nbetween the probabilities of the two classifiers. Finally, ViT and CNN mutually\nexchange knowledge to improve the quality of pseudo labels and reduce the\nknowledge discrepancies of these models.", + "In contrast, the CNN encoder clusters target features based on the\npreviously defined class-specific boundaries by minimizing the discrepancy\nbetween the probabilities of the two classifiers. Finally, ViT and CNN mutually\nexchange knowledge to improve the quality of pseudo labels and reduce the\nknowledge discrepancies of these models. Compared to conventional DA methods,\nour ECB achieves superior performance, which verifies its effectiveness in this\nhybrid model.
The project website can be found\nhttps://dotrannhattuong.github.io/ECB/website.", + "Curation methods for massive vision-language datasets trade off between\ndataset size and quality. However, even the highest quality of available\ncurated captions are far too short to capture the rich visual detail in an\nimage. To show the value of dense and highly-aligned image-text pairs, we\ncollect the Densely Captioned Images (DCI) dataset, containing 8012 natural\nimages human-annotated with mask-aligned descriptions averaging above 1000\nwords each. With precise and reliable captions associated with specific parts\nof an image, we can evaluate vision-language models' (VLMs) understanding of\nimage content with a novel task that matches each caption with its\ncorresponding subcrop. As current models are often limited to 77 text tokens,\nwe also introduce a summarized version (sDCI) in which each caption length is\nlimited. We show that modern techniques that make progress on standard\nbenchmarks do not correspond with significant improvement on our sDCI based\nbenchmark. Lastly, we finetune CLIP using sDCI and show significant\nimprovements over the baseline despite a small training set.", + "We show that modern techniques that make progress on standard\nbenchmarks do not correspond with significant improvement on our sDCI based\nbenchmark. Lastly, we finetune CLIP using sDCI and show significant\nimprovements over the baseline despite a small training set. By releasing the\nfirst human annotated dense image captioning dataset, we hope to enable the\ndevelopment of new benchmarks or fine-tuning recipes for the next generation of\nVLMs to come.", + "Text-to-image generative models can generate high-quality humans, but realism\nis lost when generating hands. Common artifacts include irregular hand poses,\nshapes, incorrect numbers of fingers, and physically implausible finger\norientations. To generate images with realistic hands, we propose a novel\ndiffusion-based architecture called HanDiffuser that achieves realism by\ninjecting hand embeddings in the generative process. HanDiffuser consists of\ntwo components: a Text-to-Hand-Params diffusion model to generate SMPL-Body and\nMANO-Hand parameters from input text prompts, and a Text-Guided\nHand-Params-to-Image diffusion model to synthesize images by conditioning on\nthe prompts and hand parameters generated by the previous component. We\nincorporate multiple aspects of hand representation, including 3D shapes and\njoint-level finger positions, orientations and articulations, for robust\nlearning and reliable performance during inference. We conduct extensive\nquantitative and qualitative experiments and perform user studies to\ndemonstrate the efficacy of our method in generating images with high-quality\nhands.", + "Motion blur is a frequently observed image artifact, especially under\ninsufficient illumination where exposure time has to be prolonged so as to\ncollect more photons for a bright enough image. Rather than simply removing\nsuch blurring effects, recent researches have aimed at decomposing a blurry\nimage into multiple sharp images with spatial and temporal coherence. Since\nmotion blur decomposition itself is highly ambiguous, priors from neighbouring\nframes or human annotation are usually needed for motion disambiguation. 
In\nthis paper, inspired by the complementary exposure characteristics of a global\nshutter (GS) camera and a rolling shutter (RS) camera, we propose to utilize\nthe ordered scanline-wise delay in a rolling shutter image to robustify motion\ndecomposition of a single blurry image. To evaluate this novel dual imaging\nsetting, we construct a triaxial system to collect realistic data, as well as a\ndeep network architecture that explicitly addresses temporal and contextual\ninformation through reciprocal branches for cross-shutter motion blur\ndecomposition. Experiment results have verified the effectiveness of our\nproposed algorithm, as well as the validity of our dual imaging setting.", + "Face morphing is a problem in computer graphics with numerous artistic and\nforensic applications. It is challenging due to variations in pose, lighting,\ngender, and ethnicity. This task consists of a warping for feature alignment\nand a blending for a seamless transition between the warped images. We propose\nto leverage coord-based neural networks to represent such warpings and\nblendings of face images. During training, we exploit the smoothness and\nflexibility of such networks by combining energy functionals employed in\nclassical approaches without discretizations. Additionally, our method is\ntime-dependent, allowing a continuous warping/blending of the images. During\nmorphing inference, we need both direct and inverse transformations of the\ntime-dependent warping. The first (second) is responsible for warping the\ntarget (source) image into the source (target) image. Our neural warping stores\nthose maps in a single network dismissing the need for inverting them. The\nresults of our experiments indicate that our method is competitive with both\nclassical and generative models under the lens of image quality and\nface-morphing detectors. Aesthetically, the resulting images present a seamless\nblending of diverse faces not yet usual in the literature.", + "This paper introduces a novel unified representation of diffusion models for\nimage generation and segmentation. Specifically, we use a colormap to represent\nentity-level masks, addressing the challenge of varying entity numbers while\naligning the representation closely with the image RGB domain. Two novel\nmodules, including the location-aware color palette and progressive dichotomy\nmodule, are proposed to support our mask representation. On the one hand, a\nlocation-aware palette guarantees the colors' consistency to entities'\nlocations. On the other hand, the progressive dichotomy module can efficiently\ndecode the synthesized colormap to high-quality entity-level masks in a\ndepth-first binary search without knowing the cluster numbers. To tackle the\nissue of lacking large-scale segmentation training data, we employ an\ninpainting pipeline and then improve the flexibility of diffusion models across\nvarious tasks, including inpainting, image synthesis, referring segmentation,\nand entity segmentation. Comprehensive experiments validate the efficiency of\nour approach, demonstrating comparable segmentation mask quality to\nstate-of-the-art and adaptability to multiple tasks. 
The code will be released\nat \\href{https://github.com/qqlu/Entity}{https://github.com/qqlu/Entity}.", + "With advancements in domain generalized stereo matching networks, models\npre-trained on synthetic data demonstrate strong robustness to unseen domains.\nHowever, few studies have investigated the robustness after fine-tuning them in\nreal-world scenarios, during which the domain generalization ability can be\nseriously degraded. In this paper, we explore fine-tuning stereo matching\nnetworks without compromising their robustness to unseen domains. Our\nmotivation stems from comparing Ground Truth (GT) versus Pseudo Label (PL) for\nfine-tuning: GT degrades, but PL preserves the domain generalization ability.\nEmpirically, we find that the difference between GT and PL implies valuable\ninformation that can regularize networks during fine-tuning. We also propose a\nframework to utilize this difference for fine-tuning, consisting of a frozen\nTeacher, an exponential moving average (EMA) Teacher, and a Student network.\nThe core idea is to utilize the EMA Teacher to measure what the Student has\nlearned and dynamically improve GT and PL for fine-tuning. We integrate our\nframework with state-of-the-art networks and evaluate its effectiveness on\nseveral real-world datasets. Extensive experiments show that our method\neffectively preserves the domain generalization ability during fine-tuning.", + "Post-training quantization (PTQ) is an efficient model compression technique\nthat quantizes a pretrained full-precision model using only a small calibration\nset of unlabeled samples without retraining. PTQ methods for convolutional\nneural networks (CNNs) provide quantization results comparable to\nfull-precision counterparts. Directly applying them to vision transformers\n(ViTs), however, incurs severe performance degradation, mainly due to the\ndifferences in architectures between CNNs and ViTs. In particular, the\ndistribution of activations for each channel varies drastically according to\ninput instances, making PTQ methods for CNNs inappropriate for ViTs. To address\nthis, we introduce instance-aware group quantization for ViTs (IGQ-ViT). To\nthis end, we propose to split the channels of activation maps into multiple\ngroups dynamically for each input instance, such that activations within each\ngroup share similar statistical properties. We also extend our scheme to\nquantize softmax attentions across tokens. In addition, the number of groups\nfor each layer is adjusted to minimize the discrepancies between predictions\nfrom quantized and full-precision models, under a bit-operation (BOP)\nconstraint.", + "We also extend our scheme to\nquantize softmax attentions across tokens. In addition, the number of groups\nfor each layer is adjusted to minimize the discrepancies between predictions\nfrom quantized and full-precision models, under a bit-operation (BOP)\nconstraint. We show extensive experimental results on image classification,\nobject detection, and instance segmentation, with various transformer\narchitectures, demonstrating the effectiveness of our approach.", + "The remarkable performance of Vision Transformers (ViTs) typically requires\nan extremely large training cost. Existing methods have attempted to accelerate\nthe training of ViTs, yet typically disregard method universality with accuracy\ndropping.
Meanwhile, they break the training consistency of the original\ntransformers, including the consistency of hyper-parameters, architecture, and\nstrategy, which prevents them from being widely applied to different\nTransformer networks. In this paper, we propose a novel token growth scheme,\nToken Expansion (termed ToE), to achieve consistent training acceleration for\nViTs. We introduce an \"initialization-expansion-merging\" pipeline to maintain\nthe integrity of the intermediate feature distribution of original\ntransformers, preventing the loss of crucial learnable information in the\ntraining process. ToE can not only be seamlessly integrated into the training\nand fine-tuning process of transformers (e.g., DeiT and LV-ViT), but is also\neffective for efficient training frameworks (e.g., EfficientTrain), without\naltering the original training hyper-parameters or architecture, or introducing\nadditional training strategies.", + "Extensive experiments demonstrate that ToE\nachieves about 1.3x faster training of ViTs in a lossless manner, or\neven with performance gains over the full-token training baselines. Code is\navailable at https://github.com/Osilly/TokenExpansion .", + "Can we synthesize 3D humans interacting with scenes without learning from any\n3D human-scene interaction data? We propose GenZI, the first zero-shot approach\nto generating 3D human-scene interactions. Key to GenZI is our distillation of\ninteraction priors from large vision-language models (VLMs), which have learned\na rich semantic space of 2D human-scene compositions. Given a natural language\ndescription and a coarse point location of the desired interaction in a 3D\nscene, we first leverage VLMs to imagine plausible 2D human interactions\ninpainted into multiple rendered views of the scene. We then formulate a robust\niterative optimization to synthesize the pose and shape of a 3D human model in\nthe scene, guided by consistency with the 2D interaction hypotheses. In\ncontrast to existing learning-based approaches, GenZI circumvents the\nconventional need for captured 3D interaction data, and allows for flexible\ncontrol of the 3D interaction synthesis with easy-to-use text prompts.\nExtensive experiments show that our zero-shot approach has high flexibility and\ngenerality, making it applicable to diverse scene types, including both indoor\nand outdoor environments.", + "Existing learning-based solutions to medical image segmentation have two\nimportant shortcomings. First, for most new segmentation tasks, a new model has\nto be trained or fine-tuned. This requires extensive resources and machine\nlearning expertise, and is therefore often infeasible for medical researchers\nand clinicians. Second, most existing segmentation methods produce a single\ndeterministic segmentation mask for a given image. In practice, however, there\nis often considerable uncertainty about what constitutes the correct\nsegmentation, and different expert annotators will often segment the same image\ndifferently. We tackle both of these problems with Tyche, a model that uses a\ncontext set to generate stochastic predictions for previously unseen tasks\nwithout the need to retrain. Tyche differs from other in-context segmentation\nmethods in two important ways. (1) We introduce a novel convolution block\narchitecture that enables interactions among predictions. (2) We introduce\nin-context test-time augmentation, a new mechanism to provide prediction\nstochasticity. 
When combined with appropriate model design and loss functions,\nTyche can predict a set of plausible diverse segmentation candidates for new or\nunseen medical images and segmentation tasks without the need to retrain.", + "Reassembly tasks play a fundamental role in many fields and multiple\napproaches exist to solve specific reassembly problems. In this context, we\nposit that a general unified model can effectively address them all,\nirrespective of the input data type (images, 3D, etc.). We introduce\nDiffAssemble, a Graph Neural Network (GNN)-based architecture that learns to\nsolve reassembly tasks using a diffusion model formulation. Our method treats\nthe elements of a set, whether pieces of 2D patch or 3D object fragments, as\nnodes of a spatial graph. Training is performed by introducing noise into the\nposition and rotation of the elements and iteratively denoising them to\nreconstruct the coherent initial pose. DiffAssemble achieves state-of-the-art\n(SOTA) results in most 2D and 3D reassembly tasks and is the first\nlearning-based approach that solves 2D puzzles for both rotation and\ntranslation. Furthermore, we highlight its remarkable reduction in run-time,\nperforming 11 times faster than the quickest optimization-based method for\npuzzle solving. Code available at https://github.com/IIT-PAVIS/DiffAssemble", + "Multi-view inverse rendering is the problem of estimating the scene\nparameters such as shapes, materials, or illuminations from a sequence of\nimages captured under different viewpoints. Many approaches, however, assume\nsingle light bounce and thus fail to recover challenging scenarios like\ninter-reflections. On the other hand, simply extending those methods to\nconsider multi-bounced light requires more assumptions to alleviate the\nambiguity. To address this problem, we propose Neural Incident Stokes Fields\n(NeISF), a multi-view inverse rendering framework that reduces ambiguities\nusing polarization cues. The primary motivation for using polarization cues is\nthat it is the accumulation of multi-bounced light, providing rich information\nabout geometry and material. Based on this knowledge, the proposed incident\nStokes field efficiently models the accumulated polarization effect with the\naid of an original physically-based differentiable polarimetric renderer.\nLastly, experimental results show that our method outperforms the existing\nworks in synthetic and real scenarios.", + "Open-vocabulary semantic segmentation aims at segmenting arbitrary categories\nexpressed in textual form. Previous works have trained over large amounts of\nimage-caption pairs to enforce pixel-level multimodal alignments. However,\ncaptions provide global information about the semantics of a given image but\nlack direct localization of individual concepts. Further, training on\nlarge-scale datasets inevitably brings significant computational costs. In this\npaper, we propose FreeDA, a training-free diffusion-augmented method for\nopen-vocabulary semantic segmentation, which leverages the ability of diffusion\nmodels to visually localize generated concepts and local-global similarities to\nmatch class-agnostic regions with semantic classes. Our approach involves an\noffline stage in which textual-visual reference embeddings are collected,\nstarting from a large set of captions and leveraging visual and semantic\ncontexts. 
At test time, these are queried to support the visual matching\nprocess, which is carried out by jointly considering class-agnostic regions and\nglobal semantic similarities. Extensive analyses demonstrate that FreeDA\nachieves state-of-the-art performance on five datasets, surpassing previous\nmethods by more than 7.0 average points in terms of mIoU and without requiring\nany training.", + "Recent advances in neural rendering have improved both training and rendering\ntimes by orders of magnitude. While these methods demonstrate state-of-the-art\nquality and speed, they are designed for photogrammetry of static scenes and do\nnot generalize well to freely moving humans in the environment. In this work,\nwe introduce Human Gaussian Splats (HUGS) that represents an animatable human\ntogether with the scene using 3D Gaussian Splatting (3DGS). Our method takes\nonly a monocular video with a small number of (50-100) frames, and it\nautomatically learns to disentangle the static scene and a fully animatable\nhuman avatar within 30 minutes. We utilize the SMPL body model to initialize\nthe human Gaussians. To capture details that are not modeled by SMPL (e.g.\ncloth, hairs), we allow the 3D Gaussians to deviate from the human body model.\nUtilizing 3D Gaussians for animated humans brings new challenges, including the\nartifacts created when articulating the Gaussians. We propose to jointly\noptimize the linear blend skinning weights to coordinate the movements of\nindividual Gaussians during animation.", + "Utilizing 3D Gaussians for animated humans brings new challenges, including the\nartifacts created when articulating the Gaussians. We propose to jointly\noptimize the linear blend skinning weights to coordinate the movements of\nindividual Gaussians during animation. Our approach enables novel-pose\nsynthesis of human and novel view synthesis of both the human and the scene. We\nachieve state-of-the-art rendering quality with a rendering speed of 60 FPS\nwhile being ~100x faster to train over previous work. Our code will be\nannounced here: https://github.com/apple/ml-hugs", + "Recent advancements in Large Vision-Language Models (VLMs) have shown great\npromise in natural image domains, allowing users to hold a dialogue about given\nvisual content. However, such general-domain VLMs perform poorly for Remote\nSensing (RS) scenarios, leading to inaccurate or fabricated information when\npresented with RS domain-specific queries. Such a behavior emerges due to the\nunique challenges introduced by RS imagery. For example, to handle\nhigh-resolution RS imagery with diverse scale changes across categories and\nmany small objects, region-level reasoning is necessary alongside holistic\nscene interpretation. Furthermore, the lack of domain-specific multimodal\ninstruction following data as well as strong backbone models for RS make it\nhard for the models to align their behavior with user queries. To address these\nlimitations, we propose GeoChat - the first versatile remote sensing VLM that\noffers multitask conversational capabilities with high-resolution RS images.\nSpecifically, GeoChat can not only answer image-level queries but also accepts\nregion inputs to hold region-specific dialogue. Furthermore, it can visually\nground objects in its responses by referring to their spatial coordinates.", + "Specifically, GeoChat can not only answer image-level queries but also accepts\nregion inputs to hold region-specific dialogue. 
Furthermore, it can visually\nground objects in its responses by referring to their spatial coordinates. To\naddress the lack of domain-specific datasets, we generate a novel RS multimodal\ninstruction-following dataset by extending image-text pairs from existing\ndiverse RS datasets. We establish a comprehensive benchmark for RS multitask\nconversations and compare with a number of baseline methods. GeoChat\ndemonstrates robust zero-shot performance on various RS tasks, e.g., image and\nregion captioning, visual question answering, scene classification, visually\ngrounded conversations and referring detection. Our code is available at\nhttps://github.com/mbzuai-oryx/geochat.", + "While current methods have shown promising progress on estimating 3D human\nmotion from monocular videos, their motion estimates are often physically\nunrealistic because they mainly consider kinematics. In this paper, we\nintroduce Physics-aware Pretrained Transformer (PhysPT), which improves\nkinematics-based motion estimates and infers motion forces. PhysPT exploits a\nTransformer encoder-decoder backbone to effectively learn human dynamics in a\nself-supervised manner. Moreover, it incorporates physics principles governing\nhuman motion. Specifically, we build a physics-based body representation and\ncontact force model. We leverage them to impose novel physics-inspired training\nlosses (i.e., force loss, contact loss, and Euler-Lagrange loss), enabling\nPhysPT to capture physical properties of the human body and the forces it\nexperiences. Experiments demonstrate that, once trained, PhysPT can be directly\napplied to kinematics-based estimates to significantly enhance their physical\nplausibility and generate favourable motion forces. Furthermore, we show that\nthese physically meaningful quantities translate into improved accuracy of an\nimportant downstream task: human action recognition.", + "High-definition (HD) maps have played an integral role in the development of\nmodern autonomous vehicle (AV) stacks, albeit with high associated labeling and\nmaintenance costs. As a result, many recent works have proposed methods for\nestimating HD maps online from sensor data, enabling AVs to operate outside of\npreviously-mapped regions. However, current online map estimation approaches\nare developed in isolation of their downstream tasks, complicating their\nintegration in AV stacks. In particular, they do not produce uncertainty or\nconfidence estimates. In this work, we extend multiple state-of-the-art online\nmap estimation methods to additionally estimate uncertainty and show how this\nenables more tightly integrating online mapping with trajectory forecasting. In\ndoing so, we find that incorporating uncertainty yields up to 50% faster\ntraining convergence and up to 15% better prediction performance on the\nreal-world nuScenes driving dataset.", + "The integration of visual inputs with large language models (LLMs) has led to\nremarkable advancements in multi-modal capabilities, giving rise to visual\nlarge language models (VLLMs). However, effectively harnessing VLLMs for\nintricate visual perception tasks remains a challenge. In this paper, we\npresent a novel end-to-end framework named PerceptionGPT, which efficiently and\neffectively equips the VLLMs with visual perception abilities by leveraging the\nrepresentation power of LLMs' token embedding. 
Our proposed method treats the\ntoken embedding of the LLM as the carrier of spatial information, then leverages\nlightweight visual task encoders and decoders to perform visual perception\ntasks (e.g., detection, segmentation). Our approach significantly alleviates\nthe training difficulty suffered by previous approaches that formulate the\nvisual outputs as discrete tokens, and enables achieving superior performance\nwith fewer trainable parameters, less training data, and shorter training time.\nMoreover, as only one token embedding is required to decode the visual outputs,\nthe resulting sequence length during inference is significantly reduced.\nConsequently, our approach enables accurate and flexible representations,\nseamless integration of visual perception tasks, and efficient handling of\nmultiple visual outputs.", + "Moreover, as only one token embedding is required to decode the visual outputs,\nthe resulting sequence length during inference is significantly reduced.\nConsequently, our approach enables accurate and flexible representations,\nseamless integration of visual perception tasks, and efficient handling of\nmultiple visual outputs. We validate the effectiveness and efficiency of our\napproach through extensive experiments. The results demonstrate significant\nimprovements over previous methods with much fewer trainable parameters and GPU\nhours, which facilitates future research in enabling LLMs with visual\nperception abilities.", + "We consider the task of animating 3D facial geometry from a speech signal.\nExisting works are primarily deterministic, focusing on learning a one-to-one\nmapping from speech signal to 3D face meshes on small datasets with limited\nspeakers. While these models can achieve high-quality lip articulation for\nspeakers in the training set, they are unable to capture the full and diverse\ndistribution of 3D facial motions that accompany speech in the real world.\nImportantly, the relationship between speech and facial motion is one-to-many,\ncontaining both inter-speaker and intra-speaker variations and necessitating a\nprobabilistic approach. In this paper, we identify and address key challenges\nthat have so far limited the development of probabilistic models: lack of\ndatasets and metrics that are suitable for training and evaluating them, as\nwell as the difficulty of designing a model that generates diverse results\nwhile remaining faithful to a strong conditioning signal such as speech. We first\npropose large-scale benchmark datasets and metrics suitable for probabilistic\nmodeling. Then, we demonstrate a probabilistic model that achieves both\ndiversity and fidelity to speech, outperforming other methods across the\nproposed benchmarks.", + "We first\npropose large-scale benchmark datasets and metrics suitable for probabilistic\nmodeling. Then, we demonstrate a probabilistic model that achieves both\ndiversity and fidelity to speech, outperforming other methods across the\nproposed benchmarks. Finally, we showcase useful applications of probabilistic\nmodels trained on these large-scale datasets: we can generate diverse\nspeech-driven 3D facial motion that matches unseen speaker styles extracted\nfrom reference clips; and our synthetic meshes can be used to improve the\nperformance of downstream audio-visual models.", + "Deep neural networks for learning Symmetric Positive Definite (SPD) matrices\nare gaining increasing attention in machine learning. 
Despite the significant\nprogress, most existing SPD networks use traditional Euclidean classifiers on\nan approximated space rather than intrinsic classifiers that accurately capture\nthe geometry of SPD manifolds. Inspired by Hyperbolic Neural Networks (HNNs),\nwe propose Riemannian Multinomial Logistics Regression (RMLR) for the\nclassification layers in SPD networks. We introduce a unified framework for\nbuilding Riemannian classifiers under the metrics pulled back from the\nEuclidean space, and showcase our framework under the parameterized\nLog-Euclidean Metric (LEM) and Log-Cholesky Metric (LCM). Besides, our\nframework offers a novel intrinsic explanation for the most popular LogEig\nclassifier in existing SPD networks. The effectiveness of our method is\ndemonstrated in three applications: radar recognition, human action\nrecognition, and electroencephalography (EEG) classification. The code is\navailable at https://github.com/GitZH-Chen/SPDMLR.git.", + "3D Gaussian splatting has achieved very impressive performance in real-time\nnovel view synthesis. However, it often suffers from over-reconstruction during\nGaussian densification where high-variance image regions are covered by a few\nlarge Gaussians only, leading to blur and artifacts in the rendered images. We\ndesign a progressive frequency regularization (FreGS) technique to tackle the\nover-reconstruction issue within the frequency space. Specifically, FreGS\nperforms coarse-to-fine Gaussian densification by exploiting low-to-high\nfrequency components that can be easily extracted with low-pass and high-pass\nfilters in the Fourier space. By minimizing the discrepancy between the\nfrequency spectrum of the rendered image and the corresponding ground truth, it\nachieves high-quality Gaussian densification and alleviates the\nover-reconstruction of Gaussian splatting effectively. Experiments over\nmultiple widely adopted benchmarks (e.g., Mip-NeRF360, Tanks-and-Temples and\nDeep Blending) show that FreGS achieves superior novel view synthesis and\noutperforms the state-of-the-art consistently.", + "In this paper, we look at cross-domain few-shot classification which presents\nthe challenging task of learning new classes in previously unseen domains with\nfew labelled examples. Existing methods, though somewhat effective, encounter\nseveral limitations, which we alleviate through two significant improvements.\nFirst, we introduce a lightweight parameter-efficient adaptation strategy to\naddress overfitting associated with fine-tuning a large number of parameters on\nsmall datasets. This strategy employs a linear transformation of pre-trained\nfeatures, significantly reducing the trainable parameter count. Second, we\nreplace the traditional nearest centroid classifier with a discriminative\nsample-aware loss function, enhancing the model's sensitivity to the inter- and\nintra-class variances within the training set for improved clustering in\nfeature space. Empirical evaluations on the Meta-Dataset benchmark showcase\nthat our approach not only improves accuracy up to 7.7\\% and 5.3\\% on\npreviously seen and unseen datasets, respectively, but also achieves the above\nperformance while being at least $\\sim3\\times$ more parameter-efficient than\nexisting methods, establishing a new state-of-the-art in cross-domain few-shot\nlearning. 
Our code is available at https://github.com/rashindrie/DIPA.", + "In this paper, we explore the unique modality of sketch for explainability,\nemphasising the profound impact of human strokes compared to conventional\npixel-oriented studies. Beyond explanations of network behavior, we discern the\ngenuine implications of explainability across diverse downstream sketch-related\ntasks. We propose a lightweight and portable explainability solution -- a\nseamless plugin that integrates effortlessly with any pre-trained model,\neliminating the need for re-training. Demonstrating its adaptability, we\npresent four applications: highly studied retrieval and generation, and\ncompletely novel assisted drawing and sketch adversarial attacks. The\ncentrepiece to our solution is a stroke-level attribution map that takes\ndifferent forms when linked with downstream tasks. By addressing the inherent\nnon-differentiability of rasterisation, we enable explanations at both coarse\nstroke level (SLA) and partial stroke level (P-SLA), each with its advantages\nfor specific downstream tasks.", + "While image diffusion models have made significant progress in text-driven 3D\ncontent creation, they often fail to accurately capture the intended meaning of\ntext prompts, especially for view information. This limitation leads to the\nJanus problem, where multi-faced 3D models are generated under the guidance of\nsuch diffusion models. In this paper, we propose a robust high-quality 3D\ncontent generation pipeline by exploiting orthogonal-view image guidance.\nFirst, we introduce a novel 2D diffusion model that generates an image\nconsisting of four orthogonal-view sub-images based on the given text prompt.\nThen, the 3D content is created using this diffusion model. Notably, the\ngenerated orthogonal-view image provides strong geometric structure priors and\nthus improves 3D consistency. As a result, it effectively resolves the Janus\nproblem and significantly enhances the quality of 3D content creation.\nAdditionally, we present a 3D synthesis fusion network that can further improve\nthe details of the generated 3D contents. Both quantitative and qualitative\nevaluations demonstrate that our method surpasses previous text-to-3D\ntechniques. Project page: https://efficientdreamer.github.io.", + "Achieving high synchronization in the synthesis of realistic, speech-driven\ntalking head videos presents a significant challenge. Traditional Generative\nAdversarial Networks (GAN) struggle to maintain consistent facial identity,\nwhile Neural Radiance Fields (NeRF) methods, although they can address this\nissue, often produce mismatched lip movements, inadequate facial expressions,\nand unstable head poses. A lifelike talking head requires synchronized\ncoordination of subject identity, lip movements, facial expressions, and head\nposes. The absence of these synchronizations is a fundamental flaw, leading to\nunrealistic and artificial outcomes. To address the critical issue of\nsynchronization, identified as the \"devil\" in creating realistic talking heads,\nwe introduce SyncTalk. This NeRF-based method effectively maintains subject\nidentity, enhancing synchronization and realism in talking head synthesis.\nSyncTalk employs a Face-Sync Controller to align lip movements with speech and\ninnovatively uses a 3D facial blendshape model to capture accurate facial\nexpressions. Our Head-Sync Stabilizer optimizes head poses, achieving more\nnatural head movements. 
The Portrait-Sync Generator restores hair details and\nblends the generated head with the torso for a seamless visual experience.", + "Our Head-Sync Stabilizer optimizes head poses, achieving more\nnatural head movements. The Portrait-Sync Generator restores hair details and\nblends the generated head with the torso for a seamless visual experience.\nExtensive experiments and user studies demonstrate that SyncTalk outperforms\nstate-of-the-art methods in synchronization and realism. We recommend watching\nthe supplementary video: https://ziqiaopeng.github.io/synctalk", + "Event cameras, characterized by high temporal resolution, high dynamic range,\nlow power consumption, and high pixel bandwidth, offer unique capabilities for\nobject detection in specialized contexts. Despite these advantages, the\ninherent sparsity and asynchrony of event data pose challenges to existing\nobject detection algorithms. Spiking Neural Networks (SNNs), inspired by the\nway the human brain codes and processes information, offer a potential solution\nto these difficulties. However, their performance in object detection using\nevent cameras is limited in current implementations. In this paper, we propose\nthe Spiking Fusion Object Detector (SFOD), a simple and efficient approach to\nSNN-based object detection. Specifically, we design a Spiking Fusion Module,\nachieving the first-time fusion of feature maps from different scales in SNNs\napplied to event cameras. Additionally, through integrating our analysis and\nexperiments conducted during the pretraining of the backbone network on the\nNCAR dataset, we delve deeply into the impact of spiking decoding strategies\nand loss functions on model performance.", + "Additionally, through integrating our analysis and\nexperiments conducted during the pretraining of the backbone network on the\nNCAR dataset, we delve deeply into the impact of spiking decoding strategies\nand loss functions on model performance. Thereby, we establish state-of-the-art\nclassification results based on SNNs, achieving 93.7\\% accuracy on the NCAR\ndataset. Experimental results on the GEN1 detection dataset demonstrate that\nthe SFOD achieves a state-of-the-art mAP of 32.1\\%, outperforming existing\nSNN-based approaches. Our research not only underscores the potential of SNNs\nin object detection with event cameras but also propels the advancement of\nSNNs. Code is available at https://github.com/yimeng-fan/SFOD.", + "We propose a new structure-from-motion framework to recover accurate camera\nposes and point clouds from unordered images. Traditional SfM systems typically\nrely on the successful detection of repeatable keypoints across multiple views\nas the first step, which is difficult for texture-poor scenes, and poor\nkeypoint detection may break down the whole SfM system. We propose a new\ndetector-free SfM framework to draw benefits from the recent success of\ndetector-free matchers to avoid the early determination of keypoints, while\nsolving the multi-view inconsistency issue of detector-free matchers.\nSpecifically, our framework first reconstructs a coarse SfM model from\nquantized detector-free matches. Then, it refines the model by a novel\niterative refinement pipeline, which iterates between an attention-based\nmulti-view matching module to refine feature tracks and a geometry refinement\nmodule to improve the reconstruction accuracy. Experiments demonstrate that the\nproposed framework outperforms existing detector-based SfM systems on common\nbenchmark datasets. 
We also collect a texture-poor SfM dataset to demonstrate\nthe capability of our framework to reconstruct texture-poor scenes.", + "Experiments demonstrate that the\nproposed framework outperforms existing detector-based SfM systems on common\nbenchmark datasets. We also collect a texture-poor SfM dataset to demonstrate\nthe capability of our framework to reconstruct texture-poor scenes. Based on\nthis framework, we take $\\textit{first place}$ in Image Matching Challenge\n2023.", + "Surveillance videos are an essential component of daily life with various\ncritical applications, particularly in public security. However, current\nsurveillance video tasks mainly focus on classifying and localizing anomalous\nevents. Existing methods are limited to detecting and classifying the\npredefined events with unsatisfactory semantic understanding, although they\nhave obtained considerable performance. To address this issue, we propose a new\nresearch direction of surveillance video-and-language understanding, and\nconstruct the first multimodal surveillance video dataset. We manually annotate\nthe real-world surveillance dataset UCF-Crime with fine-grained event content\nand timing. Our newly annotated dataset, UCA (UCF-Crime Annotation), contains\n23,542 sentences, with an average length of 20 words, and its annotated videos\nare as long as 110.7 hours. Furthermore, we benchmark SOTA models for four\nmultimodal tasks on this newly created dataset, which serve as new baselines\nfor surveillance video-and-language understanding. Through our experiments, we\nfind that mainstream models used in previously publicly available datasets\nperform poorly on surveillance video, which demonstrates the new challenges in\nsurveillance video-and-language understanding.", + "Through our experiments, we\nfind that mainstream models used in previously publicly available datasets\nperform poorly on surveillance video, which demonstrates the new challenges in\nsurveillance video-and-language understanding. To validate the effectiveness of\nour UCA, we conducted experiments on multimodal anomaly detection. The results\ndemonstrate that our multimodal surveillance learning can improve the\nperformance of conventional anomaly detection tasks. All the experiments\nhighlight the necessity of constructing this dataset to advance surveillance\nAI. The link to our dataset is provided at:\nhttps://xuange923.github.io/Surveillance-Video-Understanding.", + "In this paper, we study a new problem, Film Removal (FR), which attempts to\nremove the interference of wrinkled transparent films and reconstruct the\noriginal information under films for industrial recognition systems. We first\nphysically model the imaging of industrial materials covered by the film.\nConsidering the specular highlight from the film can be effectively recorded by\nthe polarized camera, we build a practical dataset with polarization\ninformation containing paired data with and without transparent film. We aim to\nremove interference from the film (specular highlights and other degradations)\nwith an end-to-end framework. To locate the specular highlight, we use an angle\nestimation network to optimize the polarization angle with the minimized\nspecular highlight. The image with minimized specular highlight is set as a\nprior for supporting the reconstruction network. Based on the prior and the\npolarized images, the reconstruction network can decouple all degradations from\nthe film. 
Extensive experiments show that our framework achieves SOTA\nperformance in both image reconstruction and industrial downstream tasks. Our\ncode will be released at \\url{https://github.com/jqtangust/FilmRemoval}.", + "While large-scale pre-trained text-to-image models can synthesize diverse and\nhigh-quality human-centered images, novel challenges arise with the nuanced task\nof \"identity fine editing\": precisely modifying specific features of a subject\nwhile maintaining its inherent identity and context. Existing personalization\nmethods either require time-consuming optimization or learning additional\nencoders, adept at \"identity re-contextualization\". However, they often\nstruggle with detailed and sensitive tasks like human face editing. To address\nthese challenges, we introduce DreamSalon, a noise-guided, staged-editing\nframework, uniquely focusing on detailed image manipulations and\nidentity-context preservation. By discerning editing and boosting stages via\nthe frequency and gradient of predicted noises, DreamSalon first performs\ndetailed manipulations on specific features in the editing stage, guided by\nhigh-frequency information, and then employs stochastic denoising in the\nboosting stage to improve image quality. For more precise editing, DreamSalon\nsemantically mixes source and target textual prompts, guided by differences in\ntheir embedding covariances, to direct the model's focus on specific\nmanipulation areas.", + "For more precise editing, DreamSalon\nsemantically mixes source and target textual prompts, guided by differences in\ntheir embedding covariances, to direct the model's focus on specific\nmanipulation areas. Our experiments demonstrate DreamSalon's ability to\nefficiently and faithfully edit fine details on human faces, outperforming\nexisting methods both qualitatively and quantitatively.", + "This work breaks through the Base-New Tradeoff (BNT) dilemma in prompt tuning,\ni.e., the better the tuned model generalizes to the base (or target) task, the\nworse it generalizes to new tasks, and vice versa. Specifically, through an\nin-depth analysis of the learned features of the base and new tasks, we observe\nthat the BNT stems from a channel bias issue, i.e., the vast majority of\nfeature channels are occupied by base-specific knowledge, resulting in the\ncollapse of task-shared knowledge important to new tasks. To address this, we\npropose the Decoupled Prompt Tuning (DePT) framework, which decouples\nbase-specific knowledge from feature channels into an isolated feature space\nduring prompt tuning, so as to maximally preserve task-shared knowledge in the\noriginal feature space for achieving better zero-shot generalization on new\ntasks. Importantly, our DePT is orthogonal to existing prompt tuning methods,\nhence it can improve all of them. Extensive experiments on 11 datasets show the\nstrong flexibility and effectiveness of DePT. Our code and pretrained models\nare available at https://github.com/Koorye/DePT.", + "It is time-consuming to render high-resolution images in applications such as\nvideo games and virtual reality, and thus super-resolution technologies have become\nincreasingly popular for real-time rendering. However, it is challenging to\npreserve sharp texture details, keep temporal stability, and avoid\nghosting artifacts in real-time super-resolution rendering. 
To address this\nissue, we introduce radiance demodulation to separate the rendered image or\nradiance into a lighting component and a material component, considering the\nfact that the light component is smoother than the rendered image so that the\nhigh-resolution material component with detailed textures can be easily\nobtained. We perform the super-resolution on the lighting component only and\nre-modulate it with the high-resolution material component to obtain the final\nsuper-resolution image with more texture details. A reliable warping module is\nproposed by explicitly marking the occluded regions to avoid the ghosting\nartifacts. To further enhance the temporal stability, we design a\nframe-recurrent neural network and a temporal loss to aggregate the previous\nand current frames, which can better capture the spatial-temporal consistency\namong reconstructed frames.", + "To further enhance the temporal stability, we design a\nframe-recurrent neural network and a temporal loss to aggregate the previous\nand current frames, which can better capture the spatial-temporal consistency\namong reconstructed frames. As a result, our method is able to produce\ntemporally stable results in real-time rendering with high-quality details,\neven in the challenging 4 $\\times$ 4 super-resolution scenarios.", + "Implicit neural representation has paved the way for new approaches to\ndynamic scene reconstruction and rendering. Nonetheless, cutting-edge dynamic\nneural rendering methods rely heavily on these implicit representations, which\nfrequently struggle to capture the intricate details of objects in the scene.\nFurthermore, implicit methods have difficulty achieving real-time rendering in\ngeneral dynamic scenes, limiting their use in a variety of tasks. To address\nthe issues, we propose a deformable 3D Gaussians Splatting method that\nreconstructs scenes using 3D Gaussians and learns them in canonical space with\na deformation field to model monocular dynamic scenes. We also introduce an\nannealing smoothing training mechanism with no extra overhead, which can\nmitigate the impact of inaccurate poses on the smoothness of time interpolation\ntasks in real-world datasets. Through a differential Gaussian rasterizer, the\ndeformable 3D Gaussians not only achieve higher rendering quality but also\nreal-time rendering speed. Experiments show that our method outperforms\nexisting methods significantly in terms of both rendering quality and speed,\nmaking it well-suited for tasks such as novel-view synthesis, time\ninterpolation, and real-time rendering.", + "Multi-camera-based 3D object detection has made notable progress in the past\nseveral years. However, we observe that there are cases (e.g. faraway regions)\nin which popular 2D object detectors are more reliable than state-of-the-art 3D\ndetectors. In this paper, to improve the performance of query-based 3D object\ndetectors, we present a novel query generating approach termed QAF2D, which\ninfers 3D query anchors from 2D detection results. A 2D bounding box of an\nobject in an image is lifted to a set of 3D anchors by associating each sampled\npoint within the box with depth, yaw angle, and size candidates. Then, the\nvalidity of each 3D anchor is verified by comparing its projection in the image\nwith its corresponding 2D box, and only valid anchors are kept and used to\nconstruct queries. 
The class information of the 2D bounding box associated with\neach query is also utilized to match the predicted boxes with ground truth for\nthe set-based loss.", + "The class information of the 2D bounding box associated with\neach query is also utilized to match the predicted boxes with ground truth for\nthe set-based loss. The image feature extraction backbone is shared between the\n3D detector and 2D detector by adding a small number of prompt parameters. We\nintegrate QAF2D into three popular query-based 3D object detectors and carry\nout comprehensive evaluations on the nuScenes dataset. The largest improvement\nthat QAF2D can bring about on the nuScenes validation subset is $2.3\\%$ NDS and\n$2.7\\%$ mAP. Code is available at https://github.com/nullmax-vision/QAF2D.", + "For privacy and security concerns, the need to erase unwanted information\nfrom pre-trained vision models is becoming evident nowadays. In real-world\nscenarios, erasure requests originate at any time from both users and model\nowners. These requests usually form a sequence. Therefore, under such a\nsetting, selective information is expected to be continuously removed from a\npre-trained model while maintaining the rest. We define this problem as\ncontinual forgetting and identify two key challenges. (i) For unwanted\nknowledge, efficient and effective deleting is crucial. (ii) For remaining\nknowledge, the impact brought by the forgetting procedure should be minimal. To\naddress them, we propose Group Sparse LoRA (GS-LoRA). Specifically, towards\n(i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks for\neach forgetting task independently, and towards (ii), a simple group sparse\nregularization is adopted, enabling automatic selection of specific LoRA groups\nand zeroing out the others. GS-LoRA is effective, parameter-efficient,\ndata-efficient, and easy to implement.", + "GS-LoRA is effective, parameter-efficient,\ndata-efficient, and easy to implement. We conduct extensive experiments on face\nrecognition, object detection and image classification and demonstrate that\nGS-LoRA manages to forget specific classes with minimal impact on other\nclasses. Codes will be released on \\url{https://github.com/bjzhb666/GS-LoRA}.", + "We present a new dataset called Real Acoustic Fields (RAF) that captures real\nacoustic room data from multiple modalities. The dataset includes high-quality\nand densely captured room impulse response data paired with multi-view images,\nand precise 6DoF pose tracking data for sound emitters and listeners in the\nrooms. We used this dataset to evaluate existing methods for novel-view\nacoustic synthesis and impulse response generation which previously relied on\nsynthetic data. In our evaluation, we thoroughly assessed existing audio and\naudio-visual models against multiple criteria and proposed settings to enhance\ntheir performance on real-world data. We also conducted experiments to\ninvestigate the impact of incorporating visual data (i.e., images and depth)\ninto neural acoustic field models. Additionally, we demonstrated the\neffectiveness of a simple sim2real approach, where a model is pre-trained with\nsimulated data and fine-tuned with sparse real-world data, resulting in\nsignificant improvements in the few-shot learning approach. 
RAF is the first\ndataset to provide densely captured room acoustic data, making it an ideal\nresource for researchers working on audio and audio-visual neural acoustic\nfield modeling techniques.", + "RAF is the first\ndataset to provide densely captured room acoustic data, making it an ideal\nresource for researchers working on audio and audio-visual neural acoustic\nfield modeling techniques. Demos and datasets are available on our project\npage: https://facebookresearch.github.io/real-acoustic-fields/", + "In this paper, we address web-scale visual entity recognition, specifically\nthe task of mapping a given query image to one of the 6 million existing\nentities in Wikipedia. One way of approaching a problem of such scale is using\ndual-encoder models (eg CLIP), where all the entity names and query images are\nembedded into a unified space, paving the way for an approximate k-NN search.\nAlternatively, it is also possible to re-purpose a captioning model to directly\ngenerate the entity names for a given image. In contrast, we introduce a novel\nGenerative Entity Recognition (GER) framework, which given an input image\nlearns to auto-regressively decode a semantic and discriminative ``code''\nidentifying the target entity. Our experiments demonstrate the efficacy of this\nGER paradigm, showcasing state-of-the-art performance on the challenging OVEN\nbenchmark. GER surpasses strong captioning, dual-encoder, visual matching and\nhierarchical classification baselines, affirming its advantage in tackling the\ncomplexities of web-scale recognition.", + "We introduce the new setting of open-vocabulary object 6D pose estimation, in\nwhich a textual prompt is used to specify the object of interest. In contrast\nto existing approaches, in our setting (i) the object of interest is specified\nsolely through the textual prompt, (ii) no object model (e.g., CAD or video\nsequence) is required at inference, and (iii) the object is imaged from two\nRGBD viewpoints of different scenes. To operate in this setting, we introduce a\nnovel approach that leverages a Vision-Language Model to segment the object of\ninterest from the scenes and to estimate its relative 6D pose. The key of our\napproach is a carefully devised strategy to fuse object-level information\nprovided by the prompt with local image features, resulting in a feature space\nthat can generalize to novel concepts. We validate our approach on a new\nbenchmark based on two popular datasets, REAL275 and Toyota-Light, which\ncollectively encompass 34 object instances appearing in four thousand image\npairs.", + "We validate our approach on a new\nbenchmark based on two popular datasets, REAL275 and Toyota-Light, which\ncollectively encompass 34 object instances appearing in four thousand image\npairs. The results demonstrate that our approach outperforms both a\nwell-established hand-crafted method and a recent deep learning-based baseline\nin estimating the relative 6D pose of objects in different scenes. Code and\ndataset are available at https://jcorsetti.github.io/oryon.", + "Annotating datasets for object detection is an expensive and time-consuming\nendeavor. To minimize this burden, active learning (AL) techniques are employed\nto select the most informative samples for annotation within a constrained\n\"annotation budget\". 
Traditional AL strategies typically rely on model\nuncertainty or sample diversity for query sampling, while more advanced methods\nhave focused on developing AL-specific object detector architectures to enhance\nperformance. However, these specialized approaches are not readily adaptable to\ndifferent object detectors due to the significant engineering effort required\nfor integration. To overcome this challenge, we introduce Plug and Play Active\nLearning (PPAL), a simple and effective AL strategy for object detection. PPAL\nis a two-stage method comprising uncertainty-based and diversity-based sampling\nphases. In the first stage, our Difficulty Calibrated Uncertainty Sampling\nleverages a category-wise difficulty coefficient that combines both\nclassification and localisation difficulties to re-weight instance\nuncertainties, from which we sample a candidate pool for the subsequent\ndiversity-based sampling.", + "In the first stage, our Difficulty Calibrated Uncertainty Sampling\nleverages a category-wise difficulty coefficient that combines both\nclassification and localisation difficulties to re-weight instance\nuncertainties, from which we sample a candidate pool for the subsequent\ndiversity-based sampling. In the second stage, we propose Category Conditioned\nMatching Similarity to better compute the similarities of multi-instance images\nas ensembles of their instance similarities, which is used by the k-Means++\nalgorithm to sample the final AL queries. PPAL makes no change to model\narchitectures or detector training pipelines; hence it can be easily\ngeneralized to different object detectors. We benchmark PPAL on the MS-COCO and\nPascal VOC datasets using different detector architectures and show that our\nmethod outperforms prior work by a large margin. Code is available at\nhttps://github.com/ChenhongyiYang/PPAL", + "Fine-tuning pre-trained vision-language models, like CLIP, has yielded\nsuccess on diverse downstream tasks. However, several pain points persist for\nthis paradigm: (i) directly tuning entire pre-trained models becomes both\ntime-intensive and computationally costly. Additionally, these tuned models\ntend to become highly specialized, limiting their practicality for real-world\ndeployment; (ii) recent studies indicate that pre-trained vision-language\nclassifiers may overly depend on spurious features -- patterns that correlate\nwith the target in training data, but are not related to the true labeling\nfunction; and (iii) existing studies on mitigating the reliance on spurious\nfeatures, largely based on the assumption that we can identify such features,\ndo not provide definitive assurance for real-world applications. As a\npilot study, this work focuses on mitigating the reliance on\nspurious features for CLIP without using any group annotation. To this end, we\nsystematically study the existence of spurious correlation on CLIP and\nCLIP+ERM. Following recent work on Deep Feature Reweighting (DFR), we first\nverify that last-layer retraining can greatly improve group robustness on\npretrained CLIP.", + "To this end, we\nsystematically study the existence of spurious correlation on CLIP and\nCLIP+ERM. Following recent work on Deep Feature Reweighting (DFR), we first\nverify that last-layer retraining can greatly improve group robustness on\npretrained CLIP. 
In view of them, we advocate a lightweight representation\ncalibration method for fine-tuning CLIP, by first generating a calibration set\nusing the pretrained CLIP, and then calibrating representations of samples\nwithin this set through contrastive learning, all without the need for group\nlabels. Extensive experiments and in-depth visualizations on several benchmarks\nvalidate the effectiveness of our proposals, largely reducing reliance and\nsignificantly boosting the model generalization.", + "Recent advances in text-to-motion generation using diffusion and\nautoregressive models have shown promising results. However, these models often\nsuffer from a trade-off between real-time performance, high fidelity, and\nmotion editability. To address this gap, we introduce MMM, a novel yet simple\nmotion generation paradigm based on Masked Motion Model. MMM consists of two\nkey components: (1) a motion tokenizer that transforms 3D human motion into a\nsequence of discrete tokens in latent space, and (2) a conditional masked\nmotion transformer that learns to predict randomly masked motion tokens,\nconditioned on the pre-computed text tokens. By attending to motion and text\ntokens in all directions, MMM explicitly captures inherent dependency among\nmotion tokens and semantic mapping between motion and text tokens. During\ninference, this allows parallel and iterative decoding of multiple motion\ntokens that are highly consistent with fine-grained text descriptions,\ntherefore simultaneously achieving high-fidelity and high-speed motion\ngeneration. In addition, MMM has innate motion editability. By simply placing\nmask tokens in the place that needs editing, MMM automatically fills the gaps\nwhile guaranteeing smooth transitions between editing and non-editing parts.", + "In addition, MMM has innate motion editability. By simply placing\nmask tokens in the place that needs editing, MMM automatically fills the gaps\nwhile guaranteeing smooth transitions between editing and non-editing parts.\nExtensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM\nsurpasses current leading methods in generating high-quality motion (evidenced\nby superior FID scores of 0.08 and 0.429), while offering advanced editing\nfeatures such as body-part modification, motion in-betweening, and the\nsynthesis of long motion sequences. In addition, MMM is two orders of magnitude\nfaster on a single mid-range GPU than editable motion diffusion models. Our\nproject page is available at \\url{https://exitudio.github.io/MMM-page}.", + "We present PEGASUS, a method for constructing a personalized generative 3D\nface avatar from monocular video sources. Our generative 3D avatar enables\ndisentangled controls to selectively alter the facial attributes (e.g., hair or\nnose) while preserving the identity. Our approach consists of two stages:\nsynthetic database generation and constructing a personalized generative\navatar. We generate a synthetic video collection of the target identity with\nvarying facial attributes, where the videos are synthesized by borrowing the\nattributes from monocular videos of diverse identities. Then, we build a\nperson-specific generative 3D avatar that can modify its attributes\ncontinuously while preserving its identity. Through extensive experiments, we\ndemonstrate that our method of generating a synthetic database and creating a\n3D generative avatar is the most effective in preserving identity while\nachieving high realism. 
Subsequently, we introduce a zero-shot approach to\nachieve the same goal of generative modeling more efficiently by leveraging a\npreviously constructed personalized generative model.", + "Despite significant recent progress in the field of autonomous driving,\nmodern methods still struggle and can incur serious accidents when encountering\nlong-tail unforeseen events and challenging urban scenarios. On the one hand,\nlarge language models (LLM) have shown impressive reasoning capabilities that\napproach \"Artificial General Intelligence\". On the other hand, previous\nautonomous driving methods tend to rely on limited-format inputs (e.g. sensor\ndata and navigation waypoints), restricting the vehicle's ability to understand\nlanguage information and interact with humans. To this end, this paper\nintroduces LMDrive, a novel language-guided, end-to-end, closed-loop autonomous\ndriving framework. LMDrive uniquely processes and integrates multi-modal sensor\ndata with natural language instructions, enabling interaction with humans and\nnavigation software in realistic instructional settings. To facilitate further\nresearch in language-based closed-loop autonomous driving, we also publicly\nrelease the corresponding dataset which includes approximately 64K\ninstruction-following data clips, and the LangAuto benchmark that tests the\nsystem's ability to handle complex instructions and challenging driving\nscenarios. Extensive closed-loop experiments are conducted to demonstrate\nLMDrive's effectiveness.", + "Extensive closed-loop experiments are conducted to demonstrate\nLMDrive's effectiveness. To the best of our knowledge, we're the very first\nwork to leverage LLMs for closed-loop end-to-end autonomous driving. Codes,\nmodels, and datasets can be found at https://github.com/opendilab/LMDrive", + "Perception plays a crucial role in various robot applications. However,\nexisting well-annotated datasets are biased towards autonomous driving\nscenarios, while unlabelled SLAM datasets are quickly over-fitted, and often\nlack environment and domain variations. To expand the frontier of these fields,\nwe introduce a comprehensive dataset named MCD (Multi-Campus Dataset),\nfeaturing a wide range of sensing modalities, high-accuracy ground truth, and\ndiverse challenging environments across three Eurasian university campuses. MCD\ncomprises both CCS (Classical Cylindrical Spinning) and NRE (Non-Repetitive\nEpicyclic) lidars, high-quality IMUs (Inertial Measurement Units), cameras, and\nUWB (Ultra-WideBand) sensors. Furthermore, in a pioneering effort, we introduce\nsemantic annotations of 29 classes over 59k sparse NRE lidar scans across three\ndomains, thus providing a novel challenge to existing semantic segmentation\nresearch upon this largely unexplored lidar modality.", + "Furthermore, in a pioneering effort, we introduce\nsemantic annotations of 29 classes over 59k sparse NRE lidar scans across three\ndomains, thus providing a novel challenge to existing semantic segmentation\nresearch upon this largely unexplored lidar modality. Finally, we propose, for\nthe first time to the best of our knowledge, continuous-time ground truth based\non optimization-based registration of lidar-inertial data on large survey-grade\nprior maps, which are also publicly released, each several times the size of\nexisting ones. 
We conduct a rigorous evaluation of numerous state-of-the-art\nalgorithms on MCD, report their performance, and highlight the challenges\nawaiting solutions from the research community.", + "We introduce CyberDemo, a novel approach to robotic imitation learning that\nleverages simulated human demonstrations for real-world tasks. By incorporating\nextensive data augmentation in a simulated environment, CyberDemo outperforms\ntraditional in-domain real-world demonstrations when transferred to the real\nworld, handling diverse physical and visual conditions. Regardless of its\naffordability and convenience in data collection, CyberDemo outperforms\nbaseline methods in terms of success rates across various tasks and exhibits\ngeneralizability with previously unseen objects. For example, it can rotate\nnovel tetra-valve and penta-valve, despite human demonstrations only involving\ntri-valves. Our research demonstrates the significant potential of simulated\nhuman demonstrations for real-world dexterous manipulation tasks. More details\ncan be found at https://cyber-demo.github.io", + "In this paper, we investigate a new problem called narrative action\nevaluation (NAE). NAE aims to generate professional commentary that evaluates\nthe execution of an action. Unlike traditional tasks such as score-based action\nquality assessment and video captioning involving superficial sentences, NAE\nfocuses on creating detailed narratives in natural language. These narratives\nprovide intricate descriptions of actions along with objective evaluations. NAE\nis a more challenging task because it requires both narrative flexibility and\nevaluation rigor. One existing possible solution is to use multi-task learning,\nwhere narrative language and evaluative information are predicted separately.\nHowever, this approach results in reduced performance for individual tasks\nbecause of variations between tasks and differences in modality between\nlanguage information and evaluation information. To address this, we propose a\nprompt-guided multimodal interaction framework. This framework utilizes a pair\nof transformers to facilitate the interaction between different modalities of\ninformation. It also uses prompts to transform the score regression task into a\nvideo-text matching task, thus enabling task interactivity. To support further\nresearch in this field, we re-annotate the MTL-AQA and FineGym datasets with\nhigh-quality and comprehensive action narration.", + "It also uses prompts to transform the score regression task into a\nvideo-text matching task, thus enabling task interactivity. To support further\nresearch in this field, we re-annotate the MTL-AQA and FineGym datasets with\nhigh-quality and comprehensive action narration. Additionally, we establish\nbenchmarks for NAE. Extensive experiment results prove that our method\noutperforms separate learning methods and naive multi-task learning methods.\nData and code are released at https://github.com/shiyi-zh0408/NAE_CVPR2024.", + "Anomaly detection (AD) aims to identify defective images and localize their\ndefects (if any). Ideally, AD models should be able to detect defects over many\nimage classes; without relying on hard-coded class names that can be\nuninformative or inconsistent across datasets; learn without anomaly\nsupervision; and be robust to the long-tailed distributions of real-world\napplications. 
To address these challenges, we formulate the problem of\nlong-tailed AD by introducing several datasets with different levels of class\nimbalance and metrics for performance evaluation. We then propose a novel\nmethod, LTAD, to detect defects from multiple and long-tailed classes, without\nrelying on dataset class names. LTAD combines AD by reconstruction and semantic\nAD modules. AD by reconstruction is implemented with a transformer-based\nreconstruction module. Semantic AD is implemented with a binary classifier,\nwhich relies on learned pseudo class names and a pretrained foundation model.\nThese modules are learned over two phases. Phase 1 learns the pseudo-class\nnames and a variational autoencoder (VAE) for feature synthesis that augments\nthe training data to combat long-tails.", + "These modules are learned over two phases. Phase 1 learns the pseudo-class\nnames and a variational autoencoder (VAE) for feature synthesis that augments\nthe training data to combat long-tails. Phase 2 then learns the parameters of\nthe reconstruction and classification modules of LTAD. Extensive experiments\nusing the proposed long-tailed datasets show that LTAD substantially\noutperforms the state-of-the-art methods for most forms of dataset imbalance.\nThe long-tailed dataset split is available at\nhttps://zenodo.org/records/10854201 .", + "Although soft prompt tuning is effective in efficiently adapting\nVision-Language (V&L) models for downstream tasks, it shows limitations in\ndealing with distribution shifts. We address this issue with Attribute-Guided\nPrompt Tuning (ArGue), making three key contributions. 1) In contrast to the\nconventional approach of directly appending soft prompts preceding class names,\nwe align the model with primitive visual attributes generated by Large Language\nModels (LLMs). We posit that a model's ability to express high confidence in\nthese attributes signifies its capacity to discern the correct class\nrationales. 2) We introduce attribute sampling to eliminate disadvantageous\nattributes, thus only semantically meaningful attributes are preserved. 3) We\npropose negative prompting, explicitly enumerating class-agnostic attributes to\nactivate spurious correlations and encourage the model to generate highly\northogonal probability distributions in relation to these negative features. In\nexperiments, our method significantly outperforms current state-of-the-art\nprompt tuning methods on both novel class prediction and out-of-distribution\ngeneralization tasks.", + "Recent advances in text-to-video generation have demonstrated the utility of\npowerful diffusion models. Nevertheless, the problem is not trivial when\nshaping diffusion models to animate static image (i.e., image-to-video\ngeneration). The difficulty originates from the aspect that the diffusion\nprocess of subsequent animated frames should not only preserve the faithful\nalignment with the given image but also pursue temporal coherence among\nadjacent frames. To alleviate this, we present TRIP, a new recipe of\nimage-to-video diffusion paradigm that pivots on image noise prior derived from\nstatic image to jointly trigger inter-frame relational reasoning and ease the\ncoherent temporal modeling via temporal residual learning. 
Technically, the\nimage noise prior is first attained through one-step backward diffusion process\nbased on both static image and noised video latent codes.", + "Technically, the\nimage noise prior is first attained through one-step backward diffusion process\nbased on both static image and noised video latent codes. Next, TRIP executes a\nresidual-like dual-path scheme for noise prediction: 1) a shortcut path that\ndirectly takes image noise prior as the reference noise of each frame to\namplify the alignment between the first frame and subsequent frames; 2) a\nresidual path that employs 3D-UNet over noised video and static image latent\ncodes to enable inter-frame relational reasoning, thereby easing the learning\nof the residual noise for each frame. Furthermore, both reference and residual\nnoise of each frame are dynamically merged via attention mechanism for final\nvideo generation. Extensive experiments on WebVid-10M, DTDB and MSR-VTT\ndatasets demonstrate the effectiveness of our TRIP for image-to-video\ngeneration. Please see our project page at https://trip-i2v.github.io/TRIP/.", + "To adequately utilize the available image evidence in multi-view video-based\navatar modeling, we propose TexVocab, a novel avatar representation that\nconstructs a texture vocabulary and associates body poses with texture maps for\nanimation. Given multi-view RGB videos, our method initially back-projects all\nthe available images in the training videos to the posed SMPL surface,\nproducing texture maps in the SMPL UV domain. Then we construct pairs of human\nposes and texture maps to establish a texture vocabulary for encoding dynamic\nhuman appearances under various poses. Unlike the commonly used joint-wise\nmanner, we further design a body-part-wise encoding strategy to learn the\nstructural effects of the kinematic chain. Given a driving pose, we query the\npose feature hierarchically by decomposing the pose vector into several body\nparts and interpolating the texture features for synthesizing fine-grained\nhuman dynamics. Overall, our method is able to create animatable human avatars\nwith detailed and dynamic appearances from RGB videos, and the experiments show\nthat our method outperforms state-of-the-art approaches. The project page can\nbe found at https://texvocab.github.io/.", + "2D keypoints are commonly used as an additional cue to refine estimated 3D\nhuman meshes. Current methods optimize the pose and shape parameters with a\nreprojection loss on the provided 2D keypoints. Such an approach, while simple\nand intuitive, has limited effectiveness because the optimal solution is hard\nto find in ambiguous parameter space and may sacrifice depth. Additionally,\ndivergent gradients from distal joints complicate and deviate the refinement of\nproximal joints in the kinematic chain. To address these, we introduce\nKinematic-Tree Rotation (KITRO), a novel mesh refinement strategy that\nexplicitly models depth and human kinematic-tree structure. KITRO treats\nrefinement from a bone-wise perspective. Unlike previous methods which perform\ngradient-based optimizations, our method calculates bone directions in closed\nform. By accounting for the 2D pose, bone length, and parent joint's depth, the\ncalculation results in two possible directions for each child joint. 
We then\nuse a decision tree to trace binary choices for all bones along the human\nskeleton's kinematic-tree to select the most probable hypothesis. Our\nexperiments across various datasets and baseline models demonstrate that KITRO\nsignificantly improves 3D joint estimation accuracy and achieves an ideal 2D\nfit simultaneously. Our code is available at: https://github.com/MartaYang/KITRO.", + "We present GigaPose, a fast, robust, and accurate method for CAD-based novel\nobject pose estimation in RGB images. GigaPose first leverages discriminative\n\"templates\", rendered images of the CAD models, to recover the out-of-plane\nrotation and then uses patch correspondences to estimate the four remaining\nparameters. Our approach samples templates in only a two-degrees-of-freedom\nspace instead of the usual three and matches the input image to the templates\nusing fast nearest-neighbor search in feature space, resulting in a speedup\nfactor of 35x compared to the state of the art. Moreover, GigaPose is\nsignificantly more robust to segmentation errors. Our extensive evaluation on\nthe seven core datasets of the BOP challenge demonstrates that it achieves\nstate-of-the-art accuracy and can be seamlessly integrated with existing\nrefinement methods. Additionally, we show the potential of GigaPose with 3D\nmodels predicted by recent work on 3D reconstruction from a single image,\nrelaxing the need for CAD models and making 6D object pose estimation much more\nconvenient. Our source code and trained models are publicly available at\nhttps://github.com/nv-nguyen/gigaPose", + "Vanilla text-to-image diffusion models struggle with generating accurate\nhuman images, commonly resulting in imperfect anatomies such as unnatural\npostures or disproportionate limbs. Existing methods address this issue mostly\nby fine-tuning the model with extra images or adding additional controls --\nhuman-centric priors such as pose or depth maps -- during the image generation\nphase. This paper explores the integration of these human-centric priors\ndirectly into the model fine-tuning stage, essentially eliminating the need for\nextra conditions at the inference stage. We realize this idea by proposing a\nhuman-centric alignment loss to strengthen human-related information from the\ntextual prompts within the cross-attention maps. To ensure semantic detail\nrichness and human structural accuracy during fine-tuning, we introduce\nscale-aware and step-wise constraints within the diffusion process, according\nto an in-depth analysis of the cross-attention layer. Extensive experiments\nshow that our method largely improves over state-of-the-art text-to-image\nmodels to synthesize high-quality human images based on user-written prompts.\nProject page: \\url{https://hcplayercvpr2024.github.io}.", + "This paper presents a video inversion approach for zero-shot video editing,\nwhich models the input video with a low-rank representation during the inversion\nprocess.
The existing video editing methods usually apply the typical 2D DDIM\ninversion or naive spatial-temporal DDIM inversion before editing, which\nleverages time-varying representation for each frame to derive noisy latent.\nUnlike most existing approaches, we propose a Spatial-Temporal\nExpectation-Maximization (STEM) inversion, which formulates the dense video\nfeature under an expectation-maximization manner and iteratively estimates a\nmore compact basis set to represent the whole video. Each frame applies the\nfixed and global representation for inversion, which is more friendly for\ntemporal consistency during reconstruction and editing. Extensive qualitative\nand quantitative experiments demonstrate that our STEM inversion can achieve\nconsistent improvement on two state-of-the-art video editing methods. Project\npage: https://stem-inv.github.io/page/.", + "Trackers that follow Siamese paradigm utilize similarity matching between\ntemplate and search region features for tracking. Many methods have been\nexplored to enhance tracking performance by incorporating tracking history to\nbetter handle scenarios involving target appearance variations such as\ndeformation and occlusion. However, the utilization of historical information\nin existing methods is insufficient and incomprehensive, which typically\nrequires repetitive training and introduces a large amount of computation. In\nthis paper, we show that by providing a tracker that follows Siamese paradigm\nwith precise and updated historical information, a significant performance\nimprovement can be achieved with completely unchanged parameters. Based on\nthis, we propose a historical prompt network that uses refined historical\nforeground masks and historical visual features of the target to provide\ncomprehensive and precise prompts for the tracker. We build a novel tracker\ncalled HIPTrack based on the historical prompt network, which achieves\nconsiderable performance improvements without the need to retrain the entire\nmodel. We conduct experiments on seven datasets and experimental results\ndemonstrate that our method surpasses the current state-of-the-art trackers on\nLaSOT, LaSOText, GOT-10k and NfS.", + "We conduct experiments on seven datasets and experimental results\ndemonstrate that our method surpasses the current state-of-the-art trackers on\nLaSOT, LaSOText, GOT-10k and NfS. Furthermore, the historical prompt network\ncan seamlessly integrate as a plug-and-play module into existing trackers,\nproviding performance enhancements. The source code is available at\nhttps://github.com/WenRuiCai/HIPTrack.", + "Existing photorealistic relightable hand models require extensive\nidentity-specific observations in different views, poses, and illuminations,\nand face challenges in generalizing to natural illuminations and novel\nidentities. To bridge this gap, we present URHand, the first universal\nrelightable hand model that generalizes across viewpoints, poses,\nilluminations, and identities. Our model allows few-shot personalization using\nimages captured with a mobile phone, and is ready to be photorealistically\nrendered under novel illuminations. To simplify the personalization process\nwhile retaining photorealism, we build a powerful universal relightable prior\nbased on neural relighting from multi-view images of hands captured in a light\nstage with hundreds of identities. 
The key challenge is scaling the\ncross-identity training while maintaining personalized fidelity and sharp\ndetails without compromising generalization under natural illuminations. To\nthis end, we propose a spatially varying linear lighting model as the neural\nrenderer that takes physics-inspired shading as input feature. By removing\nnon-linear activations and bias, our specifically designed lighting model\nexplicitly keeps the linearity of light transport.", + "To\nthis end, we propose a spatially varying linear lighting model as the neural\nrenderer that takes physics-inspired shading as input feature. By removing\nnon-linear activations and bias, our specifically designed lighting model\nexplicitly keeps the linearity of light transport. This enables single-stage\ntraining from light-stage data while generalizing to real-time rendering under\narbitrary continuous illuminations across diverse identities. In addition, we\nintroduce the joint learning of a physically based model and our neural\nrelighting model, which further improves fidelity and generalization. Extensive\nexperiments show that our approach achieves superior performance over existing\nmethods in terms of both quality and generalizability. We also demonstrate\nquick personalization of URHand from a short phone scan of an unseen identity.", + "While recent advances in neural radiance field enable realistic digitization\nfor large-scale scenes, the image-capturing process is still time-consuming and\nlabor-intensive. Previous works attempt to automate this process using the\nNext-Best-View (NBV) policy for active 3D reconstruction. However, the existing\nNBV policies heavily rely on hand-crafted criteria, limited action space, or\nper-scene optimized representations. These constraints limit their\ncross-dataset generalizability. To overcome them, we propose GenNBV, an\nend-to-end generalizable NBV policy. Our policy adopts a reinforcement learning\n(RL)-based framework and extends typical limited action space to 5D free space.\nIt empowers our agent drone to scan from any viewpoint, and even interact with\nunseen geometries during training. To boost the cross-dataset generalizability,\nwe also propose a novel multi-source state embedding, including geometric,\nsemantic, and action representations. We establish a benchmark using the Isaac\nGym simulator with the Houses3K and OmniObject3D datasets to evaluate this NBV\npolicy.", + "To boost the cross-dataset generalizability,\nwe also propose a novel multi-source state embedding, including geometric,\nsemantic, and action representations. We establish a benchmark using the Isaac\nGym simulator with the Houses3K and OmniObject3D datasets to evaluate this NBV\npolicy. Experiments demonstrate that our policy achieves a 98.26% and 97.12%\ncoverage ratio on unseen building-scale objects from these datasets,\nrespectively, outperforming prior solutions.", + "Multi-view clustering (MVC) aims at exploring category structures among\nmulti-view data in self-supervised manners. Multiple views provide more\ninformation than single views and thus existing MVC methods can achieve\nsatisfactory performance. However, their performance might seriously degenerate\nwhen the views are noisy in practical multi-view scenarios. 
In this paper, we\nformally investigate the drawback of noisy views and then propose a\ntheoretically grounded deep MVC method (namely MVCAN) to address this issue.\nSpecifically, we propose a novel MVC objective that enables un-shared\nparameters and inconsistent clustering predictions across multiple views to\nreduce the side effects of noisy views. Furthermore, a two-level multi-view\niterative optimization is designed to generate robust learning targets for\nrefining individual views' representation learning. Theoretical analysis\nreveals that MVCAN works by achieving multi-view consistency,\ncomplementarity, and noise robustness. Finally, extensive experiments on public\ndatasets demonstrate that MVCAN outperforms state-of-the-art methods and is\nrobust against the existence of noisy views.", + "Vision and language generative models have grown rapidly in recent\nyears. For video generation, various open-source models and publicly available\nservices have been developed to generate high-quality videos. However, these\nmethods often use a few metrics, e.g., FVD or IS, to evaluate the performance.\nWe argue that it is hard to judge large conditional generative models from\nsuch simple metrics since these models are often trained on very large datasets\nwith multi-aspect abilities. Thus, we propose a novel framework and pipeline\nfor exhaustively evaluating the performance of the generated videos. Our\napproach involves generating a diverse and comprehensive list of 700 prompts\nfor text-to-video generation, which is based on an analysis of real-world user\ndata and generated with the assistance of a large language model. Then, we\nevaluate the state-of-the-art video generative models on our carefully designed\nbenchmark, in terms of visual qualities, content qualities, motion qualities,\nand text-video alignment with 17 well-selected objective metrics. To obtain the\nfinal leaderboard of the models, we further fit a series of coefficients to\nalign the objective metrics to the users' opinions.", + "To obtain the\nfinal leaderboard of the models, we further fit a series of coefficients to\nalign the objective metrics to the users' opinions. Based on the proposed human\nalignment method, our final score shows a higher correlation than simply\naveraging the metrics, showing the effectiveness of the proposed evaluation\nmethod.", + "3D occupancy prediction is an important task for the robustness of\nvision-centric autonomous driving, which aims to predict whether each point is\noccupied in the surrounding 3D space. Existing methods usually require 3D\noccupancy labels to produce meaningful results. However, it is very laborious\nto annotate the occupancy status of each voxel. In this paper, we propose\nSelfOcc to explore a self-supervised way to learn 3D occupancy using only video\nsequences. We first transform the images into the 3D space (e.g., bird's eye\nview) to obtain a 3D representation of the scene. We directly impose constraints\non the 3D representations by treating them as signed distance fields. We can\nthen render 2D images of previous and future frames as self-supervision signals\nto learn the 3D representations. We propose an MVS-embedded strategy to\ndirectly optimize the SDF-induced weights with multiple depth proposals.
Our\nSelfOcc outperforms the previous best method SceneRF by 58.7% using a single\nframe as input on SemanticKITTI and is the first self-supervised work that\nproduces reasonable 3D occupancy for surround cameras on nuScenes. SelfOcc\nproduces high-quality depth and achieves state-of-the-art results on novel\ndepth synthesis, monocular depth estimation, and surround-view depth estimation\non the SemanticKITTI, KITTI-2015, and nuScenes, respectively. Code:\nhttps://github.com/huang-yh/SelfOcc.", + "Simultaneous localization and mapping (SLAM) is a fundamental task for\nnumerous applications such as autonomous navigation and exploration. Although\nmany SLAM datasets have been released, current SLAM solutions still struggle to\nachieve sustained and resilient performance. One major issue is the absence of\nhigh-quality datasets including diverse all-weather conditions and a reliable\nmetric for assessing robustness. This limitation significantly restricts the\nscalability and generalizability of SLAM technologies, impacting their\ndevelopment, validation, and deployment. To address this problem, we present\nSubT-MRS, an extremely challenging real-world dataset designed to push SLAM\ntowards all-weather environments to pursue the most robust SLAM performance. It\ncontains multi-degraded environments including over 30 diverse scenes such as\nstructureless corridors, varying lighting conditions, and perceptual obscurants\nlike smoke and dust; multimodal sensors such as LiDAR, fisheye camera, IMU, and\nthermal camera; and multiple locomotions like aerial, legged, and wheeled\nrobots. We develop accuracy and robustness evaluation tracks for SLAM and\nintroduce novel robustness metrics.", + "We develop accuracy and robustness evaluation tracks for SLAM and\nintroduce novel robustness metrics. Comprehensive studies are performed,\nrevealing new observations, challenges, and opportunities for future research.", + "Federated learning achieves effective performance in modeling decentralized\ndata. In practice, client data are not well-labeled, which creates the potential\nfor federated unsupervised learning (FUSL) with non-IID data. However, the\nperformance of existing FUSL methods suffers from insufficient representations,\ni.e., (1) representation collapse entanglement among local and global models,\nand (2) inconsistent representation spaces among local models. The former\nindicates that representation collapse in a local model will subsequently impact\nthe global model and other local models. The latter means that clients model\ndata representation with inconsistent parameters due to the deficiency of\nsupervision signals. In this work, we propose FedU2, which enhances the generation of\nuniform and unified representations in FUSL with non-IID data. Specifically,\nFedU2 consists of flexible uniform regularizer (FUR) and efficient unified\naggregator (EUA). FUR in each client avoids representation collapse via\ndispersing samples uniformly, and EUA in the server promotes unified representation\nby constraining consistent client model updating.", + "Specifically,\nFedU2 consists of flexible uniform regularizer (FUR) and efficient unified\naggregator (EUA). FUR in each client avoids representation collapse via\ndispersing samples uniformly, and EUA in the server promotes unified representation\nby constraining consistent client model updating.
To extensively validate the\nperformance of FedU2, we conduct both cross-device and cross-silo evaluation\nexperiments on two benchmark datasets, i.e., CIFAR10 and CIFAR100.", + "We study the zero-shot Composed Image Retrieval (ZS-CIR) task, which is to\nretrieve the target image given a reference image and a description without\ntraining on the triplet datasets. Previous works generate pseudo-word tokens by\nprojecting the reference image features to the text embedding space. However,\nthey focus on the global visual representation, ignoring the representation of\ndetailed attributes, e.g., color, object number and layout. To address this\nchallenge, we propose a Knowledge-Enhanced Dual-stream zero-shot composed image\nretrieval framework (KEDs). KEDs implicitly models the attributes of the\nreference images by incorporating a database. The database enriches the\npseudo-word tokens by providing relevant images and captions, emphasizing\nshared attribute information in various aspects. In this way, KEDs recognizes\nthe reference image from diverse perspectives. Moreover, KEDs adopts an extra\nstream that aligns pseudo-word tokens with textual concepts, leveraging\npseudo-triplets mined from image-text pairs. The pseudo-word tokens generated\nin this stream are explicitly aligned with fine-grained semantics in the text\nembedding space. Extensive experiments on widely used benchmarks, i.e.", + "The pseudo-word tokens generated\nin this stream are explicitly aligned with fine-grained semantics in the text\nembedding space. Extensive experiments on widely used benchmarks, i.e.\nImageNet-R, COCO object, Fashion-IQ and CIRR, show that KEDs outperforms\nprevious zero-shot composed image retrieval methods.", + "Recent studies have shown promising performance in open-vocabulary object\ndetection (OVD) by utilizing pseudo labels (PLs) from pretrained vision and\nlanguage models (VLMs). However, teacher-student self-training, a powerful and\nwidely used paradigm to leverage PLs, is rarely explored for OVD. This work\nidentifies two challenges of using self-training in OVD: noisy PLs from VLMs\nand frequent distribution changes of PLs. To address these challenges, we\npropose SAS-Det that tames self-training for OVD from two key perspectives.\nFirst, we present a split-and-fusion (SAF) head that splits a standard\ndetection into an open-branch and a closed-branch. This design can reduce noisy\nsupervision from pseudo boxes. Moreover, the two branches learn complementary\nknowledge from different training data, significantly enhancing performance\nwhen fused together. Second, in our view, unlike in closed-set tasks, the PL\ndistributions in OVD are solely determined by the teacher model.", + "Moreover, the two branches learn complementary\nknowledge from different training data, significantly enhancing performance\nwhen fused together. Second, in our view, unlike in closed-set tasks, the PL\ndistributions in OVD are solely determined by the teacher model. We introduce a\nperiodic update strategy to decrease the number of updates to the teacher,\nthereby decreasing the frequency of changes in PL distributions, which\nstabilizes the training process. Extensive experiments demonstrate SAS-Det is\nboth efficient and effective. SAS-Det outperforms recent models of the same\nscale by a clear margin and achieves 37.4 AP50 and 29.1 APr on novel categories\nof the COCO and LVIS benchmarks, respectively. 
Code is available at\n\\url{https://github.com/xiaofeng94/SAS-Det}.", + "Many contemporary studies utilize grid-based models for neural field\nrepresentation, but a systematic analysis of grid-based models is still\nmissing, hindering the improvement of those models. Therefore, this paper\nintroduces a theoretical framework for grid-based models. This framework points\nout that these models' approximation and generalization behaviors are\ndetermined by grid tangent kernels (GTK), which are intrinsic properties of\ngrid-based models. The proposed framework facilitates a consistent and\nsystematic analysis of diverse grid-based models. Furthermore, the introduced\nframework motivates the development of a novel grid-based model named the\nMultiplicative Fourier Adaptive Grid (MulFAGrid). The numerical analysis\ndemonstrates that MulFAGrid exhibits a lower generalization bound than its\npredecessors, indicating its robust generalization performance. Empirical\nstudies reveal that MulFAGrid achieves state-of-the-art performance in various\ntasks, including 2D image fitting, 3D signed distance field (SDF)\nreconstruction, and novel view synthesis, demonstrating superior representation\nability. The project website is available at\nhttps://sites.google.com/view/cvpr24-2034-submission/home.", + "Depth completion aims to derive a dense depth map from sparse depth\nmeasurements with a synchronized color image. Current state-of-the-art (SOTA)\nmethods are predominantly propagation-based, which work as an iterative\nrefinement on the initial estimated dense depth. However, the initial depth\nestimations mostly result from direct applications of convolutional layers on\nthe sparse depth map. In this paper, we present a Bilateral Propagation Network\n(BP-Net), that propagates depth at the earliest stage to avoid directly\nconvolving on sparse data. Specifically, our approach propagates the target\ndepth from nearby depth measurements via a non-linear model, whose coefficients\nare generated through a multi-layer perceptron conditioned on both\n\\emph{radiometric difference} and \\emph{spatial distance}. By integrating\nbilateral propagation with multi-modal fusion and depth refinement in a\nmulti-scale framework, our BP-Net demonstrates outstanding performance on both\nindoor and outdoor scenes. It achieves SOTA on the NYUv2 dataset and ranks 1st\non the KITTI depth completion benchmark at the time of submission.", + "It achieves SOTA on the NYUv2 dataset and ranks 1st\non the KITTI depth completion benchmark at the time of submission. Experimental\nresults not only show the effectiveness of bilateral propagation but also\nemphasize the significance of early-stage propagation in contrast to the\nrefinement stage. Our code and trained models will be available on the project\npage.", + "Transformers have been successfully applied in the field of video-based 3D\nhuman pose estimation. However, the high computational costs of these video\npose transformers (VPTs) make them impractical on resource-constrained devices.\nIn this paper, we present a plug-and-play pruning-and-recovering framework,\ncalled Hourglass Tokenizer (HoT), for efficient transformer-based 3D human pose\nestimation from videos. Our HoT begins with pruning pose tokens of redundant\nframes and ends with recovering full-length tokens, resulting in a few pose\ntokens in the intermediate transformer blocks and thus improving the model\nefficiency. 
To effectively achieve this, we propose a token pruning cluster\n(TPC) that dynamically selects a few representative tokens with high semantic\ndiversity while eliminating the redundancy of video frames. In addition, we\ndevelop a token recovering attention (TRA) to restore the detailed\nspatio-temporal information based on the selected tokens, thereby expanding the\nnetwork output to the original full-length temporal resolution for fast\ninference.", + "In addition, we\ndevelop a token recovering attention (TRA) to restore the detailed\nspatio-temporal information based on the selected tokens, thereby expanding the\nnetwork output to the original full-length temporal resolution for fast\ninference. Extensive experiments on two benchmark datasets (i.e., Human3.6M and\nMPI-INF-3DHP) demonstrate that our method can achieve both high efficiency and\nestimation accuracy compared to the original VPT models. For instance, applying\nto MotionBERT and MixSTE on Human3.6M, our HoT can save nearly 50% FLOPs\nwithout sacrificing accuracy and nearly 40% FLOPs with only 0.2% accuracy drop,\nrespectively. Code and models are available at\nhttps://github.com/NationalGAILab/HoT.", + "Diffusion models have recently brought a powerful revolution in image\ngeneration. Despite showing impressive generative capabilities, most of these\nmodels rely on the current sample to denoise the next one, possibly resulting\nin denoising instability. In this paper, we reinterpret the iterative denoising\nprocess as model optimization and leverage a moving average mechanism to\nensemble all the prior samples. Instead of simply applying moving average to\nthe denoised samples at different timesteps, we first map the denoised samples\nto data space and then perform moving average to avoid distribution shift\nacross timesteps. In view that diffusion models evolve the recovery from\nlow-frequency components to high-frequency details, we further decompose the\nsamples into different frequency components and execute moving average\nseparately on each component. We name the complete approach \"Moving Average\nSampling in Frequency domain (MASF)\". MASF could be seamlessly integrated into\nmainstream pre-trained diffusion models and sampling schedules. Extensive\nexperiments on both unconditional and conditional diffusion models demonstrate\nthat our MASF leads to superior performances compared to the baselines, with\nalmost negligible additional complexity cost.", + "We introduce Gaussian Articulated Template Model GART, an explicit,\nefficient, and expressive representation for non-rigid articulated subject\ncapturing and rendering from monocular videos. GART utilizes a mixture of\nmoving 3D Gaussians to explicitly approximate a deformable subject's geometry\nand appearance. It takes advantage of a categorical template model prior (SMPL,\nSMAL, etc.) with learnable forward skinning while further generalizing to more\ncomplex non-rigid deformations with novel latent bones. GART can be\nreconstructed via differentiable rendering from monocular videos in seconds or\nminutes and rendered in novel poses faster than 150fps.", + "Prompt learning in pretrained visual-language models has shown remarkable\nflexibility across various downstream tasks. Leveraging its inherent\nlightweight nature, recent research attempted to integrate the powerful\npretrained models into federated learning frameworks to simultaneously reduce\ncommunication costs and promote local training on insufficient data. 
Despite\nthese efforts, current federated prompt learning methods lack specialized\ndesigns to systematically address severe data heterogeneities, e.g., data\ndistribution with both label and feature shifts involved. To address this\nchallenge, we present Federated Prompts Cooperation via Optimal Transport\n(FedOTP), which introduces efficient collaborative prompt learning strategies\nto capture diverse category traits on a per-client basis. Specifically, for\neach client, we learn a global prompt to extract consensus knowledge among\nclients, and a local prompt to capture client-specific category\ncharacteristics. Unbalanced Optimal Transport is then employed to align local\nvisual features with these prompts, striking a balance between global consensus\nand local personalization. By relaxing one of the equality constraints, FedOTP\nenables prompts to focus solely on the core regions of image patches. Extensive\nexperiments on datasets with various types of heterogeneities have demonstrated\nthat our FedOTP outperforms the state-of-the-art methods.", + "We present a new method for text-driven motion transfer - synthesizing a\nvideo that complies with an input text prompt describing the target objects and\nscene while maintaining an input video's motion and scene layout. Prior methods\nare confined to transferring motion across two subjects within the same or\nclosely related object categories and are applicable for limited domains (e.g.,\nhumans). In this work, we consider a significantly more challenging setting in\nwhich the target and source objects differ drastically in shape and\nfine-grained motion characteristics (e.g., translating a jumping dog into a\ndolphin). To this end, we leverage a pre-trained and fixed text-to-video\ndiffusion model, which provides us with generative and motion priors. The\npillar of our method is a new space-time feature loss derived directly from the\nmodel. This loss guides the generation process to preserve the overall motion\nof the input video while complying with the target object in terms of shape and\nfine-grained motion traits.", + "We introduce a framework for online learning from a single continuous video\nstream -- the way people and animals learn, without mini-batches, data\naugmentation or shuffling. This poses great challenges given the high\ncorrelation between consecutive video frames and there is very little prior\nwork on it. Our framework allows us to do a first deep dive into the topic and\nincludes a collection of streams and tasks composed from two existing video\ndatasets, plus methodology for performance evaluation that considers both\nadaptation and generalization. We employ pixel-to-pixel modelling as a\npractical and flexible way to switch between pre-training and single-stream\nevaluation as well as between arbitrary tasks, without ever requiring changes\nto models and always using the same pixel loss. Equipped with this framework we\nobtained large single-stream learning gains from pre-training with a novel\nfamily of future prediction tasks, found that momentum hurts, and that the pace\nof weight updates matters. The combination of these insights leads to matching\nthe performance of IID learning with batch size 1, when using the same\narchitecture and without costly replay buffers.", + "We present a Multi-Instance Generation (MIG) task, simultaneously generating\nmultiple instances with diverse controls in one image. 
Given a set of\npredefined coordinates and their corresponding descriptions, the task is to\nensure that generated instances are accurately at the designated locations and\nthat all instances' attributes adhere to their corresponding description. This\nbroadens the scope of current research on Single-instance generation, elevating\nit to a more versatile and practical dimension. Inspired by the idea of divide\nand conquer, we introduce an innovative approach named Multi-Instance\nGeneration Controller (MIGC) to address the challenges of the MIG task.\nInitially, we break down the MIG task into several subtasks, each involving the\nshading of a single instance. To ensure precise shading for each instance, we\nintroduce an instance enhancement attention mechanism. Lastly, we aggregate all\nthe shaded instances to provide the necessary information for accurately\ngenerating multiple instances in stable diffusion (SD). To evaluate how well\ngeneration models perform on the MIG task, we provide a COCO-MIG benchmark\nalong with an evaluation pipeline.", + "Lastly, we aggregate all\nthe shaded instances to provide the necessary information for accurately\ngenerating multiple instances in stable diffusion (SD). To evaluate how well\ngeneration models perform on the MIG task, we provide a COCO-MIG benchmark\nalong with an evaluation pipeline. Extensive experiments were conducted on the\nproposed COCO-MIG benchmark, as well as on various commonly used benchmarks.\nThe evaluation results illustrate the exceptional control capabilities of our\nmodel in terms of quantity, position, attribute, and interaction. Code and\ndemos will be released at https://migcproject.github.io/.", + "Open-vocabulary object detection (OVD) has been studied with Vision-Language\nModels (VLMs) to detect novel objects beyond the pre-trained categories.\nPrevious approaches improve the generalization ability to expand the knowledge\nof the detector, using 'positive' pseudo-labels with additional 'class' names,\ne.g., sock, iPod, and alligator. To extend the previous methods in two aspects,\nwe propose Retrieval-Augmented Losses and visual Features (RALF). Our method\nretrieves related 'negative' classes and augments loss functions. Also, visual\nfeatures are augmented with 'verbalized concepts' of classes, e.g., worn on the\nfeet, handheld music player, and sharp teeth. Specifically, RALF consists of\ntwo modules: Retrieval Augmented Losses (RAL) and Retrieval-Augmented visual\nFeatures (RAF). RAL constitutes two losses reflecting the semantic similarity\nwith negative vocabularies. In addition, RAF augments visual features with the\nverbalized concepts from a large language model (LLM). Our experiments\ndemonstrate the effectiveness of RALF on COCO and LVIS benchmark datasets.", + "RAL constitutes two losses reflecting the semantic similarity\nwith negative vocabularies. In addition, RAF augments visual features with the\nverbalized concepts from a large language model (LLM). Our experiments\ndemonstrate the effectiveness of RALF on COCO and LVIS benchmark datasets. We\nachieve improvement up to 3.4 box AP$_{50}^{\\text{N}}$ on novel categories of\nthe COCO dataset and 3.6 mask AP$_{\\text{r}}$ gains on the LVIS dataset. Code\nis available at https://github.com/mlvlab/RALF .", + "While excellent in transfer learning, Vision-Language models (VLMs) come with\nhigh computational costs due to their large number of parameters. 
To address\nthis issue, removing parameters via model pruning is a viable solution.\nHowever, existing techniques for VLMs are task-specific, and thus require\npruning the network from scratch for each new task of interest. In this work,\nwe explore a new direction: Task-Agnostic Vision-Language Pruning (TA-VLP).\nGiven a pretrained VLM, the goal is to find a unique pruned counterpart\ntransferable to multiple unknown downstream tasks. In this challenging setting,\nthe transferable representations already encoded in the pretrained model are a\nkey aspect to preserve. Thus, we propose Multimodal Flow Pruning (MULTIFLOW), a\nfirst, gradient-free, pruning framework for TA-VLP where: (i) the importance of\na parameter is expressed in terms of its magnitude and its information flow, by\nincorporating the saliency of the neurons it connects; and (ii) pruning is\ndriven by the emergent (multimodal) distribution of the VLM parameters after\npretraining.", + "We benchmark eight state-of-the-art pruning algorithms in the\ncontext of TA-VLP, experimenting with two VLMs, three vision-language tasks,\nand three pruning ratios. Our experimental results show that MULTIFLOW\noutperforms recent sophisticated, combinatorial competitors in the vast\nmajority of the cases, paving the way towards addressing TA-VLP. The code is\npublicly available at https://github.com/FarinaMatteo/multiflow.", + "This paper proposes LLaFS, the first attempt to leverage large language\nmodels (LLMs) in few-shot segmentation. In contrast to the conventional\nfew-shot segmentation methods that only rely on the limited and biased\ninformation from the annotated support images, LLaFS leverages the vast prior\nknowledge gained by LLM as an effective supplement and directly uses the LLM to\nsegment images in a few-shot manner. To enable the text-based LLM to handle\nimage-related tasks, we carefully design an input instruction that allows the\nLLM to produce segmentation results represented as polygons, and propose a\nregion-attribute table to simulate the human visual mechanism and provide\nmulti-modal guidance. We also synthesize pseudo samples and use curriculum\nlearning for pretraining to augment data and achieve better optimization. LLaFS\nachieves state-of-the-art results on multiple datasets, showing the potential\nof using LLMs for few-shot computer vision tasks.", + "While large multimodal models (LMMs) have achieved remarkable progress,\ngenerating pixel-level masks for image reasoning tasks involving multiple\nopen-world targets remains a challenge. To bridge this gap, we introduce\nPixelLM, an effective and efficient LMM for pixel-level reasoning and\nunderstanding. Central to PixelLM is a novel, lightweight pixel decoder and a\ncomprehensive segmentation codebook. The decoder efficiently produces masks\nfrom the hidden embeddings of the codebook tokens, which encode detailed\ntarget-relevant information. With this design, PixelLM harmonizes with the\nstructure of popular LMMs and avoids the need for additional costly\nsegmentation models. Furthermore, we propose a target refinement loss to\nenhance the model's ability to differentiate between multiple targets, leading\nto substantially improved mask quality. 
To advance research in this area, we\nconstruct MUSE, a high-quality multi-target reasoning segmentation benchmark.\nPixelLM excels across various pixel-level image reasoning and understanding\ntasks, outperforming well-established methods in multiple benchmarks, including\nMUSE, single- and multi-referring segmentation. Comprehensive ablations confirm\nthe efficacy of each proposed component. All code, models, and datasets will be\npublicly available.", + "Image-goal navigation is a challenging task that requires an agent to\nnavigate to a goal indicated by an image in unfamiliar environments. Existing\nmethods utilizing diverse scene memories suffer from inefficient exploration\nsince they use all historical observations for decision-making without\nconsidering the goal-relevant fraction. To address this limitation, we present\nMemoNav, a novel memory model for image-goal navigation, which utilizes a\nworking memory-inspired pipeline to improve navigation performance.\nSpecifically, we employ three types of navigation memory. The node features on\na map are stored in the short-term memory (STM), as these features are\ndynamically updated. A forgetting module then retains the informative STM\nfraction to increase efficiency. We also introduce long-term memory (LTM) to\nlearn global scene representations by progressively aggregating STM features.\nSubsequently, a graph attention module encodes the retained STM and the LTM to\ngenerate working memory (WM) which contains the scene features essential for\nefficient navigation. The synergy among these three memory types boosts\nnavigation performance by enabling the agent to learn and leverage\ngoal-relevant scene features within a topological map.", + "The synergy among these three memory types boosts\nnavigation performance by enabling the agent to learn and leverage\ngoal-relevant scene features within a topological map. Our evaluation on\nmulti-goal tasks demonstrates that MemoNav significantly outperforms previous\nmethods across all difficulty levels in both Gibson and Matterport3D scenes.\nQualitative results further illustrate that MemoNav plans more efficient\nroutes.", + "Existing depth sensors are imperfect and may provide inaccurate depth values\nin challenging scenarios, such as in the presence of transparent or reflective\nobjects. In this work, we present a general framework that leverages\npolarization imaging to improve inaccurate depth measurements from various\ndepth sensors. Previous polarization-based depth enhancement methods focus on\nutilizing pure physics-based formulas for a single sensor. In contrast, our\nmethod first adopts a learning-based strategy where a neural network is trained\nto estimate a dense and complete depth map from polarization data and a sensor\ndepth map from different sensors. To further improve the performance, we\npropose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively\nutilize RGB-based models pre-trained on large-scale datasets, as the size of\nthe polarization dataset is limited to train a strong model from scratch. We\nconducted extensive experiments on a public dataset, and the results\ndemonstrate that the proposed method performs favorably compared to existing\ndepth enhancement baselines. Code and demos are available at\nhttps://lastbasket.github.io/PPFT/.", + "Despite the great success of deep learning in stereo matching, recovering\naccurate disparity maps is still challenging. 
Currently, L1 and cross-entropy\nare the two most widely used losses for stereo network training. Compared with\nthe former, the latter usually performs better thanks to its probability\nmodeling and direct supervision to the cost volume. However, how to accurately\nmodel the stereo ground-truth for cross-entropy loss remains largely\nunder-explored. Existing works simply assume that the ground-truth\ndistributions are uni-modal, which ignores the fact that most of the edge\npixels can be multi-modal. In this paper, a novel adaptive multi-modal\ncross-entropy loss (ADL) is proposed to guide the networks to learn different\ndistribution patterns for each pixel. Moreover, we optimize the disparity\nestimator to further alleviate the bleeding or misalignment artifacts in\ninference. Extensive experimental results show that our method is generic and\ncan help classic stereo networks regain state-of-the-art performance. In\nparticular, GANet with our method ranks $1^{st}$ on both the KITTI 2015 and\n2012 benchmarks among the published methods.", + "Extensive experimental results show that our method is generic and\ncan help classic stereo networks regain state-of-the-art performance. In\nparticular, GANet with our method ranks $1^{st}$ on both the KITTI 2015 and\n2012 benchmarks among the published methods. Meanwhile, excellent\nsynthetic-to-realistic generalization performance can be achieved by simply\nreplacing the traditional loss with ours.", + "Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve\nstate-of-the-art performance with improved efficiency in various computer\nvision tasks. This suggests a promising paradigm shift of adapting pre-trained\nViT models to Federated Learning (FL) settings. However, the challenge of data\nheterogeneity among FL clients presents a significant hurdle in effectively\ndeploying ViT models. Existing Generalized FL (GFL) and Personalized FL (PFL)\nmethods have limitations in balancing performance across both global and local\ndata distributions. In this paper, we present a novel algorithm, SGPT, that\nintegrates GFL and PFL approaches by employing a unique combination of both\nshared and group-specific prompts. This design enables SGPT to capture both\ncommon and group-specific features. A key feature of SGPT is its prompt\nselection module, which facilitates the training of a single global model\ncapable of automatically adapting to diverse local client data distributions\nwithout the need for local fine-tuning.", + "This design enables SGPT to capture both\ncommon and group-specific features. A key feature of SGPT is its prompt\nselection module, which facilitates the training of a single global model\ncapable of automatically adapting to diverse local client data distributions\nwithout the need for local fine-tuning. To effectively train the prompts, we\nutilize block coordinate descent (BCD), learning from common feature\ninformation (shared prompts), and then more specialized knowledge (group\nprompts) iteratively. Theoretically, we justify that learning the proposed\nprompts can reduce the gap between global and local performance. Empirically,\nwe conduct experiments on both label and feature heterogeneity settings in\ncomparison with state-of-the-art baselines, along with extensive ablation\nstudies, to substantiate the superior performance of SGPT.", + "Neural Radiance Fields (NeRFs) have demonstrated remarkable potential in\ncapturing complex 3D scenes with high fidelity. 
However, one persistent\nchallenge that hinders the widespread adoption of NeRFs is the computational\nbottleneck due to the volumetric rendering. On the other hand, 3D Gaussian\nsplatting (3DGS) has recently emerged as an alternative representation that\nleverages a 3D Gaussian-based representation and adopts the rasterization\npipeline to render the images rather than volumetric rendering, achieving very\nfast rendering speed and promising image quality. However, a significant\ndrawback arises as 3DGS entails a substantial number of 3D Gaussians to\nmaintain the high fidelity of the rendered images, which requires a large\namount of memory and storage. To address this critical issue, we place a\nspecific emphasis on two key objectives: reducing the number of Gaussian points\nwithout sacrificing performance and compressing the Gaussian attributes, such\nas view-dependent color and covariance. To this end, we propose a learnable\nmask strategy that significantly reduces the number of Gaussians while\npreserving high performance.", + "To this end, we propose a learnable\nmask strategy that significantly reduces the number of Gaussians while\npreserving high performance. In addition, we propose a compact but effective\nrepresentation of view-dependent color by employing a grid-based neural field\nrather than relying on spherical harmonics. Finally, we learn codebooks to\ncompactly represent the geometric attributes of Gaussians by vector\nquantization. With model compression techniques such as quantization and\nentropy coding, we consistently show over 25$\\times$ reduced storage and\nenhanced rendering speed, while maintaining the quality of the scene\nrepresentation, compared to 3DGS. Our work provides a comprehensive framework\nfor 3D scene representation, achieving high performance, fast training,\ncompactness, and real-time rendering. Our project page is available at\nhttps://maincold2.github.io/c3dgs/.", + "We propose the task of Panoptic Scene Completion (PSC) which extends the\nrecently popular Semantic Scene Completion (SSC) task with instance-level\ninformation to produce a richer understanding of the 3D scene. Our PSC proposal\nutilizes a hybrid mask-based technique on the non-empty voxels from sparse\nmulti-scale completions. Whereas the SSC literature overlooks uncertainty, which\nis critical for robotics applications, we instead propose an efficient\nensembling to estimate both voxel-wise and instance-wise uncertainties along\nPSC. This is achieved by building on a multi-input multi-output (MIMO)\nstrategy, while improving performance and yielding better uncertainty for\nlittle additional compute. Additionally, we introduce a technique to aggregate\npermutation-invariant mask predictions. Our experiments demonstrate that our\nmethod surpasses all baselines in both Panoptic Scene Completion and\nuncertainty estimation on three large-scale autonomous driving datasets. Our\ncode and data are available at https://astra-vision.github.io/PaSCo .", + "We present GALA, a framework that takes as input a single-layer clothed 3D\nhuman mesh and decomposes it into complete multi-layered 3D assets. The outputs\ncan then be combined with other assets to create novel clothed human avatars\nwith any pose. Existing reconstruction approaches often treat clothed humans as\na single layer of geometry and overlook the inherent compositionality of humans\nwith hairstyles, clothing, and accessories, thereby limiting the utility of the\nmeshes for downstream applications.
Decomposing a single-layer mesh into\nseparate layers is a challenging task because it requires the synthesis of\nplausible geometry and texture for the severely occluded regions. Moreover,\neven with successful decomposition, meshes are not normalized in terms of poses\nand body shapes, preventing coherent composition with novel identities and poses.\nTo address these challenges, we propose to leverage the general knowledge of a\npretrained 2D diffusion model as a geometry and appearance prior for humans and\nother assets. We first separate the input mesh using the 3D surface\nsegmentation extracted from multi-view 2D segmentations.", + "To address these challenges, we propose to leverage the general knowledge of a\npretrained 2D diffusion model as a geometry and appearance prior for humans and\nother assets. We first separate the input mesh using the 3D surface\nsegmentation extracted from multi-view 2D segmentations. Then we synthesize the\nmissing geometry of different layers in both posed and canonical spaces using a\nnovel pose-guided Score Distillation Sampling (SDS) loss. Once we complete\ninpainting the high-fidelity 3D geometry, we also apply the same SDS loss to its\ntexture to obtain the complete appearance including the initially occluded\nregions. Through a series of decomposition steps, we obtain multiple layers of\n3D assets in a shared canonical space normalized in terms of poses and human\nshapes, hence supporting effortless composition with novel identities and\nreanimation with novel poses. Our experiments demonstrate the effectiveness of\nour approach for decomposition, canonicalization, and composition tasks\ncompared to existing solutions.", + "Recent advances in 3D face stylization have made significant strides in few\nto zero-shot settings. However, the degree of stylization achieved by existing\nmethods is often not sufficient for practical applications because they are\nmostly based on statistical 3D Morphable Models (3DMM) with limited variations.\nTo this end, we propose a method that can produce a highly stylized 3D face\nmodel with a desired topology. Our method trains a surface deformation network\nwith a 3DMM and translates its domain to the target style using a paired exemplar.\nThe network achieves stylization of the 3D face mesh by mimicking the style of\nthe target using a differentiable renderer and directional CLIP losses.\nAdditionally, during the inference process, we utilize a Mesh Agnostic Encoder\n(MAGE) that takes the deformation target, a mesh of diverse topologies, as input to\nthe stylization process and encodes its shape into our latent space. The\nresulting stylized face model can be animated by commonly used 3DMM blend\nshapes. A set of quantitative and qualitative evaluations demonstrate that our\nmethod can produce highly stylized face meshes according to a given style and\noutput them in a desired topology.", + "The\nresulting stylized face model can be animated by commonly used 3DMM blend\nshapes. A set of quantitative and qualitative evaluations demonstrate that our\nmethod can produce highly stylized face meshes according to a given style and\noutput them in a desired topology.
We also demonstrate example applications of\nour method including image-based stylized avatar generation, linear\ninterpolation of geometric styles, and facial animation of stylized avatars.", + "3D building reconstruction from monocular remote sensing images is an\nimportant and challenging research problem that has received increasing\nattention in recent years, owing to its low cost of data acquisition and\navailability for large-scale applications. However, existing methods rely on\nexpensive 3D-annotated samples for fully-supervised training, restricting their\napplication to large-scale cross-city scenarios. In this work, we propose\nMLS-BRN, a multi-level supervised building reconstruction network that can\nflexibly utilize training samples with different annotation levels to achieve\nbetter reconstruction results in an end-to-end manner. To alleviate the demand\non full 3D supervision, we design two new modules, Pseudo Building Bbox\nCalculator and Roof-Offset guided Footprint Extractor, as well as new tasks and\ntraining strategies for different types of samples. Experimental results on\nseveral public and new datasets demonstrate that our proposed MLS-BRN achieves\ncompetitive performance using much fewer 3D-annotated samples, and\nsignificantly improves the footprint extraction and 3D reconstruction\nperformance compared with current state-of-the-art. The code and datasets of\nthis work will be released at https://github.com/opendatalab/MLS-BRN.git.", + "With recent developments in Embodied Artificial Intelligence (EAI) research,\nthere has been a growing demand for high-quality, large-scale interactive scene\ngeneration. While prior methods in scene synthesis have prioritized the\nnaturalness and realism of the generated scenes, the physical plausibility and\ninteractivity of scenes have been largely left unexplored. To address this\ndisparity, we introduce PhyScene, a novel method dedicated to generating\ninteractive 3D scenes characterized by realistic layouts, articulated objects,\nand rich physical interactivity tailored for embodied agents. Based on a\nconditional diffusion model for capturing scene layouts, we devise novel\nphysics- and interactivity-based guidance mechanisms that integrate constraints\nfrom object collision, room layout, and object reachability. Through extensive\nexperiments, we demonstrate that PhyScene effectively leverages these guidance\nfunctions for physically interactable scene synthesis, outperforming existing\nstate-of-the-art scene synthesis methods by a large margin. Our findings\nsuggest that the scenes generated by PhyScene hold considerable potential for\nfacilitating diverse skill acquisition among agents within interactive\nenvironments, thereby catalyzing further advancements in embodied AI research.\nProject website: http://physcene.github.io.", + "In this work, we aim to improve the 3D reasoning ability of Transformers in\nmulti-view 3D human pose estimation. Recent works have focused on end-to-end\nlearning-based transformer designs, which struggle to resolve geometric\ninformation accurately, particularly during occlusion. Instead, we propose a\nnovel hybrid model, MVGFormer, which has a series of geometric and appearance\nmodules organized in an iterative manner. The geometry modules are\nlearning-free and handle all viewpoint-dependent 3D tasks geometrically which\nnotably improves the model's generalization ability. 
The appearance modules are\nlearnable and are dedicated to estimating 2D poses from image signals\nend-to-end which enables them to achieve accurate estimates even when occlusion\noccurs, leading to a model that is both accurate and generalizable to new\ncameras and geometries. We evaluate our approach for both in-domain and\nout-of-domain settings, where our model consistently outperforms\nstate-of-the-art methods, and especially does so by a significant margin in the\nout-of-domain setting. We will release the code and models:\nhttps://github.com/XunshanMan/MVGFormer.", + "A long-standing goal of 3D human reconstruction is to create lifelike and\nfully detailed 3D humans from single-view images. The main challenge lies in\ninferring unknown body shapes, appearances, and clothing details in areas not\nvisible in the images. To address this, we propose SiTH, a novel pipeline that\nuniquely integrates an image-conditioned diffusion model into a 3D mesh\nreconstruction workflow. At the core of our method lies the decomposition of\nthe challenging single-view reconstruction problem into generative\nhallucination and reconstruction subproblems. For the former, we employ a\npowerful generative diffusion model to hallucinate unseen back-view appearance\nbased on the input images. For the latter, we leverage skinned body meshes as\nguidance to recover full-body texture meshes from the input and back-view\nimages. SiTH requires as few as 500 3D human scans for training while\nmaintaining its generality and robustness to diverse images. Extensive\nevaluations on two 3D human benchmarks, including our newly created one,\nhighlighted our method's superior accuracy and perceptual quality in 3D\ntextured human reconstruction.", + "Extensive\nevaluations on two 3D human benchmarks, including our newly created one,\nhighlighted our method's superior accuracy and perceptual quality in 3D\ntextured human reconstruction. Our code and evaluation benchmark are available\nat https://ait.ethz.ch/sith", + "Facial Attribute Classification (FAC) holds substantial promise in widespread\napplications. However, FAC models trained by traditional methodologies can be\nunfair by exhibiting accuracy inconsistencies across varied data\nsubpopulations. This unfairness is largely attributed to bias in data, where\nsome spurious attributes (e.g., Male) statistically correlate with the target\nattribute (e.g., Smiling). Most of existing fairness-aware methods rely on the\nlabels of spurious attributes, which may be unavailable in practice. This work\nproposes a novel, generation-based two-stage framework to train a fair FAC\nmodel on biased data without additional annotation. Initially, we identify the\npotential spurious attributes based on generative models. Notably, it enhances\ninterpretability by explicitly showing the spurious attributes in image space.\nFollowing this, for each image, we first edit the spurious attributes with a\nrandom degree sampled from a uniform distribution, while keeping target\nattribute unchanged. Then we train a fair FAC model by fostering model\ninvariance to these augmentation. Extensive experiments on three common\ndatasets demonstrate the effectiveness of our method in promoting fairness in\nFAC without compromising accuracy. Codes are in\nhttps://github.com/heqianpei/DiGA.", + "We propose a novel compact and efficient neural BRDF offering highly\nversatile material representation, yet with very-light memory and neural\ncomputation consumption towards achieving real-time rendering. 
The results in\nFigure 1, rendered at full HD resolution on a current desktop machine, show\nthat our system achieves real-time rendering with a wide variety of\nappearances, which is approached by the following two designs. On the one hand,\nnoting that bidirectional reflectance is distributed in a very sparse\nhigh-dimensional subspace, we propose to project the BRDF into two\nlow-dimensional components, i.e., two hemisphere feature-grids for incoming and\noutgoing directions, respectively. On the other hand, learnable neural\nreflectance primitives are distributed on our highly-tailored spherical surface\ngrid, which offer informative features for each component and alleviate the\nconventional heavy feature learning network to a much smaller one, leading to\nvery fast evaluation. These primitives are centrally stored in a codebook and\ncan be shared across multiple grids and even across materials, based on the\nlow-cost indices stored in material-specific spherical surface grids.", + "These primitives are centrally stored in a codebook and\ncan be shared across multiple grids and even across materials, based on the\nlow-cost indices stored in material-specific spherical surface grids. Our\nneural BRDF, which is agnostic to the material, provides a unified framework\nthat can represent a variety of materials in consistent manner. Comprehensive\nexperimental results on measured BRDF compression, Monte Carlo simulated BRDF\nacceleration, and extension to spatially varying effect demonstrate the\nsuperior quality and generalizability achieved by the proposed scheme.", + "Video stabilization is a longstanding computer vision problem, particularly\npixel-level synthesis solutions for video stabilization which synthesize full\nframes add to the complexity of this task. These techniques aim to stabilize\nvideos by synthesizing full frames while enhancing the stability of the\nconsidered video. This intensifies the complexity of the task due to the\ndistinct mix of unique motion profiles and visual content present in each video\nsequence, making robust generalization with fixed parameters difficult. In our\nstudy, we introduce a novel approach to enhance the performance of pixel-level\nsynthesis solutions for video stabilization by adapting these models to\nindividual input video sequences. The proposed adaptation exploits low-level\nvisual cues accessible during test-time to improve both the stability and\nquality of resulting videos. We highlight the efficacy of our methodology of\n\"test-time adaptation\" through simple fine-tuning of one of these models,\nfollowed by significant stability gain via the integration of meta-learning\ntechniques. Notably, significant improvement is achieved with only a single\nadaptation step. The versatility of the proposed algorithm is demonstrated by\nconsistently improving the performance of various pixel-level synthesis models\nfor video stabilization in real-world scenarios.", + "Text-to-video generation aims to produce a video based on a given prompt.\nRecently, several commercial video models have been able to generate plausible\nvideos with minimal noise, excellent details, and high aesthetic scores.\nHowever, these models rely on large-scale, well-filtered, high-quality videos\nthat are not accessible to the community. Many existing research works, which\ntrain models using the low-quality WebVid-10M dataset, struggle to generate\nhigh-quality videos because the models are optimized to fit WebVid-10M. 
In this\nwork, we explore the training scheme of video models extended from Stable\nDiffusion and investigate the feasibility of leveraging low-quality videos and\nsynthesized high-quality images to obtain a high-quality video model. We first\nanalyze the connection between the spatial and temporal modules of video models\nand the distribution shift to low-quality videos. We observe that full training\nof all modules results in a stronger coupling between spatial and temporal\nmodules than only training temporal modules. Based on this stronger coupling,\nwe shift the distribution to higher quality without motion degradation by\nfinetuning spatial modules with high-quality images, resulting in a generic\nhigh-quality video model.", + "Based on this stronger coupling,\nwe shift the distribution to higher quality without motion degradation by\nfinetuning spatial modules with high-quality images, resulting in a generic\nhigh-quality video model. Evaluations are conducted to demonstrate the\nsuperiority of the proposed method, particularly in picture quality, motion,\nand concept composition.", + "Flow-based super-resolution (SR) models have demonstrated astonishing\ncapabilities in generating high-quality images. However, these methods\nencounter several challenges during image generation, such as grid artifacts,\nexploding inverses, and suboptimal results due to a fixed sampling temperature.\nTo overcome these issues, this work introduces a conditional learned prior to\nthe inference phase of a flow-based SR model. This prior is a latent code\npredicted by our proposed latent module conditioned on the low-resolution\nimage, which is then transformed by the flow model into an SR image. Our\nframework is designed to seamlessly integrate with any contemporary flow-based\nSR model without modifying its architecture or pre-trained weights. We evaluate\nthe effectiveness of our proposed framework through extensive experiments and\nablation analyses. The proposed framework successfully addresses all the\ninherent issues in flow-based SR models and enhances their performance in\nvarious SR scenarios. Our code is available at:\nhttps://github.com/liyuantsao/BFSR", + "In this paper, we propose a novel abstraction-aware sketch-based image\nretrieval framework capable of handling sketch abstraction at varied levels.\nPrior works had mainly focused on tackling sub-factors such as drawing style\nand order, we instead attempt to model abstraction as a whole, and propose\nfeature-level and retrieval granularity-level designs so that the system builds\ninto its DNA the necessary means to interpret abstraction. On learning\nabstraction-aware features, we for the first-time harness the rich semantic\nembedding of pre-trained StyleGAN model, together with a novel\nabstraction-level mapper that deciphers the level of abstraction and\ndynamically selects appropriate dimensions in the feature matrix\ncorrespondingly, to construct a feature matrix embedding that can be freely\ntraversed to accommodate different levels of abstraction. For granularity-level\nabstraction understanding, we dictate that the retrieval model should not treat\nall abstraction-levels equally and introduce a differentiable surrogate Acc.@q\nloss to inject that understanding into the system. 
Different to the\ngold-standard triplet loss, our Acc.", + "For granularity-level\nabstraction understanding, we dictate that the retrieval model should not treat\nall abstraction-levels equally and introduce a differentiable surrogate Acc.@q\nloss to inject that understanding into the system. Different to the\ngold-standard triplet loss, our Acc.@q loss uniquely allows a sketch to\nnarrow/broaden its focus in terms of how stringent the evaluation should be -\nthe more abstract a sketch, the less stringent (higher q). Extensive\nexperiments show that our method outperforms existing state-of-the-art methods in\nstandard SBIR tasks along with challenging scenarios like early retrieval,\nforensic sketch-photo matching, and style-invariant retrieval.", + "3D-aware Generative Adversarial Networks (GANs) have shown remarkable\nprogress in learning to generate multi-view-consistent images and 3D geometries\nof scenes from collections of 2D images via neural volume rendering. Yet, the\nsignificant memory and computational costs of dense sampling in volume\nrendering have forced 3D GANs to adopt patch-based training or employ\nlow-resolution rendering with post-processing 2D super resolution, which\nsacrifices multiview consistency and the quality of resolved geometry.\nConsequently, 3D GANs have not yet been able to fully resolve the rich 3D\ngeometry present in 2D images. In this work, we propose techniques to scale\nneural volume rendering to the much higher resolution of native 2D images,\nthereby resolving fine-grained 3D geometry with unprecedented detail. Our\napproach employs learning-based samplers for accelerating neural rendering for\n3D GAN training using up to 5 times fewer depth samples. This enables us to\nexplicitly \"render every pixel\" of the full-resolution image during training\nand inference without post-processing superresolution in 2D.", + "Our\napproach employs learning-based samplers for accelerating neural rendering for\n3D GAN training using up to 5 times fewer depth samples. This enables us to\nexplicitly \"render every pixel\" of the full-resolution image during training\nand inference without post-processing superresolution in 2D. Together with our\nstrategy to learn high-quality surface geometry, our method synthesizes\nhigh-resolution 3D geometry and strictly view-consistent images while\nmaintaining image quality on par with baselines relying on post-processing\nsuper resolution. We demonstrate state-of-the-art 3D geometric quality on FFHQ\nand AFHQ, setting a new standard for unsupervised learning of 3D shapes in 3D\nGANs.", + "Despite the impressive generative capabilities of diffusion models, existing\ndiffusion model-based style transfer methods require inference-stage\noptimization (e.g. fine-tuning or textual inversion of style) which is\ntime-consuming, or fails to leverage the generative ability of large-scale\ndiffusion models. To address these issues, we introduce a novel artistic style\ntransfer method based on a pre-trained large-scale diffusion model without any\noptimization. Specifically, we manipulate the features of self-attention layers\nas the way the cross-attention mechanism works; in the generation process,\nsubstituting the key and value of content with those of style image. This\napproach provides several desirable characteristics for style transfer\nincluding 1) preservation of content by transferring similar styles into\nsimilar image patches and 2) transfer of style based on similarity of local\ntexture (e.g. 
edge) between content and style images. Furthermore, we introduce\nquery preservation and attention temperature scaling to mitigate the issue of\ndisruption of original content, and initial latent Adaptive Instance\nNormalization (AdaIN) to deal with the disharmonious color (failure to transfer\nthe colors of style).", + "edge) between content and style images. Furthermore, we introduce\nquery preservation and attention temperature scaling to mitigate the issue of\ndisruption of original content, and initial latent Adaptive Instance\nNormalization (AdaIN) to deal with the disharmonious color (failure to transfer\nthe colors of style). Our experimental results demonstrate that our proposed\nmethod surpasses state-of-the-art methods in both conventional and\ndiffusion-based style transfer baselines.", + "Redundancy is a persistent challenge in Capsule Networks (CapsNet), leading to\nhigh computational costs and parameter counts. Although previous works have\nintroduced pruning after the initial capsule layer, dynamic routing's fully\nconnected nature and non-orthogonal weight matrices reintroduce redundancy in\ndeeper layers. Besides, dynamic routing requires iterating to converge, further\nincreasing computational demands. In this paper, we propose an Orthogonal\nCapsule Network (OrthCaps) to reduce redundancy, improve routing performance\nand decrease parameter counts. Firstly, an efficient pruned capsule layer is\nintroduced to discard redundant capsules. Secondly, dynamic routing is replaced\nwith orthogonal sparse attention routing, eliminating the need for iterations\nand fully connected structures. Lastly, weight matrices during routing are\northogonalized to sustain low capsule similarity, which is the first approach\nto introduce orthogonality into CapsNet as far as we know. Our experiments on\nbaseline datasets affirm the efficiency and robustness of OrthCaps in\nclassification tasks, in which ablation studies validate the criticality of\neach component.", + "Our experiments on\nbaseline datasets affirm the efficiency and robustness of OrthCaps in\nclassification tasks, in which ablation studies validate the criticality of\neach component. Remarkably, OrthCaps-Shallow outperforms other Capsule Network\nbenchmarks on four datasets, utilizing only 110k parameters, which is a mere\n1.25% of a standard Capsule Network's total. To the best of our knowledge, it\nachieves the smallest parameter count among existing Capsule Networks.\nSimilarly, OrthCaps-Deep demonstrates competitive performance across four\ndatasets, utilizing only 1.2% of the parameters required by its counterparts.", + "The goal of Universal Cross-Domain Retrieval (UCDR) is to achieve robust\nperformance in generalized test scenarios, wherein data may belong to strictly\nunknown domains and categories during training. Recently, pre-trained models\nwith prompt tuning have shown strong generalization capabilities and attained\nnoteworthy achievements in various downstream tasks, such as few-shot learning\nand video-text retrieval. However, applying them directly to UCDR may not\nbe sufficient to handle both domain shift (i.e., adapting to unfamiliar domains)\nand semantic shift (i.e., transferring to unknown categories). To this end, we\npropose \\textbf{Pro}mpting-to-\\textbf{S}imulate (ProS), the first method to\napply prompt tuning for UCDR. ProS employs a two-step process to simulate\nContent-aware Dynamic Prompts (CaDP) which can guide models to produce\ngeneralized features for UCDR. 
Concretely, in Prompt Units Learning stage, we\nintroduce two Prompt Units to individually capture domain and semantic\nknowledge in a mask-and-align way.", + "Concretely, in Prompt Units Learning stage, we\nintroduce two Prompt Units to individually capture domain and semantic\nknowledge in a mask-and-align way. Then, in Context-aware Simulator Learning\nstage, we train a Content-aware Prompt Simulator under a simulated test\nscenarios to produce the corresponding CaDP. Extensive experiments conducted on\nthree benchmark datasets show that our method achieves new state-of-the-art\nperformance without bringing excessive parameters. Our method is publicly\navailable at https://github.com/fangkaipeng/ProS.", + "While head-mounted devices are becoming more compact, they provide egocentric\nviews with significant self-occlusions of the device user. Hence, existing\nmethods often fail to accurately estimate complex 3D poses from egocentric\nviews. In this work, we propose a new transformer-based framework to improve\negocentric stereo 3D human pose estimation, which leverages the scene\ninformation and temporal context of egocentric stereo videos. Specifically, we\nutilize 1) depth features from our 3D scene reconstruction module with\nuniformly sampled windows of egocentric stereo frames, and 2) human joint\nqueries enhanced by temporal features of the video inputs. Our method is able\nto accurately estimate human poses even in challenging scenarios, such as\ncrouching and sitting. Furthermore, we introduce two new benchmark datasets,\ni.e., UnrealEgo2 and UnrealEgo-RW (RealWorld). The proposed datasets offer a\nmuch larger number of egocentric stereo views with a wider variety of human\nmotions than the existing datasets, allowing comprehensive evaluation of\nexisting and upcoming methods. Our extensive experiments show that the proposed\napproach significantly outperforms previous methods.", + "The proposed datasets offer a\nmuch larger number of egocentric stereo views with a wider variety of human\nmotions than the existing datasets, allowing comprehensive evaluation of\nexisting and upcoming methods. Our extensive experiments show that the proposed\napproach significantly outperforms previous methods. We will release\nUnrealEgo2, UnrealEgo-RW, and trained models on our project page.", + "Recent advances in the diffusion models have significantly improved\ntext-to-image generation. However, generating videos from text is a more\nchallenging task than generating images from text, due to the much larger\ndataset and higher computational cost required. Most existing video generation\nmethods use either a 3D U-Net architecture that considers the temporal\ndimension or autoregressive generation. These methods require large datasets\nand are limited in terms of computational costs compared to text-to-image\ngeneration. To tackle these challenges, we propose a simple but effective novel\ngrid diffusion for text-to-video generation without temporal dimension in\narchitecture and a large text-video paired dataset. We can generate a\nhigh-quality video using a fixed amount of GPU memory regardless of the number\nof frames by representing the video as a grid image. Additionally, since our\nmethod reduces the dimensions of the video to the dimensions of the image,\nvarious image-based methods can be applied to videos, such as text-guided video\nmanipulation from image manipulation. 
Our proposed method outperforms the\nexisting methods in both quantitative and qualitative evaluations,\ndemonstrating the suitability of our model for real-world video generation.", + "Detecting objects in low-light scenarios presents a persistent challenge, as\ndetectors trained on well-lit data exhibit significant performance degradation\non low-light data due to low visibility. Previous methods mitigate this issue\nby exploring image enhancement or object detection techniques with real\nlow-light image datasets. However, the progress is impeded by the inherent\ndifficulties about collecting and annotating low-light images. To address this\nchallenge, we propose to boost low-light object detection with zero-shot\nday-night domain adaptation, which aims to generalize a detector from well-lit\nscenarios to low-light ones without requiring real low-light data. Revisiting\nRetinex theory in the low-level vision, we first design a reflectance\nrepresentation learning module to learn Retinex-based illumination invariance\nin images with a carefully designed illumination invariance reinforcement\nstrategy. Next, an interchange-redecomposition-coherence procedure is\nintroduced to improve over the vanilla Retinex image decomposition process by\nperforming two sequential image decompositions and introducing a\nredecomposition cohering loss. Extensive experiments on ExDark, DARK FACE, and\nCODaN datasets show strong low-light generalizability of our method.", + "Extensive experiments on ExDark, DARK FACE, and\nCODaN datasets show strong low-light generalizability of our method. Our code\nis available at https://github.com/ZPDu/DAI-Net.", + "The recent advancements in text-to-3D generation mark a significant milestone\nin generative models, unlocking new possibilities for creating imaginative 3D\nassets across various real-world scenarios. While recent advancements in\ntext-to-3D generation have shown promise, they often fall short in rendering\ndetailed and high-quality 3D models. This problem is especially prevalent as\nmany methods base themselves on Score Distillation Sampling (SDS). This paper\nidentifies a notable deficiency in SDS, that it brings inconsistent and\nlow-quality updating direction for the 3D model, causing the over-smoothing\neffect. To address this, we propose a novel approach called Interval Score\nMatching (ISM). ISM employs deterministic diffusing trajectories and utilizes\ninterval-based score matching to counteract over-smoothing. Furthermore, we\nincorporate 3D Gaussian Splatting into our text-to-3D generation pipeline.\nExtensive experiments show that our model largely outperforms the\nstate-of-the-art in quality and training efficiency.", + "A versatile medical image segmentation model applicable to images acquired\nwith diverse equipment and protocols can facilitate model deployment and\nmaintenance. However, building such a model typically demands a large, diverse,\nand fully annotated dataset, which is challenging to obtain due to the\nlabor-intensive nature of data curation. To address this challenge, we propose\na cost-effective alternative that harnesses multi-source data with only partial\nor sparse segmentation labels for training, substantially reducing the cost of\ndeveloping a versatile model. 
We devise strategies for model\nself-disambiguation, prior knowledge incorporation, and imbalance mitigation to\ntackle challenges associated with inconsistently labeled multi-source data,\nincluding label ambiguity and modality, dataset, and class imbalances.\nExperimental results on a multi-modal dataset compiled from eight different\nsources for abdominal structure segmentation have demonstrated the\neffectiveness and superior performance of our method compared to\nstate-of-the-art alternative approaches. We anticipate that its cost-saving\nfeatures, which optimize the utilization of existing annotated data and reduce\nannotation efforts for new data, will have a significant impact in the field.", + "Learned reweighting (LRW) approaches to supervised learning use an\noptimization criterion to assign weights for training instances, in order to\nmaximize performance on a representative validation dataset. We pose and\nformalize the problem of optimized selection of the validation set used in LRW\ntraining, to improve classifier generalization. In particular, we show that\nusing hard-to-classify instances in the validation set has both a theoretical\nconnection to, and strong empirical evidence of generalization. We provide an\nefficient algorithm for training this meta-optimized model, as well as a simple\ntrain-twice heuristic for careful comparative study. We demonstrate that LRW\nwith easy validation data performs consistently worse than LRW with hard\nvalidation data, establishing the validity of our meta-optimization problem.\nOur proposed algorithm outperforms a wide range of baselines on a range of\ndatasets and domain shift challenges (Imagenet-1K, CIFAR-100, Clothing-1M,\nCAMELYON, WILDS, etc.), with ~1% gains using VIT-B on Imagenet.", + "), with ~1% gains using VIT-B on Imagenet. We also show\nthat using naturally hard examples for validation (Imagenet-R / Imagenet-A) in\nLRW training for Imagenet improves performance on both clean and naturally hard\ntest instances by 1-2%. Secondary analyses show that using hard validation data\nin an LRW framework improves margins on test data, hinting at the mechanism\nunderlying our empirical gains. We believe this work opens up new research\ndirections for the meta-optimization of meta-learning in a supervised learning\ncontext.", + "In this paper, we address the challenge of reconstructing general articulated\n3D objects from a single video. Existing works employing dynamic neural\nradiance fields have advanced the modeling of articulated objects like humans\nand animals from videos, but face challenges with piece-wise rigid general\narticulated objects due to limitations in their deformation models. To tackle\nthis, we propose Quasi-Rigid Blend Skinning, a novel deformation model that\nenhances the rigidity of each part while maintaining flexible deformation of\nthe joints. Our primary insight combines three distinct approaches: 1) an\nenhanced bone rigging system for improved component modeling, 2) the use of\nquasi-sparse skinning weights to boost part rigidity and reconstruction\nfidelity, and 3) the application of geodesic point assignment for precise\nmotion and seamless deformation. Our method outperforms previous works in\nproducing higher-fidelity 3D reconstructions of general articulated objects, as\ndemonstrated on both real and synthetic datasets. 
Project page:\nhttps://chaoyuesong.github.io/REACTO.", + "In this work, we explore egocentric whole-body motion capture using a single\nfisheye camera, which simultaneously estimates human body and hand motion. This\ntask presents significant challenges due to three factors: the lack of\nhigh-quality datasets, fisheye camera distortion, and human body\nself-occlusion. To address these challenges, we propose a novel approach that\nleverages FisheyeViT to extract fisheye image features, which are subsequently\nconverted into pixel-aligned 3D heatmap representations for 3D human body pose\nprediction. For hand tracking, we incorporate dedicated hand detection and hand\npose estimation networks for regressing 3D hand poses. Finally, we develop a\ndiffusion-based whole-body motion prior model to refine the estimated\nwhole-body motion while accounting for joint uncertainties. To train these\nnetworks, we collect a large synthetic dataset, EgoWholeBody, comprising\n840,000 high-quality egocentric images captured across a diverse range of\nwhole-body motion sequences. Quantitative and qualitative evaluations\ndemonstrate the effectiveness of our method in producing high-quality\nwhole-body motion estimates from a single egocentric camera.", + "Open-vocabulary querying in 3D space is challenging but essential for scene\nunderstanding tasks such as object localization and segmentation.\nLanguage-embedded scene representations have made progress by incorporating\nlanguage features into 3D spaces. However, their efficacy heavily depends on\nneural networks that are resource-intensive in training and rendering. Although\nrecent 3D Gaussians offer efficient and high-quality novel view synthesis,\ndirectly embedding language features in them leads to prohibitive memory usage\nand decreased performance. In this work, we introduce Language Embedded 3D\nGaussians, a novel scene representation for open-vocabulary query tasks.\nInstead of embedding high-dimensional raw semantic features on 3D Gaussians, we\npropose a dedicated quantization scheme that drastically alleviates the memory\nrequirement, and a novel embedding procedure that achieves smoother yet high\naccuracy query, countering the multi-view feature inconsistencies and the\nhigh-frequency inductive bias in point-based representations. Our comprehensive\nexperiments show that our representation achieves the best visual quality and\nlanguage querying accuracy across current language-embedded representations,\nwhile maintaining real-time rendering frame rates on a single desktop GPU.", + "Movie trailers are an essential tool for promoting films and attracting\naudiences. However, the process of creating trailers can be time-consuming and\nexpensive. To streamline this process, we propose an automatic trailer\ngeneration framework that generates plausible trailers from a full movie by\nautomating shot selection and composition. Our approach draws inspiration from\nmachine translation techniques and models the movies and trailers as sequences\nof shots, thus formulating the trailer generation problem as a\nsequence-to-sequence task. We introduce Trailer Generation Transformer (TGT), a\ndeep-learning framework utilizing an encoder-decoder architecture. TGT movie\nencoder is tasked with contextualizing each movie shot representation via\nself-attention, while the autoregressive trailer decoder predicts the feature\nrepresentation of the next trailer shot, accounting for the relevance of shots'\ntemporal order in trailers. 
Our TGT significantly outperforms previous methods\non a comprehensive suite of metrics.", + "In recent years, the information bottleneck (IB) principle has provided\nan information-theoretic framework for deep multi-view clustering (MVC) by\ncompressing multi-view observations while preserving the relevant information\nof multiple views. Although existing IB-based deep MVC methods have achieved\nhuge success, they rely on variational approximation and distributional\nassumptions to estimate the lower bound of mutual information, which is a\nnotoriously hard and impractical problem in high-dimensional multi-view spaces.\nIn this work, we propose a new differentiable information bottleneck (DIB)\nmethod, which provides a deterministic and analytical MVC solution by fitting\nthe mutual information without the necessity of variational approximation.\nSpecifically, we first propose to directly fit the mutual information of\nhigh-dimensional spaces by leveraging a normalized kernel Gram matrix, which does\nnot require any auxiliary neural estimator to estimate the lower bound of\nmutual information. Then, based on the new mutual information measurement, a\ndeterministic multi-view neural network with analytical gradients is explicitly\ntrained to parameterize the IB principle, which derives a deterministic compression\nof input variables from different views.", + "Then, based on the new mutual information measurement, a\ndeterministic multi-view neural network with analytical gradients is explicitly\ntrained to parameterize the IB principle, which derives a deterministic compression\nof input variables from different views. Finally, a triplet consistency\ndiscovery mechanism is devised, which is capable of mining the feature\nconsistency, cluster consistency and joint consistency based on the\ndeterministic and compact representations. Extensive experimental results show\nthe superiority of our DIB method on 6 benchmarks compared with 13\nstate-of-the-art baselines.", + "Quantization is of significance for compressing the over-parameterized deep\nneural models and deploying them on resource-limited devices. Fixed-precision\nquantization suffers from performance drop due to the limited numerical\nrepresentation ability. Conversely, mixed-precision quantization (MPQ) is\nadvocated to compress the model effectively by allocating heterogeneous\nbit-widths for layers. MPQ is typically organized into a searching-retraining\ntwo-stage process. Previous works only focus on determining the optimal\nbit-width configuration in the first stage efficiently, while ignoring the\nconsiderable time costs in the second stage. However, retraining always\nconsumes hundreds of GPU-hours on cutting-edge GPUs, thus hindering\ndeployment efficiency significantly. 
In this paper, we devise a one-shot\ntraining-searching paradigm for mixed-precision model compression.\nSpecifically, in the first stage, all potential bit-width configurations are\ncoupled and thus optimized simultaneously within a set of shared weights.\nHowever, our observations reveal a previously unseen and severe bit-width\ninterference phenomenon among highly coupled weights during optimization,\nleading to considerable performance degradation under a high compression ratio.", + "Specifically, in the first stage, all potential bit-width configurations are\ncoupled and thus optimized simultaneously within a set of shared weights.\nHowever, our observations reveal a previously unseen and severe bit-width\ninterference phenomenon among highly coupled weights during optimization,\nleading to considerable performance degradation under a high compression ratio.\nTo tackle this problem, we first design a bit-width scheduler to dynamically\nfreeze the most turbulent bit-width of layers during training, to ensure the\nrest bit-widths converged properly. Then, taking inspiration from information\ntheory, we present an information distortion mitigation technique to align the\nbehaviour of the bad-performing bit-widths to the well-performing ones.", + "Large language models (LLMs)-based image captioning has the capability of\ndescribing objects not explicitly observed in training data; yet novel objects\noccur frequently, necessitating the requirement of sustaining up-to-date object\nknowledge for open-world comprehension. Instead of relying on large amounts of\ndata and/or scaling up network parameters, we introduce a highly effective\nretrieval-augmented image captioning method that prompts LLMs with object names\nretrieved from External Visual--name memory (EVCap). We build ever-changing\nobject knowledge memory using objects' visuals and names, enabling us to (i)\nupdate the memory at a minimal cost and (ii) effortlessly augment LLMs with\nretrieved object names by utilizing a lightweight and fast-to-train model. Our\nmodel, which was trained only on the COCO dataset, can adapt to out-of-domain\nwithout requiring additional fine-tuning or re-training. Our experiments\nconducted on benchmarks and synthetic commonsense-violating data show that\nEVCap, with only 3.97M trainable parameters, exhibits superior performance\ncompared to other methods based on frozen pre-trained LLMs.", + "Our experiments\nconducted on benchmarks and synthetic commonsense-violating data show that\nEVCap, with only 3.97M trainable parameters, exhibits superior performance\ncompared to other methods based on frozen pre-trained LLMs. Its performance is\nalso competitive to specialist SOTAs that require extensive training.", + "Creating high-quality 3D models of clothed humans from single images for\nreal-world applications is crucial. Despite recent advancements, accurately\nreconstructing humans in complex poses or with loose clothing from in-the-wild\nimages, along with predicting textures for unseen areas, remains a significant\nchallenge. A key limitation of previous methods is their insufficient prior\nguidance in transitioning from 2D to 3D and in texture prediction. 
In response,\nwe introduce SIFU (Side-view Conditioned Implicit Function for Real-world\nUsable Clothed Human Reconstruction), a novel approach combining a Side-view\nDecoupling Transformer with a 3D Consistent Texture Refinement pipeline. SIFU\nemploys a cross-attention mechanism within the transformer, using SMPL-X\nnormals as queries to effectively decouple side-view features in the process of\nmapping 2D features to 3D. This method not only improves the precision of the\n3D models but also their robustness, especially when SMPL-X estimates are not\nperfect. Our texture refinement process leverages a text-to-image diffusion-based\nprior to generate realistic and consistent textures for invisible views.", + "This method not only improves the precision of the\n3D models but also their robustness, especially when SMPL-X estimates are not\nperfect. Our texture refinement process leverages a text-to-image diffusion-based\nprior to generate realistic and consistent textures for invisible views.\nThrough extensive experiments, SIFU surpasses SOTA methods in both geometry and\ntexture reconstruction, showcasing enhanced robustness in complex scenarios and\nachieving an unprecedented Chamfer and P2S measurement. Our approach extends to\npractical applications such as 3D printing and scene building, demonstrating\nits broad utility in real-world scenarios. Project page:\nhttps://river-zhang.github.io/SIFU-projectpage/ .", + "We present WinSyn, a unique dataset and testbed for creating high-quality\nsynthetic data with procedural modeling techniques. The dataset contains\nhigh-resolution photographs of windows, selected from locations around the\nworld, with 89,318 individual window crops showcasing diverse geometric and\nmaterial characteristics. We evaluate a procedural model by training semantic\nsegmentation networks on both synthetic and real images and then comparing\ntheir performances on a shared test set of real images. Specifically, we\nmeasure the difference in mean Intersection over Union (mIoU) and determine the\neffective number of real images to match synthetic data's training performance.\nWe design a baseline procedural model as a benchmark and provide 21,290\nsynthetically generated images. By tuning the procedural model, we identify key\nfactors which significantly influence the model's fidelity in replicating\nreal-world scenarios. Importantly, we highlight the challenge of procedural\nmodeling using current techniques, especially in their ability to replicate the\nspatial semantics of real-world scenarios. This insight is critical because of\nthe potential of procedural models to bridge to hidden scene aspects such as\ndepth, reflectivity, material properties, and lighting conditions.", + "This paper aims to address a common challenge in deep learning-based image\ntransformation methods, such as image enhancement and super-resolution, which\nheavily rely on precisely aligned paired datasets with pixel-level alignments.\nHowever, creating precisely aligned paired images presents significant\nchallenges and hinders the advancement of methods trained on such data. To\novercome this challenge, this paper introduces a novel and simple Frequency\nDistribution Loss (FDL) for computing distribution distance within the\nfrequency domain. Specifically, we transform image features into the frequency\ndomain using Discrete Fourier Transformation (DFT). Subsequently, frequency\ncomponents (amplitude and phase) are processed separately to form the FDL loss\nfunction. 
Our method is empirically proven effective as a training constraint\ndue to the thoughtful utilization of global information in the frequency\ndomain. Extensive experimental evaluations, focusing on image enhancement and\nsuper-resolution tasks, demonstrate that FDL outperforms existing\nmisalignment-robust loss functions. Furthermore, we explore the potential of\nour FDL for image style transfer that relies solely on completely misaligned\ndata. Our code is available at: https://github.com/eezkni/FDL", + "In this paper, we present a novel sequence generation-based framework for\nlane detection, called Lane2Seq. It unifies various lane detection formats by\ncasting lane detection as a sequence generation task. This is different from\nprevious lane detection methods, which depend on well-designed task-specific\nhead networks and corresponding loss functions. Lane2Seq only adopts a plain\ntransformer-based encoder-decoder architecture with a simple cross-entropy\nloss. Additionally, we propose a new multi-format model tuning based on\nreinforcement learning to incorporate the task-specific knowledge into\nLane2Seq. Experimental results demonstrate that such a simple sequence\ngeneration paradigm not only unifies lane detection but also achieves\ncompetitive performance on benchmarks. For example, Lane2Seq gets 97.95\\% and\n97.42\\% F1 score on Tusimple and LLAMAS datasets, establishing a new\nstate-of-the-art result for two benchmarks.", + "We present MM-AU, a novel dataset for Multi-Modal Accident video\nUnderstanding. MM-AU contains 11,727 in-the-wild ego-view accident videos, each\nwith temporally aligned text descriptions. We annotate over 2.23 million object\nboxes and 58,650 pairs of video-based accident reasons, covering 58 accident\ncategories. MM-AU supports various accident understanding tasks, particularly\nmultimodal video diffusion to understand accident cause-effect chains for safe\ndriving. With MM-AU, we present an Abductive accident Video understanding\nframework for Safe Driving perception (AdVersa-SD). AdVersa-SD performs video\ndiffusion via an Object-Centric Video Diffusion (OAVD) method which is driven\nby an abductive CLIP model. This model involves a contrastive interaction loss\nto learn the pair co-occurrence of normal, near-accident, accident frames with\nthe corresponding text descriptions, such as accident reasons, prevention\nadvice, and accident categories. OAVD enforces the causal region learning while\nfixing the content of the original frame background in video generation, to\nfind the dominant cause-effect chain for certain accidents.", + "OAVD enforces the causal region learning while\nfixing the content of the original frame background in video generation, to\nfind the dominant cause-effect chain for certain accidents. Extensive\nexperiments verify the abductive ability of AdVersa-SD and the superiority of\nOAVD against the state-of-the-art diffusion models. Additionally, we provide\ncareful benchmark evaluations for object detection and accident reason\nanswering since AdVersa-SD relies on precise object and accident reason\ninformation.", + "Gated cameras flood-illuminate a scene and capture the time-gated impulse\nresponse of a scene. By employing nanosecond-scale gates, existing sensors are\ncapable of capturing mega-pixel gated images, delivering dense depth improving\non today's LiDAR sensors in spatial resolution and depth precision. 
Although\ngated depth estimation methods deliver a million depth estimates per frame,\ntheir resolution is still an order of magnitude below existing RGB imaging methods. In this\nwork, we combine high-resolution stereo HDR RCCB cameras with gated imaging,\nallowing us to exploit depth cues from active gating, multi-view RGB and\nmulti-view NIR sensing -- multi-view and gated cues across the entire spectrum.\nThe resulting capture system consists only of low-cost CMOS sensors and\nflood-illumination. We propose a novel stereo-depth estimation method that is\ncapable of exploiting these multi-modal multi-view depth cues, including the\nactive illumination that is measured by the RCCB camera when removing the\nIR-cut filter. The proposed method achieves accurate depth at long ranges,\noutperforming the next best existing method by 39% for ranges of 100 to 220m in\nMAE on accumulated LiDAR ground-truth.", + "The proposed method achieves accurate depth at long ranges,\noutperforming the next best existing method by 39% for ranges of 100 to 220m in\nMAE on accumulated LiDAR ground-truth. Our code, models and datasets are\navailable at https://light.princeton.edu/gatedrccbstereo/ .", + "Short-form UGC video platforms, like Kwai and TikTok, have been an emerging\nand irreplaceable mainstream media form, thriving on user-friendly engagement,\nand kaleidoscope creation, etc. However, the advancing content-generation\nmodes, e.g., special effects, and sophisticated processing workflows, e.g.,\nde-artifacts, have introduced significant challenges to recent UGC video\nquality assessment: (i) the ambiguous contents hinder the identification of\nquality-determined regions. (ii) the diverse and complicated hybrid distortions\nare hard to distinguish. To tackle the above challenges and assist in the\ndevelopment of short-form videos, we establish the first large-scale\nKaleidoscope short Video database for Quality assessment, termed KVQ, which\ncomprises 600 user-uploaded short videos and 3600 processed videos through the\ndiverse practical processing workflows, including pre-processing, transcoding,\nand enhancement. Among them, the absolute quality score of each video and\npartial ranking score among indistinguishable samples are provided by a team of\nprofessional researchers specializing in image processing.", + "Among them, the absolute quality score of each video and\npartial ranking score among indistinguishable samples are provided by a team of\nprofessional researchers specializing in image processing. Based on this\ndatabase, we propose the first short-form video quality evaluator, i.e., KSVQE,\nwhich enables the quality evaluator to identify the quality-determined\nsemantics with the content understanding of large vision language models (i.e.,\nCLIP) and distinguish the distortions with the distortion understanding module.\nExperimental results have shown the effectiveness of KSVQE on our KVQ database\nand popular VQA databases.", + "Learning 3D human-object interaction relation is pivotal to embodied AI and\ninteraction modeling. Most existing methods approach the goal by learning to\npredict isolated interaction elements, e.g., human contact, object affordance,\nand human-object spatial relation, primarily from the perspective of either the\nhuman or the object. These approaches underexploit certain correlations between the\ninteraction counterparts (human and object) and struggle to address the\nuncertainty in interactions. 
Actually, objects' functionalities potentially\naffect humans' interaction intentions, which reveals what the interaction is.\nMeanwhile, the interacting humans and objects exhibit matching geometric\nstructures, which presents how to interact. In light of this, we propose\nharnessing these inherent correlations between interaction counterparts to\nmitigate the uncertainty and jointly anticipate the above interaction elements\nin 3D space. To achieve this, we present LEMON (LEarning 3D huMan-Object\niNteraction relation), a unified model that mines interaction intentions of the\ncounterparts and employs curvatures to guide the extraction of geometric\ncorrelations, combining them to anticipate the interaction elements.", + "To achieve this, we present LEMON (LEarning 3D huMan-Object\niNteraction relation), a unified model that mines interaction intentions of the\ncounterparts and employs curvatures to guide the extraction of geometric\ncorrelations, combining them to anticipate the interaction elements. Besides,\nthe 3D Interaction Relation dataset (3DIR) is collected to serve as the test\nbed for training and evaluation. Extensive experiments demonstrate the\nsuperiority of LEMON over methods estimating each element in isolation.", + "The rise of new video modalities like virtual reality or autonomous driving\nhas increased the demand for efficient multi-view video compression methods,\nboth in terms of rate-distortion (R-D) performance and in terms of delay and\nruntime. While most recent stereo video compression approaches have shown\npromising performance, they compress left and right views sequentially, leading\nto poor parallelization and runtime performance. This work presents Low-Latency\nneural codec for Stereo video Streaming (LLSS), a novel parallel stereo video\ncoding method designed for fast and efficient low-latency stereo video\nstreaming. Instead of using a sequential cross-view motion compensation like\nexisting methods, LLSS introduces a bidirectional feature shifting module to\ndirectly exploit mutual information among views and encode them effectively\nwith a joint cross-view prior model for entropy coding. Thanks to this design,\nLLSS processes left and right views in parallel, minimizing latency; all while\nsubstantially improving R-D performance compared to both existing neural and\nconventional codecs.", + "This paper studies the problem of concept-based interpretability of\ntransformer representations for videos. Concretely, we seek to explain the\ndecision-making process of video transformers based on high-level,\nspatiotemporal concepts that are automatically discovered. Prior research on\nconcept-based interpretability has concentrated solely on image-level tasks.\nComparatively, video models deal with the added temporal dimension, increasing\ncomplexity and posing challenges in identifying dynamic concepts over time. In\nthis work, we systematically address these challenges by introducing the first\nVideo Transformer Concept Discovery (VTCD) algorithm. To this end, we propose\nan efficient approach for unsupervised identification of units of video\ntransformer representations - concepts, and ranking their importance to the\noutput of a model. The resulting concepts are highly interpretable, revealing\nspatio-temporal reasoning mechanisms and object-centric representations in\nunstructured video models. 
Performing this analysis jointly over a diverse set\nof supervised and self-supervised representations, we discover that some of\nthese mechanisms are universal in video transformers. Finally, we show that VTCD\ncan be used for fine-grained action recognition and video object segmentation.", + "Although Multimodal Large Language Models (MLLMs) have demonstrated promising\nversatile capabilities, their performance is still inferior to specialized\nmodels on downstream tasks, which makes adaptation necessary to enhance their\nutility. However, fine-tuning methods require independent training for every\nmodel, leading to huge computation and memory overheads. In this paper, we\npropose a novel setting where we aim to improve the performance of diverse\nMLLMs with a group of shared parameters optimized for a downstream task. To\nachieve this, we propose Transferable Visual Prompting (TVP), a simple and\neffective approach to generate visual prompts that can transfer to different\nmodels and improve their performance on downstream tasks after being trained on only\none model. We introduce two strategies to address the issue of cross-model\nfeature corruption of existing visual prompting methods and enhance the\ntransferability of the learned prompts, including 1) Feature Consistency\nAlignment: which imposes constraints on the prompted feature changes to\nmaintain task-agnostic knowledge; 2) Task Semantics Enrichment: which\nencourages the prompted images to contain richer task-specific semantics with\nlanguage guidance.", + "We validate the effectiveness of TVP through extensive\nexperiments with 6 modern MLLMs on a wide variety of tasks ranging from object\nrecognition and counting to multimodal reasoning and hallucination correction.", + "Single point-supervised object detection is gaining attention due to its\ncost-effectiveness. However, existing approaches focus on generating horizontal\nbounding boxes (HBBs) while ignoring oriented bounding boxes (OBBs) commonly\nused for objects in aerial images. This paper proposes PointOBB, the first\nsingle Point-based OBB generation method, for oriented object detection.\nPointOBB operates through the collaborative utilization of three distinctive\nviews: an original view, a resized view, and a rotated/flipped (rot/flp) view.\nUpon the original view, we leverage the resized and rot/flp views to build a\nscale augmentation module and an angle acquisition module, respectively. In the\nformer module, a Scale-Sensitive Consistency (SSC) loss is designed to enhance\nthe deep network's ability to perceive the object scale. For accurate object\nangle predictions, the latter module incorporates self-supervised learning to\npredict angles, which is associated with a scale-guided Dense-to-Sparse (DS)\nmatching strategy for aggregating dense angles corresponding to sparse objects.\nThe resized and rot/flp views are switched using a progressive multi-view\nswitching strategy during training to achieve coupled optimization of scale and\nangle.", + "The resized and rot/flp views are switched using a progressive multi-view\nswitching strategy during training to achieve coupled optimization of scale and\nangle. Experimental results on the DIOR-R and DOTA-v1.0 datasets demonstrate\nthat PointOBB achieves promising performance, and significantly outperforms\npotential point-supervised baselines.", + "3D shape generation from text is a fundamental task in 3D representation\nlearning. 
The text-shape pairs exhibit a hierarchical structure, where a
general text like ``chair" covers all 3D shapes of the chair, while more
detailed prompts refer to more specific shapes. Furthermore, both text and 3D
shapes are inherently hierarchical structures. However, existing Text2Shape
methods, such as SDFusion, do not exploit this hierarchy. In this work, we propose
HyperSDFusion, a dual-branch diffusion model that generates 3D shapes from a
given text. Since hyperbolic space is suitable for handling hierarchical data,
we propose to learn the hierarchical representations of text and 3D shapes in
hyperbolic space. First, we introduce a hyperbolic text-image encoder to learn
the sequential and multi-modal hierarchical features of text in hyperbolic
space. In addition, we design a hyperbolic text-graph convolution module to
learn the hierarchical features of text in hyperbolic space. In order to fully
utilize these text features, we introduce a dual-branch structure to embed text
features in 3D feature space.", + "In addition, we design a hyperbolic text-graph convolution module to
learn the hierarchical features of text in hyperbolic space. In order to fully
utilize these text features, we introduce a dual-branch structure to embed text
features in 3D feature space. Finally, to endow the generated 3D shapes with a
hierarchical structure, we devise a hyperbolic hierarchical loss. Our method is
the first to explore the hyperbolic hierarchical representation for
text-to-shape generation. Experiments on the existing text-to-shape paired
dataset, Text2Shape, show that our method achieves state-of-the-art results. We
release our implementation at HyperSDFusion.github.io.", + "Recently, visually-situated text parsing (VsTP) has experienced notable
advancements, driven by the increasing demand for automated document
understanding and the emergence of Generative Large Language Models (LLMs)
capable of processing document-based questions. Various methods have been
proposed to address the challenging problem of VsTP. However, due to the
diversified targets and heterogeneous schemas, previous works usually design
task-specific architectures and objectives for individual tasks, which
inadvertently leads to modal isolation and complex workflows. In this paper, we
propose a unified paradigm for parsing visually-situated text across diverse
scenarios. Specifically, we devise a universal model, called OmniParser, which
can simultaneously handle three typical visually-situated text parsing tasks:
text spotting, key information extraction, and table recognition. In
OmniParser, all tasks share the unified encoder-decoder architecture, the
unified objective: point-conditioned text generation, and the unified input &
output representation: prompt & structured sequences.", + "In
OmniParser, all tasks share the unified encoder-decoder architecture, the
unified objective: point-conditioned text generation, and the unified input &
output representation: prompt & structured sequences. Extensive experiments
demonstrate that the proposed OmniParser achieves state-of-the-art (SOTA) or
highly competitive performance on 7 datasets for the three visually-situated
text parsing tasks, despite its unified, concise design. The code is available
at https://github.com/AlibabaResearch/AdvancedLiterateMachinery.", + "A major focus of clinical imaging workflow is disease diagnosis and
management, leading to medical imaging datasets strongly tied to specific
clinical objectives.
This scenario has led to the prevailing practice of
developing task-specific segmentation models, without gaining insights from
widespread imaging cohorts. Inspired by the training program of medical
radiology residents, we propose a shift towards universal medical image
segmentation, a paradigm aiming to build medical image understanding foundation
models by leveraging the diversity and commonality across clinical targets,
body regions, and imaging modalities. Towards this goal, we develop Hermes, a
novel context-prior learning approach to address the challenges of data
heterogeneity and annotation differences in medical image segmentation. In a
large collection of eleven diverse datasets (2,438 3D images) across five
modalities (CT, PET, T1, T2 and cine MRI) and multiple body regions, we
demonstrate the merit of the universal paradigm over the traditional paradigm
in addressing multiple tasks within a single model. By exploiting the synergy
across tasks, Hermes achieves state-of-the-art performance on all testing
datasets and shows superior model scalability.", + "By exploiting the synergy
across tasks, Hermes achieves state-of-the-art performance on all testing
datasets and shows superior model scalability. Results on two additional
datasets reveal Hermes's strong performance for transfer learning, incremental
learning, and generalization to downstream tasks. Hermes's learned priors
demonstrate an appealing ability to reflect the intricate relations among tasks
and modalities, which aligns with the established anatomical and imaging
principles in radiology. The code is available at:
https://github.com/yhygao/universal-medical-image-segmentation.", + "In this paper, we propose a method to extract physically-based rendering
(PBR) materials from a single real-world image. We do so in two steps: first,
we map regions of the image to material concepts using a diffusion model, which
allows the sampling of texture images resembling each material in the scene.
Second, we benefit from a separate network to decompose the generated textures
into Spatially Varying BRDFs (SVBRDFs), providing us with materials ready to be
used in rendering applications. Our approach builds on existing synthetic
material libraries with SVBRDF ground truth, but also exploits a
diffusion-generated RGB texture dataset to allow generalization to new samples
using unsupervised domain adaptation (UDA). Our contributions are thoroughly
evaluated on synthetic and real-world datasets. We further demonstrate the
applicability of our method for editing 3D scenes with materials estimated from
real photographs. The code and models will be made open-source. Project page:
https://astra-vision.github.io/MaterialPalette/", + "With the prevalence of the Pretraining-Finetuning paradigm in transfer
learning, the robustness of downstream tasks has become a critical concern. In
this work, we delve into adversarial robustness in transfer learning and reveal
the critical role of initialization, including both the pretrained model and
the linear head. First, we discover the necessity of an adversarially robust
pretrained model. Specifically, we reveal that with a standard pretrained
model, Parameter-Efficient Finetuning (PEFT) methods either fail to be
adversarially robust or continue to exhibit significantly degraded adversarial
robustness on downstream tasks, even with adversarial training during
finetuning.
Leveraging a robust pretrained model, surprisingly, we observe that\na simple linear probing can outperform full finetuning and other PEFT methods\nwith random initialization on certain datasets. We further identify that linear\nprobing excels in preserving robustness from the robust pretraining.", + "Leveraging a robust pretrained model, surprisingly, we observe that\na simple linear probing can outperform full finetuning and other PEFT methods\nwith random initialization on certain datasets. We further identify that linear\nprobing excels in preserving robustness from the robust pretraining. Based on\nthis, we propose Robust Linear Initialization (RoLI) for adversarial\nfinetuning, which initializes the linear head with the weights obtained by\nadversarial linear probing to maximally inherit the robustness from\npretraining. Across five different image classification datasets, we\ndemonstrate the effectiveness of RoLI and achieve new state-of-the-art results.\nOur code is available at \\url{https://github.com/DongXzz/RoLI}.", + "Text-to-image customization, which aims to synthesize text-driven images for\nthe given subjects, has recently revolutionized content creation. Existing\nworks follow the pseudo-word paradigm, i.e., represent the given subjects as\npseudo-words and then compose them with the given text. However, the inherent\nentangled influence scope of pseudo-words with the given text results in a\ndual-optimum paradox, i.e., the similarity of the given subjects and the\ncontrollability of the given text could not be optimal simultaneously. We\npresent RealCustom that, for the first time, disentangles similarity from\ncontrollability by precisely limiting subject influence to relevant parts only,\nachieved by gradually narrowing real text word from its general connotation to\nthe specific subject and using its cross-attention to distinguish relevance.", + "We\npresent RealCustom that, for the first time, disentangles similarity from\ncontrollability by precisely limiting subject influence to relevant parts only,\nachieved by gradually narrowing real text word from its general connotation to\nthe specific subject and using its cross-attention to distinguish relevance.\nSpecifically, RealCustom introduces a novel \"train-inference\" decoupled\nframework: (1) during training, RealCustom learns general alignment between\nvisual conditions to original textual conditions by a novel adaptive scoring\nmodule to adaptively modulate influence quantity; (2) during inference, a novel\nadaptive mask guidance strategy is proposed to iteratively update the influence\nscope and influence quantity of the given subjects to gradually narrow the\ngeneration of the real text word. Comprehensive experiments demonstrate the\nsuperior real-time customization ability of RealCustom in the open domain,\nachieving both unprecedented similarity of the given subjects and\ncontrollability of the given text for the first time. The project page is\nhttps://corleone-huang.github.io/realcustom/.", + "Volumetric optical microscopy using non-diffracting beams enables rapid\nimaging of 3D volumes by projecting them axially to 2D images but lacks crucial\ndepth information. Addressing this, we introduce MicroDiffusion, a pioneering\ntool facilitating high-quality, depth-resolved 3D volume reconstruction from\nlimited 2D projections. 
While existing Implicit Neural Representation (INR)\nmodels often yield incomplete outputs and Denoising Diffusion Probabilistic\nModels (DDPM) excel at capturing details, our method integrates INR's\nstructural coherence with DDPM's fine-detail enhancement capabilities. We\npretrain an INR model to transform 2D axially-projected images into a\npreliminary 3D volume. This pretrained INR acts as a global prior guiding\nDDPM's generative process through a linear interpolation between INR outputs\nand noise inputs. This strategy enriches the diffusion process with structured\n3D information, enhancing detail and reducing noise in localized 2D images.", + "This pretrained INR acts as a global prior guiding\nDDPM's generative process through a linear interpolation between INR outputs\nand noise inputs. This strategy enriches the diffusion process with structured\n3D information, enhancing detail and reducing noise in localized 2D images. By\nconditioning the diffusion model on the closest 2D projection, MicroDiffusion\nsubstantially enhances fidelity in resulting 3D reconstructions, surpassing INR\nand standard DDPM outputs with unparalleled image quality and structural\nfidelity. Our code and dataset are available at\nhttps://github.com/UCSC-VLAA/MicroDiffusion.", + "Successfully addressing a wide variety of tasks is a core ability of\nautonomous agents, requiring flexibly adapting the underlying decision-making\nstrategies and, as we argue in this work, also adapting the perception modules.\nAn analogical argument would be the human visual system, which uses top-down\nsignals to focus attention determined by the current task. Similarly, we adapt\npre-trained large vision models conditioned on specific downstream tasks in the\ncontext of multi-task policy learning. We introduce task-conditioned adapters\nthat do not require finetuning any pre-trained weights, combined with a single\npolicy trained with behavior cloning and capable of addressing multiple tasks.\nWe condition the visual adapters on task embeddings, which can be selected at\ninference if the task is known, or alternatively inferred from a set of example\ndemonstrations. To this end, we propose a new optimization-based estimator. We\nevaluate the method on a wide variety of tasks from the CortexBench benchmark\nand show that, compared to existing work, it can be addressed with a single\npolicy. In particular, we demonstrate that adapting visual features is a key\ndesign choice and that the method generalizes to unseen tasks given a few\ndemonstrations.", + "In the digital era, QR codes serve as a linchpin connecting virtual and\nphysical realms. Their pervasive integration across various applications\nhighlights the demand for aesthetically pleasing codes without compromised\nscannability. However, prevailing methods grapple with the intrinsic challenge\nof balancing customization and scannability. Notably, stable-diffusion models\nhave ushered in an epoch of high-quality, customizable content generation. This\npaper introduces Text2QR, a pioneering approach leveraging these advancements\nto address a fundamental challenge: concurrently achieving user-defined\naesthetics and scanning robustness. 
To ensure stable generation of aesthetic QR\ncodes, we introduce the QR Aesthetic Blueprint (QAB) module, generating a\nblueprint image exerting control over the entire generation process.\nSubsequently, the Scannability Enhancing Latent Refinement (SELR) process\nrefines the output iteratively in the latent space, enhancing scanning\nrobustness. This approach harnesses the potent generation capabilities of\nstable-diffusion models, navigating the trade-off between image aesthetics and\nQR code scannability.", + "This approach harnesses the potent generation capabilities of\nstable-diffusion models, navigating the trade-off between image aesthetics and\nQR code scannability. Our experiments demonstrate the seamless fusion of visual\nappeal with the practical utility of aesthetic QR codes, markedly outperforming\nprior methods. Codes are available at \\url{https://github.com/mulns/Text2QR}", + "To advance research in learning-based defogging algorithms, various synthetic\nfog datasets have been developed. However, existing datasets created using the\nAtmospheric Scattering Model (ASM) or real-time rendering engines often\nstruggle to produce photo-realistic foggy images that accurately mimic the\nactual imaging process. This limitation hinders the effective generalization of\nmodels from synthetic to real data. In this paper, we introduce an end-to-end\nsimulation pipeline designed to generate photo-realistic foggy images. This\npipeline comprehensively considers the entire physically-based foggy scene\nimaging process, closely aligning with real-world image capture methods. Based\non this pipeline, we present a new synthetic fog dataset named SynFog, which\nfeatures both sky light and active lighting conditions, as well as three levels\nof fog density. Experimental results demonstrate that models trained on SynFog\nexhibit superior performance in visual perception and detection accuracy\ncompared to others when applied to real-world foggy images.", + "The zero-shot performance of existing vision-language models (VLMs) such as\nCLIP is limited by the availability of large-scale, aligned image and text\ndatasets in specific domains. In this work, we leverage two complementary\nsources of information -- descriptions of categories generated by large\nlanguage models (LLMs) and abundant, fine-grained image classification datasets\n-- to improve the zero-shot classification performance of VLMs across\nfine-grained domains. On the technical side, we develop methods to train VLMs\nwith this \"bag-level\" image-text supervision. We find that simply using these\nattributes at test-time does not improve performance, but our training\nstrategy, for example, on the iNaturalist dataset, leads to an average\nimprovement of 4-5% in zero-shot classification accuracy for novel categories\nof birds and flowers. Similar improvements are observed in domains where a\nsubset of the categories was used to fine-tune the model. By prompting LLMs in\nvarious ways, we generate descriptions that capture visual appearance, habitat,\nand geographic regions and pair them with existing attributes such as the\ntaxonomic structure of the categories.", + "By prompting LLMs in\nvarious ways, we generate descriptions that capture visual appearance, habitat,\nand geographic regions and pair them with existing attributes such as the\ntaxonomic structure of the categories. We systematically evaluate their ability\nto improve zero-shot categorization in natural domains. 
Our findings suggest\nthat geographic priors can be just as effective and are complementary to visual\nappearance. Our method also outperforms prior work on prompt-based tuning of\nVLMs. We release the benchmark, consisting of 14 datasets at\nhttps://github.com/cvl-umass/AdaptCLIPZS , which will contribute to future\nresearch in zero-shot recognition.", + "Research into dynamic 3D scene understanding has primarily focused on\nshort-term change tracking from dense observations, while little attention has\nbeen paid to long-term changes with sparse observations. We address this gap\nwith MoRE, a novel approach for multi-object relocalization and reconstruction\nin evolving environments. We view these environments as \"living scenes\" and\nconsider the problem of transforming scans taken at different points in time\ninto a 3D reconstruction of the object instances, whose accuracy and\ncompleteness increase over time. At the core of our method lies an\nSE(3)-equivariant representation in a single encoder-decoder network, trained\non synthetic data. This representation enables us to seamlessly tackle instance\nmatching, registration, and reconstruction. We also introduce a joint\noptimization algorithm that facilitates the accumulation of point clouds\noriginating from the same instance across multiple scans taken at different\npoints in time. We validate our method on synthetic and real-world data and\ndemonstrate state-of-the-art performance in both end-to-end performance and\nindividual subtasks.", + "Over the past decade, most methods in visual place recognition (VPR) have\nused neural networks to produce feature representations. These networks\ntypically produce a global representation of a place image using only this\nimage itself and neglect the cross-image variations (e.g. viewpoint and\nillumination), which limits their robustness in challenging scenes. In this\npaper, we propose a robust global representation method with cross-image\ncorrelation awareness for VPR, named CricaVPR. Our method uses the attention\nmechanism to correlate multiple images within a batch. These images can be\ntaken in the same place with different conditions or viewpoints, or even\ncaptured from different places. Therefore, our method can utilize the\ncross-image variations as a cue to guide the representation learning, which\nensures more robust features are produced. To further facilitate the\nrobustness, we propose a multi-scale convolution-enhanced adaptation method to\nadapt pre-trained visual foundation models to the VPR task, which introduces\nthe multi-scale local information to further enhance the cross-image\ncorrelation-aware representation.", + "To further facilitate the\nrobustness, we propose a multi-scale convolution-enhanced adaptation method to\nadapt pre-trained visual foundation models to the VPR task, which introduces\nthe multi-scale local information to further enhance the cross-image\ncorrelation-aware representation. Experimental results show that our method\noutperforms state-of-the-art methods by a large margin with significantly less\ntraining time. The code is released at https://github.com/Lu-Feng/CricaVPR.", + "Text-to-image (T2I) diffusion models, notably the unCLIP models (e.g.,\nDALL-E-2), achieve state-of-the-art (SOTA) performance on various compositional\nT2I benchmarks, at the cost of significant computational resources. The unCLIP\nstack comprises T2I prior and diffusion image decoder. 
The T2I prior model\nalone adds a billion parameters compared to the Latent Diffusion Models, which\nincreases the computational and high-quality data requirements. We introduce\nECLIPSE, a novel contrastive learning method that is both parameter and\ndata-efficient. ECLIPSE leverages pre-trained vision-language models (e.g.,\nCLIP) to distill the knowledge into the prior model. We demonstrate that the\nECLIPSE trained prior, with only 3.3% of the parameters and trained on a mere\n2.8% of the data, surpasses the baseline T2I priors with an average of 71.6%\npreference score under resource-limited setting.", + "We demonstrate that the\nECLIPSE trained prior, with only 3.3% of the parameters and trained on a mere\n2.8% of the data, surpasses the baseline T2I priors with an average of 71.6%\npreference score under resource-limited setting. It also attains performance on\npar with SOTA big models, achieving an average of 63.36% preference score in\nterms of the ability to follow the text compositions. Extensive experiments on\ntwo unCLIP diffusion image decoders, Karlo and Kandinsky, affirm that ECLIPSE\npriors consistently deliver high performance while significantly reducing\nresource dependency.", + "Consistency learning is a central strategy to tackle unlabeled data in\nsemi-supervised medical image segmentation (SSMIS), which enforces the model to\nproduce consistent predictions under the perturbation. However, most current\napproaches solely focus on utilizing a specific single perturbation, which can\nonly cope with limited cases, while employing multiple perturbations\nsimultaneously is hard to guarantee the quality of consistency learning. In\nthis paper, we propose an Adaptive Bidirectional Displacement (ABD) approach to\nsolve the above challenge. Specifically, we first design a bidirectional patch\ndisplacement based on reliable prediction confidence for unlabeled data to\ngenerate new samples, which can effectively suppress uncontrollable regions and\nstill retain the influence of input perturbations. Meanwhile, to enforce the\nmodel to learn the potentially uncontrollable content, a bidirectional\ndisplacement operation with inverse confidence is proposed for the labeled\nimages, which generates samples with more unreliable information to facilitate\nmodel learning. Extensive experiments show that ABD achieves new\nstate-of-the-art performances for SSMIS, significantly improving different\nbaselines. Source code is available at https://github.com/chy-upc/ABD.", + "We present a simple yet effective technique to estimate lighting in a single\ninput image. Current techniques rely heavily on HDR panorama datasets to train\nneural networks to regress an input with limited field-of-view to a full\nenvironment map. However, these approaches often struggle with real-world,\nuncontrolled settings due to the limited diversity and size of their datasets.\nTo address this problem, we leverage diffusion models trained on billions of\nstandard images to render a chrome ball into the input image. Despite its\nsimplicity, this task remains challenging: the diffusion models often insert\nincorrect or inconsistent objects and cannot readily generate images in HDR\nformat. Our research uncovers a surprising relationship between the appearance\nof chrome balls and the initial diffusion noise map, which we utilize to\nconsistently generate high-quality chrome balls. 
We further fine-tune an LDR
diffusion model (Stable Diffusion XL) with LoRA, enabling it to perform
exposure bracketing for HDR light estimation. Our method produces convincing
light estimates across diverse settings and demonstrates superior
generalization to in-the-wild scenarios.", + "Exemplar-free Class Incremental Learning (EFCIL) aims to sequentially learn
tasks with access only to data from the current one. EFCIL is of interest
because it mitigates concerns about privacy and long-term storage of data,
while at the same time alleviating the problem of catastrophic forgetting in
incremental learning. In this work, we introduce task-adaptive saliency for
EFCIL and propose a new framework, which we call Task-Adaptive Saliency
Supervision (TASS), for mitigating the negative effects of saliency drift
between different tasks. We first apply boundary-guided saliency to maintain
task adaptivity and \textit{plasticity} on model attention. Besides, we
introduce task-agnostic low-level signals as auxiliary supervision to increase
the \textit{stability} of model attention. Finally, we introduce a module for
injecting and recovering saliency noise to increase the robustness of saliency
preservation.", + "Besides, we
introduce task-agnostic low-level signals as auxiliary supervision to increase
the \textit{stability} of model attention. Finally, we introduce a module for
injecting and recovering saliency noise to increase the robustness of saliency
preservation. Our experiments demonstrate that our method can better preserve
saliency maps across tasks and achieve state-of-the-art results on the
CIFAR-100, Tiny-ImageNet, and ImageNet-Subset EFCIL benchmarks. Code is
available at \url{https://github.com/scok30/tass}.", + "Classifier-Free Guidance (CFG) has been widely used in text-to-image
diffusion models, where the CFG scale is introduced to control the strength of
text guidance on the whole image space. However, we argue that a global CFG
scale results in spatial inconsistency on varying semantic strengths and
suboptimal image quality. To address this problem, we present a novel approach,
Semantic-aware Classifier-Free Guidance (S-CFG), to customize the guidance
degrees for different semantic units in text-to-image diffusion models.
Specifically, we first design a training-free semantic segmentation method to
partition the latent image into relatively independent semantic regions at each
denoising step. In particular, the cross-attention map in the denoising U-net
backbone is renormalized for assigning each patch to the corresponding token,
while the self-attention map is used to complete the semantic regions. Then, to
balance the amplification of diverse semantic units, we adaptively adjust the
CFG scales across different semantic regions to rescale the text guidance
degrees into a uniform level.", + "Then, to
balance the amplification of diverse semantic units, we adaptively adjust the
CFG scales across different semantic regions to rescale the text guidance
degrees into a uniform level. Finally, extensive experiments demonstrate the
superiority of S-CFG over the original CFG strategy on various text-to-image
diffusion models, without requiring any extra training cost. Our code is
available at https://github.com/SmilesDZgk/S-CFG.", + "All-in-one (AiO) frameworks restore various adverse weather degradations with
a single set of networks jointly.
To handle various weather conditions, an AiO\nframework is expected to adaptively learn weather-specific knowledge for\ndifferent degradations and shared knowledge for common patterns. However,\nexisting methods: 1) rely on extra supervision signals, which are usually\nunknown in real-world applications; 2) employ fixed network structures, which\nrestrict the diversity of weather-specific knowledge. In this paper, we propose\na Language-driven Restoration framework (LDR) to alleviate the aforementioned\nissues. First, we leverage the power of pre-trained vision-language (PVL)\nmodels to enrich the diversity of weather-specific knowledge by reasoning about\nthe occurrence, type, and severity of degradation, generating description-based\ndegradation priors. Then, with the guidance of degradation prior, we sparsely\nselect restoration experts from a candidate list dynamically based on a\nMixture-of-Experts (MoE) structure. This enables us to adaptively learn the\nweather-specific and shared knowledge to handle various weather conditions\n(e.g., unknown or mixed weather).", + "This enables us to adaptively learn the\nweather-specific and shared knowledge to handle various weather conditions\n(e.g., unknown or mixed weather). Experiments on extensive restoration\nscenarios show our superior performance (see Fig. 1). The source code will be\nmade available.", + "Distribution shift widely exists in medical images acquired from different\nmedical centres and poses a significant obstacle to deploying the pre-trained\nsemantic segmentation model in real-world applications. Test-time adaptation\nhas proven its effectiveness in tackling the cross-domain distribution shift\nduring inference. However, most existing methods achieve adaptation by updating\nthe pre-trained models, rendering them susceptible to error accumulation and\ncatastrophic forgetting when encountering a series of distribution shifts\n(i.e., under the continual test-time adaptation setup). To overcome these\nchallenges caused by updating the models, in this paper, we freeze the\npre-trained model and propose the Visual Prompt-based Test-Time Adaptation\n(VPTTA) method to train a specific prompt for each test image to align the\nstatistics in the batch normalization layers. Specifically, we present the\nlow-frequency prompt, which is lightweight with only a few parameters and can\nbe effectively trained in a single iteration. To enhance prompt initialization,\nwe equip VPTTA with a memory bank to benefit the current prompt from previous\nones. Additionally, we design a warm-up mechanism, which mixes source and\ntarget statistics to construct warm-up statistics, thereby facilitating the\ntraining process.", + "To enhance prompt initialization,\nwe equip VPTTA with a memory bank to benefit the current prompt from previous\nones. Additionally, we design a warm-up mechanism, which mixes source and\ntarget statistics to construct warm-up statistics, thereby facilitating the\ntraining process. Extensive experiments demonstrate the superiority of our\nVPTTA over other state-of-the-art methods on two medical image segmentation\nbenchmark tasks. 
The code and weights of pre-trained source models are\navailable at https://github.com/Chen-Ziyang/VPTTA.", + "This paper presents a novel Kinematics and Trajectory Prior\nKnowledge-Enhanced Transformer (KTPFormer), which overcomes the weakness in\nexisting transformer-based methods for 3D human pose estimation that the\nderivation of Q, K, V vectors in their self-attention mechanisms are all based\non simple linear mapping. We propose two prior attention modules, namely\nKinematics Prior Attention (KPA) and Trajectory Prior Attention (TPA) to take\nadvantage of the known anatomical structure of the human body and motion\ntrajectory information, to facilitate effective learning of global dependencies\nand features in the multi-head self-attention. KPA models kinematic\nrelationships in the human body by constructing a topology of kinematics, while\nTPA builds a trajectory topology to learn the information of joint motion\ntrajectory across frames. Yielding Q, K, V vectors with prior knowledge, the\ntwo modules enable KTPFormer to model both spatial and temporal correlations\nsimultaneously. Extensive experiments on three benchmarks (Human3.6M,\nMPI-INF-3DHP and HumanEva) show that KTPFormer achieves superior performance in\ncomparison to state-of-the-art methods.", + "Extensive experiments on three benchmarks (Human3.6M,\nMPI-INF-3DHP and HumanEva) show that KTPFormer achieves superior performance in\ncomparison to state-of-the-art methods. More importantly, our KPA and TPA\nmodules have lightweight plug-and-play designs and can be integrated into\nvarious transformer-based networks (i.e., diffusion-based) to improve the\nperformance with only a very small increase in the computational overhead. The\ncode is available at: https://github.com/JihuaPeng/KTPFormer.", + "Visual Relationship Detection (VRD) has seen significant advancements with\nTransformer-based architectures recently. However, we identify two key\nlimitations in a conventional label assignment for training Transformer-based\nVRD models, which is a process of mapping a ground-truth (GT) to a prediction.\nUnder the conventional assignment, an unspecialized query is trained since a\nquery is expected to detect every relation, which makes it difficult for a\nquery to specialize in specific relations. Furthermore, a query is also\ninsufficiently trained since a GT is assigned only to a single prediction,\ntherefore near-correct or even correct predictions are suppressed by being\nassigned no relation as a GT. To address these issues, we propose Groupwise\nQuery Specialization and Quality-Aware Multi-Assignment (SpeaQ). Groupwise\nQuery Specialization trains a specialized query by dividing queries and\nrelations into disjoint groups and directing a query in a specific query group\nsolely toward relations in the corresponding relation group. Quality-Aware\nMulti-Assignment further facilitates the training by assigning a GT to multiple\npredictions that are significantly close to a GT in terms of a subject, an\nobject, and the relation in between.", + "Quality-Aware\nMulti-Assignment further facilitates the training by assigning a GT to multiple\npredictions that are significantly close to a GT in terms of a subject, an\nobject, and the relation in between. Experimental results and analyses show\nthat SpeaQ effectively trains specialized queries, which better utilize the\ncapacity of a model, resulting in consistent performance gains with zero\nadditional inference cost across multiple VRD models and benchmarks. 
Code is\navailable at https://github.com/mlvlab/SpeaQ.", + "This paper introduces LeftRefill, an innovative approach to efficiently\nharness large Text-to-Image (T2I) diffusion models for reference-guided image\nsynthesis. As the name implies, LeftRefill horizontally stitches reference and\ntarget views together as a whole input. The reference image occupies the left\nside, while the target canvas is positioned on the right. Then, LeftRefill\npaints the right-side target canvas based on the left-side reference and\nspecific task instructions. Such a task formulation shares some similarities\nwith contextual inpainting, akin to the actions of a human painter. This novel\nformulation efficiently learns both structural and textured correspondence\nbetween reference and target without other image encoders or adapters. We\ninject task and view information through cross-attention modules in T2I models,\nand further exhibit multi-view reference ability via the re-arranged\nself-attention modules. These enable LeftRefill to perform consistent\ngeneration as a generalized model without requiring test-time fine-tuning or\nmodel modifications. Thus, LeftRefill can be seen as a simple yet unified\nframework to address reference-guided synthesis.", + "These enable LeftRefill to perform consistent\ngeneration as a generalized model without requiring test-time fine-tuning or\nmodel modifications. Thus, LeftRefill can be seen as a simple yet unified\nframework to address reference-guided synthesis. As an exemplar, we leverage\nLeftRefill to address two different challenges: reference-guided inpainting and\nnovel view synthesis, based on the pre-trained StableDiffusion. Codes and\nmodels are released at https://github.com/ewrfcas/LeftRefill.", + "We present personalized residuals and localized attention-guided sampling for\nefficient concept-driven generation using text-to-image diffusion models. Our\nmethod first represents concepts by freezing the weights of a pretrained\ntext-conditioned diffusion model and learning low-rank residuals for a small\nsubset of the model's layers. The residual-based approach then directly enables\napplication of our proposed sampling technique, which applies the learned\nresiduals only in areas where the concept is localized via cross-attention and\napplies the original diffusion weights in all other regions. Localized sampling\ntherefore combines the learned identity of the concept with the existing\ngenerative prior of the underlying diffusion model. We show that personalized\nresiduals effectively capture the identity of a concept in ~3 minutes on a\nsingle GPU without the use of regularization images and with fewer parameters\nthan previous models, and localized sampling allows using the original model as\nstrong prior for large parts of the image.", + "We present Condition-Aware Neural Network (CAN), a new method for adding\ncontrol to image generative models. In parallel to prior conditional control\nmethods, CAN controls the image generation process by dynamically manipulating\nthe weight of the neural network. This is achieved by introducing a\ncondition-aware weight generation module that generates conditional weight for\nconvolution/linear layers based on the input condition. We test CAN on\nclass-conditional image generation on ImageNet and text-to-image generation on\nCOCO. CAN consistently delivers significant improvements for diffusion\ntransformer models, including DiT and UViT. 
In particular, CAN combined with\nEfficientViT (CaT) achieves 2.78 FID on ImageNet 512x512, surpassing DiT-XL/2\nwhile requiring 52x fewer MACs per sampling step.", + "Route planning for navigation under partial observability plays a crucial\nrole in modern robotics and autonomous driving. Existing route planning\napproaches can be categorized into two main classes: traditional autoregressive\nand diffusion-based methods. The former often fails due to its myopic nature,\nwhile the latter either assumes full observability or struggles to adapt to\nunfamiliar scenarios, due to strong couplings with behavior cloning from\nexperts. To address these deficiencies, we propose a versatile diffusion-based\napproach for both 2D and 3D route planning under partial observability.\nSpecifically, our value-guided diffusion policy first generates plans to\npredict actions across various timesteps, providing ample foresight to the\nplanning. It then employs a differentiable planner with state estimations to\nderive a value function, directing the agent's exploration and goal-seeking\nbehaviors without seeking experts while explicitly addressing partial\nobservability. During inference, our policy is further enhanced by a\nbest-plan-selection strategy, substantially boosting the planning success rate.\nMoreover, we propose projecting point clouds, derived from RGB-D inputs, onto\n2D grid-based bird-eye-view maps via semantic segmentation, generalizing to 3D\nenvironments.", + "During inference, our policy is further enhanced by a\nbest-plan-selection strategy, substantially boosting the planning success rate.\nMoreover, we propose projecting point clouds, derived from RGB-D inputs, onto\n2D grid-based bird-eye-view maps via semantic segmentation, generalizing to 3D\nenvironments. This simple yet effective adaption enables zero-shot transfer\nfrom 2D-trained policy to 3D, cutting across the laborious training for 3D\npolicy, and thus certifying our versatility. Experimental results demonstrate\nour superior performance, particularly in navigating situations beyond expert\ndemonstrations, surpassing state-of-the-art autoregressive and diffusion-based\nbaselines for both 2D and 3D scenarios.", + "In Re-identification (ReID), recent advancements yield noteworthy progress in\nboth unimodal and cross-modal retrieval tasks. However, the challenge persists\nin developing a unified framework that could effectively handle varying\nmultimodal data, including RGB, infrared, sketches, and textual information.\nAdditionally, the emergence of large-scale models shows promising performance\nin various vision tasks but the foundation model in ReID is still blank. In\nresponse to these challenges, a novel multimodal learning paradigm for ReID is\nintroduced, referred to as All-in-One (AIO), which harnesses a frozen\npre-trained big model as an encoder, enabling effective multimodal retrieval\nwithout additional fine-tuning. The diverse multimodal data in AIO are\nseamlessly tokenized into a unified space, allowing the modality-shared frozen\nencoder to extract identity-consistent features comprehensively across all\nmodalities. Furthermore, a meticulously crafted ensemble of cross-modality\nheads is designed to guide the learning trajectory. AIO is the \\textbf{first}\nframework to perform all-in-one ReID, encompassing four commonly used\nmodalities.", + "Furthermore, a meticulously crafted ensemble of cross-modality\nheads is designed to guide the learning trajectory. 
AIO is the \\textbf{first}\nframework to perform all-in-one ReID, encompassing four commonly used\nmodalities. Experiments on cross-modal and multimodal ReID reveal that AIO not\nonly adeptly handles various modal data but also excels in challenging\ncontexts, showcasing exceptional performance in zero-shot and domain\ngeneralization scenarios.", + "Steganography is the art of hiding secret data into the cover media for\ncovert communication. In recent years, more and more deep neural network\n(DNN)-based steganographic schemes are proposed to train steganographic\nnetworks for secret embedding and recovery, which are shown to be promising.\nCompared with the handcrafted steganographic tools, steganographic networks\ntend to be large in size. It raises concerns on how to imperceptibly and\neffectively transmit these networks to the sender and receiver to facilitate\nthe covert communication. To address this issue, we propose in this paper a\nPurified and Unified Steganographic Network (PUSNet). It performs an ordinary\nmachine learning task in a purified network, which could be triggered into\nsteganographic networks for secret embedding or recovery using different keys.\nWe formulate the construction of the PUSNet into a sparse weight filling\nproblem to flexibly switch between the purified and steganographic networks. We\nfurther instantiate our PUSNet as an image denoising network with two\nsteganographic networks concealed for secret image embedding and recovery.", + "We formulate the construction of the PUSNet into a sparse weight filling\nproblem to flexibly switch between the purified and steganographic networks. We\nfurther instantiate our PUSNet as an image denoising network with two\nsteganographic networks concealed for secret image embedding and recovery.\nComprehensive experiments demonstrate that our PUSNet achieves good performance\non secret image embedding, secret image recovery, and image denoising in a\nsingle architecture. It is also shown to be capable of imperceptibly carrying\nthe steganographic networks in a purified network. Code is available at\n\\url{https://github.com/albblgb/PUSNet}", + "Test-time adaptation (TTA) aims to improve model generalizability when test\ndata diverges from training distribution, offering the distinct advantage of\nnot requiring access to training data and processes, especially valuable in the\ncontext of large pre-trained models. However, current TTA methods fail to\naddress the fundamental issue: covariate shift, i.e., the decreased\ngeneralizability can be attributed to the model's reliance on the marginal\ndistribution of the training data, which may impair model calibration and\nintroduce confirmation bias. To address this, we propose a novel energy-based\nperspective, enhancing the model's perception of target data distributions\nwithout requiring access to training data or processes. Building on this\nperspective, we introduce $\\textbf{T}$est-time $\\textbf{E}$nergy\n$\\textbf{A}$daptation ($\\textbf{TEA}$), which transforms the trained classifier\ninto an energy-based model and aligns the model's distribution with the test\ndata's, enhancing its ability to perceive test distributions and thus improving\noverall generalizability. 
Extensive experiments across multiple tasks,\nbenchmarks and architectures demonstrate TEA's superior generalization\nperformance against state-of-the-art methods.", + "Extensive experiments across multiple tasks,\nbenchmarks and architectures demonstrate TEA's superior generalization\nperformance against state-of-the-art methods. Further in-depth analyses reveal\nthat TEA can equip the model with a comprehensive perception of test\ndistribution, ultimately paving the way toward improved generalization and\ncalibration.", + "This paper studies the problem of structured 3D reconstruction using\nwireframes that consist of line segments and junctions, focusing on the\ncomputation of structured boundary geometries of scenes. Instead of leveraging\nmatching-based solutions from 2D wireframes (or line segments) for 3D wireframe\nreconstruction as done in prior arts, we present NEAT, a rendering-distilling\nformulation using neural fields to represent 3D line segments with 2D\nobservations, and bipartite matching for perceiving and distilling of a sparse\nset of 3D global junctions. The proposed {NEAT} enjoys the joint optimization\nof the neural fields and the global junctions from scratch, using\nview-dependent 2D observations without precomputed cross-view feature matching.\nComprehensive experiments on the DTU and BlendedMVS datasets demonstrate our\nNEAT's superiority over state-of-the-art alternatives for 3D wireframe\nreconstruction.", + "Comprehensive experiments on the DTU and BlendedMVS datasets demonstrate our\nNEAT's superiority over state-of-the-art alternatives for 3D wireframe\nreconstruction. Moreover, the distilled 3D global junctions by NEAT, are a\nbetter initialization than SfM points, for the recently-emerged 3D Gaussian\nSplatting for high-fidelity novel view synthesis using about 20 times fewer\ninitial 3D points. Project page: \\url{https://xuenan.net/neat}.", + "Multi-modal Large Language Models (MLLMs) have shown remarkable capabilities\nin various multi-modal tasks. Nevertheless, their performance in fine-grained\nimage understanding tasks is still limited. To address this issue, this paper\nproposes a new framework to enhance the fine-grained image understanding\nabilities of MLLMs. Specifically, we present a new method for constructing the\ninstruction tuning dataset at a low cost by leveraging annotations in existing\ndatasets. A self-consistent bootstrapping method is also introduced to extend\nexisting dense object annotations into high-quality\nreferring-expression-bounding-box pairs. These methods enable the generation of\nhigh-quality instruction data which includes a wide range of fundamental\nabilities essential for fine-grained image perception. Moreover, we argue that\nthe visual encoder should be tuned during instruction tuning to mitigate the\ngap between full image perception and fine-grained image perception.\nExperimental results demonstrate the superior performance of our method. For\ninstance, our model exhibits a 5.2% accuracy improvement over Qwen-VL on GQA\nand surpasses the accuracy of Kosmos-2 by 24.7% on RefCOCO_val.", + "Experimental results demonstrate the superior performance of our method. For\ninstance, our model exhibits a 5.2% accuracy improvement over Qwen-VL on GQA\nand surpasses the accuracy of Kosmos-2 by 24.7% on RefCOCO_val. We have also\nattained the top rank on the leaderboard of MMBench. This promising performance\nis achieved by training on only publicly available data, making it easily\nreproducible. 
The models, datasets, and codes are publicly available at\nhttps://github.com/SY-Xuan/Pink.", + "Recovering sharp images from dual-pixel (DP) pairs with disparity-dependent\nblur is a challenging task.~Existing blur map-based deblurring methods have\ndemonstrated promising results. In this paper, we propose, to the best of our\nknowledge, the first framework that introduces the contrastive language-image\npre-training framework (CLIP) to accurately estimate the blur map from a DP\npair unsupervisedly. To achieve this, we first carefully design text prompts to\nenable CLIP to understand blur-related geometric prior knowledge from the DP\npair. Then, we propose a format to input a stereo DP pair to CLIP without any\nfine-tuning, despite the fact that CLIP is pre-trained on monocular images.\nGiven the estimated blur map, we introduce a blur-prior attention block, a\nblur-weighting loss, and a blur-aware loss to recover the all-in-focus image.\nOur method achieves state-of-the-art performance in extensive experiments (see\nFig.~\\ref{fig:teaser}).", + "Multimodal summarization with multimodal output (MSMO) has emerged as a\npromising research direction. Nonetheless, numerous limitations exist within\nexisting public MSMO datasets, including insufficient maintenance, data\ninaccessibility, limited size, and the absence of proper categorization, which\npose significant challenges. To address these challenges and provide a\ncomprehensive dataset for this new direction, we have meticulously curated the\n\\textbf{MMSum} dataset. Our new dataset features (1) Human-validated summaries\nfor both video and textual content, providing superior human instruction and\nlabels for multimodal learning. (2) Comprehensively and meticulously arranged\ncategorization, spanning 17 principal categories and 170 subcategories to\nencapsulate a diverse array of real-world scenarios. (3) Benchmark tests\nperformed on the proposed dataset to assess various tasks and methods,\nincluding \\textit{video summarization}, \\textit{text summarization}, and\n\\textit{multimodal summarization}.", + "(3) Benchmark tests\nperformed on the proposed dataset to assess various tasks and methods,\nincluding \\textit{video summarization}, \\textit{text summarization}, and\n\\textit{multimodal summarization}. To champion accessibility and collaboration,\nwe will release the \\textbf{MMSum} dataset and the data collection tool as\nfully open-source resources, fostering transparency and accelerating future\ndevelopments. Our project website can be found\nat~\\url{https://mmsum-dataset.github.io/}", + "Multi-modal Large Language Models (MLLMs) tuned on machine-generated\ninstruction-following data have demonstrated remarkable performance in various\nmulti-modal understanding and generation tasks. However, the hallucinations\ninherent in machine-generated data, which could lead to hallucinatory outputs\nin MLLMs, remain under-explored. This work aims to investigate various\nhallucinations (i.e., object, relation, attribute hallucinations) and mitigate\nthose hallucinatory toxicities in large-scale machine-generated visual\ninstruction datasets. Drawing on the human ability to identify factual errors,\nwe present a novel hallucination detection and elimination framework,\nHalluciDoctor, based on the cross-checking paradigm. 
We use our framework to\nidentify and eliminate hallucinations in the training data automatically.\nInterestingly, HalluciDoctor also indicates that spurious correlations arising\nfrom long-tail object co-occurrences contribute to hallucinations. Based on\nthat, we execute counterfactual visual instruction expansion to balance data\ndistribution, thereby enhancing MLLMs' resistance to hallucinations.\nComprehensive experiments on hallucination evaluation benchmarks show that our\nmethod successfully mitigates 44.6% hallucinations relatively and maintains\ncompetitive performance compared to LLaVA.", + "Comprehensive experiments on hallucination evaluation benchmarks show that our\nmethod successfully mitigates 44.6% hallucinations relatively and maintains\ncompetitive performance compared to LLaVA. The data and code for this paper are\npublicly available. \\url{https://github.com/Yuqifan1117/HalluciDoctor}.", + "Few-Shot Class Incremental Learning (FSCIL) is a task that requires a model\nto learn new classes incrementally without forgetting when only a few samples\nfor each class are given. FSCIL encounters two significant challenges:\ncatastrophic forgetting and overfitting, and these challenges have driven prior\nstudies to primarily rely on shallow models, such as ResNet-18. Even though\ntheir limited capacity can mitigate both forgetting and overfitting issues, it\nleads to inadequate knowledge transfer during few-shot incremental sessions. In\nthis paper, we argue that large models such as vision and language transformers\npre-trained on large datasets can be excellent few-shot incremental learners.\nTo this end, we propose a novel FSCIL framework called PriViLege, Pre-trained\nVision and Language transformers with prompting functions and knowledge\ndistillation. Our framework effectively addresses the challenges of\ncatastrophic forgetting and overfitting in large models through new pre-trained\nknowledge tuning (PKT) and two losses: entropy-based divergence loss and\nsemantic knowledge distillation loss.", + "Our framework effectively addresses the challenges of\ncatastrophic forgetting and overfitting in large models through new pre-trained\nknowledge tuning (PKT) and two losses: entropy-based divergence loss and\nsemantic knowledge distillation loss. Experimental results show that the\nproposed PriViLege significantly outperforms the existing state-of-the-art\nmethods with a large margin, e.g., +9.38% in CUB200, +20.58% in CIFAR-100, and\n+13.36% in miniImageNet. Our implementation code is available at\nhttps://github.com/KHU-AGI/PriViLege.", + "In this paper, we present a method to reconstruct the world and multiple\ndynamic humans in 3D from a monocular video input. As a key idea, we represent\nboth the world and multiple humans via the recently emerging 3D Gaussian\nSplatting (3D-GS) representation, enabling to conveniently and efficiently\ncompose and render them together. In particular, we address the scenarios with\nseverely limited and sparse observations in 3D human reconstruction, a common\nchallenge encountered in the real world. To tackle this challenge, we introduce\na novel approach to optimize the 3D-GS representation in a canonical space by\nfusing the sparse cues in the common space, where we leverage a pre-trained 2D\ndiffusion model to synthesize unseen views while keeping the consistency with\nthe observed 2D appearances. 
We demonstrate our method can reconstruct\nhigh-quality animatable 3D humans in various challenging examples, in the\npresence of occlusion, image crops, few-shot, and extremely sparse\nobservations.", + "We demonstrate our method can reconstruct\nhigh-quality animatable 3D humans in various challenging examples, in the\npresence of occlusion, image crops, few-shot, and extremely sparse\nobservations. After reconstruction, our method is capable of not only rendering\nthe scene in any novel views at arbitrary time instances, but also editing the\n3D scene by removing individual humans or applying different motions for each\nhuman. Through various experiments, we demonstrate the quality and efficiency\nof our methods over alternative existing approaches.", + "Personalization has emerged as a prominent aspect within the field of\ngenerative AI, enabling the synthesis of individuals in diverse contexts and\nstyles, while retaining high-fidelity to their identities. However, the process\nof personalization presents inherent challenges in terms of time and memory\nrequirements. Fine-tuning each personalized model needs considerable GPU time\ninvestment, and storing a personalized model per subject can be demanding in\nterms of storage capacity. To overcome these challenges, we propose\nHyperDreamBooth-a hypernetwork capable of efficiently generating a small set of\npersonalized weights from a single image of a person. By composing these\nweights into the diffusion model, coupled with fast finetuning, HyperDreamBooth\ncan generate a person's face in various contexts and styles, with high subject\ndetails while also preserving the model's crucial knowledge of diverse styles\nand semantic modifications. Our method achieves personalization on faces in\nroughly 20 seconds, 25x faster than DreamBooth and 125x faster than Textual\nInversion, using as few as one reference image, with the same quality and style\ndiversity as DreamBooth.", + "Our method achieves personalization on faces in\nroughly 20 seconds, 25x faster than DreamBooth and 125x faster than Textual\nInversion, using as few as one reference image, with the same quality and style\ndiversity as DreamBooth. Also our method yields a model that is 10000x smaller\nthan a normal DreamBooth model. Project page: https://hyperdreambooth.github.io", + "This paper studies the problem of language-guided reflection separation,\nwhich aims at addressing the ill-posed reflection separation problem by\nintroducing language descriptions to provide layer content. We propose a\nunified framework to solve this problem, which leverages the cross-attention\nmechanism with contrastive learning strategies to construct the correspondence\nbetween language descriptions and image layers. A gated network design and a\nrandomized training strategy are employed to tackle the recognizable layer\nambiguity. The effectiveness of the proposed method is validated by the\nsignificant performance advantage over existing reflection separation methods\non both quantitative and qualitative comparisons.", + "Humans can infer 3D structure from 2D images of an object based on past\nexperience and improve their 3D understanding as they see more images. Inspired\nby this behavior, we introduce SAP3D, a system for 3D reconstruction and novel\nview synthesis from an arbitrary number of unposed images. Given a few unposed\nimages of an object, we adapt a pre-trained view-conditioned diffusion model\ntogether with the camera poses of the images via test-time fine-tuning. 
The\nadapted diffusion model and the obtained camera poses are then utilized as\ninstance-specific priors for 3D reconstruction and novel view synthesis. We\nshow that as the number of input images increases, the performance of our\napproach improves, bridging the gap between optimization-based prior-less 3D\nreconstruction methods and single-image-to-3D diffusion-based methods. We\ndemonstrate our system on real images as well as standard synthetic benchmarks.\nOur ablation studies confirm that this adaption behavior is key for more\naccurate 3D understanding.", + "Sparse LiDAR point clouds cause severe loss of detail of static structures\nand reduce the density of static points available for navigation. Reduced\ndensity can be detrimental to navigation under several scenarios. We observe\nthat despite high sparsity, in most cases, the global topology of LiDAR\noutlining the static structures can be inferred. We utilize this property to\nobtain a backbone skeleton of a LiDAR scan in the form of a single connected\ncomponent that is a proxy to its global topology. We utilize the backbone to\naugment new points along static structures to overcome sparsity. Newly\nintroduced points could correspond to existing static structures or to static\npoints that were earlier obstructed by dynamic objects. To the best of our\nknowledge, we are the first to use such a strategy for sparse LiDAR point\nclouds. Existing solutions close to our approach fail to identify and preserve\nthe global static LiDAR topology and generate sub-optimal points. We propose\nGLiDR, a Graph Generative network that is topologically regularized using\n0-dimensional Persistent Homology ($\\mathcal{PH}$) constraints.", + "Existing solutions close to our approach fail to identify and preserve\nthe global static LiDAR topology and generate sub-optimal points. We propose\nGLiDR, a Graph Generative network that is topologically regularized using\n0-dimensional Persistent Homology ($\\mathcal{PH}$) constraints. This enables\nGLiDR to introduce newer static points along a topologically consistent global\nstatic LiDAR backbone. GLiDR generates precise static points using $32\\times$\nsparser dynamic scans and performs better than the baselines across three\ndatasets. GLiDR generates a valuable byproduct - an accurate binary\nsegmentation mask of static and dynamic objects that are helpful for navigation\nplanning and safety in constrained environments. The newly introduced static\npoints allow GLiDR to outperform LiDAR-based navigation using SLAM in several\nsettings. Source code is available at https://kshitijbhat.github.io/glidr", + "Weakly supervised semantic segmentation (WSSS) with image-level labels aims\nto achieve segmentation tasks without dense annotations. However, attributed to\nthe frequent coupling of co-occurring objects and the limited supervision from\nimage-level labels, the challenging co-occurrence problem is widely present and\nleads to false activation of objects in WSSS. In this work, we devise a\n'Separate and Conquer' scheme SeCo to tackle this issue from dimensions of\nimage space and feature space. In the image space, we propose to 'separate' the\nco-occurring objects with image decomposition by subdividing images into\npatches. Importantly, we assign each patch a category tag from Class Activation\nMaps (CAMs), which spatially helps remove the co-context bias and guide the\nsubsequent representation. 
In the feature space, we propose to 'conquer' the\nfalse activation by enhancing semantic representation with multi-granularity\nknowledge contrast. To this end, a dual-teacher-single-student architecture is\ndesigned and tag-guided contrast is conducted, which guarantee the correctness\nof knowledge and further facilitate the discrepancy among co-contexts.", + "To this end, a dual-teacher-single-student architecture is\ndesigned and tag-guided contrast is conducted, which guarantee the correctness\nof knowledge and further facilitate the discrepancy among co-contexts. We\nstreamline the multi-staged WSSS pipeline end-to-end and tackle this issue\nwithout external supervision. Extensive experiments are conducted, validating\nthe efficiency of our method and the superiority over previous single-staged\nand even multi-staged competitors on PASCAL VOC and MS COCO. Code is available\nat https://github.com/zwyang6/SeCo.git.", + "Quantized neural networks employ reduced precision representations for both\nweights and activations. This quantization process significantly reduces the\nmemory requirements and computational complexity of the network. Binary Neural\nNetworks (BNNs) are the extreme quantization case, representing values with\njust one bit. Since the sign function is typically used to map real values to\nbinary values, smooth approximations are introduced to mimic the gradients\nduring error backpropagation. Thus, the mismatch between the forward and\nbackward models corrupts the direction of the gradient, causing training\ninconsistency problems and performance degradation. In contrast to current BNN\napproaches, we propose to employ a binary periodic (BiPer) function during\nbinarization. Specifically, we use a square wave for the forward pass to obtain\nthe binary values and employ the trigonometric sine function with the same\nperiod of the square wave as a differentiable surrogate during the backward\npass. We demonstrate that this approach can control the quantization error by\nusing the frequency of the periodic function and improves network performance.", + "We demonstrate that this approach can control the quantization error by\nusing the frequency of the periodic function and improves network performance.\nExtensive experiments validate the effectiveness of BiPer in benchmark datasets\nand network architectures, with improvements of up to 1% and 0.69% with respect\nto state-of-the-art methods in the classification task over CIFAR-10 and\nImageNet, respectively. Our code is publicly available at\nhttps://github.com/edmav4/BiPer.", + "This work presents AnyDoor, a diffusion-based image generator with the power\nto teleport target objects to new scenes at user-specified locations in a\nharmonious way. Instead of tuning parameters for each object, our model is\ntrained only once and effortlessly generalizes to diverse object-scene\ncombinations at the inference stage. Such a challenging zero-shot setting\nrequires an adequate characterization of a certain object. To this end, we\ncomplement the commonly used identity feature with detail features, which are\ncarefully designed to maintain texture details yet allow versatile local\nvariations (e.g., lighting, orientation, posture, etc.), supporting the object\nin favorably blending with different surroundings. We further propose to borrow\nknowledge from video datasets, where we can observe various forms (i.e., along\nthe time axis) of a single object, leading to stronger model generalizability\nand robustness. 
Extensive experiments demonstrate the superiority of our\napproach over existing alternatives as well as its great potential in\nreal-world applications, such as virtual try-on and object moving. Project page\nis https://damo-vilab.github.io/AnyDoor-Page/.", + "The prevalent approaches of unsupervised 3D object detection follow\ncluster-based pseudo-label generation and iterative self-training processes.\nHowever, the challenge arises due to the sparsity of LiDAR scans, which leads\nto pseudo-labels with erroneous size and position, resulting in subpar\ndetection performance. To tackle this problem, this paper introduces a\nCommonsense Prototype-based Detector, termed CPD, for unsupervised 3D object\ndetection. CPD first constructs Commonsense Prototype (CProto) characterized by\nhigh-quality bounding box and dense points, based on commonsense intuition.\nSubsequently, CPD refines the low-quality pseudo-labels by leveraging the size\nprior from CProto. Furthermore, CPD enhances the detection accuracy of sparsely\nscanned objects by the geometric knowledge from CProto. CPD outperforms\nstate-of-the-art unsupervised 3D detectors on Waymo Open Dataset (WOD),\nPandaSet, and KITTI datasets by a large margin.", + "CPD outperforms\nstate-of-the-art unsupervised 3D detectors on Waymo Open Dataset (WOD),\nPandaSet, and KITTI datasets by a large margin. Besides, by training CPD on WOD\nand testing on KITTI, CPD attains 90.85% and 81.01% 3D Average Precision on\neasy and moderate car classes, respectively. These achievements position CPD in\nclose proximity to fully supervised detectors, highlighting the significance of\nour method. The code will be available at https://github.com/hailanyi/CPD.", + "Vision-and-language navigation (VLN) enables the agent to navigate to a\nremote location following the natural language instruction in 3D environments.\nAt each navigation step, the agent selects from possible candidate locations\nand then makes the move. For better navigation planning, the lookahead\nexploration strategy aims to effectively evaluate the agent's next action by\naccurately anticipating the future environment of candidate locations. To this\nend, some existing works predict RGB images for future environments, while this\nstrategy suffers from image distortion and high computational cost. To address\nthese issues, we propose the pre-trained hierarchical neural radiance\nrepresentation model (HNR) to produce multi-level semantic features for future\nenvironments, which are more robust and efficient than pixel-wise RGB\nreconstruction. Furthermore, with the predicted future environmental\nrepresentations, our lookahead VLN model is able to construct the navigable\nfuture path tree and select the optimal path via efficient parallel evaluation.\nExtensive experiments on the VLN-CE datasets confirm the effectiveness of our\nmethod.", + "Prominent solutions for medical image segmentation are typically tailored for\nautomatic or interactive setups, posing challenges in facilitating progress\nachieved in one task to another. This also necessitates separate models for\neach task, duplicating both training time and parameters. To address the above\nissues, we introduce S2VNet, a universal framework that leverages\nSlice-to-Volume propagation to unify automatic/interactive segmentation within\na single model and one training session. Inspired by clustering-based\nsegmentation techniques, S2VNet makes full use of the slice-wise structure of\nvolumetric data by initializing cluster centers from the cluster results of the\nprevious slice.", + "Inspired by clustering-based segmentation techniques, S2VNet\nmakes full use of the slice-wise structure of volumetric data by initializing\ncluster centers from the cluster results of the previous slice. This enables\nknowledge acquired from prior slices to assist in the segmentation of the\ncurrent slice, further efficiently bridging the communication between remote\nslices using mere 2D networks. Moreover, such a framework readily accommodates\ninteractive segmentation with no architectural change, simply by initializing\ncentroids from user inputs. S2VNet distinguishes itself by swift inference\nspeeds and reduced memory consumption compared to prevailing 3D solutions. It\ncan also handle multi-class interactions with each of them serving to\ninitialize different centroids. Experiments on three benchmarks demonstrate\nS2VNet surpasses task-specified solutions on both automatic/interactive setups.", + "We introduce SynCLR, a novel approach for learning visual representations\nexclusively from synthetic images and synthetic captions, without any real\ndata. We synthesize a large dataset of image captions using LLMs, then use an\noff-the-shelf text-to-image model to generate multiple images corresponding to\neach synthetic caption. We perform visual representation learning on these\nsynthetic images via contrastive learning, treating images sharing the same\ncaption as positive pairs. The resulting representations transfer well to many\ndownstream tasks, competing favorably with other general-purpose visual\nrepresentation learners such as CLIP and DINO v2 in image classification tasks.\nFurthermore, in dense prediction tasks such as semantic segmentation, SynCLR\noutperforms previous self-supervised methods by a significant margin, e.g.,\nimproving over MAE and iBOT by 6.2 and 4.3 mIoU on ADE20k for ViT-B/16.", + "Vision Transformer (ViT) has shown high potential in video recognition, owing\nto its flexible design, adaptable self-attention mechanisms, and the efficacy\nof masked pre-training. Yet, it remains unclear how to adapt these pre-trained\nshort-term ViTs for temporal action detection (TAD) in untrimmed videos. The\nexisting works treat them as off-the-shelf feature extractors for each\nshort-trimmed snippet without capturing the fine-grained relation among\ndifferent snippets in a broader temporal context. To mitigate this issue, this\npaper focuses on designing a new mechanism for adapting these pre-trained ViT\nmodels as a unified long-form video transformer to fully unleash its modeling\npower in capturing inter-snippet relation, while still keeping low computation\noverhead and memory consumption for efficient TAD. To this end, we design\neffective cross-snippet propagation modules to gradually exchange short-term\nvideo information among different snippets from two levels.
For inner-backbone\ninformation propagation, we introduce a cross-snippet propagation strategy to\nenable multi-snippet temporal feature interaction inside the backbone. For\npost-backbone information propagation, we propose temporal transformer layers\nfor further clip-level modeling.", + "For inner-backbone\ninformation propagation, we introduce a cross-snippet propagation strategy to\nenable multi-snippet temporal feature interaction inside the backbone. For\npost-backbone information propagation, we propose temporal transformer layers\nfor further clip-level modeling. With the plain ViT-B pre-trained with\nVideoMAE, our end-to-end temporal action detector (ViT-TAD) yields a very\ncompetitive performance compared to previous temporal action detectors, reaching\nup to 69.5 average mAP on THUMOS14, 37.40 average mAP on ActivityNet-1.3 and 17.20\naverage mAP on FineAction.", + "Large-scale black-box models have become ubiquitous across numerous\napplications. Understanding the influence of individual training data sources\non predictions made by these models is crucial for improving their\ntrustworthiness. Current influence estimation techniques involve computing\ngradients for every training point or repeated training on different subsets.\nThese approaches face obvious computational challenges when scaled up to large\ndatasets and models.\n In this paper, we introduce and explore the Mirrored Influence Hypothesis,\nhighlighting a reciprocal nature of influence between training and test data.\nSpecifically, it suggests that evaluating the influence of training data on\ntest predictions can be reformulated as an equivalent, yet inverse problem:\nassessing how the predictions for training samples would be altered if the\nmodel were trained on specific test samples. Through both empirical and\ntheoretical validations, we demonstrate the wide applicability of our\nhypothesis. Inspired by this, we introduce a new method for estimating the\ninfluence of training data, which requires calculating gradients for specific\ntest samples, paired with a forward pass for each training point.", + "Through both empirical and\ntheoretical validations, we demonstrate the wide applicability of our\nhypothesis. Inspired by this, we introduce a new method for estimating the\ninfluence of training data, which requires calculating gradients for specific\ntest samples, paired with a forward pass for each training point. This approach\ncan capitalize on the common asymmetry in scenarios where the number of test\nsamples under concurrent examination is much smaller than the scale of the\ntraining dataset, thus gaining a significant improvement in efficiency compared\nto existing approaches.\n We demonstrate the applicability of our method across a range of scenarios,\nincluding data attribution in diffusion models, data leakage detection,\nanalysis of memorization, mislabeled data detection, and tracing behavior in\nlanguage models. Our code will be made available at\nhttps://github.com/ruoxi-jia-group/Forward-INF.", + "In rapidly-evolving domains such as autonomous driving, the use of multiple\nsensors with different modalities is crucial to ensure high operational\nprecision and stability. To correctly exploit the information provided by each\nsensor in a single common frame, it is essential for these sensors to be\naccurately calibrated.
In this paper, we leverage the ability of Neural\nRadiance Fields (NeRF) to represent different sensors modalities in a common\nvolumetric representation to achieve robust and accurate spatio-temporal sensor\ncalibration. By designing a partitioning approach based on the visible part of\nthe scene for each sensor, we formulate the calibration problem using only the\noverlapping areas. This strategy results in a more robust and accurate\ncalibration that is less prone to failure. We demonstrate that our approach\nworks on outdoor urban scenes by validating it on multiple established driving\ndatasets. Results show that our method is able to get better accuracy and\nrobustness compared to existing methods.", + "While modeling people wearing tight-fitting clothing has made great strides\nin recent years, loose-fitting clothing remains a challenge. We propose a\nmethod that delivers realistic garment models from real-world images,\nregardless of garment shape or deformation. To this end, we introduce a fitting\napproach that utilizes shape and deformation priors learned from synthetic data\nto accurately capture garment shapes and deformations, including large ones.\nNot only does our approach recover the garment geometry accurately, it also\nyields models that can be directly used by downstream applications such as\nanimation and simulation.", + "Reconstructing the viewed images from human brain activity bridges human and\ncomputer vision through the Brain-Computer Interface. The inherent variability\nin brain function between individuals leads existing literature to focus on\nacquiring separate models for each individual using their respective brain\nsignal data, ignoring commonalities between these data. In this article, we\ndevise Psychometry, an omnifit model for reconstructing images from functional\nMagnetic Resonance Imaging (fMRI) obtained from different subjects. Psychometry\nincorporates an omni mixture-of-experts (Omni MoE) module where all the experts\nwork together to capture the inter-subject commonalities, while each expert\nassociated with subject-specific parameters copes with the individual\ndifferences. Moreover, Psychometry is equipped with a retrieval-enhanced\ninference strategy, termed Ecphory, which aims to enhance the learned fMRI\nrepresentation via retrieving from prestored subject-specific memories. These\ndesigns collectively render Psychometry omnifit and efficient, enabling it to\ncapture both inter-subject commonality and individual specificity across\nsubjects.", + "These\ndesigns collectively render Psychometry omnifit and efficient, enabling it to\ncapture both inter-subject commonality and individual specificity across\nsubjects. As a result, the enhanced fMRI representations serve as conditional\nsignals to guide a generation model to reconstruct high-quality and realistic\nimages, establishing Psychometry as state-of-the-art in terms of both\nhigh-level and low-level metrics.", + "Despite significant advancements in text-to-motion synthesis, generating\nlanguage-guided human motion within 3D environments poses substantial\nchallenges. These challenges stem primarily from (i) the absence of powerful\ngenerative models capable of jointly modeling natural language, 3D scenes, and\nhuman motion, and (ii) the generative models' intensive data requirements\ncontrasted with the scarcity of comprehensive, high-quality,\nlanguage-scene-motion datasets. 
To tackle these issues, we introduce a novel\ntwo-stage framework that employs scene affordance as an intermediate\nrepresentation, effectively linking 3D scene grounding and conditional motion\ngeneration. Our framework comprises an Affordance Diffusion Model (ADM) for\npredicting explicit affordance map and an Affordance-to-Motion Diffusion Model\n(AMDM) for generating plausible human motions. By leveraging scene affordance\nmaps, our method overcomes the difficulty in generating human motion under\nmultimodal condition signals, especially when training with limited data\nlacking extensive language-scene-motion pairs. Our extensive experiments\ndemonstrate that our approach consistently outperforms all baselines on\nestablished benchmarks, including HumanML3D and HUMANISE.", + "Our extensive experiments\ndemonstrate that our approach consistently outperforms all baselines on\nestablished benchmarks, including HumanML3D and HUMANISE. Additionally, we\nvalidate our model's exceptional generalization capabilities on a specially\ncurated evaluation set featuring previously unseen descriptions and scenes.", + "Scene text images contain not only style information (font, background) but\nalso content information (character, texture). Different scene text tasks need\ndifferent information, but previous representation learning methods use tightly\ncoupled features for all tasks, resulting in sub-optimal performance. We\npropose a Disentangled Representation Learning framework (DARLING) aimed at\ndisentangling these two types of features for improved adaptability in better\naddressing various downstream tasks (choose what you really need).\nSpecifically, we synthesize a dataset of image pairs with identical style but\ndifferent content. Based on the dataset, we decouple the two types of features\nby the supervision design. Clearly, we directly split the visual representation\ninto style and content features, the content features are supervised by a text\nrecognition loss, while an alignment loss aligns the style features in the\nimage pairs. Then, style features are employed in reconstructing the\ncounterpart image via an image decoder with a prompt that indicates the\ncounterpart's content. Such an operation effectively decouples the features\nbased on their distinctive properties.", + "Then, style features are employed in reconstructing the\ncounterpart image via an image decoder with a prompt that indicates the\ncounterpart's content. Such an operation effectively decouples the features\nbased on their distinctive properties. To the best of our knowledge, this is\nthe first time in the field of scene text that disentangles the inherent\nproperties of the text images. Our method achieves state-of-the-art performance\nin Scene Text Recognition, Removal, and Editing.", + "As a significant step for human face modeling, editing, and generation, face\nlandmarking aims at extracting facial keypoints from images. A generalizable\nface landmarker is required in practice because real-world facial images, e.g.,\nthe avatars in animations and games, are often stylized in various ways.\nHowever, achieving generalizable face landmarking is challenging due to the\ndiversity of facial styles and the scarcity of labeled stylized faces. In this\nstudy, we propose a simple but effective paradigm to learn a generalizable face\nlandmarker based on labeled real human faces and unlabeled stylized faces. Our\nmethod learns the face landmarker as the key module of a conditional face\nwarper. 
Given a pair of real and stylized facial images, the conditional face\nwarper predicts a warping field from the real face to the stylized one, in\nwhich the face landmarker predicts the ending points of the warping field and\nprovides us with high-quality pseudo landmarks for the corresponding stylized\nfacial images.", + "Applying an alternating optimization strategy, we learn the face\nlandmarker to minimize $i)$ the discrepancy between the stylized faces and the\nwarped real ones and $ii)$ the prediction errors of both real and pseudo\nlandmarks. Experiments on various datasets show that our method outperforms\nexisting state-of-the-art domain adaptation methods in face landmarking tasks,\nleading to a face landmarker with better generalizability. Code is available at\nhttps://plustwo0.github.io/project-face-landmarker.", + "Directly generating scenes from satellite imagery offers exciting\npossibilities for integration into applications like games and map services.\nHowever, challenges arise from significant view changes and scene scale.\nPrevious efforts mainly focused on image or video generation, lacking\nexploration into the adaptability of scene generation for arbitrary views.\nExisting 3D generation works either operate at the object level or are\ndifficult to utilize the geometry obtained from satellite imagery. To overcome\nthese limitations, we propose a novel architecture for direct 3D scene\ngeneration by introducing diffusion models into 3D sparse representations and\ncombining them with neural rendering techniques. Specifically, our approach\ngenerates texture colors at the point level for a given geometry using a 3D\ndiffusion model first, which is then transformed into a scene representation in\na feed-forward manner. The representation can be utilized to render arbitrary\nviews which would excel in both single-frame quality and inter-frame\nconsistency. Experiments in two city-scale datasets show that our model\ndemonstrates proficiency in generating photo-realistic street-view image\nsequences and cross-view urban scenes from satellite imagery.", + "We introduce Control4D, an innovative framework for editing dynamic 4D\nportraits using text instructions. Our method addresses the prevalent\nchallenges in 4D editing, notably the inefficiencies of existing 4D\nrepresentations and the inconsistent editing effect caused by diffusion-based\neditors. We first propose GaussianPlanes, a novel 4D representation that makes\nGaussian Splatting more structured by applying plane-based decomposition in 3D\nspace and time. This enhances both efficiency and robustness in 4D editing.\nFurthermore, we propose to leverage a 4D generator to learn a more continuous\ngeneration space from inconsistent edited images produced by the\ndiffusion-based editor, which effectively improves the consistency and quality\nof 4D editing. Comprehensive evaluation demonstrates the superiority of\nControl4D, including significantly reduced training time, high-quality\nrendering, and spatial-temporal consistency in 4D portrait editing. The link to\nour project website is https://control4darxiv.github.io.", + "`3D Semantic Scene Completion (SSC) has emerged as a nascent and pivotal\nundertaking in autonomous driving, aiming to predict voxel occupancy within\nvolumetric scenes. However, prevailing methodologies primarily focus on\nvoxel-wise feature aggregation, while neglecting instance semantics and scene\ncontext. 
In this paper, we present a novel paradigm termed Symphonies\n(Scene-from-Insts), that delves into the integration of instance queries to\norchestrate 2D-to-3D reconstruction and 3D scene modeling. Leveraging our\nproposed Serial Instance-Propagated Attentions, Symphonies dynamically encodes\ninstance-centric semantics, facilitating intricate interactions between\nimage-based and volumetric domains. Simultaneously, Symphonies enables holistic\nscene comprehension by capturing context through the efficient fusion of\ninstance queries, alleviating geometric ambiguity such as occlusion and\nperspective errors through contextual scene reasoning. Experimental results\ndemonstrate that Symphonies achieves state-of-the-art performance on\nchallenging benchmarks SemanticKITTI and SSCBench-KITTI-360, yielding\nremarkable mIoU scores of 15.04 and 18.58, respectively.", + "Experimental results\ndemonstrate that Symphonies achieves state-of-the-art performance on\nchallenging benchmarks SemanticKITTI and SSCBench-KITTI-360, yielding\nremarkable mIoU scores of 15.04 and 18.58, respectively. These results showcase\nthe paradigm's promising advancements. The code is available at\nhttps://github.com/hustvl/Symphonies.", + "Recent image tone adjustment (or enhancement) approaches have predominantly\nadopted supervised learning for learning human-centric perceptual assessment.\nHowever, these approaches are constrained by intrinsic challenges of supervised\nlearning. Primarily, the requirement for expertly-curated or retouched images\nescalates the data acquisition expenses. Moreover, their coverage of target\nstyle is confined to stylistic variants inferred from the training data. To\nsurmount the above challenges, we propose an unsupervised learning-based\napproach for text-based image tone adjustment method, CLIPtone, that extends an\nexisting image enhancement method to accommodate natural language descriptions.\nSpecifically, we design a hyper-network to adaptively modulate the pretrained\nparameters of the backbone model based on text description. To assess whether\nthe adjusted image aligns with the text description without ground truth image,\nwe utilize CLIP, which is trained on a vast set of language-image pairs and\nthus encompasses knowledge of human perception. The major advantages of our\napproach are three fold: (i) minimal data collection expenses, (ii) support for\na range of adjustments, and (iii) the ability to handle novel text descriptions\nunseen in training.", + "The major advantages of our\napproach are three fold: (i) minimal data collection expenses, (ii) support for\na range of adjustments, and (iii) the ability to handle novel text descriptions\nunseen in training. Our approach's efficacy is demonstrated through\ncomprehensive experiments, including a user study.", + "Currently, machine learning-based methods for remote sensing pansharpening\nhave progressed rapidly. However, existing pansharpening methods often do not\nfully exploit differentiating regional information in non-local spaces, thereby\nlimiting the effectiveness of the methods and resulting in redundant learning\nparameters. In this paper, we introduce a so-called content-adaptive non-local\nconvolution (CANConv), a novel method tailored for remote sensing image\npansharpening. 
Specifically, CANConv employs adaptive convolution, ensuring\nspatial adaptability, and incorporates non-local self-similarity through the\nsimilarity relationship partition (SRP) and the partition-wise adaptive\nconvolution (PWAC) sub-modules. Furthermore, we propose a corresponding\nnetwork architecture, called CANNet, which mainly utilizes the multi-scale\nself-similarity. Extensive experiments demonstrate the superior performance of\nCANConv, compared with recent promising fusion methods. Besides, we\nsubstantiate the method's effectiveness through visualization, ablation\nexperiments, and comparison with existing methods on multiple test sets. The\nsource code is publicly available at https://github.com/duanyll/CANConv.", + "Vector-Quantized Image Modeling (VQIM) is a fundamental research problem in\nimage synthesis, which aims to represent an image with a discrete token\nsequence. Existing studies effectively address this problem by learning a\ndiscrete codebook from scratch and in a code-independent manner to quantize\ncontinuous representations into discrete tokens. However, learning a codebook\nfrom scratch and in a code-independent manner is highly challenging, which may\nbe a key cause of codebook collapse, i.e., some code vectors are rarely\noptimized, without regard to the relationships between codes or to good\ncodebook priors, and eventually die off. In this paper, inspired by pretrained\nlanguage models, we find that these language models have actually pretrained a\nsuperior codebook via large text corpora, but such information is\nrarely exploited in VQIM. To this end, we propose a novel codebook transfer\nframework with part-of-speech, called VQCT, which aims to transfer a\nwell-trained codebook from pretrained language models to VQIM for robust\ncodebook learning. Specifically, we first introduce a pretrained codebook from\nlanguage models and part-of-speech knowledge as priors.", + "Specifically, we first introduce a pretrained codebook from\nlanguage models and part-of-speech knowledge as priors. Then, we construct a\nvision-related codebook with these priors for achieving codebook transfer.\nFinally, a novel codebook transfer network is designed to exploit abundant\nsemantic relationships between codes contained in pretrained codebooks for\nrobust VQIM codebook learning. Experimental results on four datasets show that\nour VQCT method achieves superior VQIM performance over previous\nstate-of-the-art methods.", + "Colorizing line art is a pivotal task in the production of hand-drawn cel\nanimation. This typically involves digital painters using a paint bucket tool\nto manually color each segment enclosed by lines, based on RGB values\npredetermined by a color designer. This frame-by-frame process is both arduous\nand time-intensive. Current automated methods mainly focus on segment matching.\nThis technique migrates colors from a reference to the target frame by aligning\nfeatures within line-enclosed segments across frames. However, issues like\nocclusion and wrinkles in animations often disrupt these direct\ncorrespondences, leading to mismatches. In this work, we introduce a new\nlearning-based inclusion matching pipeline, which directs the network to\ncomprehend the inclusion relationships between segments rather than relying\nsolely on direct visual correspondences. Our method features a two-stage\npipeline that integrates a coarse color warping module with an inclusion\nmatching module, enabling more nuanced and accurate colorization.
To facilitate\nthe training of our network, we also develop a unique dataset, referred to as\nPaintBucket-Character. This dataset includes rendered line arts alongside their\ncolorized counterparts, featuring various 3D characters.", + "To facilitate\nthe training of our network, we also develop a unique dataset, referred to as\nPaintBucket-Character. This dataset includes rendered line arts alongside their\ncolorized counterparts, featuring various 3D characters. Extensive experiments\ndemonstrate the effectiveness and superiority of our method over existing\ntechniques.", + "Scene simulation in autonomous driving has gained significant attention\nbecause of its huge potential for generating customized data. However, existing\neditable scene simulation approaches face limitations in terms of user\ninteraction efficiency, multi-camera photo-realistic rendering and external\ndigital assets integration. To address these challenges, this paper introduces\nChatSim, the first system that enables editable photo-realistic 3D driving\nscene simulations via natural language commands with external digital assets.\nTo enable editing with high command flexibility, ChatSim leverages a large\nlanguage model (LLM) agent collaboration framework. To generate photo-realistic\noutcomes, ChatSim employs a novel multi-camera neural radiance field method.\nFurthermore, to unleash the potential of extensive high-quality digital assets,\nChatSim employs a novel multi-camera lighting estimation method to achieve\nscene-consistent assets' rendering. Our experiments on Waymo Open Dataset\ndemonstrate that ChatSim can handle complex language commands and generate\ncorresponding photo-realistic scene videos.", + "Inspired by the long-range modeling ability of ViTs, large-kernel\nconvolutions are widely studied and adopted recently to enlarge the receptive\nfield and improve model performance, like the remarkable work ConvNeXt which\nemploys 7x7 depthwise convolution. Although such a depthwise operator only\nconsumes a few FLOPs, it largely harms the model efficiency on powerful\ncomputing devices due to the high memory access costs. For example, ConvNeXt-T\nhas similar FLOPs to ResNet-50 but only achieves 60% of the throughput when\ntrained on A100 GPUs with full precision. Although reducing the kernel size of ConvNeXt\ncan improve speed, it results in significant performance degradation. It is\nstill unclear how to speed up large-kernel-based CNN models while preserving\ntheir performance. To tackle this issue, inspired by Inceptions, we propose to\ndecompose large-kernel depthwise convolution into four parallel branches along\nthe channel dimension, i.e. a small square kernel, two orthogonal band kernels, and\nan identity mapping.", + "To tackle this issue, inspired by Inceptions, we propose to\ndecompose large-kernel depthwise convolution into four parallel branches along\nthe channel dimension, i.e. a small square kernel, two orthogonal band kernels, and\nan identity mapping. With this new Inception depthwise convolution, we build a\nseries of networks, namely InceptionNeXt, which not only enjoy high throughputs\nbut also maintain competitive performance. For instance, InceptionNeXt-T\nachieves 1.6x higher training throughput than ConvNeXt-T, as well as attains a\n0.2% top-1 accuracy improvement on ImageNet-1K. We anticipate InceptionNeXt can\nserve as an economical baseline for future architecture design to reduce carbon\nfootprint.
Code is available at https://github.com/sail-sg/inceptionnext.", + "Temporal grounding of text descriptions in videos is a central problem in\nvision-language learning and video understanding. Existing methods often\nprioritize accuracy over scalability -- they have been optimized for grounding\nonly a few text queries within short videos, and fail to scale up to long\nvideos with hundreds of queries. In this paper, we study the effect of\ncross-modal fusion on the scalability of video grounding models. Our analysis\nestablishes late fusion as a more cost-effective fusion scheme for long-form\nvideos with many text queries. Moreover, it leads us to a novel, video-centric\nsampling scheme for efficient training. Based on these findings, we present\nSnAG, a simple baseline for scalable and accurate video grounding. Without\nbells and whistles, SnAG is 43% more accurate and 1.5x faster than CONE, a\nstate of the art for long-form video grounding on the challenging MAD dataset,\nwhile achieving highly competitive results on short videos.", + "Unsupervised object-centric learning aims to decompose scenes into\ninterpretable object entities, termed slots. Slot-based auto-encoders stand out\nas a prominent method for this task. Within them, crucial aspects include\nguiding the encoder to generate object-specific slots and ensuring the decoder\nutilizes them during reconstruction. This work introduces two novel techniques,\n(i) an attention-based self-training approach, which distills superior\nslot-based attention masks from the decoder to the encoder, enhancing object\nsegmentation, and (ii) an innovative patch-order permutation strategy for\nautoregressive transformers that strengthens the role of slot vectors in\nreconstruction. The effectiveness of these strategies is showcased\nexperimentally. The combined approach significantly surpasses prior slot-based\nautoencoder methods in unsupervised object segmentation, especially with\ncomplex real-world images. We provide the implementation code at\nhttps://github.com/gkakogeorgiou/spot .", + "For human-centric large-scale scenes, fine-grained modeling for 3D human\nglobal pose and shape is significant for scene understanding and can benefit\nmany real-world applications. In this paper, we present LiveHPS, a novel\nsingle-LiDAR-based approach for scene-level human pose and shape estimation\nwithout any limitation of light conditions and wearable devices. In particular,\nwe design a distillation mechanism to mitigate the distribution-varying effect\nof LiDAR point clouds and exploit the temporal-spatial geometric and dynamic\ninformation existing in consecutive frames to solve the occlusion and noise\ndisturbance. LiveHPS, with its efficient configuration and high-quality output,\nis well-suited for real-world applications. Moreover, we propose a huge human\nmotion dataset, named FreeMotion, which is collected in various scenarios with\ndiverse human poses, shapes and translations. It consists of multi-modal and\nmulti-view acquisition data from calibrated and synchronized LiDARs, cameras,\nand IMUs. Extensive experiments on our new dataset and other public datasets\ndemonstrate the SOTA performance and robustness of our approach. We will\nrelease our code and dataset soon.", + "Semantic segmentation models, while effective for in-distribution categories,\nface challenges in real-world deployment due to encountering\nout-of-distribution (OoD) objects. Detecting these OoD objects is crucial for\nsafety-critical applications. 
Existing methods rely on anomaly scores, but\nchoosing a suitable threshold for generating masks presents difficulties and\ncan lead to fragmentation and inaccuracy. This paper introduces a method to\nconvert anomaly \\textbf{S}core \\textbf{T}o segmentation \\textbf{M}ask, called\nS2M, a simple and effective framework for OoD detection in semantic\nsegmentation. Unlike assigning anomaly scores to pixels, S2M directly segments\nthe entire OoD object. By transforming anomaly scores into prompts for a\npromptable segmentation model, S2M eliminates the need for threshold selection.\nExtensive experiments demonstrate that S2M outperforms the state-of-the-art by\napproximately 20% in IoU and 40% in mean F1 score, on average, across various\nbenchmarks including Fishyscapes, Segment-Me-If-You-Can, and RoadAnomaly\ndatasets.", + "Underwater images are subject to intricate and diverse degradation,\ninevitably affecting the effectiveness of underwater visual tasks. However,\nmost approaches primarily operate in the raw pixel space of images, which\nlimits the exploration of the frequency characteristics of underwater images,\nleading to an inadequate utilization of deep models' representational\ncapabilities in producing high-quality images. In this paper, we introduce a\nnovel Underwater Image Enhancement (UIE) framework, named WF-Diff, designed to\nfully leverage the characteristics of frequency domain information and\ndiffusion models. WF-Diff consists of two detachable networks: Wavelet-based\nFourier information interaction network (WFI2-net) and Frequency Residual\nDiffusion Adjustment Module (FRDAM). With our full exploration of the frequency\ndomain information, WFI2-net aims to achieve preliminary enhancement of\nfrequency information in the wavelet space. Our proposed FRDAM can further\nrefine the high- and low-frequency information of the initial enhanced images,\nwhich can be viewed as a plug-and-play universal module to adjust the detail of\nthe underwater images. With the above techniques, our algorithm can show SOTA\nperformance on real-world underwater image datasets, and achieves competitive\nperformance in visual quality.", + "Partial-label learning (PLL) is an important weakly supervised learning\nproblem, which allows each training example to have a candidate label set\ninstead of a single ground-truth label. Identification-based methods have been\nwidely explored to tackle label ambiguity issues in PLL, which regard the true\nlabel as a latent variable to be identified. However, identifying the true\nlabels accurately and completely remains challenging, causing noise in pseudo\nlabels during model training. In this paper, we propose a new method called\nCroSel, which leverages historical predictions from the model to identify true\nlabels for most training examples. First, we introduce a cross selection\nstrategy, which enables two deep models to select true labels of partially\nlabeled data for each other. Besides, we propose a novel consistency\nregularization term called co-mix to avoid sample waste and tiny noise caused\nby false selection. In this way, CroSel can pick out the true labels of most\nexamples with high precision. Extensive experiments demonstrate the superiority\nof CroSel, which consistently outperforms previous state-of-the-art methods on\nbenchmark datasets. 
Additionally, our method achieves over 90\\% accuracy and\nquantity for selecting true labels on CIFAR-type datasets under various\nsettings.", + "Although polygon meshes have been a standard representation in geometry\nprocessing, their irregular and combinatorial nature hinders their suitability\nfor learning-based applications. In this work, we introduce a novel learnable\nmesh representation through a set of local 3D sample Points and their\nassociated Normals and Quadric error metrics (QEM) w.r.t. the underlying shape,\nwhich we denote PoNQ. A global mesh is directly derived from PoNQ by\nefficiently leveraging the knowledge of the local quadric errors. Besides\nmarking the first use of QEM within a neural shape representation, our\ncontribution guarantees both topological and geometrical properties by ensuring\nthat a PoNQ mesh does not self-intersect and is always the boundary of a\nvolume. Notably, our representation does not rely on a regular grid, is\nsupervised directly by the target surface alone, and also handles open surfaces\nwith boundaries and/or sharp features. We demonstrate the efficacy of PoNQ\nthrough a learning-based mesh prediction from SDF grids and show that our\nmethod surpasses recent state-of-the-art techniques in terms of both surface\nand edge-based metrics.", + "Humans possess the capability to comprehend diverse modalities and seamlessly\ntransfer information between them. In this work, we introduce ModaVerse, a\nMulti-modal Large Language Model (MLLM) capable of comprehending and\ntransforming content across various modalities including images, videos, and\naudio. Predominant MLLM frameworks have largely relied on the alignment of\nlatent spaces of textual and non-textual features. This alignment process,\nwhich synchronizes a language model trained on textual data with encoders and\ndecoders trained on multi-modal data, often necessitates extensive training of\nseveral projection layers in multiple stages. Inspired by LLM-as-agent\nmethodologies, we propose a novel Input/Output (I/O) alignment mechanism that\noperates directly at the level of natural language. It aligns the LLM's output\nwith the input of generative models, avoiding the complexities associated with\nlatent feature alignments, and simplifying the multiple training stages of\nexisting MLLMs into a single, efficient process. This conceptual advancement\nleads to significant reductions in both data and computational costs.", + "It aligns the LLM's output\nwith the input of generative models, avoiding the complexities associated with\nlatent feature alignments, and simplifying the multiple training stages of\nexisting MLLMs into a single, efficient process. This conceptual advancement\nleads to significant reductions in both data and computational costs. By\nconducting experiments on several benchmarks, we demonstrate that our approach\nattains comparable performance with the state of the art while achieving\nconsiderable efficiencies in data usage and training duration.", + "Action Localization is a challenging problem that combines detection and\nrecognition tasks, which are often addressed separately. State-of-the-art\nmethods rely on off-the-shelf bounding box detections pre-computed at high\nresolution, and propose transformer models that focus on the classification\ntask alone. 
Such two-stage solutions are prohibitive for real-time deployment.\nOn the other hand, single-stage methods target both tasks by devoting part of\nthe network (generally the backbone) to sharing the majority of the workload,\ncompromising performance for speed. These methods build on adding a DETR head\nwith learnable queries that, after cross- and self-attention, can be sent to\ncorresponding MLPs for detecting a person's bounding box and action. However,\nDETR-like architectures are challenging to train and can incur significant\ncomplexity.\n In this paper, we observe that \\textbf{a straight bipartite matching loss can\nbe applied to the output tokens of a vision transformer}. This results in a\nbackbone + MLP architecture that can do both tasks without the need for an extra\nencoder-decoder head and learnable queries.", + "In this paper, we observe that \\textbf{a straight bipartite matching loss can\nbe applied to the output tokens of a vision transformer}. This results in a\nbackbone + MLP architecture that can do both tasks without the need for an extra\nencoder-decoder head and learnable queries. We show that a single MViTv2-S\narchitecture trained with bipartite matching to perform both tasks surpasses\nthe same MViTv2-S when trained with RoI align on pre-computed bounding boxes.\nWith a careful design of token pooling and the proposed training pipeline, our\nBipartite-Matching Vision Transformer model, \\textbf{BMViT}, achieves +3 mAP on\nAVA2.2 w.r.t. the two-stage MViTv2-S counterpart. Code is available at\n\\href{https://github.com/IoannaNti/BMViT}{https://github.com/IoannaNti/BMViT}", + "Supernet is a core component in many recent Neural Architecture Search (NAS)\nmethods. It not only helps embody the search space but also provides a\n(relative) estimation of the final performance of candidate architectures.\nThus, it is critical that the top architectures ranked by a supernet should be\nconsistent with those ranked by true performance, which is known as the\norder-preserving ability. In this work, we analyze the order-preserving ability\non the whole search space (global) and a sub-space of top architectures\n(local), and empirically show that the local order-preserving ability of current\ntwo-stage NAS methods still needs to be improved. To rectify this, we propose a\nnovel concept of Supernet Shifting, a refined search strategy combining\narchitecture searching with supernet fine-tuning. Specifically, apart from\nevaluating, the training loss is also accumulated in searching and the supernet\nis updated every iteration. Since superior architectures are sampled more\nfrequently in evolutionary searching, the supernet is encouraged to focus on\ntop architectures, thus improving local order-preserving. Besides, a\npre-trained supernet is often un-reusable for one-shot methods.", + "Since superior architectures are sampled more\nfrequently in evolutionary searching, the supernet is encouraged to focus on\ntop architectures, thus improving local order-preserving. Besides, a\npre-trained supernet is often un-reusable for one-shot methods. We show that\nSupernet Shifting can transfer a supernet to a new dataset.\nSpecifically, the last classifier layer will be unset and trained through\nevolutionary searching.
Comprehensive experiments show that our method has\nbetter order-preserving ability and can find a dominating architecture.\nMoreover, the pre-trained supernet can be easily transferred to a new dataset\nwith no loss of performance.", + "Foundation segmentation models, while powerful, pose a significant risk: they\nenable users to effortlessly extract any objects from any digital content with\na single click, potentially leading to copyright infringement or malicious\nmisuse. To mitigate this risk, we introduce a new task \"Anything Unsegmentable\"\nto grant any image \"the right to be unsegmented\". The ambitious pursuit of the\ntask is to achieve highly transferable adversarial attacks against all\nprompt-based segmentation models, regardless of model parameterizations and\nprompts. We highlight the non-transferable and heterogeneous nature of\nprompt-specific adversarial noises. Our approach focuses on disrupting image\nencoder features to achieve prompt-agnostic attacks. Intriguingly, targeted\nfeature attacks exhibit better transferability compared to untargeted ones,\nsuggesting the optimal update direction aligns with the image manifold. Based\non these observations, we design a novel attack named Unsegment Anything by\nSimulating Deformation (UAD). Our attack optimizes a differentiable deformation\nfunction to create a target deformed image, which alters structural information\nwhile preserving achievable feature distance by adversarial example.", + "Based\non these observations, we design a novel attack named Unsegment Anything by\nSimulating Deformation (UAD). Our attack optimizes a differentiable deformation\nfunction to create a target deformed image, which alters structural information\nwhile preserving achievable feature distance by adversarial example. Extensive\nexperiments verify the effectiveness of our approach, compromising a variety of\npromptable segmentation models with different architectures and prompt\ninterfaces. We release the code at\nhttps://github.com/jiahaolu97/anything-unsegmentable.", + "Transductive inference has been widely investigated in few-shot image\nclassification, but completely overlooked in the recent, fast-growing\nliterature on adapting vision-language models like CLIP. This paper addresses\nthe transductive zero-shot and few-shot CLIP classification challenge, in which\ninference is performed jointly across a mini-batch of unlabeled query samples,\nrather than treating each instance independently. We initially construct\ninformative vision-text probability features, leading to a classification\nproblem on the unit simplex set. Inspired by Expectation-Maximization (EM), our\noptimization-based classification objective models the data probability\ndistribution for each class using a Dirichlet law. The minimization problem is\nthen tackled with a novel block Majorization-Minimization algorithm, which\nsimultaneously estimates the distribution parameters and class assignments.\nExtensive numerical experiments on 11 datasets underscore the benefits and\nefficacy of our batch inference approach. On zero-shot tasks with test batches\nof 75 samples, our approach yields near 20% improvement in ImageNet accuracy\nover CLIP's zero-shot performance. Additionally, we outperform state-of-the-art\nmethods in the few-shot setting.", + "Additionally, we outperform state-of-the-art\nmethods in the few-shot setting.
The code is available at:\nhttps://github.com/SegoleneMartin/transductive-CLIP.", + "Due to the omnipresence of Neural Radiance Fields (NeRFs), the interest\ntowards editable implicit 3D representations has surged over the last years.\nHowever, editing implicit or hybrid representations as used for NeRFs is\ndifficult due to the entanglement of appearance and geometry encoded in the\nmodel parameters. Despite these challenges, recent research has shown first\npromising steps towards photorealistic and non-photorealistic appearance edits.\nThe main open issues of related work include limited interactivity, a lack of\nsupport for local edits and large memory requirements, rendering them less\nuseful in practice. We address these limitations with LAENeRF, a unified\nframework for photorealistic and non-photorealistic appearance editing of\nNeRFs. To tackle local editing, we leverage a voxel grid as starting point for\nregion selection. We learn a mapping from expected ray terminations to final\noutput color, which can optionally be supervised by a style loss, resulting in\na framework which can perform photorealistic and non-photorealistic appearance\nediting of selected regions. Relying on a single point per ray for our mapping,\nwe limit memory requirements and enable fast optimization.", + "Relying on a single point per ray for our mapping,\nwe limit memory requirements and enable fast optimization. To guarantee\ninteractivity, we compose the output color using a set of learned, modifiable\nbase colors, composed with additive layer mixing. Compared to concurrent work,\nLAENeRF enables recoloring and stylization while keeping processing time low.\nFurthermore, we demonstrate that our approach surpasses baseline methods both\nquantitatively and qualitatively.", + "Video summarization aims to generate a concise representation of a video,\ncapturing its essential content and key moments while reducing its overall\nlength. Although several methods employ attention mechanisms to handle\nlong-term dependencies, they often fail to capture the visual significance\ninherent in frames. To address this limitation, we propose a CNN-based\nSpatioTemporal Attention (CSTA) method that stacks each feature of frames from\na single video to form image-like frame representations and applies 2D CNN to\nthese frame features. Our methodology relies on CNN to comprehend the inter and\nintra-frame relations and to find crucial attributes in videos by exploiting\nits ability to learn absolute positions within images. In contrast to previous\nwork compromising efficiency by designing additional modules to focus on\nspatial importance, CSTA requires minimal computational overhead as it uses CNN\nas a sliding window. Extensive experiments on two benchmark datasets (SumMe and\nTVSum) demonstrate that our proposed approach achieves state-of-the-art\nperformance with fewer MACs compared to previous methods. Codes are available\nat https://github.com/thswodnjs3/CSTA.", + "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. 
With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. The project page and\ncode are at https://github.com/2y7c3/ASD.", + "Personalized Federated Learning (PFL) is proposed to find the greatest\npersonalized models for each client. To avoid the central failure and\ncommunication bottleneck in the server-based FL, we concentrate on the\nDecentralized Personalized Federated Learning (DPFL) that performs distributed\nmodel training in a Peer-to-Peer (P2P) manner. Most personalized works in DPFL\nare based on undirected and symmetric topologies, however, the data,\ncomputation and communication resources heterogeneity result in large variances\nin the personalized models, which lead the undirected aggregation to suboptimal\npersonalized performance and unguaranteed convergence. To address these issues,\nwe propose a directed collaboration DPFL framework by incorporating stochastic\ngradient push and partial model personalized, called \\textbf{D}ecentralized\n\\textbf{Fed}erated \\textbf{P}artial \\textbf{G}radient \\textbf{P}ush\n(\\textbf{DFedPGP}). It personalizes the linear classifier in the modern deep\nmodel to customize the local solution and learns a consensus representation in\na fully decentralized manner.", + "It personalizes the linear classifier in the modern deep\nmodel to customize the local solution and learns a consensus representation in\na fully decentralized manner. Clients only share gradients with a subset of\nneighbors based on the directed and asymmetric topologies, which guarantees\nflexible choices for resource efficiency and better convergence. Theoretically,\nwe show that the proposed DFedPGP achieves a superior convergence rate of\n$\\mathcal{O}(\\frac{1}{\\sqrt{T}})$ in the general non-convex setting, and prove\nthe tighter connectivity among clients will speed up the convergence. The\nproposed method achieves state-of-the-art (SOTA) accuracy in both data and\ncomputation heterogeneity scenarios, demonstrating the efficiency of the\ndirected collaboration and partial gradient push.", + "Recent transformer-based architectures have shown impressive results in the\nfield of image segmentation. Thanks to their flexibility, they obtain\noutstanding performance in multiple segmentation tasks, such as semantic and\npanoptic, under a single unified framework. To achieve such impressive\nperformance, these architectures employ intensive operations and require\nsubstantial computational resources, which are often not available, especially\non edge devices. To fill this gap, we propose Prototype-based Efficient\nMaskFormer (PEM), an efficient transformer-based architecture that can operate\nin multiple segmentation tasks. PEM proposes a novel prototype-based\ncross-attention which leverages the redundancy of visual features to restrict\nthe computation and improve the efficiency without harming the performance. 
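The stochastic gradient push underlying DFedPGP builds on push-sum averaging over a directed, column-stochastic mixing matrix. Below is a minimal NumPy sketch of push-sum alone (no gradients, no partial personalization), just to show why the ratio of pushed parameters to pushed weights converges to the network-wide average on a directed ring; the topology and step count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                     # clients
x = rng.normal(size=(n, 4))               # shared-part parameters held by each client
w = np.ones(n)                            # push-sum weights
target = x.mean(axis=0)                   # the consensus we hope to reach

# Column-stochastic mixing for a directed ring: each client keeps half of its
# mass and pushes half to its single out-neighbour.
P = 0.5 * np.eye(n)
for i in range(n):
    P[(i + 1) % n, i] = 0.5

for _ in range(200):
    x = P @ x                             # push parameters along directed edges
    w = P @ w                             # push weights along the same edges
z = x / w[:, None]                        # de-biased estimate at every client

print(np.allclose(z, target, atol=1e-6))  # True: all clients agree on the average
```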
In\naddition, PEM introduces an efficient multi-scale feature pyramid network,\ncapable of extracting features that have high semantic content in an efficient\nway, thanks to the combination of deformable convolutions and context-based\nself-modulation. We benchmark the proposed PEM architecture on two tasks,\nsemantic and panoptic segmentation, evaluated on two different datasets,\nCityscapes and ADE20K. PEM demonstrates outstanding performance on every task\nand dataset, outperforming task-specific architectures while being comparable\nand even better than computationally-expensive baselines.", + "Advancements in 3D Gaussian Splatting have significantly accelerated 3D\nreconstruction and generation. However, it may require a large number of\nGaussians, which creates a substantial memory footprint. This paper introduces\nGES (Generalized Exponential Splatting), a novel representation that employs\nGeneralized Exponential Function (GEF) to model 3D scenes, requiring far fewer\nparticles to represent a scene and thus significantly outperforming Gaussian\nSplatting methods in efficiency with a plug-and-play replacement ability for\nGaussian-based utilities. GES is validated theoretically and empirically in\nboth principled 1D setup and realistic 3D scenes.\n It is shown to represent signals with sharp edges more accurately, which are\ntypically challenging for Gaussians due to their inherent low-pass\ncharacteristics. Our empirical analysis demonstrates that GEF outperforms\nGaussians in fitting natural-occurring signals (e.g. squares, triangles, and\nparabolic signals), thereby reducing the need for extensive splitting\noperations that increase the memory footprint of Gaussian Splatting.", + "Our empirical analysis demonstrates that GEF outperforms\nGaussians in fitting natural-occurring signals (e.g. squares, triangles, and\nparabolic signals), thereby reducing the need for extensive splitting\noperations that increase the memory footprint of Gaussian Splatting. With the\naid of a frequency-modulated loss, GES achieves competitive performance in\nnovel-view synthesis benchmarks while requiring less than half the memory\nstorage of Gaussian Splatting and increasing the rendering speed by up to 39%.\nThe code is available on the project website https://abdullahamdi.com/ges .", + "Vision-Language models (VLMs) have excelled in the image-domain -- especially\nin zero-shot settings -- thanks to the availability of vast pretraining data\n(i.e., paired image-text samples). However for videos, such paired data is not\nas abundant. Therefore, video-VLMs are usually designed by adapting pretrained\nimage-VLMs to the video-domain, instead of training from scratch. All such\nrecipes rely on augmenting visual embeddings with temporal information (i.e.,\nimage $\\rightarrow$ video), often keeping text embeddings unchanged or even\nbeing discarded. In this paper, we argue the contrary, that better video-VLMs\ncan be designed by focusing more on augmenting text, rather than visual\ninformation. More specifically, we introduce Video-conditioned Text\nRepresentations (VicTR): a form of text embeddings optimized w.r.t. visual\nembeddings, creating a more-flexible contrastive latent space. Our model can\nfurther make use of freely-available semantic information, in the form of\nvisually-grounded auxiliary text (e.g. object or scene information).", + "visual\nembeddings, creating a more-flexible contrastive latent space. 
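As a quick sanity check of the claim above that a Generalized Exponential Function handles sharp edges better than a Gaussian, here is a small NumPy sketch fitting a 1D square pulse; the specific parameters are hand-picked for illustration and unrelated to the paper's 3D splatting pipeline.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 1001)
square = (np.abs(x) < 0.5).astype(float)       # target signal with sharp edges

def gef(x, mu, alpha, beta):
    # Generalized exponential: beta = 2 recovers the Gaussian shape; large beta
    # gives a flat top with a fast fall-off, much closer to a box.
    return np.exp(-((np.abs(x - mu) / alpha) ** beta))

gaussian_like = gef(x, 0.0, 0.35, 2.0)         # Gaussian-shaped bump (beta = 2)
high_beta = gef(x, 0.0, 0.5, 10.0)             # same centre, sharper profile

mse = lambda y: float(np.mean((y - square) ** 2))
print(mse(gaussian_like), mse(high_beta))      # the high-beta GEF fits the edges far better
```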
Our model can\nfurther make use of freely-available semantic information, in the form of\nvisually-grounded auxiliary text (e.g. object or scene information). We\nevaluate our model on few-shot, zero-shot (HMDB-51, UCF-101), short-form\n(Kinetics-400) and long-form (Charades) activity recognition benchmarks,\nshowing strong performance among video-VLMs.", + "Different from a unimodal model whose input is from a single modality, the\ninput (called multi-modal input) of a multi-modal model is from multiple\nmodalities such as image, 3D points, audio, text, etc. Similar to unimodal\nmodels, many existing studies show that a multi-modal model is also vulnerable\nto adversarial perturbation, where an attacker could add small perturbation to\nall modalities of a multi-modal input such that the multi-modal model makes\nincorrect predictions for it. Existing certified defenses are mostly designed\nfor unimodal models, which achieve sub-optimal certified robustness guarantees\nwhen extended to multi-modal models as shown in our experimental results. In\nour work, we propose MMCert, the first certified defense against adversarial\nattacks to a multi-modal model. We derive a lower bound on the performance of\nour MMCert under arbitrary adversarial attacks with bounded perturbations to\nboth modalities (e.g., in the context of auto-driving, we bound the number of\nchanged pixels in both RGB image and depth image).", + "We derive a lower bound on the performance of\nour MMCert under arbitrary adversarial attacks with bounded perturbations to\nboth modalities (e.g., in the context of auto-driving, we bound the number of\nchanged pixels in both RGB image and depth image). We evaluate our MMCert using\ntwo benchmark datasets: one for the multi-modal road segmentation task and the\nother for the multi-modal emotion recognition task. Moreover, we compare our\nMMCert with a state-of-the-art certified defense extended from unimodal models.\nOur experimental results show that our MMCert outperforms the baseline.", + "Data-Free Knowledge Distillation (DFKD) has made significant recent strides\nby transferring knowledge from a teacher neural network to a student neural\nnetwork without accessing the original data. Nonetheless, existing approaches\nencounter a significant challenge when attempting to generate samples from\nrandom noise inputs, which inherently lack meaningful information.\nConsequently, these models struggle to effectively map this noise to the\nground-truth sample distribution, resulting in prolonging training times and\nlow-quality outputs. In this paper, we propose a novel Noisy Layer Generation\nmethod (NAYER) which relocates the random source from the input to a noisy\nlayer and utilizes the meaningful constant label-text embedding (LTE) as the\ninput. LTE is generated by using the language model once, and then it is stored\nin memory for all subsequent training processes. The significance of LTE lies\nin its ability to contain substantial meaningful inter-class information,\nenabling the generation of high-quality samples with only a few training steps.\nSimultaneously, the noisy layer plays a key role in addressing the issue of\ndiversity in sample generation by preventing the model from overemphasizing the\nconstrained label information.", + "Simultaneously, the noisy layer plays a key role in addressing the issue of\ndiversity in sample generation by preventing the model from overemphasizing the\nconstrained label information. 
By reinitializing the noisy layer in each\niteration, we aim to facilitate the generation of diverse samples while still\nretaining the method's efficiency, thanks to the ease of learning provided by\nLTE. Experiments carried out on multiple datasets demonstrate that our NAYER\nnot only outperforms the state-of-the-art methods but also achieves speeds 5 to\n15 times faster than previous approaches. The code is available at\nhttps://github.com/tmtuan1307/nayer.", + "In recent research, significant attention has been devoted to the\nopen-vocabulary object detection task, aiming to generalize beyond the limited\nnumber of classes labeled during training and detect objects described by\narbitrary category names at inference. Compared with conventional object\ndetection, open vocabulary object detection largely extends the object\ndetection categories. However, it relies on calculating the similarity between\nimage regions and a set of arbitrary category names with a pretrained\nvision-and-language model. This implies that, despite its open-set nature, the\ntask still needs the predefined object categories during the inference stage.\nThis raises the question: What if we do not have exact knowledge of object\ncategories during inference? In this paper, we call such a new setting as\ngenerative open-ended object detection, which is a more general and practical\nproblem. To address it, we formulate object detection as a generative problem\nand propose a simple framework named GenerateU, which can detect dense objects\nand generate their names in a free-form way. Particularly, we employ Deformable\nDETR as a region proposal generator with a language model translating visual\nregions to object names.", + "Particularly, we employ Deformable\nDETR as a region proposal generator with a language model translating visual\nregions to object names. To assess the free-form object detection task, we\nintroduce an evaluation method designed to quantitatively measure the\nperformance of generative outcomes. Extensive experiments demonstrate strong\nzero-shot detection performance of our GenerateU. For example, on the LVIS\ndataset, our GenerateU achieves comparable results to the open-vocabulary\nobject detection method GLIP, even though the category names are not seen by\nGenerateU during inference. Code is available at: https://\ngithub.com/FoundationVision/GenerateU .", + "Recently, the advent of Large Visual-Language Models (LVLMs) has received\nincreasing attention across various domains, particularly in the field of\nvisual document understanding (VDU). Different from conventional\nvision-language tasks, VDU is specifically concerned with text-rich scenarios\ncontaining abundant document elements. Nevertheless, the importance of\nfine-grained features remains largely unexplored within the community of LVLMs,\nleading to suboptimal performance in text-rich scenarios. In this paper, we\nabbreviate it as the fine-grained feature collapse issue. With the aim of\nfilling this gap, we propose a contrastive learning framework, termed Document\nObject COntrastive learning (DoCo), specifically tailored for the downstream\ntasks of VDU. 
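A minimal PyTorch sketch of the NAYER recipe described above: the generator's only source of randomness is a small noisy layer that is re-initialized every synthesis round, while its input is a constant label-text embedding. The LTE here is a random stand-in (the paper obtains it once from a language model), and the backbone is deliberately trivial.

```python
import torch
import torch.nn as nn

num_classes, lte_dim, z_dim = 10, 512, 256
lte = torch.randn(num_classes, lte_dim)              # stand-in for cached label-text embeddings

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.noisy_layer = nn.Linear(lte_dim, z_dim)  # the only random source
        self.backbone = nn.Sequential(
            nn.Linear(z_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 32 * 32), nn.Tanh())

    def reinit_noise(self):
        # Re-randomize the noisy layer each round to keep the samples diverse.
        for p in self.noisy_layer.parameters():
            nn.init.normal_(p, std=0.1)

    def forward(self, labels):
        return self.backbone(self.noisy_layer(lte[labels])).view(-1, 3, 32, 32)

g = Generator()
for _ in range(3):                                   # each round: fresh noise, same LTE input
    g.reinit_noise()
    fake = g(torch.arange(num_classes))              # one synthetic image per class
```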
DoCo leverages an auxiliary multimodal encoder to obtain the\nfeatures of document objects and align them to the visual features generated by\nthe vision encoder of LVLM, which enhances visual representation in text-rich\nscenarios.", + "DoCo leverages an auxiliary multimodal encoder to obtain the\nfeatures of document objects and align them to the visual features generated by\nthe vision encoder of LVLM, which enhances visual representation in text-rich\nscenarios. It can represent that the contrastive learning between the visual\nholistic representations and the multimodal fine-grained features of document\nobjects can assist the vision encoder in acquiring more effective visual cues,\nthereby enhancing the comprehension of text-rich documents in LVLMs. We also\ndemonstrate that the proposed DoCo serves as a plug-and-play pre-training\nmethod, which can be employed in the pre-training of various LVLMs without\ninducing any increase in computational complexity during the inference process.\nExtensive experimental results on multiple benchmarks of VDU reveal that LVLMs\nequipped with our proposed DoCo can achieve superior performance and mitigate\nthe gap between VDU and generic vision-language tasks.", + "Diffusion models generate images with an unprecedented level of quality, but\nhow can we freely rearrange image layouts? Recent works generate controllable\nscenes via learning spatially disentangled latent codes, but these methods do\nnot apply to diffusion models due to their fixed forward process. In this work,\nwe propose SceneDiffusion to optimize a layered scene representation during the\ndiffusion sampling process. Our key insight is that spatial disentanglement can\nbe obtained by jointly denoising scene renderings at different spatial layouts.\nOur generated scenes support a wide range of spatial editing operations,\nincluding moving, resizing, cloning, and layer-wise appearance editing\noperations, including object restyling and replacing. Moreover, a scene can be\ngenerated conditioned on a reference image, thus enabling object moving for\nin-the-wild images. Notably, this approach is training-free, compatible with\ngeneral text-to-image diffusion models, and responsive in less than a second.", + "Federated Learning (FL) enables collaborative model training while preserving\nthe privacy of raw data. A challenge in this framework is the fair and\nefficient valuation of data, which is crucial for incentivizing clients to\ncontribute high-quality data in the FL task. In scenarios involving numerous\ndata clients within FL, it is often the case that only a subset of clients and\ndatasets are pertinent to a specific learning task, while others might have\neither a negative or negligible impact on the model training process. This\npaper introduces a novel privacy-preserving method for evaluating client\ncontributions and selecting relevant datasets without a pre-specified training\nalgorithm in an FL task. Our proposed approach FedBary, utilizes Wasserstein\ndistance within the federated context, offering a new solution for data\nvaluation in the FL framework. This method ensures transparent data valuation\nand efficient computation of the Wasserstein barycenter and reduces the\ndependence on validation datasets. 
Through extensive empirical experiments and\ntheoretical analyses, we demonstrate the potential of this data valuation\nmethod as a promising avenue for FL research.", + "In this paper, we present an empirical study on image recognition fairness,\ni.e., extreme class accuracy disparity on balanced data like ImageNet. We\nexperimentally demonstrate that classes are not equal and the fairness issue is\nprevalent for image classification models across various datasets, network\narchitectures, and model capacities. Moreover, several intriguing properties of\nfairness are identified. First, the unfairness lies in problematic\nrepresentation rather than classifier bias. Second, with the proposed concept\nof Model Prediction Bias, we investigate the origins of problematic\nrepresentation during optimization. Our findings reveal that models tend to\nexhibit greater prediction biases for classes that are more challenging to\nrecognize. It means that more other classes will be confused with harder\nclasses. Then the False Positives (FPs) will dominate the learning in\noptimization, thus leading to their poor accuracy. Further, we conclude that\ndata augmentation and representation learning algorithms improve overall\nperformance by promoting fairness to some degree in image classification. The\nCode is available at\nhttps://github.com/dvlab-research/Parametric-Contrastive-Learning.", + "This work addresses the problem of real-time rendering of photorealistic\nhuman body avatars learned from multi-view videos. While the classical\napproaches to model and render virtual humans generally use a textured mesh,\nrecent research has developed neural body representations that achieve\nimpressive visual quality. However, these models are difficult to render in\nreal-time and their quality degrades when the character is animated with body\nposes different than the training observations. We propose an animatable human\nmodel based on 3D Gaussian Splatting, that has recently emerged as a very\nefficient alternative to neural radiance fields. The body is represented by a\nset of gaussian primitives in a canonical space which is deformed with a coarse\nto fine approach that combines forward skinning and local non-rigid refinement.\nWe describe how to learn our Human Gaussian Splatting (HuGS) model in an\nend-to-end fashion from multi-view observations, and evaluate it against the\nstate-of-the-art approaches for novel pose synthesis of clothed body. Our\nmethod achieves 1.5 dB PSNR improvement over the state-of-the-art on THuman4\ndataset while being able to render in real-time (80 fps for 512x512\nresolution).", + "3D Gaussians have recently emerged as a highly efficient representation for\n3D reconstruction and rendering. Despite its high rendering quality and speed\nat high resolutions, they both deteriorate drastically when rendered at lower\nresolutions or from far away camera position. During low resolution or far away\nrendering, the pixel size of the image can fall below the Nyquist frequency\ncompared to the screen size of each splatted 3D Gaussian and leads to aliasing\neffect. The rendering is also drastically slowed down by the sequential alpha\nblending of more splatted Gaussians per pixel. To address these issues, we\npropose a multi-scale 3D Gaussian splatting algorithm, which maintains\nGaussians at different scales to represent the same scene. Higher-resolution\nimages are rendered with more small Gaussians, and lower-resolution images are\nrendered with fewer larger Gaussians. 
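To give a feel for distance-based client valuation in the spirit of FedBary described above, here is a crude NumPy/SciPy sketch that scores clients by the average per-dimension 1D Wasserstein distance to a reference distribution; the real method works with the Wasserstein barycenter inside a privacy-preserving federated protocol, which is not reproduced here, and the client data below is synthetic.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def client_value(client_feats, reference_feats):
    """Higher is better: negated mean per-dimension 1D Wasserstein distance."""
    d = np.mean([wasserstein_distance(client_feats[:, j], reference_feats[:, j])
                 for j in range(client_feats.shape[1])])
    return -d

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 8))               # reference / task distribution
clients = {"relevant": rng.normal(0.0, 1.0, size=(300, 8)),
           "shifted": rng.normal(2.0, 1.0, size=(300, 8))}    # shifted, less relevant data
print({k: round(client_value(v, reference), 3) for k, v in clients.items()})
```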
With similar training time, our algorithm\ncan achieve 13\%-66\% PSNR and 160\%-2400\% rendering speed improvement at\n4$\times$-128$\times$ scale rendering on the Mip-NeRF360 dataset compared to\nsingle-scale 3D Gaussian splatting.", + "Our code and more results are available on\nour project website https://jokeryan.github.io/projects/ms-gs/", + "An important and unsolved problem in computer vision is to ensure that the\nalgorithms are robust to changes in image domains. We address this problem in\nthe scenario where we have access to images from the target domains but no\nannotations. Motivated by the challenges of the OOD-CV benchmark where we\nencounter real-world Out-of-Domain (OOD) nuisances and occlusion, we introduce\na novel Bayesian approach to OOD robustness for object classification. Our work\nextends Compositional Neural Networks (CompNets), which have been shown to be\nrobust to occlusion but degrade badly when tested on OOD data. We exploit the\nfact that CompNets contain a generative head defined over feature vectors\nrepresented by von Mises-Fisher (vMF) kernels, which correspond roughly to\nobject parts, and can be learned without supervision. We observe that some vMF\nkernels are similar between different domains, while others are not. This\nenables us to learn a transitional dictionary of vMF kernels that are\nintermediate between the source and target domains and train the generative\nmodel on this dictionary using the annotations on the source domain, followed\nby iterative refinement.", + "This\nenables us to learn a transitional dictionary of vMF kernels that are\nintermediate between the source and target domains and train the generative\nmodel on this dictionary using the annotations on the source domain, followed\nby iterative refinement. This approach, termed Unsupervised Generative\nTransition (UGT), performs very well in OOD scenarios even when occlusion is\npresent. UGT is evaluated on different OOD benchmarks including the OOD-CV\ndataset, several popular datasets (e.g., ImageNet-C [9]), artificial image\ncorruptions (including adding occluders), and synthetic-to-real domain\ntransfer, and does well in all scenarios, outperforming SOTA alternatives (e.g.,\nby up to 10% top-1 accuracy on the Occluded OOD-CV dataset).", + "We present Unified-IO 2, the first autoregressive multimodal model that is\ncapable of understanding and generating image, text, audio, and action. To\nunify different modalities, we tokenize inputs and outputs -- images, text,\naudio, action, bounding boxes, etc., into a shared semantic space and then\nprocess them with a single encoder-decoder transformer model. Since training\nwith such diverse modalities is challenging, we propose various architectural\nimprovements to stabilize model training. We train our model from scratch on a\nlarge multimodal pre-training corpus from diverse sources with a multimodal\nmixture of denoisers objective. To learn an expansive set of skills, such as\nfollowing multimodal instructions, we construct and finetune on an ensemble of\n120 datasets with prompts and augmentations. With a single unified model,\nUnified-IO 2 achieves state-of-the-art performance on the GRIT benchmark and\nstrong results on more than 35 benchmarks, including image generation and\nunderstanding, natural language understanding, video and audio understanding,\nand robotic manipulation.
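The multi-scale splatting idea above boils down to rendering with coarser Gaussians once the fine ones would fall below roughly a pixel on screen. The sketch below is only a conceptual level-selection rule under a pinhole model with invented scale values, not the paper's algorithm.

```python
import numpy as np

def select_level(level_scales, depth, focal_px, min_footprint_px=1.0):
    """Pick the finest level whose typical splat still covers >= min_footprint_px
    pixels on screen, so sub-pixel (aliasing-prone) Gaussians are skipped."""
    for lvl, world_scale in enumerate(level_scales):          # ordered fine -> coarse
        footprint = focal_px * world_scale / depth            # pinhole projection
        if footprint >= min_footprint_px:
            return lvl
    return len(level_scales) - 1

levels = [0.005, 0.02, 0.08, 0.32]        # representative Gaussian extents per level (metres)
for d in [2.0, 8.0, 32.0, 128.0]:         # camera distances
    print(d, select_level(levels, d, focal_px=1000.0))   # farther views pick coarser levels
```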
We release all our models to the research community.", + "Human-object contact serves as a strong cue to understand how humans\nphysically interact with objects. Nevertheless, it is not widely explored to\nutilize human-object contact information for the joint reconstruction of 3D\nhuman and object from a single image. In this work, we present a novel joint 3D\nhuman-object reconstruction method (CONTHO) that effectively exploits contact\ninformation between humans and objects. There are two core designs in our\nsystem: 1) 3D-guided contact estimation and 2) contact-based 3D human and\nobject refinement. First, for accurate human-object contact estimation, CONTHO\ninitially reconstructs 3D humans and objects and utilizes them as explicit 3D\nguidance for contact estimation. Second, to refine the initial reconstructions\nof 3D human and object, we propose a novel contact-based refinement Transformer\nthat effectively aggregates human features and object features based on the\nestimated human-object contact. The proposed contact-based refinement prevents\nthe learning of erroneous correlation between human and object, which enables\naccurate 3D reconstruction.", + "The proposed contact-based refinement prevents\nthe learning of erroneous correlation between human and object, which enables\naccurate 3D reconstruction. As a result, our CONTHO achieves state-of-the-art\nperformance in both human-object contact estimation and joint reconstruction of\n3D human and object. The code is publicly available at\nhttps://github.com/dqj5182/CONTHO_RELEASE.", + "Diverse actions give rise to rich audio-visual signals in long videos. Recent\nworks showcase that the two modalities of audio and video exhibit different\ntemporal extents of events and distinct labels. We address the interplay\nbetween the two modalities in long videos by explicitly modelling the temporal\nextents of audio and visual events. We propose the Time Interval Machine (TIM)\nwhere a modality-specific time interval poses as a query to a transformer\nencoder that ingests a long video input. The encoder then attends to the\nspecified interval, as well as the surrounding context in both modalities, in\norder to recognise the ongoing action.\n We test TIM on three long audio-visual video datasets: EPIC-KITCHENS,\nPerception Test, and AVE, reporting state-of-the-art (SOTA) for recognition. On\nEPIC-KITCHENS, we beat previous SOTA that utilises LLMs and significantly\nlarger pre-training by 2.9% top-1 action recognition accuracy.", + "On\nEPIC-KITCHENS, we beat previous SOTA that utilises LLMs and significantly\nlarger pre-training by 2.9% top-1 action recognition accuracy. Additionally, we\nshow that TIM can be adapted for action detection, using dense multi-scale\ninterval queries, outperforming SOTA on EPIC-KITCHENS-100 for most metrics, and\nshowing strong performance on the Perception Test. Our ablations show the\ncritical role of integrating the two modalities and modelling their time\nintervals in achieving this performance. Code and models at:\nhttps://github.com/JacobChalk/TIM", + "In the literature, points and conics have been major features for camera\ngeometric calibration. Although conics are more informative features than\npoints, the loss of the conic property under distortion has critically limited\nthe utility of conic features in camera calibration. Many existing approaches\naddressed conic-based calibration by ignoring distortion or introducing 3D\nspherical targets to circumvent this limitation. 
In this paper, we present a\nnovel formulation for conic-based calibration using moments. Our derivation is\nbased on the mathematical finding that the first moment can be estimated\nwithout bias even under distortion. This allows us to track moment changes\nduring projection and distortion, ensuring the preservation of the first moment\nof the distorted conic. With an unbiased estimator, the circular patterns can\nbe accurately detected at the sub-pixel level and can now be fully exploited\nfor an entire calibration pipeline, resulting in significantly improved\ncalibration. The entire code is readily available from\nhttps://github.com/ChaehyeonSong/discocal.", + "We introduce MultiPhys, a method designed for recovering multi-person motion\nfrom monocular videos. Our focus lies in capturing coherent spatial placement\nbetween pairs of individuals across varying degrees of engagement. MultiPhys,\nbeing physically aware, exhibits robustness to jittering and occlusions, and\neffectively eliminates penetration issues between the two individuals. We\ndevise a pipeline in which the motion estimated by a kinematic-based method is\nfed into a physics simulator in an autoregressive manner. We introduce distinct\ncomponents that enable our model to harness the simulator's properties without\ncompromising the accuracy of the kinematic estimates. This results in final\nmotion estimates that are both kinematically coherent and physically compliant.\nExtensive evaluations on three challenging datasets characterized by\nsubstantial inter-person interaction show that our method significantly reduces\nerrors associated with penetration and foot skating, while performing\ncompetitively with the state-of-the-art on motion accuracy and smoothness.\nResults and code can be found on our project page\n(http://www.iri.upc.edu/people/nugrinovic/multiphys/).", + "We estimate the radiance field of large-scale dynamic areas from multiple\nvehicle captures under varying environmental conditions. Previous works in this\ndomain are either restricted to static environments, do not scale to more than\na single short video, or struggle to separately represent dynamic object\ninstances. To this end, we present a novel, decomposable radiance field\napproach for dynamic urban environments. We propose a multi-level neural scene\ngraph representation that scales to thousands of images from dozens of\nsequences with hundreds of fast-moving objects. To enable efficient training\nand rendering of our representation, we develop a fast composite ray sampling\nand rendering scheme. To test our approach in urban driving scenarios, we\nintroduce a new, novel view synthesis benchmark. We show that our approach\noutperforms prior art by a significant margin on both established and our\nproposed benchmark while being faster in training and rendering.", + "We investigate the impact of deep generative models on potential social\nbiases in upcoming computer vision models. As the internet witnesses an\nincreasing influx of AI-generated images, concerns arise regarding inherent\nbiases that may accompany them, potentially leading to the dissemination of\nharmful content. This paper explores whether a detrimental feedback loop,\nresulting in bias amplification, would occur if generated images were used as\nthe training data for future models. We conduct simulations by progressively\nsubstituting original images in COCO and CC3M datasets with images generated\nthrough Stable Diffusion. 
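The calibration work above hinges on first moments of circular patterns being estimable from images. As a loose illustration (not the paper's derivation of how moments transform under projection and distortion), the snippet below computes the first raw moments, i.e. the centroid, of a detected blob.

```python
import numpy as np

def first_moment(mask):
    """Centroid of a binary blob from raw image moments (m10/m00, m01/m00)."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# Toy circular pattern; in practice the blob would come from detecting a
# (projected and lens-distorted) circle in the calibration image.
H = W = 200
yy, xx = np.mgrid[0:H, 0:W]
blob = (xx - 120.0) ** 2 + (yy - 80.0) ** 2 < 30.0 ** 2
print(first_moment(blob))        # approximately (120.0, 80.0)
```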
The modified datasets are used to train OpenCLIP and\nimage captioning models, which we evaluate in terms of quality and bias.\nContrary to expectations, our findings indicate that introducing generated\nimages during training does not uniformly amplify bias. Instead, instances of\nbias mitigation across specific tasks are observed. We further explore the\nfactors that may influence these phenomena, such as artifacts in image\ngeneration (e.g., blurry faces) or pre-existing biases in the original\ndatasets.", + "The success of denoising diffusion models in representing rich data\ndistributions over 2D raster images has prompted research on extending them to\nother data representations, such as vector graphics. Unfortunately due to their\nvariable structure and scarcity of vector training data, directly applying\ndiffusion models on this domain remains a challenging problem. Using\nworkarounds like optimization via Score Distillation Sampling (SDS) is also\nfraught with difficulty, as vector representations are non trivial to directly\noptimize and tend to result in implausible geometries such as redundant or\nself-intersecting shapes. NIVeL addresses these challenges by reinterpreting\nthe problem on an alternative, intermediate domain which preserves the\ndesirable properties of vector graphics -- mainly sparsity of representation\nand resolution-independence. This alternative domain is based on neural\nimplicit fields expressed in a set of decomposable, editable layers. Based on\nour experiments, NIVeL produces text-to-vector graphics results of\nsignificantly better quality than the state-of-the-art.", + "Real driving-video dehazing poses a significant challenge due to the inherent\ndifficulty in acquiring precisely aligned hazy/clear video pairs for effective\nmodel training, especially in dynamic driving scenarios with unpredictable\nweather conditions. In this paper, we propose a pioneering approach that\naddresses this challenge through a nonaligned regularization strategy. Our core\nconcept involves identifying clear frames that closely match hazy frames,\nserving as references to supervise a video dehazing network. Our approach\ncomprises two key components: reference matching and video dehazing. Firstly,\nwe introduce a non-aligned reference frame matching module, leveraging an\nadaptive sliding window to match high-quality reference frames from clear\nvideos. Video dehazing incorporates flow-guided cosine attention sampler and\ndeformable cosine attention fusion modules to enhance spatial multiframe\nalignment and fuse their improved information. To validate our approach, we\ncollect a GoProHazy dataset captured effortlessly with GoPro cameras in diverse\nrural and urban road environments. Extensive experiments demonstrate the\nsuperiority of the proposed method over current state-of-the-art methods in the\nchallenging task of real driving-video dehazing. Project page.", + "Neural Radiance Field (NeRF) has achieved superior performance for novel view\nsynthesis by modeling the scene with a Multi-Layer Perception (MLP) and a\nvolume rendering procedure, however, when fewer known views are given (i.e.,\nfew-shot view synthesis), the model is prone to overfit the given views. To\nhandle this issue, previous efforts have been made towards leveraging learned\npriors or introducing additional regularizations. In contrast, in this paper,\nwe for the first time provide an orthogonal method from the perspective of\nnetwork structure. 
Given the observation that trivially reducing the number of\nmodel parameters alleviates the overfitting issue, albeit at the cost of missing\ndetails, we propose the multi-input MLP (mi-MLP) that incorporates the inputs\n(i.e., location and viewing direction) of the vanilla MLP into each layer to\nprevent the overfitting issue without harming detailed synthesis. To further\nreduce the artifacts, we propose to model colors and volume density separately\nand present two regularization terms.", + "To further\nreduce the artifacts, we propose to model colors and volume density separately\nand present two regularization terms. Extensive experiments on multiple\ndatasets demonstrate that: 1) although the proposed mi-MLP is easy to\nimplement, it is surprisingly effective as it boosts the PSNR of the baseline\nfrom $14.73$ to $24.23$; and 2) the overall framework achieves state-of-the-art\nresults on a wide range of benchmarks. We will release the code upon\npublication.", + "We present OAKINK2, a dataset of bimanual object manipulation tasks for\ncomplex daily activities. To construct a structured representation of these complex tasks,\nOAKINK2 introduces three levels of abstraction to\norganize the manipulation tasks: Affordance, Primitive Task, and Complex Task.\nOAKINK2 features an object-centric perspective for decoding the complex\ntasks, treating them as a sequence of object affordance fulfillment. The first\nlevel, Affordance, outlines the functionalities that objects in the scene can\nafford, the second level, Primitive Task, describes the minimal interaction\nunits through which humans interact with the object to achieve its affordance, and the\nthird level, Complex Task, illustrates how Primitive Tasks are composed and\ninterdependent. The OAKINK2 dataset provides multi-view image streams and precise\npose annotations for the human body, hands and various interacting objects.\nThis extensive collection supports applications such as interaction\nreconstruction and motion synthesis. Based on the 3-level abstraction of\nOAKINK2, we explore a task-oriented framework for Complex Task Completion\n(CTC). CTC aims to generate a sequence of bimanual manipulation to achieve task\nobjectives.", + "Based on the 3-level abstraction of\nOAKINK2, we explore a task-oriented framework for Complex Task Completion\n(CTC). CTC aims to generate a sequence of bimanual manipulation to achieve task\nobjectives. Within the CTC framework, we employ Large Language Models (LLMs) to\ndecompose the complex task objectives into sequences of Primitive Tasks and\nhave developed a Motion Fulfillment Model that generates bimanual hand motion\nfor each Primitive Task. OAKINK2 datasets and models are available at\nhttps://oakink.net/v2.", + "People are spending an enormous amount of time on digital devices through\ngraphical user interfaces (GUIs), e.g., computer or smartphone screens. Large\nlanguage models (LLMs) such as ChatGPT can assist people in tasks like writing\nemails, but struggle to understand and interact with GUIs, thus limiting their\npotential to increase automation levels. In this paper, we introduce CogAgent,\nan 18-billion-parameter visual language model (VLM) specializing in GUI\nunderstanding and navigation. By utilizing both low-resolution and\nhigh-resolution image encoders, CogAgent supports input at a resolution of\n1120*1120, enabling it to recognize tiny page elements and text.
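The core architectural change of the mi-MLP described earlier in this chunk is simply re-injecting the network inputs at every layer. A toy PyTorch sketch is below (raw 3D position plus view direction, no positional encoding, invented layer widths).

```python
import torch
import torch.nn as nn

class MiMLP(nn.Module):
    """Each hidden layer sees its features concatenated with the raw inputs."""
    def __init__(self, in_dim=6, hidden=128, depth=4, out_dim=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim, hidden)] +
            [nn.Linear(hidden + in_dim, hidden) for _ in range(depth - 1)])
        self.head = nn.Linear(hidden + in_dim, out_dim)          # e.g. RGB + density

    def forward(self, x):                                        # x: (N, 6) = xyz + view dir
        h = torch.relu(self.layers[0](x))
        for layer in self.layers[1:]:
            h = torch.relu(layer(torch.cat([h, x], dim=-1)))
        return self.head(torch.cat([h, x], dim=-1))

out = MiMLP()(torch.rand(1024, 6))                               # (1024, 4)
```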
As a\ngeneralist visual language model, CogAgent achieves the state of the art on\nfive text-rich and four general VQA benchmarks, including VQAv2, OK-VQA,\nText-VQA, ST-VQA, ChartQA, infoVQA, DocVQA, MM-Vet, and POPE.", + "CogAgent, using\nonly screenshots as input, outperforms LLM-based methods that consume extracted\nHTML text on both PC and Android GUI navigation tasks -- Mind2Web and AITW,\nadvancing the state of the art. The model and codes are available at\nhttps://github.com/THUDM/CogVLM .", + "Autonomous vehicle (AV) systems rely on robust perception models as a\ncornerstone of safety assurance. However, objects encountered on the road\nexhibit a long-tailed distribution, with rare or unseen categories posing\nchallenges to a deployed perception model. This necessitates an expensive\nprocess of continuously curating and annotating data with significant human\neffort. We propose to leverage recent advances in vision-language and large\nlanguage models to design an Automatic Data Engine (AIDE) that automatically\nidentifies issues, efficiently curates data, improves the model through\nauto-labeling, and verifies the model through generation of diverse scenarios.\nThis process operates iteratively, allowing for continuous self-improvement of\nthe model. We further establish a benchmark for open-world detection on AV\ndatasets to comprehensively evaluate various learning paradigms, demonstrating\nour method's superior performance at a reduced cost.", + "We introduce Multi-view Ancestral Sampling (MAS), a method for 3D motion\ngeneration, using 2D diffusion models that were trained on motions obtained\nfrom in-the-wild videos. As such, MAS opens opportunities to exciting and\ndiverse fields of motion previously under-explored as 3D data is scarce and\nhard to collect. MAS works by simultaneously denoising multiple 2D motion\nsequences representing different views of the same 3D motion. It ensures\nconsistency across all views at each diffusion step by combining the individual\ngenerations into a unified 3D sequence, and projecting it back to the original\nviews. We demonstrate MAS on 2D pose data acquired from videos depicting\nprofessional basketball maneuvers, rhythmic gymnastic performances featuring a\nball apparatus, and horse races. In each of these domains, 3D motion capture is\narduous, and yet, MAS generates diverse and realistic 3D sequences. Unlike the\nScore Distillation approach, which optimizes each sample by repeatedly applying\nsmall fixes, our method uses a sampling process that was constructed for the\ndiffusion framework.", + "Unlike the\nScore Distillation approach, which optimizes each sample by repeatedly applying\nsmall fixes, our method uses a sampling process that was constructed for the\ndiffusion framework. As we demonstrate, MAS avoids common issues such as\nout-of-domain sampling and mode-collapse. https://guytevet.github.io/mas-page/", + "Despite the significant demand for assistive technology among vulnerable\ngroups (e.g., the elderly, children, and the disabled) in daily tasks, research\ninto advanced AI-driven assistive solutions that genuinely accommodate their\ndiverse needs remains sparse. Traditional human-machine interaction tasks often\nrequire machines to simply help without nuanced consideration of human\nabilities and feelings, such as their opportunity for practice and learning,\nsense of self-improvement, and self-esteem. 
Addressing this gap, we define a\npivotal and novel challenge Smart Help, which aims to provide proactive yet\nadaptive support to human agents with diverse disabilities and dynamic goals in\nvarious tasks and environments. To establish this challenge, we leverage\nAI2-THOR to build a new interactive 3D realistic household environment for the\nSmart Help task. We introduce an innovative opponent modeling module that\nprovides a nuanced understanding of the main agent's capabilities and goals, in\norder to optimize the assisting agent's helping policy. Rigorous experiments\nvalidate the efficacy of our model components and show the superiority of our\nholistic approach against established baselines. Our findings illustrate the\npotential of AI-imbued assistive robots in improving the well-being of\nvulnerable groups.", + "Event Stream Super-Resolution (ESR) aims to address the challenge of\ninsufficient spatial resolution in event streams, which holds great\nsignificance for the application of event cameras in complex scenarios.\nPrevious works for ESR often process positive and negative events in a mixed\nparadigm. This paradigm limits their ability to effectively model the unique\ncharacteristics of each event and mutually refine each other by considering\ntheir correlations. In this paper, we propose a bilateral event mining and\ncomplementary network (BMCNet) to fully leverage the potential of each event\nand capture the shared information to complement each other simultaneously.\nSpecifically, we resort to a two-stream network to accomplish comprehensive\nmining of each type of events individually. To facilitate the exchange of\ninformation between two streams, we propose a bilateral information exchange\n(BIE) module. This module is layer-wisely embedded between two streams,\nenabling the effective propagation of hierarchical global information while\nalleviating the impact of invalid information brought by inherent\ncharacteristics of events.", + "This module is layer-wisely embedded between two streams,\nenabling the effective propagation of hierarchical global information while\nalleviating the impact of invalid information brought by inherent\ncharacteristics of events. The experimental results demonstrate that our\napproach outperforms the previous state-of-the-art methods in ESR, achieving\nperformance improvements of over 11\\% on both real and synthetic datasets.\nMoreover, our method significantly enhances the performance of event-based\ndownstream tasks such as object recognition and video reconstruction. Our code\nis available at https://github.com/Lqm26/BMCNet-ESR.", + "Developing generalizable manipulation skills is a core challenge in embodied\nAI. This includes generalization across diverse task configurations,\nencompassing variations in object shape, density, friction coefficient, and\nexternal disturbances such as forces applied to the robot. Rapid Motor\nAdaptation (RMA) offers a promising solution to this challenge. It posits that\nessential hidden variables influencing an agent's task performance, such as\nobject mass and shape, can be effectively inferred from the agent's action and\nproprioceptive history. Drawing inspiration from RMA in locomotion and in-hand\nrotation, we use depth perception to develop agents tailored for rapid motor\nadaptation in a variety of manipulation tasks. 
We evaluated our agents on four\nchallenging tasks from the Maniskill2 benchmark, namely pick-and-place\noperations with hundreds of objects from the YCB and EGAD datasets, peg\ninsertion with precise position and orientation, and operating a variety of\nfaucets and handles, with customized environment variations. Empirical results\ndemonstrate that our agents surpass state-of-the-art methods like automatic\ndomain randomization and vision-based policies, obtaining better generalization\nperformance and sample efficiency.", + "Scene graph generation aims to capture detailed spatial and semantic\nrelationships between objects in an image, which is challenging due to\nincomplete labelling, long-tailed relationship categories, and relational\nsemantic overlap. Existing Transformer-based methods either employ distinct\nqueries for objects and predicates or utilize holistic queries for relation\ntriplets and hence often suffer from limited capacity in learning low-frequency\nrelationships. In this paper, we present a new Transformer-based method, called\nDSGG, that views scene graph detection as a direct graph prediction problem\nbased on a unique set of graph-aware queries. In particular, each graph-aware\nquery encodes a compact representation of both the node and all of its\nrelations in the graph, acquired through the utilization of a relaxed sub-graph\nmatching during the training process. Moreover, to address the problem of\nrelational semantic overlap, we utilize a strategy for relation distillation,\naiming to efficiently learn multiple instances of semantic relationships.", + "Moreover, to address the problem of\nrelational semantic overlap, we utilize a strategy for relation distillation,\naiming to efficiently learn multiple instances of semantic relationships.\nExtensive experiments on the VG and the PSG datasets show that our model\nachieves state-of-the-art results, showing a significant improvement of 3.5\\%\nand 6.7\\% in mR@50 and mR@100 for the scene-graph generation task and achieves\nan even more substantial improvement of 8.5\\% and 10.3\\% in mR@50 and mR@100\nfor the panoptic scene graph generation task. Code is available at\n\\url{https://github.com/zeeshanhayder/DSGG}.", + "Single Image Super-Resolution is a classic computer vision problem that\ninvolves estimating high-resolution (HR) images from low-resolution (LR) ones.\nAlthough deep neural networks (DNNs), especially Transformers for\nsuper-resolution, have seen significant advancements in recent years,\nchallenges still remain, particularly in limited receptive field caused by\nwindow-based self-attention. To address these issues, we introduce a group of\nauxiliary Adaptive Token Dictionary to SR Transformer and establish an ATD-SR\nmethod. The introduced token dictionary could learn prior information from\ntraining data and adapt the learned prior to specific testing image through an\nadaptive refinement step. The refinement strategy could not only provide global\ninformation to all input tokens but also group image tokens into categories.\nBased on category partitions, we further propose a category-based\nself-attention mechanism designed to leverage distant but similar tokens for\nenhancing input features. The experimental results show that our method\nachieves the best performance on various single image super-resolution\nbenchmarks.", + "Modeling object dynamics with a neural network is an important problem with\nnumerous applications. Most recent work has been based on graph neural\nnetworks. 
However, physics happens in 3D space, where geometric information\npotentially plays an important role in modeling physical phenomena. In this\nwork, we propose a novel U-net architecture based on continuous point\nconvolution which naturally embeds information from 3D coordinates and allows\nfor multi-scale feature representations with established downsampling and\nupsampling procedures. Bottleneck layers in the downsampled point clouds lead\nto better long-range interaction modeling. Besides, the flexibility of point\nconvolutions allows our approach to generalize to sparsely sampled points from\nmesh vertices and dynamically generate features on important interaction points\non mesh faces. Experimental results demonstrate that our approach significantly\nimproves the state-of-the-art, especially in scenarios that require accurate\ngravity or collision reasoning.", + "Recent advancements in neural networks have showcased their remarkable\ncapabilities across various domains. Despite these successes, the \"black box\"\nproblem still remains. Addressing this, we propose a novel framework, WWW, that\noffers the 'what', 'where', and 'why' of the neural network decisions in\nhuman-understandable terms. Specifically, WWW utilizes adaptive selection for\nconcept discovery, employing adaptive cosine similarity and thresholding\ntechniques to effectively explain 'what'. To address the 'where' and 'why', we\nproposed a novel combination of neuron activation maps (NAMs) with Shapley\nvalues, generating localized concept maps and heatmaps for individual inputs.\nFurthermore, WWW introduces a method for predicting uncertainty, leveraging\nheatmap similarities to estimate 'how' reliable the prediction is. Experimental\nevaluations of WWW demonstrate superior performance in both quantitative and\nqualitative metrics, outperforming existing methods in interpretability. WWW\nprovides a unified solution for explaining 'what', 'where', and 'why',\nintroducing a method for localized explanations from global interpretations and\noffering a plug-and-play solution adaptable to various architectures.", + "Prior studies on Remote Sensing Foundation Model (RSFM) reveal immense\npotential towards a generic model for Earth Observation. Nevertheless, these\nworks primarily focus on a single modality without temporal and geo-context\nmodeling, hampering their capabilities for diverse tasks. In this study, we\npresent SkySense, a generic billion-scale model, pre-trained on a curated\nmulti-modal Remote Sensing Imagery (RSI) dataset with 21.5 million temporal\nsequences. SkySense incorporates a factorized multi-modal spatiotemporal\nencoder taking temporal sequences of optical and Synthetic Aperture Radar (SAR)\ndata as input. This encoder is pre-trained by our proposed Multi-Granularity\nContrastive Learning to learn representations across different modal and\nspatial granularities. To further enhance the RSI representations by the\ngeo-context clue, we introduce Geo-Context Prototype Learning to learn\nregion-aware prototypes upon RSI's multi-modal spatiotemporal features. To our\nbest knowledge, SkySense is the largest Multi-Modal RSFM to date, whose modules\ncan be flexibly combined or used individually to accommodate various tasks.", + "To our\nbest knowledge, SkySense is the largest Multi-Modal RSFM to date, whose modules\ncan be flexibly combined or used individually to accommodate various tasks. 
It\ndemonstrates remarkable generalization capabilities on a thorough evaluation\nencompassing 16 datasets over 7 tasks, from single- to multi-modal, static to\ntemporal, and classification to localization. SkySense surpasses 18 recent\nRSFMs in all test scenarios. Specifically, it outperforms the latest models\nsuch as GFM, SatLas and Scale-MAE by a large margin, i.e., 2.76%, 3.67% and\n3.61% on average respectively. We will release the pre-trained weights to\nfacilitate future research and Earth Observation applications.", + "While federated learning (FL) systems often utilize quantization to battle\ncommunication and computational bottlenecks, they have heretofore been limited\nto deploying fixed-precision quantization schemes. Meanwhile, the concept of\nmixed-precision quantization (MPQ), where different layers of a deep learning\nmodel are assigned varying bit-width, remains unexplored in the FL settings. We\npresent a novel FL algorithm, FedMPQ, which introduces mixed-precision\nquantization to resource-heterogeneous FL systems. Specifically, local models,\nquantized so as to satisfy bit-width constraint, are trained by optimizing an\nobjective function that includes a regularization term which promotes reduction\nof precision in some of the layers without significant performance degradation.\nThe server collects local model updates, de-quantizes them into full-precision\nmodels, and then aggregates them into a global model. To initialize the next\nround of local training, the server relies on the information learned in the\nprevious training round to customize bit-width assignments of the models\ndelivered to different clients.", + "To initialize the next\nround of local training, the server relies on the information learned in the\nprevious training round to customize bit-width assignments of the models\ndelivered to different clients. In extensive benchmarking experiments on\nseveral model architectures and different datasets in both iid and non-iid\nsettings, FedMPQ outperformed the baseline FL schemes that utilize\nfixed-precision quantization while incurring only a minor computational\noverhead on the participating devices.", + "Vision-Language Transformers (VLTs) have shown great success recently, but\nare meanwhile accompanied by heavy computation costs, where a major reason can\nbe attributed to the large number of visual and language tokens. Existing token\npruning research for compressing VLTs mainly follows a single-modality-based\nscheme yet ignores the critical role of aligning different modalities for\nguiding the token pruning process, causing the important tokens for one\nmodality to be falsely pruned in another modality branch. Meanwhile, existing\nVLT pruning works also lack the flexibility to dynamically compress each layer\nbased on different input samples. To this end, we propose a novel framework\nnamed Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for\naccelerating various VLTs. Specifically, we first introduce a well-designed\nMulti-modality Alignment Guidance (MAG) module that can align features of the\nsame semantic concept from different modalities, to ensure the pruned tokens\nare less important for all modalities. 
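The mechanics behind FedMPQ's mixed-precision updates can be sketched with plain uniform quantization applied per layer; the bit-width assignment strategy, regularized local training, and aggregation logic of the actual algorithm are omitted, and the layer names and bit-widths below are invented.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of one layer's weights to a given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
layers = {"conv1": rng.normal(size=1000), "fc": rng.normal(size=1000)}
bitwidths = {"conv1": 8, "fc": 4}                     # mixed precision across layers

update = {k: quantize(w, bitwidths[k]) for k, w in layers.items()}        # client side
restored = {k: dequantize(q, s) for k, (q, s) in update.items()}          # server side
for k in layers:
    print(k, bitwidths[k], float(np.abs(restored[k] - layers[k]).max()))  # larger error at 4 bits
```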
We further design a novel Dynamic Token\nPruning (DTP) module, which can adaptively adjust the token compression ratio\nin each layer based on different input instances.", + "We further design a novel Dynamic Token\nPruning (DTP) module, which can adaptively adjust the token compression ratio\nin each layer based on different input instances. Extensive experiments on\nvarious benchmarks demonstrate that MADTP significantly reduces the\ncomputational complexity of kinds of multimodal models while preserving\ncompetitive performance. Notably, when applied to the BLIP model in the NLVR2\ndataset, MADTP can reduce the GFLOPs by 80% with less than 4% performance\ndegradation.", + "This paper proposes to correct the rolling shutter (RS) distorted images by\nestimating the distortion flow from the global shutter (GS) to RS directly.\nExisting methods usually perform correction using the undistortion flow from\nthe RS to GS. They initially predict the flow from consecutive RS frames,\nsubsequently rescaling it as the displacement fields from the RS frame to the\nunderlying GS image using time-dependent scaling factors. Following this,\nRS-aware forward warping is employed to convert the RS image into its GS\ncounterpart. Nevertheless, this strategy is prone to two shortcomings. First,\nthe undistortion flow estimation is rendered inaccurate by merely linear\nscaling the flow, due to the complex non-linear motion nature. Second, RS-aware\nforward warping often results in unavoidable artifacts. To address these\nlimitations, we introduce a new framework that directly estimates the\ndistortion flow and rectifies the RS image with the backward warping operation.\nMore specifically, we first propose a global correlation-based flow attention\nmechanism to estimate the initial distortion flow and GS feature jointly, which\nare then refined by the following coarse-to-fine decoder layers.", + "More specifically, we first propose a global correlation-based flow attention\nmechanism to estimate the initial distortion flow and GS feature jointly, which\nare then refined by the following coarse-to-fine decoder layers. Additionally,\na multi-distortion flow prediction strategy is integrated to mitigate the issue\nof inaccurate flow estimation further. Experimental results validate the\neffectiveness of the proposed method, which outperforms state-of-the-art\napproaches on various benchmarks while maintaining high efficiency. The project\nis available at \\url{https://github.com/ljzycmd/DFRSC}.", + "Deep learning models for semantic segmentation often experience performance\ndegradation when deployed to unseen target domains unidentified during the\ntraining phase. This is mainly due to variations in image texture (\\ie style)\nfrom different data sources. To tackle this challenge, existing domain\ngeneralized semantic segmentation (DGSS) methods attempt to remove style\nvariations from the feature. However, these approaches struggle with the\nentanglement of style and content, which may lead to the unintentional removal\nof crucial content information, causing performance degradation. This study\naddresses this limitation by proposing BlindNet, a novel DGSS approach that\nblinds the style without external modules or datasets. The main idea behind our\nproposed approach is to alleviate the effect of style in the encoder whilst\nfacilitating robust segmentation in the decoder. To achieve this, BlindNet\ncomprises two key components: covariance alignment and semantic consistency\ncontrastive learning. 
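A bare-bones PyTorch sketch of dynamic token pruning in the spirit of the DTP module above: keep a top-k subset of visual tokens ranked by an importance score. The cosine-to-text scoring and the fixed keep ratio here are placeholders; in MADTP the ratio is adapted per layer and per input under multimodal alignment guidance.

```python
import torch

def prune_tokens(tokens, scores, keep_ratio):
    """tokens: (B, N, D); scores: (B, N); keeps the top int(N * keep_ratio) tokens."""
    k = max(1, int(tokens.size(1) * keep_ratio))
    idx = scores.topk(k, dim=1).indices                                  # (B, k)
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))

x = torch.randn(2, 196, 768)                       # visual tokens of a ViT-like encoder
text_cls = torch.randn(2, 768)                     # stand-in text-aligned reference vector
scores = torch.cosine_similarity(x, text_cls.unsqueeze(1), dim=-1)       # (2, 196)
pruned = prune_tokens(x, scores, keep_ratio=0.5)   # keep_ratio would be input-dependent
print(pruned.shape)                                # torch.Size([2, 98, 768])
```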
Specifically, the covariance alignment trains the encoder\nto uniformly recognize various styles and preserve the content information of\nthe feature, rather than removing the style-sensitive factor.", + "To achieve this, BlindNet\ncomprises two key components: covariance alignment and semantic consistency\ncontrastive learning. Specifically, the covariance alignment trains the encoder\nto uniformly recognize various styles and preserve the content information of\nthe feature, rather than removing the style-sensitive factor. Meanwhile,\nsemantic consistency contrastive learning enables the decoder to construct\ndiscriminative class embedding space and disentangles features that are\nvulnerable to misclassification. Through extensive experiments, our approach\noutperforms existing DGSS methods, exhibiting robustness and superior\nperformance for semantic segmentation on unseen target domains.", + "Recent compositional zero-shot learning (CZSL) methods adapt pre-trained\nvision-language models (VLMs) by constructing trainable prompts only for\ncomposed state-object pairs. Relying on learning the joint representation of\nseen compositions, these methods ignore the explicit modeling of the state and\nobject, thus limiting the exploitation of pre-trained knowledge and\ngeneralization to unseen compositions. With a particular focus on the\nuniversality of the solution, in this work, we propose a novel paradigm for\nCZSL models that establishes three identification branches (i.e., Multi-Path)\nto jointly model the state, object, and composition. The presented Troika is\nour implementation that aligns the branch-specific prompt representations with\ndecomposed visual features. To calibrate the bias between semantically similar\nmulti-modal representations, we further devise a Cross-Modal Traction module\ninto Troika that shifts the prompt representation towards the current visual\ncontent. We conduct extensive experiments on three popular benchmarks, where\nour method significantly outperforms existing methods in both closed-world and\nopen-world settings. The code will be available at\nhttps://github.com/bighuang624/Troika.", + "It is well known that many open-released foundational diffusion models have\ndifficulty in generating images that substantially depart from average\nbrightness, despite such images being present in the training data. This is due\nto an inconsistency: while denoising starts from pure Gaussian noise during\ninference, the training noise schedule retains residual data even in the final\ntimestep distribution, due to difficulties in numerical conditioning in\nmainstream formulation, leading to unintended bias during inference. To\nmitigate this issue, certain $\\epsilon$-prediction models are combined with an\nad-hoc offset-noise methodology. In parallel, some contemporary models have\nadopted zero-terminal SNR noise schedules together with\n$\\mathbf{v}$-prediction, which necessitate major alterations to pre-trained\nmodels. However, such changes risk destabilizing a large multitude of\ncommunity-driven applications anchored on these pre-trained models. In light of\nthis, our investigation revisits the fundamental causes, leading to our\nproposal of an innovative and principled remedy, called One More Step (OMS).", + "However, such changes risk destabilizing a large multitude of\ncommunity-driven applications anchored on these pre-trained models. 
In light of\nthis, our investigation revisits the fundamental causes, leading to our\nproposal of an innovative and principled remedy, called One More Step (OMS). By\nintegrating a compact network and incorporating an additional simple yet\neffective step during inference, OMS elevates image fidelity and harmonizes the\ndichotomy between training and inference, while preserving original model\nparameters. Once trained, various pre-trained diffusion models with the same\nlatent domain can share the same OMS module.", + "Active recognition enables robots to intelligently explore novel\nobservations, thereby acquiring more information while circumventing undesired\nviewing conditions. Recent approaches favor learning policies from simulated or\ncollected data, wherein appropriate actions are more frequently selected when\nthe recognition is accurate. However, most recognition modules are developed\nunder the closed-world assumption, which makes them ill-equipped to handle\nunexpected inputs, such as the absence of the target object in the current\nobservation. To address this issue, we propose treating active recognition as a\nsequential evidence-gathering process, providing by-step uncertainty\nquantification and reliable prediction under the evidence combination theory.\nAdditionally, the reward function developed in this paper effectively\ncharacterizes the merit of actions when operating in open-world environments.\nTo evaluate the performance, we collect a dataset from an indoor simulator,\nencompassing various recognition challenges such as distance, occlusion levels,\nand visibility. Through a series of experiments on recognition and robustness\nanalysis, we demonstrate the necessity of introducing uncertainties to active\nrecognition and the superior performance of the proposed method.", + "In recent years, semantic segmentation has become a pivotal tool in\nprocessing and interpreting satellite imagery. Yet, a prevalent limitation of\nsupervised learning techniques remains the need for extensive manual\nannotations by experts. In this work, we explore the potential of generative\nimage diffusion to address the scarcity of annotated data in earth observation\ntasks. The main idea is to learn the joint data manifold of images and labels,\nleveraging recent advancements in denoising diffusion probabilistic models. To\nthe best of our knowledge, we are the first to generate both images and\ncorresponding masks for satellite segmentation. We find that the obtained pairs\nnot only display high quality in fine-scale features but also ensure a wide\nsampling diversity. Both aspects are crucial for earth observation data, where\nsemantic classes can vary severely in scale and occurrence frequency. We employ\nthe novel data instances for downstream segmentation, as a form of data\naugmentation. In our experiments, we provide comparisons to prior works based\non discriminative diffusion models or GANs. We demonstrate that integrating\ngenerated samples yields significant quantitative improvements for satellite\nsemantic segmentation -- both compared to baselines and when training only on\nthe original data.", + "Pose refinement is an interesting and practically relevant research\ndirection. Pose refinement can be used to (1) obtain a more accurate pose\nestimate from an initial prior (e.g., from retrieval), (2) as pre-processing,\ni.e., to provide a better starting point to a more expensive pose estimator,\n(3) as post-processing of a more accurate localizer. 
Existing approaches focus\non learning features / scene representations for the pose refinement task. This\ninvolves training an implicit scene representation or learning features while\noptimizing a camera pose-based loss. A natural question is whether training\nspecific features / representations is truly necessary or whether similar\nresults can be already achieved with more generic features. In this work, we\npresent a simple approach that combines pre-trained features with a particle\nfilter and a renderable representation of the scene. Despite its simplicity, it\nachieves state-of-the-art results, demonstrating that one can easily build a\npose refiner without the need for specific training. The code is at\nhttps://github.com/ga1i13o/mcloc_poseref", + "Given an input set of $3$D point pairs, the goal of outlier-robust $3$D\nregistration is to compute some rotation and translation that align as many\npoint pairs as possible. This is an important problem in computer vision, for\nwhich many highly accurate approaches have been recently proposed. Despite\ntheir impressive performance, these approaches lack scalability, often\noverflowing the $16$GB of memory of a standard laptop to handle roughly\n$30,000$ point pairs. In this paper, we propose a $3$D registration approach\nthat can process more than ten million ($10^7$) point pairs with over $99\\%$\nrandom outliers. Moreover, our method is efficient, entails low memory costs,\nand maintains high accuracy at the same time. We call our method TEAR, as it\ninvolves minimizing an outlier-robust loss that computes Truncated Entry-wise\nAbsolute Residuals. To minimize this loss, we decompose the original\n$6$-dimensional problem into two subproblems of dimensions $3$ and $2$,\nrespectively, solved in succession to global optimality via a customized\nbranch-and-bound method.", + "To minimize this loss, we decompose the original\n$6$-dimensional problem into two subproblems of dimensions $3$ and $2$,\nrespectively, solved in succession to global optimality via a customized\nbranch-and-bound method. While branch-and-bound is often slow and unscalable,\nthis does not apply to TEAR as we propose novel bounding functions that are\ntight and computationally efficient. Experiments on various datasets are\nconducted to validate the scalability and efficiency of our method.", + "Plug-and-play algorithms constitute a popular framework for solving inverse\nimaging problems that rely on the implicit definition of an image prior via a\ndenoiser. These algorithms can leverage powerful pre-trained denoisers to solve\na wide range of imaging tasks, circumventing the necessity to train models on a\nper-task basis. Unfortunately, plug-and-play methods often show unstable\nbehaviors, hampering their promise of versatility and leading to suboptimal\nquality of reconstructed images. In this work, we show that enforcing\nequivariance to certain groups of transformations (rotations, reflections,\nand/or translations) on the denoiser strongly improves the stability of the\nalgorithm as well as its reconstruction quality. We provide a theoretical\nanalysis that illustrates the role of equivariance on better performance and\nstability. We present a simple algorithm that enforces equivariance on any\nexisting denoiser by simply applying a random transformation to the input of\nthe denoiser and the inverse transformation to the output at each iteration of\nthe algorithm. 
Experiments on multiple imaging modalities and denoising\nnetworks show that the equivariant plug-and-play algorithm improves both the\nreconstruction performance and the stability compared to their non-equivariant\ncounterparts.", + "Existing open-vocabulary image segmentation methods require a fine-tuning\nstep on mask labels and/or image-text datasets. Mask labels are\nlabor-intensive, which limits the number of categories in segmentation\ndatasets. Consequently, the vocabulary capacity of pre-trained VLMs is severely\nreduced after fine-tuning. However, without fine-tuning, VLMs trained under\nweak image-text supervision tend to make suboptimal mask predictions. To\nalleviate these issues, we introduce a novel recurrent framework that\nprogressively filters out irrelevant texts and enhances mask quality without\ntraining efforts. The recurrent unit is a two-stage segmenter built upon a\nfrozen VLM. Thus, our model retains the VLM's broad vocabulary space and equips\nit with segmentation ability. Experiments show that our method outperforms not\nonly the training-free counterparts, but also those fine-tuned with millions of\ndata samples, and sets the new state-of-the-art records for both zero-shot\nsemantic and referring segmentation. Concretely, we improve the current record\nby 28.8, 16.0, and 6.9 mIoU on Pascal VOC, COCO Object, and Pascal Context.", + "Generalized Category Discovery (GCD) is a pragmatic and challenging\nopen-world task, which endeavors to cluster unlabeled samples from both novel\nand old classes, leveraging some labeled data of old classes. Given that\nknowledge learned from old classes is not fully transferable to new classes,\nand that novel categories are fully unlabeled, GCD inherently faces intractable\nproblems, including imbalanced classification performance and inconsistent\nconfidence between old and new classes, especially in the low-labeling regime.\nHence, some annotations of new classes are deemed necessary. However, labeling\nnew classes is extremely costly. To address this issue, we take the spirit of\nactive learning and propose a new setting called Active Generalized Category\nDiscovery (AGCD). The goal is to improve the performance of GCD by actively\nselecting a limited amount of valuable samples for labeling from the oracle. To\nsolve this problem, we devise an adaptive sampling strategy, which jointly\nconsiders novelty, informativeness and diversity to adaptively select novel\nsamples with proper uncertainty. However, owing to the varied orderings of\nlabel indices caused by the clustering of novel classes, the queried labels are\nnot directly applicable to subsequent training.", + "However, owing to the varied orderings of\nlabel indices caused by the clustering of novel classes, the queried labels are\nnot directly applicable to subsequent training. To overcome this issue, we\nfurther propose a stable label mapping algorithm that transforms ground truth\nlabels to the label space of the classifier, thereby ensuring consistent\ntraining across different active selection stages. Our method achieves\nstate-of-the-art performance on both generic and fine-grained datasets. Our\ncode is available at https://github.com/mashijie1028/ActiveGCD", + "Incorporating human feedback has been shown to be crucial to align text\ngenerated by large language models to human preferences. 
We hypothesize that\nstate-of-the-art instructional image editing models, where outputs are\ngenerated based on an input image and an editing instruction, could similarly\nbenefit from human feedback, as their outputs may not adhere to the correct\ninstructions and preferences of users. In this paper, we present a novel\nframework to harness human feedback for instructional visual editing (HIVE).\nSpecifically, we collect human feedback on the edited images and learn a reward\nfunction to capture the underlying user preferences. We then introduce scalable\ndiffusion model fine-tuning methods that can incorporate human preferences\nbased on the estimated reward. Besides, to mitigate the bias brought by the\nlimitation of data, we contribute a new 1M training dataset, a 3.6K reward\ndataset for rewards learning, and a 1K evaluation dataset to boost the\nperformance of instructional image editing. We conduct extensive empirical\nexperiments quantitatively and qualitatively, showing that HIVE is favored over\nprevious state-of-the-art instructional image editing approaches by a large\nmargin.", + "Generating emotional talking faces is a practical yet challenging endeavor.\nTo create a lifelike avatar, we draw upon two critical insights from a human\nperspective: 1) The connection between audio and the non-deterministic facial\ndynamics, encompassing expressions, blinks, poses, should exhibit synchronous\nand one-to-many mapping. 2) Vibrant expressions are often accompanied by\nemotion-aware high-definition (HD) textures and finely detailed teeth. However,\nboth aspects are frequently overlooked by existing methods. To this end, this\npaper proposes using normalizing Flow and Vector-Quantization modeling to\nproduce emotional talking faces that satisfy both insights concurrently\n(FlowVQTalker). Specifically, we develop a flow-based coefficient generator\nthat encodes the dynamics of facial emotion into a multi-emotion-class latent\nspace represented as a mixture distribution. The generation process commences\nwith random sampling from the modeled distribution, guided by the accompanying\naudio, enabling both lip-synchronization and the uncertain nonverbal facial\ncues generation.", + "The generation process commences\nwith random sampling from the modeled distribution, guided by the accompanying\naudio, enabling both lip-synchronization and the uncertain nonverbal facial\ncues generation. Furthermore, our designed vector-quantization image generator\ntreats the creation of expressive facial images as a code query task, utilizing\na learned codebook to provide rich, high-quality textures that enhance the\nemotional perception of the results. Extensive experiments are conducted to\nshowcase the effectiveness of our approach.", + "Most existing attention prediction research focuses on salient instances like\nhumans and objects. However, the more complex interaction-oriented attention,\narising from the comprehension of interactions between instances by human\nobservers, remains largely unexplored. This is equally crucial for advancing\nhuman-machine interaction and human-centered artificial intelligence. 
To bridge\nthis gap, we first collect a novel gaze fixation dataset named IG, comprising\n530,000 fixation points across 740 diverse interaction categories, capturing\nvisual attention during human observers' cognitive processes of interactions.\nSubsequently, we introduce the zero-shot interaction-oriented attention\nprediction task ZeroIA, which challenges models to predict visual cues for\ninteractions not encountered during training. Thirdly, we present the\nInteractive Attention model IA, designed to emulate human observers' cognitive\nprocesses to tackle the ZeroIA problem. Extensive experiments demonstrate that\nthe proposed IA outperforms other state-of-the-art approaches in both ZeroIA\nand fully supervised settings. Lastly, we endeavor to apply\ninteraction-oriented attention to the interaction recognition task itself.\nFurther experimental results demonstrate the promising potential to enhance the\nperformance and interpretability of existing state-of-the-art HOI models by\nincorporating real human attention data from IG and attention labels generated\nby IA.", + "Learning-based approaches to monocular motion capture have recently shown\npromising results by learning to regress in a data-driven manner. However, due\nto the challenges in data collection and network designs, it remains\nchallenging for existing solutions to achieve real-time full-body capture while\nbeing accurate in world space. In this work, we introduce ProxyCap, a\nhuman-centric proxy-to-motion learning scheme to learn world-space motions from\na proxy dataset of 2D skeleton sequences and 3D rotational motions. Such proxy\ndata enables us to build a learning-based network with accurate world-space\nsupervision while also mitigating the generalization issues. For more accurate\nand physically plausible predictions in world space, our network is designed to\nlearn human motions from a human-centric perspective, which enables the\nunderstanding of the same motion captured with different camera trajectories.\nMoreover, a contact-aware neural motion descent module is proposed in our\nnetwork so that it can be aware of foot-ground contact and motion misalignment\nwith the proxy observations.", + "Moreover, a contact-aware neural motion descent module is proposed in our\nnetwork so that it can be aware of foot-ground contact and motion misalignment\nwith the proxy observations. With the proposed learning-based solution, we\ndemonstrate the first real-time monocular full-body capture system with\nplausible foot-ground contact in world space even using hand-held moving\ncameras. Our project page is https://zhangyux15.github.io/ProxyCapV2.", + "Recent advances in monocular depth estimation have been made by incorporating\nnatural language as additional guidance. Although yielding impressive results,\nthe impact of the language prior, particularly in terms of generalization and\nrobustness, remains unexplored. In this paper, we address this gap by\nquantifying the impact of this prior and introduce methods to benchmark its\neffectiveness across various settings. We generate "low-level" sentences that\nconvey object-centric, three-dimensional spatial relationships, incorporate\nthem as additional language priors and evaluate their downstream impact on\ndepth estimation. Our key finding is that current language-guided depth\nestimators perform optimally only with scene-level descriptions and\ncounter-intuitively fare worse with low-level descriptions.
Despite leveraging\nadditional data, these methods are not robust to directed adversarial attacks\nand decline in performance with an increase in distribution shift. Finally, to\nprovide a foundation for future research, we identify points of failures and\noffer insights to better understand these shortcomings. With an increasing\nnumber of methods using language for depth estimation, our findings highlight\nthe opportunities and pitfalls that require careful consideration for effective\ndeployment in real-world settings", + "Text-to-image diffusion models have demonstrated remarkable capabilities in\ntransforming textual prompts into coherent images, yet the computational cost\nof their inference remains a persistent challenge. To address this issue, we\npresent UFOGen, a novel generative model designed for ultra-fast, one-step\ntext-to-image synthesis. In contrast to conventional approaches that focus on\nimproving samplers or employing distillation techniques for diffusion models,\nUFOGen adopts a hybrid methodology, integrating diffusion models with a GAN\nobjective. Leveraging a newly introduced diffusion-GAN objective and\ninitialization with pre-trained diffusion models, UFOGen excels in efficiently\ngenerating high-quality images conditioned on textual descriptions in a single\nstep. Beyond traditional text-to-image generation, UFOGen showcases versatility\nin applications. Notably, UFOGen stands among the pioneering models enabling\none-step text-to-image generation and diverse downstream tasks, presenting a\nsignificant advancement in the landscape of efficient generative models.", + "We present 3DiffTection, a state-of-the-art method for 3D object detection\nfrom single images, leveraging features from a 3D-aware diffusion model.\nAnnotating large-scale image data for 3D detection is resource-intensive and\ntime-consuming. Recently, pretrained large image diffusion models have become\nprominent as effective feature extractors for 2D perception tasks. However,\nthese features are initially trained on paired text and image data, which are\nnot optimized for 3D tasks, and often exhibit a domain gap when applied to the\ntarget data. Our approach bridges these gaps through two specialized tuning\nstrategies: geometric and semantic. For geometric tuning, we fine-tune a\ndiffusion model to perform novel view synthesis conditioned on a single image,\nby introducing a novel epipolar warp operator. This task meets two essential\ncriteria: the necessity for 3D awareness and reliance solely on posed image\ndata, which are readily available (e.g., from videos) and does not require\nmanual annotation. For semantic refinement, we further train the model on\ntarget data with detection supervision. Both tuning phases employ ControlNet to\npreserve the integrity of the original feature capabilities.", + "For semantic refinement, we further train the model on\ntarget data with detection supervision. Both tuning phases employ ControlNet to\npreserve the integrity of the original feature capabilities. In the final step,\nwe harness these enhanced capabilities to conduct a test-time prediction\nensemble across multiple virtual viewpoints. Through our methodology, we obtain\n3D-aware features that are tailored for 3D detection and excel in identifying\ncross-view point correspondences. Consequently, our model emerges as a powerful\n3D detector, substantially surpassing previous benchmarks, e.g., Cube-RCNN, a\nprecedent in single-view 3D detection by 9.43\\% in AP3D on the\nOmni3D-ARkitscene dataset. 
Furthermore, 3DiffTection showcases robust data\nefficiency and generalization to cross-domain data.", + "In recent years, there has been an explosion of 2D vision models for numerous\ntasks such as semantic segmentation, style transfer or scene editing, enabled\nby large-scale 2D image datasets. At the same time, there has been renewed\ninterest in 3D scene representations such as neural radiance fields from\nmulti-view images. However, the availability of 3D or multiview data is still\nsubstantially limited compared to 2D image datasets, making extending 2D vision\nmodels to 3D data highly desirable but also very challenging. Indeed, extending\na single 2D vision operator like scene editing to 3D typically requires a\nhighly creative method specialized to that task and often requires per-scene\noptimization. In this paper, we ask the question of whether any 2D vision model\ncan be lifted to make 3D consistent predictions. We answer this question in the\naffirmative; our new Lift3D method trains to predict unseen views on feature\nspaces generated by a few visual models (i.e.", + "We answer this question in the\naffirmative; our new Lift3D method trains to predict unseen views on feature\nspaces generated by a few visual models (i.e. DINO and CLIP), but then\ngeneralizes to novel vision operators and tasks, such as style transfer,\nsuper-resolution, open vocabulary segmentation and image colorization; for some\nof these tasks, there is no comparable previous 3D method. In many cases, we\neven outperform state-of-the-art methods specialized for the task in question.\nMoreover, Lift3D is a zero-shot method, in the sense that it requires no\ntask-specific training, nor scene-specific optimization.", + "We introduce a novel framework for multiway point cloud mosaicking (named\nWednesday), designed to co-align sets of partially overlapping point clouds --\ntypically obtained from 3D scanners or moving RGB-D cameras -- into a unified\ncoordinate system. At the core of our approach is ODIN, a learned pairwise\nregistration algorithm that iteratively identifies overlaps and refines\nattention scores, employing a diffusion-based process for denoising pairwise\ncorrelation matrices to enhance matching accuracy. Further steps include\nconstructing a pose graph from all point clouds, performing rotation averaging,\na novel robust algorithm for re-estimating translations optimally in terms of\nconsensus maximization and translation optimization. Finally, the point cloud\nrotations and positions are optimized jointly by a diffusion-based approach.\nTested on four diverse, large-scale datasets, our method achieves\nstate-of-the-art pairwise and multiway registration results by a large margin\non all benchmarks. Our code and models are available at\nhttps://github.com/jinsz/Multiway-Point-Cloud-Mosaicking-with-Diffusion-and-Global-Optimization.", + "In this paper, we firstly consider view-dependent effects into single\nimage-based novel view synthesis (NVS) problems. For this, we propose to\nexploit the camera motion priors in NVS to model view-dependent appearance or\neffects (VDE) as the negative disparity in the scene. By recognizing\nspecularities \"follow\" the camera motion, we infuse VDEs into the input images\nby aggregating input pixel colors along the negative depth region of the\nepipolar lines. Also, we propose a `relaxed volumetric rendering' approximation\nthat allows computing the densities in a single pass, improving efficiency for\nNVS from single images. 
Our method can learn single-image NVS from image\nsequences only, which is a completely self-supervised learning method, for the\nfirst time requiring neither depth nor camera pose annotations. We present\nextensive experiment results and show that our proposed method can learn NVS\nwith VDEs, outperforming the SOTA single-view NVS methods on the RealEstate10k\nand MannequinChallenge datasets.", + "With the rapidly increasing demand for oriented object detection (OOD),\nrecent research involving weakly-supervised detectors for learning rotated box\n(RBox) from the horizontal box (HBox) has attracted more and more attention. In\nthis paper, we explore a more challenging yet label-efficient setting, namely\nsingle point-supervised OOD, and present our approach called Point2RBox.\nSpecifically, we propose to leverage two principles: 1) Synthetic pattern\nknowledge combination: By sampling around each labeled point on the image, we\nspread the object feature to synthetic visual patterns with known boxes to\nprovide the knowledge for box regression. 2) Transform self-supervision: With a\ntransformed input image (e.g. scaled/rotated), the output RBoxes are trained to\nfollow the same transformation so that the network can perceive the relative\nsize/rotation between objects. The detector is further enhanced by a few\ndevised techniques to cope with peripheral issues, e.g. the anchor/layer\nassignment as the size of the object is not available in our point supervision\nsetting. To our best knowledge, Point2RBox is the first end-to-end solution for\npoint-supervised OOD.", + "the anchor/layer\nassignment as the size of the object is not available in our point supervision\nsetting. To our best knowledge, Point2RBox is the first end-to-end solution for\npoint-supervised OOD. In particular, our method uses a lightweight paradigm,\nyet it achieves a competitive performance among point-supervised alternatives,\n41.05%/27.62%/80.01% on DOTA/DIOR/HRSC datasets.", + "In this paper, we present an end-to-end 3D building wireframe reconstruction\nmethod to regress edges directly from aerial LiDAR point clouds. Our method,\nnamed Parametric Building Wireframe Reconstruction (PBWR), takes aerial LiDAR\npoint clouds and initial edge entities as input, and fully uses the self-attention\nmechanism of transformers to regress edge parameters without any intermediate\nsteps such as corner prediction. We propose an edge non-maximum suppression\n(E-NMS) module based on edge similarity to remove redundant edges. Additionally,\na dedicated edge loss function is utilized to guide the PBWR in regressing\nedge parameters, where simple use of edge distance loss is not suitable. In our\nexperiments, we demonstrate state-of-the-art results on the Building3D dataset,\nachieving an improvement of approximately 36% in entry-level dataset edge\naccuracy and around 42% improvement in the Tallinn dataset.", + "Existing 3D mesh shape evaluation metrics mainly focus on the overall shape\nbut are usually less sensitive to local details. This makes them inconsistent\nwith human evaluation, as human perception cares about both overall and\ndetailed shape. In this paper, we propose an analytic metric named Spectrum\nArea Under the Curve Difference (SAUCD) that demonstrates better consistency\nwith human evaluation. To compare the difference between two shapes, we first\ntransform the 3D mesh to the spectrum domain using the discrete\nLaplace-Beltrami operator and Fourier transform.
Then, we calculate the Area\nUnder the Curve (AUC) difference between the two spectrums, so that each\nfrequency band that captures either the overall or detailed shape is equitably\nconsidered. Taking human sensitivity across frequency bands into account, we\nfurther extend our metric by learning suitable weights for each frequency band\nwhich better aligns with human perception. To measure the performance of SAUCD,\nwe build a 3D mesh evaluation dataset called Shape Grading, along with manual\nannotations from more than 800 subjects.", + "To measure the performance of SAUCD,\nwe build a 3D mesh evaluation dataset called Shape Grading, along with manual\nannotations from more than 800 subjects. By measuring the correlation between\nour metric and human evaluation, we demonstrate that SAUCD is well aligned with\nhuman evaluation, and outperforms previous 3D mesh metrics.", + "Leveraging vast training data, multimodal large language models (MLLMs) have\ndemonstrated formidable general visual comprehension capabilities and achieved\nremarkable performance across various tasks. However, their performance in\nvisual document understanding still leaves much room for improvement. This\ndiscrepancy is primarily attributed to the fact that visual document\nunderstanding is a fine-grained prediction task. In natural scenes, MLLMs\ntypically use low-resolution images, leading to a substantial loss of visual\ninformation. Furthermore, general-purpose MLLMs do not excel in handling\ndocument-oriented instructions. In this paper, we propose a High-Resolution\nVisual Document Assistant (HRVDA), which bridges the gap between MLLMs and\nvisual document understanding. This model employs a content filtering mechanism\nand an instruction filtering module to separately filter out the\ncontent-agnostic visual tokens and instruction-agnostic visual tokens, thereby\nachieving efficient model training and inference for high-resolution images. In\naddition, we construct a document-oriented visual instruction tuning dataset\nand apply a multi-stage training strategy to enhance the model's document\nmodeling capabilities.", + "In\naddition, we construct a document-oriented visual instruction tuning dataset\nand apply a multi-stage training strategy to enhance the model's document\nmodeling capabilities. Extensive experiments demonstrate that our model\nachieves state-of-the-art performance across multiple document understanding\ndatasets, while maintaining training efficiency and inference speed comparable\nto low-resolution models.", + "In deep metric learning for visual recognition, the calibration of distance\nthresholds is crucial for achieving desired model performance in the true\npositive rates (TPR) or true negative rates (TNR). However, calibrating this\nthreshold presents challenges in open-world scenarios, where the test classes\ncan be entirely disjoint from those encountered during training. We define the\nproblem of finding distance thresholds for a trained embedding model to achieve\ntarget performance metrics over unseen open-world test classes as open-world\nthreshold calibration. Existing posthoc threshold calibration methods, reliant\non inductive inference and requiring a calibration dataset with a similar\ndistance distribution as the test data, often prove ineffective in open-world\nscenarios. To address this, we introduce OpenGCN, a Graph Neural Network-based\ntransductive threshold calibration method with enhanced adaptability and\nrobustness. 
OpenGCN learns to predict pairwise connectivity for the unlabeled\ntest instances embedded in a graph to determine its TPR and TNR at various\ndistance thresholds, allowing for transductive inference of the distance\nthresholds which also incorporates test-time information. Extensive experiments\nacross open-world visual recognition benchmarks validate OpenGCN's superiority\nover existing posthoc calibration methods for open-world threshold calibration.", + "Generating vivid and emotional 3D co-speech gestures is crucial for virtual\navatar animation in human-machine interaction applications. While the existing\nmethods enable generating the gestures to follow a single emotion label, they\noverlook that long gesture sequence modeling with emotion transition is more\npractical in real scenes. In addition, the lack of large-scale available\ndatasets with emotional transition speech and corresponding 3D human gestures\nalso limits the addressing of this task. To fulfill this goal, we first\nincorporate the ChatGPT-4 and an audio inpainting approach to construct the\nhigh-fidelity emotion transition human speeches. Considering obtaining the\nrealistic 3D pose annotations corresponding to the dynamically inpainted\nemotion transition audio is extremely difficult, we propose a novel weakly\nsupervised training strategy to encourage authority gesture transitions.\nSpecifically, to enhance the coordination of transition gestures w.r.t\ndifferent emotional ones, we model the temporal association representation\nbetween two different emotional gesture sequences as style guidance and infuse\nit into the transition generation. We further devise an emotion mixture\nmechanism that provides weak supervision based on a learnable mixed emotion\nlabel for transition gestures.", + "We further devise an emotion mixture\nmechanism that provides weak supervision based on a learnable mixed emotion\nlabel for transition gestures. Last, we present a keyframe sampler to supply\neffective initial posture cues in long sequences, enabling us to generate\ndiverse gestures. Extensive experiments demonstrate that our method outperforms\nthe state-of-the-art models constructed by adapting single emotion-conditioned\ncounterparts on our newly defined emotion transition task and datasets. Our\ncode and dataset will be released on the project page:\nhttps://xingqunqi-lab.github.io/Emo-Transition-Gesture/.", + "We introduce a new system for Multi-Session SLAM, which tracks camera motion\nacross multiple disjoint videos under a single global reference. Our approach\ncouples the prediction of optical flow with solver layers to estimate camera\npose. The backbone is trained end-to-end using a novel differentiable solver\nfor wide-baseline two-view pose. The full system can connect disjoint\nsequences, perform visual odometry, and global optimization. Compared to\nexisting approaches, our design is accurate and robust to catastrophic\nfailures. Code is available at github.com/princeton-vl/MultiSlam_DiffPose", + "3D human pose data collected in controlled laboratory settings present\nchallenges for pose estimators that generalize across diverse scenarios. To\naddress this, domain generalization is employed. Current methodologies in\ndomain generalization for 3D human pose estimation typically utilize\nadversarial training to generate synthetic poses for training. Nonetheless,\nthese approaches exhibit several limitations. 
First, the lack of prior\ninformation about the target domain complicates the application of suitable\naugmentation through a single pose augmentor, affecting generalization on\ntarget domains. Moreover, adversarial training's discriminator tends to enforce\nsimilarity between source and synthesized poses, impeding the exploration of\nout-of-source distributions. Furthermore, the pose estimator's optimization is\nnot exposed to domain shifts, limiting its overall generalization ability.\n To address these limitations, we propose a novel framework featuring two pose\naugmentors: the weak and the strong augmentors. Our framework employs\ndifferential strategies for generation and discrimination processes,\nfacilitating the preservation of knowledge related to source poses and the\nexploration of out-of-source distributions without prior information about\ntarget poses.", + "Our framework employs\ndifferential strategies for generation and discrimination processes,\nfacilitating the preservation of knowledge related to source poses and the\nexploration of out-of-source distributions without prior information about\ntarget poses. Besides, we leverage meta-optimization to simulate domain shifts\nin the optimization process of the pose estimator, thereby improving its\ngeneralization ability. Our proposed approach significantly outperforms\nexisting methods, as demonstrated through comprehensive experiments on various\nbenchmark datasets. Our code will be released at\n\\url{https://github.com/davidpengucf/DAF-DG}.", + "Out-of-distribution (OOD) generalization in the graph domain is challenging\ndue to complex distribution shifts and a lack of environmental contexts. Recent\nmethods attempt to enhance graph OOD generalization by generating flat\nenvironments. However, such flat environments come with inherent limitations to\ncapture more complex data distributions. Considering the DrugOOD dataset, which\ncontains diverse training environments (e.g., scaffold, size, etc.), flat\ncontexts cannot sufficiently address its high heterogeneity. Thus, a new\nchallenge is posed to generate more semantically enriched environments to\nenhance graph invariant learning for handling distribution shifts. In this\npaper, we propose a novel approach to generate hierarchical semantic\nenvironments for each graph. Firstly, given an input graph, we explicitly\nextract variant subgraphs from the input graph to generate proxy predictions on\nlocal environments. Then, stochastic attention mechanisms are employed to\nre-extract the subgraphs for regenerating global environments in a hierarchical\nmanner. In addition, we introduce a new learning objective that guides our\nmodel to learn the diversity of environments within the same hierarchy while\nmaintaining consistency across different hierarchies.", + "Then, stochastic attention mechanisms are employed to\nre-extract the subgraphs for regenerating global environments in a hierarchical\nmanner. In addition, we introduce a new learning objective that guides our\nmodel to learn the diversity of environments within the same hierarchy while\nmaintaining consistency across different hierarchies. This approach enables our\nmodel to consider the relationships between environments and facilitates robust\ngraph invariant learning. Extensive experiments on real-world graph data have\ndemonstrated the effectiveness of our framework.
Particularly, in the\nchallenging dataset DrugOOD, our method achieves up to 1.29% and 2.83%\nimprovement over the best baselines on IC50 and EC50 prediction tasks,\nrespectively.", + "Although 3D shape matching and interpolation are highly interrelated, they\nare often studied separately and applied sequentially to relate different 3D\nshapes, thus resulting in sub-optimal performance. In this work we present a\nunified framework to predict both point-wise correspondences and shape\ninterpolation between 3D shapes. To this end, we combine the deep functional\nmap framework with classical surface deformation models to map shapes in both\nspectral and spatial domains. On the one hand, by incorporating spatial maps,\nour method obtains more accurate and smooth point-wise correspondences compared\nto previous functional map methods for shape matching. On the other hand, by\nintroducing spectral maps, our method gets rid of commonly used but\ncomputationally expensive geodesic distance constraints that are only valid for\nnear-isometric shape deformations. Furthermore, we propose a novel test-time\nadaptation scheme to capture both pose-dominant and shape-dominant\ndeformations. Using different challenging datasets, we demonstrate that our\nmethod outperforms previous state-of-the-art methods for both shape matching\nand interpolation, even compared to supervised approaches.", + "Instruction-based image editing holds immense potential for a variety of\napplications, as it enables users to perform any editing operation using a\nnatural language instruction. However, current models in this domain often\nstruggle with accurately executing user instructions. We present Emu Edit, a\nmulti-task image editing model which sets state-of-the-art results in\ninstruction-based image editing. To develop Emu Edit we train it to multi-task\nacross an unprecedented range of tasks, such as region-based editing, free-form\nediting, and Computer Vision tasks, all of which are formulated as generative\ntasks. Additionally, to enhance Emu Edit's multi-task learning abilities, we\nprovide it with learned task embeddings which guide the generation process\ntowards the correct edit type. Both these elements are essential for Emu Edit's\noutstanding performance. Furthermore, we show that Emu Edit can generalize to\nnew tasks, such as image inpainting, super-resolution, and compositions of\nediting tasks, with just a few labeled examples. This capability offers a\nsignificant advantage in scenarios where high-quality samples are scarce.", + "Furthermore, we show that Emu Edit can generalize to\nnew tasks, such as image inpainting, super-resolution, and compositions of\nediting tasks, with just a few labeled examples. This capability offers a\nsignificant advantage in scenarios where high-quality samples are scarce.\nLastly, to facilitate a more rigorous and informed assessment of instructable\nimage editing models, we release a new challenging and versatile benchmark that\nincludes seven different image editing tasks.", + "Face personalization aims to insert specific faces, taken from images, into\npretrained text-to-image diffusion models. However, it is still challenging for\nprevious methods to preserve both the identity similarity and editability due\nto overfitting to training samples. In this paper, we propose Face2Diffusion\n(F2D) for high-editability face personalization. 
The core idea behind F2D is\nthat removing identity-irrelevant information from the training pipeline\nprevents the overfitting problem and improves editability of encoded faces. F2D\nconsists of the following three novel components: 1) Multi-scale identity\nencoder provides well-disentangled identity features while keeping the benefits\nof multi-scale information, which improves the diversity of camera poses. 2)\nExpression guidance disentangles face expressions from identities and improves\nthe controllability of face expressions. 3) Class-guided denoising\nregularization encourages models to learn how faces should be denoised, which\nboosts the text-alignment of backgrounds.", + "2)\nExpression guidance disentangles face expressions from identities and improves\nthe controllability of face expressions. 3) Class-guided denoising\nregularization encourages models to learn how faces should be denoised, which\nboosts the text-alignment of backgrounds. Extensive experiments on the\nFaceForensics++ dataset and diverse prompts demonstrate our method greatly\nimproves the trade-off between the identity- and text-fidelity compared to\nprevious state-of-the-art methods.", + "Adversarial attack methods based on point manipulation for 3D point cloud\nclassification have revealed the fragility of 3D models, yet the adversarial\nexamples they produce are easily perceived or defended against. The trade-off\nbetween the imperceptibility and adversarial strength leads most point attack\nmethods to inevitably introduce easily detectable outlier points upon a\nsuccessful attack. Another promising strategy, shape-based attack, can\neffectively eliminate outliers, but existing methods often suffer significant\nreductions in imperceptibility due to irrational deformations. We find that\nconcealing deformation perturbations in areas insensitive to human eyes can\nachieve a better trade-off between imperceptibility and adversarial strength,\nspecifically in parts of the object surface that are complex and exhibit\ndrastic curvature changes. Therefore, we propose a novel shape-based\nadversarial attack method, HiT-ADV, which initially conducts a two-stage search\nfor attack regions based on saliency and imperceptibility scores, and then adds\ndeformation perturbations in each attack region using Gaussian kernel\nfunctions. Additionally, HiT-ADV is extendable to physical attack.", + "Additionally, HiT-ADV is extendable to physical attack. We propose\nthat by employing benign resampling and benign rigid transformations, we can\nfurther enhance physical adversarial strength with little sacrifice to\nimperceptibility. Extensive experiments have validated the superiority of our\nmethod in terms of adversarial and imperceptible properties in both digital and\nphysical spaces. Our code is available at: https://github.com/TRLou/HiT-ADV.", + "Multi-task learning has become increasingly popular in the machine learning\nfield, but its practicality is hindered by the need for large, labeled\ndatasets. Most multi-task learning methods depend on fully labeled datasets\nwherein each input example is accompanied by ground-truth labels for all target\ntasks. Unfortunately, curating such datasets can be prohibitively expensive and\nimpractical, especially for dense prediction tasks which require per-pixel\nlabels for each image.
With this in mind, we propose Joint-Task Regularization\n(JTR), an intuitive technique which leverages cross-task relations to\nsimultaneously regularize all tasks in a single joint-task latent space to\nimprove learning when data is not fully labeled for all tasks. JTR stands out\nfrom existing approaches in that it regularizes all tasks jointly rather than\nseparately in pairs -- therefore, it achieves linear complexity relative to the\nnumber of tasks while previous methods scale quadratically. To demonstrate the\nvalidity of our approach, we extensively benchmark our method across a wide\nvariety of partially labeled scenarios based on NYU-v2, Cityscapes, and\nTaskonomy.", + "Recently, dataset distillation has paved the way towards efficient machine\nlearning, especially for image datasets. However, the distillation for videos,\ncharacterized by an exclusive temporal dimension, remains an underexplored\ndomain. In this work, we provide the first systematic study of video\ndistillation and introduce a taxonomy to categorize temporal compression. Our\ninvestigation reveals that the temporal information is usually not well learned\nduring distillation, and the temporal dimension of synthetic data contributes\nlittle. The observations motivate our unified framework of disentangling the\ndynamic and static information in the videos. It first distills the videos into\nstill images as static memory and then compensates the dynamic and motion\ninformation with a learnable dynamic memory block. Our method achieves\nstate-of-the-art on video datasets at different scales, with a notably smaller\nmemory storage budget. Our code is available at\nhttps://github.com/yuz1wan/video_distillation.", + "Tracking by natural language specification (TNL) aims to consistently\nlocalize a target in a video sequence given a linguistic description in the\ninitial frame. Existing methodologies perform language-based and template-based\nmatching for target reasoning separately and merge the matching results from\ntwo sources, which suffer from tracking drift when language and visual\ntemplates misalign with the dynamic target state and ambiguity in the later\nmerging stage. To tackle the issues, we propose a joint multi-modal tracking\nframework with 1) a prompt modulation module to leverage the complementarity\nbetween temporal visual templates and language expressions, enabling precise\nand context-aware appearance and linguistic cues, and 2) a unified target\ndecoding module to integrate the multi-modal reference cues and execute the\nintegrated queries on the search image to predict the target location in an\nend-to-end manner directly. This design ensures spatio-temporal consistency by\nleveraging historical visual information and introduces an integrated solution,\ngenerating predictions in a single step. Extensive experiments conducted on\nTNL2K, OTB-Lang, LaSOT, and RefCOCOg validate the efficacy of our proposed\napproach. The results demonstrate competitive performance against\nstate-of-the-art methods for both tracking and grounding.", + "Denoising diffusion probabilistic models (DDPMs) employ a sequence of white\nGaussian noise samples to generate an image. In analogy with GANs, those noise\nmaps could be considered as the latent code associated with the generated\nimage. However, this native noise space does not possess a convenient\nstructure, and is thus challenging to work with in editing tasks.
Here, we\npropose an alternative latent noise space for DDPM that enables a wide range of\nediting operations via simple means, and present an inversion method for\nextracting these edit-friendly noise maps for any given image (real or\nsynthetically generated). As opposed to the native DDPM noise space, the\nedit-friendly noise maps do not have a standard normal distribution and are not\nstatistically independent across timesteps. However, they allow perfect\nreconstruction of any desired image, and simple transformations on them\ntranslate into meaningful manipulations of the output image (e.g. shifting,\ncolor edits). Moreover, in text-conditional models, fixing those noise maps\nwhile changing the text prompt, modifies semantics while retaining structure.", + "shifting,\ncolor edits). Moreover, in text-conditional models, fixing those noise maps\nwhile changing the text prompt, modifies semantics while retaining structure.\nWe illustrate how this property enables text-based editing of real images via\nthe diverse DDPM sampling scheme (in contrast to the popular non-diverse DDIM\ninversion). We also show how it can be used within existing diffusion-based\nediting methods to improve their quality and diversity. Webpage:\nhttps://inbarhub.github.io/DDPM_inversion", + "Before developing a Document Layout Analysis (DLA) model in real-world\napplications, conducting comprehensive robustness testing is essential.\nHowever, the robustness of DLA models remains underexplored in the literature.\nTo address this, we are the first to introduce a robustness benchmark for DLA\nmodels, which includes 450K document images of three datasets. To cover\nrealistic corruptions, we propose a perturbation taxonomy with 36 common\ndocument perturbations inspired by real-world document processing.\nAdditionally, to better understand document perturbation impacts, we propose\ntwo metrics, Mean Perturbation Effect (mPE) for perturbation assessment and\nMean Robustness Degradation (mRD) for robustness evaluation. Furthermore, we\nintroduce a self-titled model, i.e., Robust Document Layout Analyzer (RoDLA),\nwhich improves attention mechanisms to boost extraction of robust features.\nExperiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and\nM$^6$Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of\n115.7, 135.4, and 150.4, respectively.", + "Experiments on the proposed benchmarks (PubLayNet-P, DocLayNet-P, and\nM$^6$Doc-P) demonstrate that RoDLA obtains state-of-the-art mRD scores of\n115.7, 135.4, and 150.4, respectively. Compared to previous methods, RoDLA\nachieves notable improvements in mAP of +3.8%, +7.1% and +12.1%, respectively.", + "Large-kernel convolutional neural networks (ConvNets) have recently received\nextensive research attention, but two unresolved and critical issues demand\nfurther investigation. 1) The architectures of existing large-kernel ConvNets\nlargely follow the design principles of conventional ConvNets or transformers,\nwhile the architectural design for large-kernel ConvNets remains\nunder-addressed. 2) As transformers have dominated multiple modalities, it\nremains to be investigated whether ConvNets also have a strong universal\nperception ability in domains beyond vision. In this paper, we contribute from\ntwo aspects. 
1) We propose four architectural guidelines for designing\nlarge-kernel ConvNets, the core of which is to exploit the essential\ncharacteristics of large kernels that distinguish them from small kernels -\nthey can see wide without going deep. Following such guidelines, our proposed\nlarge-kernel ConvNet shows leading performance in image recognition (ImageNet\naccuracy of 88.0%, ADE20K mIoU of 55.6%, and COCO box AP of 56.4%),\ndemonstrating better performance and higher speed than the recent powerful\ncompetitors.", + "2) We discover large kernels are the key to unlocking the\nexceptional performance of ConvNets in domains where they were originally not\nproficient. With certain modality-related preprocessing approaches, the\nproposed model achieves state-of-the-art performance on time-series forecasting\nand audio recognition tasks even without modality-specific customization to the\narchitecture. All the code and models are publicly available on GitHub and\nHuggingface.", + "Despite their ability to generate high-resolution and diverse images from\ntext prompts, text-to-image diffusion models often suffer from slow iterative\nsampling processes. Model distillation is one of the most effective directions\nto accelerate these models. However, previous distillation methods fail to\nretain the generation quality while requiring a significant amount of images\nfor training, either from real data or synthetically generated by the teacher\nmodel. In response to this limitation, we present a novel image-free\ndistillation scheme named $\\textbf{SwiftBrush}$. Drawing inspiration from\ntext-to-3D synthesis, in which a 3D neural radiance field that aligns with the\ninput prompt can be obtained from a 2D text-to-image diffusion prior via a\nspecialized loss without the use of any 3D data ground-truth, our approach\nre-purposes that same loss for distilling a pretrained multi-step text-to-image\nmodel to a student network that can generate high-fidelity images with just a\nsingle inference step.", + "In spite of its simplicity, our model stands as one of\nthe first one-step text-to-image generators that can produce images of\ncomparable quality to Stable Diffusion without reliance on any training image\ndata. Remarkably, SwiftBrush achieves an FID score of $\\textbf{16.67}$ and a\nCLIP score of $\\textbf{0.29}$ on the COCO-30K benchmark, achieving competitive\nresults or even substantially surpassing existing state-of-the-art distillation\ntechniques.", + "The diffusion-based text-to-image model harbors immense potential in\ntransferring reference style. However, current encoder-based approaches\nsignificantly impair the text controllability of text-to-image models while\ntransferring styles. In this paper, we introduce DEADiff to address this issue\nusing the following two strategies: 1) a mechanism to decouple the style and\nsemantics of reference images. The decoupled feature representations are first\nextracted by Q-Formers which are instructed by different text descriptions.\nThen they are injected into mutually exclusive subsets of cross-attention\nlayers for better disentanglement. 2) A non-reconstructive learning method. The\nQ-Formers are trained using paired images rather than the identical target, in\nwhich the reference image and the ground-truth image are with the same style or\nsemantics. 
We show that DEADiff attains the best visual stylization results and\noptimal balance between the text controllability inherent in the text-to-image\nmodel and style similarity to the reference image, as demonstrated both\nquantitatively and qualitatively. Our project page is\nhttps://tianhao-qi.github.io/DEADiff/.", + "Category-level 6D object pose estimation aims to estimate the rotation,\ntranslation and size of unseen instances within specific categories. In this\narea, dense correspondence-based methods have achieved leading performance.\nHowever, they do not explicitly consider the local and global geometric\ninformation of different instances, resulting in poor generalization ability to\nunseen instances with significant shape variations. To deal with this problem,\nwe propose a novel Instance-Adaptive and Geometric-Aware Keypoint Learning\nmethod for category-level 6D object pose estimation (AG-Pose), which includes\ntwo key designs: (1) The first design is an Instance-Adaptive Keypoint\nDetection module, which can adaptively detect a set of sparse keypoints for\nvarious instances to represent their geometric structures. (2) The second\ndesign is a Geometric-Aware Feature Aggregation module, which can efficiently\nintegrate the local and global geometric information into keypoint features.", + "(2) The second\ndesign is a Geometric-Aware Feature Aggregation module, which can efficiently\nintegrate the local and global geometric information into keypoint features.\nThese two modules can work together to establish robust keypoint-level\ncorrespondences for unseen instances, thus enhancing the generalization ability\nof the model.Experimental results on CAMERA25 and REAL275 datasets show that\nthe proposed AG-Pose outperforms state-of-the-art methods by a large margin\nwithout category-specific shape priors.", + "Domain adaptation is a critical task in machine learning that aims to improve\nmodel performance on a target domain by leveraging knowledge from a related\nsource domain. In this work, we introduce Universal Semi-Supervised Domain\nAdaptation (UniSSDA), a practical yet challenging setting where the target\ndomain is partially labeled, and the source and target label space may not\nstrictly match. UniSSDA is at the intersection of Universal Domain Adaptation\n(UniDA) and Semi-Supervised Domain Adaptation (SSDA): the UniDA setting does\nnot allow for fine-grained categorization of target private classes not\nrepresented in the source domain, while SSDA focuses on the restricted\nclosed-set setting where source and target label spaces match exactly. Existing\nUniDA and SSDA methods are susceptible to common-class bias in UniSSDA\nsettings, where models overfit to data distributions of classes common to both\ndomains at the expense of private classes. We propose a new prior-guided\npseudo-label refinement strategy to reduce the reinforcement of common-class\nbias due to pseudo-labeling, a common label propagation strategy in domain\nadaptation.", + "We propose a new prior-guided\npseudo-label refinement strategy to reduce the reinforcement of common-class\nbias due to pseudo-labeling, a common label propagation strategy in domain\nadaptation. We demonstrate the effectiveness of the proposed strategy on\nbenchmark datasets Office-Home, DomainNet, and VisDA. 
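Because the refinement rule itself is not spelled out above, the following is a generic, hedged sketch of prior-guided pseudo-label refinement, in which pseudo-label posteriors are reweighted by an estimated target class prior to counter common-class bias; it illustrates the idea rather than the paper's exact procedure:

```python
# Generic sketch of prior-guided pseudo-label refinement (illustrative only).
# Posteriors on unlabeled target data are reweighted by a target class prior so that
# classes common to both domains do not crowd out target-private classes.
import numpy as np

def refine_pseudo_labels(probs: np.ndarray, prior: np.ndarray, tau: float = 0.9):
    """probs: (N, C) softmax outputs on unlabeled target data.
    prior: (C,) estimated target class prior (e.g. from the labeled target subset).
    Returns refined hard pseudo-labels and a confidence mask."""
    est = probs.mean(axis=0)                    # model's implied class frequency
    weights = prior / np.clip(est, 1e-8, None)  # boost under-predicted classes
    refined = probs * weights[None, :]
    refined /= refined.sum(axis=1, keepdims=True)
    labels = refined.argmax(axis=1)
    mask = refined.max(axis=1) >= tau           # keep only confident pseudo-labels
    return labels, mask

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(5), size=100)
prior = np.full(5, 0.2)                         # assumed uniform target prior
labels, mask = refine_pseudo_labels(probs, prior)
print(labels[:10], mask.mean())
```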
The proposed strategy\nattains the best performance across UniSSDA adaptation settings and establishes\na new baseline for UniSSDA.", + "We present the content deformation field CoDeF as a new type of video\nrepresentation, which consists of a canonical content field aggregating the\nstatic contents in the entire video and a temporal deformation field recording\nthe transformations from the canonical image (i.e. rendered from the canonical\ncontent field) to each individual frame along the time axis. Given a target\nvideo, these two fields are jointly optimized to reconstruct it through a\ncarefully tailored rendering pipeline. We deliberately introduce some\nregularizations into the optimization process, urging the canonical content\nfield to inherit semantics (e.g. the object shape) from the video. With such a\ndesign, CoDeF naturally supports lifting image algorithms for video processing,\nin the sense that one can apply an image algorithm to the canonical image and\neffortlessly propagate the outcomes to the entire video with the aid of the\ntemporal deformation field. We experimentally show that CoDeF is able to lift\nimage-to-image translation to video-to-video translation and lift keypoint\ndetection to keypoint tracking without any training. More importantly, thanks to\nour lifting strategy that deploys the algorithms on only one image,", + "We experimentally show that CoDeF is able to lift\nimage-to-image translation to video-to-video translation and lift keypoint\ndetection to keypoint tracking without any training. More importantly, thanks to\nour lifting strategy that deploys the algorithms on only one image, we achieve\nsuperior cross-frame consistency in processed videos compared to existing\nvideo-to-video translation approaches, and even manage to track non-rigid\nobjects like water and smog. Project page can be found at\nhttps://qiuyu96.github.io/CoDeF/.", + "Image stitching from different captures often results in non-rectangular\nboundaries, which is often considered unappealing. To solve non-rectangular\nboundaries, current solutions involve cropping, which discards image content,\ninpainting, which can introduce unrelated content, or warping, which can\ndistort non-linear features and introduce artifacts. To overcome these issues,\nwe introduce a novel diffusion-based learning framework, \\textbf{RecDiffusion},\nfor image stitching rectangling. This framework combines Motion Diffusion\nModels (MDM) to generate motion fields, effectively transitioning from the\nstitched image's irregular borders to a geometrically corrected intermediary.\nThis is followed by Content Diffusion Models (CDM) for image detail refinement.\nNotably, our sampling process utilizes a weighted map to identify regions\nneeding correction during each iteration of CDM. Our RecDiffusion ensures\ngeometric accuracy and overall visual appeal, surpassing all previous methods\nin both quantitative and qualitative measures when evaluated on public\nbenchmarks. Code is released at https://github.com/lhaippp/RecDiffusion.", + "Decomposing an object's appearance into representations of its materials and\nthe surrounding illumination is difficult, even when the object's 3D shape is\nknown beforehand. This problem is especially challenging for diffuse objects:\nit is ill-conditioned because diffuse materials severely blur incoming light,\nand it is ill-posed because diffuse materials under high-frequency lighting can\nbe indistinguishable from shiny materials under low-frequency lighting.
We show\nthat it is possible to recover precise materials and illumination -- even from\ndiffuse objects -- by exploiting unintended shadows, like the ones cast onto an\nobject by the photographer who moves around it. These shadows are a nuisance in\nmost previous inverse rendering pipelines, but here we exploit them as signals\nthat improve conditioning and help resolve material-lighting ambiguities. We\npresent a method based on differentiable Monte Carlo ray tracing that uses\nimages of an object to jointly recover its spatially-varying materials, the\nsurrounding illumination environment, and the shapes of the unseen light\noccluders who inadvertently cast shadows upon it.", + "3D scene representations have gained immense popularity in recent years.\nMethods that use Neural Radiance fields are versatile for traditional tasks\nsuch as novel view synthesis. In recent times, some work has emerged that aims\nto extend the functionality of NeRF beyond view synthesis, for semantically\naware tasks such as editing and segmentation using 3D feature field\ndistillation from 2D foundation models. However, these methods have two major\nlimitations: (a) they are limited by the rendering speed of NeRF pipelines, and\n(b) implicitly represented feature fields suffer from continuity artifacts\nreducing feature quality. Recently, 3D Gaussian Splatting has shown\nstate-of-the-art performance on real-time radiance field rendering. In this\nwork, we go one step further: in addition to radiance field rendering, we\nenable 3D Gaussian splatting on arbitrary-dimension semantic features via 2D\nfoundation model distillation. This translation is not straightforward: naively\nincorporating feature fields in the 3DGS framework encounters significant\nchallenges, notably the disparities in spatial resolution and channel\nconsistency between RGB images and feature maps.", + "This translation is not straightforward: naively\nincorporating feature fields in the 3DGS framework encounters significant\nchallenges, notably the disparities in spatial resolution and channel\nconsistency between RGB images and feature maps. We propose architectural and\ntraining changes to efficiently avert this problem. Our proposed method is\ngeneral, and our experiments showcase novel view semantic segmentation,\nlanguage-guided editing and segment anything through learning feature fields\nfrom state-of-the-art 2D foundation models such as SAM and CLIP-LSeg. Across\nexperiments, our distillation method is able to provide comparable or better\nresults, while being significantly faster to both train and render.\nAdditionally, to the best of our knowledge, we are the first method to enable\npoint and bounding-box prompting for radiance field manipulation, by leveraging\nthe SAM model. Project website at: https://feature-3dgs.github.io/", + "Diffusion Models (DMs) have emerged as powerful generative models with\nunprecedented image generation capability. These models are widely used for\ndata augmentation and creative applications. However, DMs reflect the biases\npresent in the training datasets. This is especially concerning in the context\nof faces, where the DM prefers one demographic subgroup vs others (eg. female\nvs male). In this work, we present a method for debiasing DMs without relying\non additional data or model retraining. Specifically, we propose Distribution\nGuidance, which enforces the generated images to follow the prescribed\nattribute distribution. 
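As a generic illustration of what enforcing a prescribed attribute distribution during sampling can look like (a classifier-guidance-style sketch with hypothetical names and dimensions; the specific mechanism is described next), consider:

```python
# Illustrative sketch (not the paper's exact mechanism): steer a batch of latents so that a
# differentiable attribute predictor's average prediction matches a prescribed attribute
# distribution, via a KL-based guidance gradient.
import torch
import torch.nn.functional as F

attr_predictor = torch.nn.Sequential(  # stand-in for a predictor on latent features
    torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))

def distribution_guidance_grad(latents, target_dist, scale=1.0):
    latents = latents.detach().requires_grad_(True)
    probs = F.softmax(attr_predictor(latents), dim=-1)  # (B, 2), e.g. two demographic groups
    batch_dist = probs.mean(dim=0)                      # empirical batch distribution
    loss = F.kl_div(batch_dist.log(), target_dist, reduction="sum")
    loss.backward()
    return -scale * latents.grad                        # direction that reduces the KL

latents = torch.randn(8, 32)
target = torch.tensor([0.5, 0.5])                       # prescribed 50/50 attribute split
print(distribution_guidance_grad(latents, target).shape)
```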
To realize this, we build on the key insight that the\nlatent features of denoising UNet hold rich demographic semantics, and the same\ncan be leveraged to guide debiased generation. We train Attribute Distribution\nPredictor (ADP) - a small mlp that maps the latent features to the distribution\nof attributes. ADP is trained with pseudo labels generated from existing\nattribute classifiers. The proposed Distribution Guidance with ADP enables us\nto do fair generation.", + "We train Attribute Distribution\nPredictor (ADP) - a small mlp that maps the latent features to the distribution\nof attributes. ADP is trained with pseudo labels generated from existing\nattribute classifiers. The proposed Distribution Guidance with ADP enables us\nto do fair generation. Our method reduces bias across single/multiple\nattributes and outperforms the baseline by a significant margin for\nunconditional and text-conditional diffusion models. Further, we present a\ndownstream task of training a fair attribute classifier by rebalancing the\ntraining set with our generated data.", + "This paper targets high-fidelity and real-time view synthesis of dynamic 3D\nscenes at 4K resolution. Recently, some methods on dynamic view synthesis have\nshown impressive rendering quality. However, their speed is still limited when\nrendering high-resolution images. To overcome this problem, we propose 4K4D, a\n4D point cloud representation that supports hardware rasterization and enables\nunprecedented rendering speed. Our representation is built on a 4D feature grid\nso that the points are naturally regularized and can be robustly optimized. In\naddition, we design a novel hybrid appearance model that significantly boosts\nthe rendering quality while preserving efficiency. Moreover, we develop a\ndifferentiable depth peeling algorithm to effectively learn the proposed model\nfrom RGB videos. Experiments show that our representation can be rendered at\nover 400 FPS on the DNA-Rendering dataset at 1080p resolution and 80 FPS on the\nENeRF-Outdoor dataset at 4K resolution using an RTX 4090 GPU, which is 30x\nfaster than previous methods and achieves the state-of-the-art rendering\nquality. Our project page is available at https://zju3dv.github.io/4k4d/.", + "Existing person re-identification methods have achieved remarkable advances\nin appearance-based identity association across homogeneous cameras, such as\nground-ground matching. However, as a more practical scenario, aerial-ground\nperson re-identification (AGPReID) among heterogeneous cameras has received\nminimal attention. To alleviate the disruption of discriminative identity\nrepresentation by dramatic view discrepancy as the most significant challenge\nin AGPReID, the view-decoupled transformer (VDT) is proposed as a simple yet\neffective framework. Two major components are designed in VDT to decouple\nview-related and view-unrelated features, namely hierarchical subtractive\nseparation and orthogonal loss, where the former separates these two features\ninside the VDT, and the latter constrains these two to be independent. 
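One standard way to express such an independence constraint is an orthogonality penalty on the two feature sets; the sketch below assumes that formulation, which may differ from VDT's exact loss:

```python
# A minimal sketch of an orthogonality constraint between view-related and view-unrelated
# feature batches (one common formulation; the exact VDT loss may differ).
import torch

def orthogonal_loss(f_view: torch.Tensor, f_id: torch.Tensor) -> torch.Tensor:
    """f_view, f_id: (B, D) feature batches. Penalizes the squared Frobenius norm of their
    cross-correlation so the two subspaces stay independent."""
    f_view = torch.nn.functional.normalize(f_view, dim=1)
    f_id = torch.nn.functional.normalize(f_id, dim=1)
    cross = f_view.t() @ f_id / f_view.shape[0]   # (D, D) cross-correlation
    return (cross ** 2).sum()

print(float(orthogonal_loss(torch.randn(16, 128), torch.randn(16, 128))))
```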
In\naddition, we contribute a large-scale AGPReID dataset called CARGO, consisting\nof five/eight aerial/ground cameras, 5,000 identities, and 108,563 images.", + "In\naddition, we contribute a large-scale AGPReID dataset called CARGO, consisting\nof five/eight aerial/ground cameras, 5,000 identities, and 108,563 images.\nExperiments on two datasets show that VDT is a feasible and effective solution\nfor AGPReID, surpassing the previous method on mAP/Rank1 by up to 5.0%/2.7% on\nCARGO and 3.7%/5.2% on AG-ReID, keeping the same magnitude of computational\ncomplexity. Our project is available at https://github.com/LinlyAC/VDT-AGPReID", + "In the field of 3D object detection for autonomous driving, LiDAR-Camera (LC)\nfusion is the top-performing sensor configuration. Still, LiDAR is relatively\nhigh cost, which hinders adoption of this technology for consumer automobiles.\nAlternatively, camera and radar are commonly deployed on vehicles already on\nthe road today, but performance of Camera-Radar (CR) fusion falls behind LC\nfusion. In this work, we propose Camera-Radar Knowledge Distillation (CRKD) to\nbridge the performance gap between LC and CR detectors with a novel\ncross-modality KD framework. We use the Bird's-Eye-View (BEV) representation as\nthe shared feature space to enable effective knowledge distillation. To\naccommodate the unique cross-modality KD path, we propose four distillation\nlosses to help the student learn crucial features from the teacher model. We\npresent extensive evaluations on the nuScenes dataset to demonstrate the\neffectiveness of the proposed CRKD framework. The project page for CRKD is\nhttps://song-jingyu.github.io/CRKD.", + "We present differentiable point-based inverse rendering, DPIR, an\nanalysis-by-synthesis method that processes images captured under diverse\nilluminations to estimate shape and spatially-varying BRDF. To this end, we\nadopt point-based rendering, eliminating the need for multiple samplings per\nray, typical of volumetric rendering, thus significantly enhancing the speed of\ninverse rendering. To realize this idea, we devise a hybrid point-volumetric\nrepresentation for geometry and a regularized basis-BRDF representation for\nreflectance. The hybrid geometric representation enables fast rendering through\npoint-based splatting while retaining the geometric details and stability\ninherent to SDF-based representations. The regularized basis-BRDF mitigates the\nill-posedness of inverse rendering stemming from limited light-view angular\nsamples. We also propose an efficient shadow detection method using point-based\nshadow map rendering. Our extensive evaluations demonstrate that DPIR\noutperforms prior works in terms of reconstruction accuracy, computational\nefficiency, and memory footprint. Furthermore, our explicit point-based\nrepresentation and rendering enables intuitive geometry and reflectance\nediting.", + "Dynamic Scene Graph Generation (DSGG) focuses on identifying visual\nrelationships within the spatial-temporal domain of videos. Conventional\napproaches often employ multi-stage pipelines, which typically consist of\nobject detection, temporal association, and multi-relation classification.\nHowever, these methods exhibit inherent limitations due to the separation of\nmultiple stages, and independent optimization of these sub-problems may yield\nsub-optimal solutions. To remedy these limitations, we propose a one-stage\nend-to-end framework, termed OED, which streamlines the DSGG pipeline. 
This\nframework reformulates the task as a set prediction problem and leverages\npair-wise features to represent each subject-object pair within the scene\ngraph. Moreover, to address another challenge of DSGG, capturing temporal dependencies,\nwe introduce a Progressively Refined Module (PRM) for aggregating temporal\ncontext without the constraints of additional trackers or handcrafted\ntrajectories, enabling end-to-end optimization of the network. Extensive\nexperiments conducted on the Action Genome benchmark demonstrate the\neffectiveness of our design. The code and models are available at\n\\url{https://github.com/guanw-pku/OED}.", + "Recent success of pre-trained foundation vision-language models makes\nOpen-Vocabulary Segmentation (OVS) possible. Despite the promising performance,\nthis approach introduces heavy computational overheads stemming from two challenges: 1)\nthe large model size of the backbone; 2) the expensive cost of fine-tuning.\nThese challenges hinder this OVS strategy from being widely applicable and\naffordable in real-world scenarios. Although traditional methods such as model\ncompression and efficient fine-tuning can address these challenges, they often\nrely on heuristics. This means that their solutions cannot be easily\ntransferred and necessitate re-training on different models, which comes at a\ncost. In the context of efficient OVS, we target achieving performance that is\ncomparable to or even better than prior OVS works based on large\nvision-language foundation models, by utilizing smaller models that incur lower\ntraining costs. The core strategy is to make our efficiency principled and thus\nseamlessly transferable from one OVS framework to others without further\ncustomization. Comprehensive experiments on diverse OVS benchmarks demonstrate\nour superior trade-off between segmentation accuracy and computation costs over\nprevious works. Our code is available at https://github.com/Xujxyang/OpenTrans", + "Canonical emotions, such as happy, sad, and fearful, are easy to understand\nand annotate. However, emotions are often compound, e.g. happily surprised, and\ncan be mapped to the action units (AUs) used for expressing emotions, and\ntrivially to the canonical ones. Intuitively, emotions are continuous as\nrepresented by the arousal-valence (AV) model. An interpretable unification of\nthese four modalities - namely, Canonical, Compound, AUs, and AV - is highly\ndesirable for a better representation and understanding of emotions. However,\nsuch unification remains unknown in the current literature. In this work,\nwe propose an interpretable and unified emotion model, referred to as C2A2. We\nalso develop a method that leverages labels of the non-unified models to\nannotate the novel unified one. Finally, we modify the text-conditional\ndiffusion models to understand continuous numbers, which are then used to\ngenerate continuous expressions using our unified emotion model. Through\nquantitative and qualitative experiments, we show that our generated images are\nrich and capture subtle expressions.", + "Finally, we modify the text-conditional\ndiffusion models to understand continuous numbers, which are then used to\ngenerate continuous expressions using our unified emotion model. Through\nquantitative and qualitative experiments, we show that our generated images are\nrich and capture subtle expressions.
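How a text-conditional model might be made to "understand continuous numbers" such as arousal-valence values is not specified above; one simple, hypothetical realization is to embed the two scalars with a small MLP and append the result to the text token sequence:

```python
# Hypothetical sketch: injecting continuous arousal-valence (AV) values into a text-conditional
# model by embedding them with a small MLP and appending the result to the text token sequence.
# Dimensions and design are assumptions, not C2A2's specification.
import torch
import torch.nn as nn

class AVConditioner(nn.Module):
    def __init__(self, txt_dim=768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2, 256), nn.SiLU(), nn.Linear(256, txt_dim))

    def forward(self, text_tokens, av):        # text_tokens: (B, T, D), av: (B, 2) in [-1, 1]
        av_token = self.mlp(av).unsqueeze(1)   # (B, 1, D): one extra "numeric" token
        return torch.cat([text_tokens, av_token], dim=1)

cond = AVConditioner()(torch.randn(2, 77, 768), torch.tensor([[0.3, -0.7], [0.9, 0.2]]))
print(cond.shape)  # torch.Size([2, 78, 768])
```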
Our work allows a fine-grained generation\nof expressions in conjunction with other textual inputs and offers a new label\nspace for emotions at the same time.", + "Low-shot image classification, where training images are limited or\ninaccessible, has benefited from recent progress on pre-trained vision-language\n(VL) models with strong generalizability, e.g. CLIP. Prompt learning methods\nbuilt with VL models generate text features from the class names that only have\nconfined class-specific information. Large Language Models (LLMs), with their\nvast encyclopedic knowledge, emerge as the complement. Thus, in this paper, we\ndiscuss the integration of LLMs to enhance pre-trained VL models, specifically\non low-shot classification. However, the domain gap between language and vision\nblocks the direct application of LLMs. Thus, we propose LLaMP, Large Language\nModels as Prompt learners, that produces adaptive prompts for the CLIP text\nencoder, establishing it as the connecting bridge. Experiments show that,\ncompared with other state-of-the-art prompt learning methods, LLaMP yields\nbetter performance on both zero-shot generalization and few-shot image\nclassification, over a spectrum of 11 datasets. Code will be made available at:\nhttps://github.com/zhaohengz/LLaMP.", + "We present a new additive image factorization technique that treats images to\nbe composed of multiple latent specular components which can be simply\nestimated recursively by modulating the sparsity during decomposition. Our\nmodel-driven {\\em RSFNet} estimates these factors by unrolling the optimization\ninto network layers requiring only a few scalars to be learned. The resultant\nfactors are interpretable by design and can be fused for different image\nenhancement tasks via a network or combined directly by the user in a\ncontrollable fashion. Based on RSFNet, we detail a zero-reference Low Light\nEnhancement (LLE) application trained without paired or unpaired supervision.\nOur system improves the state-of-the-art performance on standard benchmarks and\nachieves better generalization on multiple other datasets. We also integrate\nour factors with other task specific fusion networks for applications like\nderaining, deblurring and dehazing with negligible overhead thereby\nhighlighting the multi-domain and multi-task generalizability of our proposed\nRSFNet. The code and data is released for reproducibility on the project\nhomepage.", + "This paper presents Paint3D, a novel coarse-to-fine generative framework that\nis capable of producing high-resolution, lighting-less, and diverse 2K UV\ntexture maps for untextured 3D meshes conditioned on text or image inputs. The\nkey challenge addressed is generating high-quality textures without embedded\nillumination information, which allows the textures to be re-lighted or\nre-edited within modern graphics pipelines. To achieve this, our method first\nleverages a pre-trained depth-aware 2D diffusion model to generate\nview-conditional images and perform multi-view texture fusion, producing an\ninitial coarse texture map. However, as 2D models cannot fully represent 3D\nshapes and disable lighting effects, the coarse texture map exhibits incomplete\nareas and illumination artifacts. To resolve this, we train separate UV\nInpainting and UVHD diffusion models specialized for the shape-aware refinement\nof incomplete areas and the removal of illumination artifacts. 
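The coarse stage just described can be summarized structurally as follows; every helper is a stub standing in for the real rasterizer and diffusion model, so this sketches the data flow only, not Paint3D's implementation:

```python
# Structural sketch of a coarse texturing stage: render depth from several views, run a
# depth-conditioned image generator per view, and fuse the results into a UV texture.
# All helpers below are stubs standing in for the real components.
import numpy as np

def render_depth(mesh, view):                    # stub: would rasterize the mesh
    return np.zeros((512, 512), dtype=np.float32)

def depth_conditioned_generate(prompt, depth):   # stub: would call a 2D diffusion model
    return np.random.rand(512, 512, 3).astype(np.float32)

def backproject_to_uv(mesh, view, image, uv_size):  # stub: image space -> UV space
    tex = np.zeros((uv_size, uv_size, 3), np.float32)
    wgt = np.zeros((uv_size, uv_size, 1), np.float32)
    return tex, wgt

def coarse_texture(mesh, prompt, views, uv_size=2048):
    acc = np.zeros((uv_size, uv_size, 3), np.float32)
    wsum = np.zeros((uv_size, uv_size, 1), np.float32)
    for view in views:
        depth = render_depth(mesh, view)
        rgb = depth_conditioned_generate(prompt, depth)
        tex, wgt = backproject_to_uv(mesh, view, rgb, uv_size)
        acc += tex * wgt                          # weighted multi-view fusion
        wsum += wgt
    return acc / np.clip(wsum, 1e-6, None)        # incomplete areas stay empty -> refined later

uv = coarse_texture(mesh=None, prompt="weathered bronze statue", views=range(6), uv_size=512)
print(uv.shape)
```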
Through this\ncoarse-to-fine process, Paint3D can produce high-quality 2K UV textures that\nmaintain semantic consistency while being lighting-less, significantly\nadvancing the state-of-the-art in texturing 3D objects.", + "Visual language models (VLMs) rapidly progressed with the recent success of\nlarge language models. There have been growing efforts on visual instruction\ntuning to extend the LLM with visual inputs, but lacks an in-depth study of the\nvisual language pre-training process, where the model learns to perform joint\nmodeling on both modalities. In this work, we examine the design options for\nVLM pre-training by augmenting LLM towards VLM through step-by-step\ncontrollable comparisons. We introduce three main findings: (1) freezing LLMs\nduring pre-training can achieve decent zero-shot performance, but lack\nin-context learning capability, which requires unfreezing the LLM; (2)\ninterleaved pre-training data is beneficial whereas image-text pairs alone are\nnot optimal; (3) re-blending text-only instruction data to image-text data\nduring instruction fine-tuning not only remedies the degradation of text-only\ntasks, but also boosts VLM task accuracy.", + "With an enhanced pre-training recipe\nwe build VILA, a Visual Language model family that consistently outperforms the\nstate-of-the-art models, e.g., LLaVA-1.5, across main benchmarks without bells\nand whistles. Multi-modal pre-training also helps unveil appealing properties\nof VILA, including multi-image reasoning, enhanced in-context learning, and\nbetter world knowledge.", + "We demonstrate text as a strong cross-modal interface. Rather than relying on\ndeep embeddings to connect image and language as the interface representation,\nour approach represents an image as text, from which we enjoy the\ninterpretability and flexibility inherent to natural language. We employ an\nautoencoder that uses a pre-trained text-to-image diffusion model for decoding.\nThe encoder is trained to transform an input image into text, which is then fed\ninto the fixed text-to-image diffusion decoder to reconstruct the original\ninput -- a process we term De-Diffusion. Experiments validate both the\nprecision and comprehensiveness of De-Diffusion text representing images, such\nthat it can be readily ingested by off-the-shelf text-to-image tools and LLMs\nfor diverse multi-modal tasks. For example, a single De-Diffusion model can\ngeneralize to provide transferable prompts for different text-to-image tools,\nand also achieves a new state of the art on open-ended vision-language tasks by\nsimply prompting large language models with few-shot examples.", + "The most performant spatio-temporal action localisation models use external\nperson proposals and complex external memory banks. We propose a fully\nend-to-end, purely-transformer based model that directly ingests an input\nvideo, and outputs tubelets -- a sequence of bounding boxes and the action\nclasses at each frame. Our flexible model can be trained with either sparse\nbounding-box supervision on individual frames, or full tubelet annotations. And\nin both cases, it predicts coherent tubelets as the output. Moreover, our\nend-to-end model requires no additional pre-processing in the form of\nproposals, or post-processing in terms of non-maximal suppression. 
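To make the notion of a tubelet concrete, here is a small sketch of a per-frame box sequence with an action label, together with an assumed utility that densifies sparse keyframe boxes by linear interpolation (illustrative only, not the paper's training code):

```python
# Sketch of a tubelet as a dense per-frame sequence of boxes plus an action label, and a helper
# that fills in frames between sparsely annotated keyframes by linear interpolation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Tubelet:
    boxes: np.ndarray        # (T, 4) per-frame [x1, y1, x2, y2]
    action: int              # action class id
    start_frame: int

def interpolate_keyframes(keyframes: dict) -> np.ndarray:
    """keyframes: {frame_index: box of shape (4,)} with at least two entries."""
    frames = sorted(keyframes)
    ts = np.arange(frames[0], frames[-1] + 1)
    kf_t = np.array(frames, dtype=float)
    kf_b = np.stack([keyframes[f] for f in frames]).astype(float)            # (K, 4)
    return np.stack([np.interp(ts, kf_t, kf_b[:, i]) for i in range(4)], axis=1)  # (T, 4)

boxes = interpolate_keyframes({0: np.array([10, 10, 50, 80]), 8: np.array([30, 12, 70, 84])})
tube = Tubelet(boxes=boxes, action=3, start_frame=0)
print(tube.boxes.shape)  # (9, 4)
```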
We perform\nextensive ablation experiments, and significantly advance the state-of-the-art\nresults on four different spatio-temporal action localisation benchmarks with\nboth sparse keyframes and full tubelet annotations.", + "We propose a text-guided variational image generation method to address the\nchallenge of getting clean data for anomaly detection in industrial\nmanufacturing. Our method utilizes text information about the target object,\nlearned from extensive text library documents, to generate non-defective data\nimages resembling the input image. The proposed framework ensures that the\ngenerated non-defective images align with anticipated distributions derived\nfrom textual and image-based knowledge, ensuring stability and generality.\nExperimental results demonstrate the effectiveness of our approach, surpassing\nprevious methods even with limited non-defective data. Our approach is\nvalidated through generalization tests across four baseline models and three\ndistinct datasets. We present an additional analysis to enhance the\neffectiveness of anomaly detection models by utilizing the generated images.", + "Artifact-free super-resolution (SR) aims to translate low-resolution images\ninto their high-resolution counterparts with a strict integrity of the original\ncontent, eliminating any distortions or synthetic details. While traditional\ndiffusion-based SR techniques have demonstrated remarkable abilities to enhance\nimage detail, they are prone to artifact introduction during iterative\nprocedures. Such artifacts, ranging from trivial noise to unauthentic textures,\ndeviate from the true structure of the source image, thus challenging the\nintegrity of the super-resolution process. In this work, we propose\nSelf-Adaptive Reality-Guided Diffusion (SARGD), a training-free method that\ndelves into the latent space to effectively identify and mitigate the\npropagation of artifacts. Our SARGD begins by using an artifact detector to\nidentify implausible pixels, creating a binary mask that highlights artifacts.\nFollowing this, the Reality Guidance Refinement (RGR) process refines artifacts\nby integrating this mask with realistic latent representations, improving\nalignment with the original image. Nonetheless, initial realistic-latent\nrepresentations from lower-quality images result in over-smoothing in the final\noutput.", + "Following this, the Reality Guidance Refinement (RGR) process refines artifacts\nby integrating this mask with realistic latent representations, improving\nalignment with the original image. Nonetheless, initial realistic-latent\nrepresentations from lower-quality images result in over-smoothing in the final\noutput. To address this, we introduce a Self-Adaptive Guidance (SAG) mechanism.\nIt dynamically computes a reality score, enhancing the sharpness of the\nrealistic latent. These alternating mechanisms collectively achieve\nartifact-free super-resolution. Extensive experiments demonstrate the\nsuperiority of our method, delivering detailed artifact-free high-resolution\nimages while reducing sampling steps by 2X. We release our code at\nhttps://github.com/ProAirVerse/Self-Adaptive-Guidance-Diffusion.git.", + "Recently, temporal action detection (TAD) has seen significant performance\nimprovement with end-to-end training. However, due to the memory bottleneck,\nonly models with limited scales and limited data volumes can afford end-to-end\ntraining, which inevitably restricts TAD performance. 
In this paper, we reduce\nthe memory consumption for end-to-end training, and manage to scale up the TAD\nbackbone to 1 billion parameters and the input video to 1,536 frames, leading\nto significant detection performance. The key to our approach lies in our\nproposed temporal-informative adapter (TIA), which is a novel lightweight\nmodule that reduces training memory. Using TIA, we free the humongous backbone\nfrom learning to adapt to the TAD task by only updating the parameters in TIA.\nTIA also leads to better TAD representation by temporally aggregating context\nfrom adjacent frames throughout the backbone. We evaluate our model across four\nrepresentative datasets.", + "TIA also leads to better TAD representation by temporally aggregating context\nfrom adjacent frames throughout the backbone. We evaluate our model across four\nrepresentative datasets. Owing to our efficient design, we are able to train\nend-to-end on VideoMAEv2-giant and achieve 75.4% mAP on THUMOS14, being the\nfirst end-to-end model to outperform the best feature-based methods. Code is\navailable at https://github.com/sming256/AdaTAD.", + "Multimodal learning, which integrates data from diverse sensory modes, plays\na pivotal role in artificial intelligence. However, existing multimodal\nlearning methods often struggle with challenges where some modalities appear\nmore dominant than others during multimodal learning, resulting in suboptimal\nperformance. To address this challenge, we propose MLA (Multimodal Learning\nwith Alternating Unimodal Adaptation). MLA reframes the conventional joint\nmultimodal learning process by transforming it into an alternating unimodal\nlearning process, thereby minimizing interference between modalities.\nSimultaneously, it captures cross-modal interactions through a shared head,\nwhich undergoes continuous optimization across different modalities. This\noptimization process is controlled by a gradient modification mechanism to\nprevent the shared head from losing previously acquired information. During the\ninference phase, MLA utilizes a test-time uncertainty-based model fusion\nmechanism to integrate multimodal information. Extensive experiments are\nconducted on five diverse datasets, encompassing scenarios with complete\nmodalities and scenarios with missing modalities. These experiments demonstrate\nthe superiority of MLA over competing prior approaches.", + "Extensive experiments are\nconducted on five diverse datasets, encompassing scenarios with complete\nmodalities and scenarios with missing modalities. These experiments demonstrate\nthe superiority of MLA over competing prior approaches. Our code is available\nat\nhttps://github.com/Cecile-hi/Multimodal-Learning-with-Alternating-Unimodal-Adaptation.", + "Producing quality segmentation masks for images is a fundamental problem in\ncomputer vision. Recent research has explored large-scale supervised training\nto enable zero-shot segmentation on virtually any image style and unsupervised\ntraining to enable segmentation without dense annotations. However,\nconstructing a model capable of segmenting anything in a zero-shot manner\nwithout any annotations is still challenging. In this paper, we propose to\nutilize the self-attention layers in stable diffusion models to achieve this\ngoal because the pre-trained stable diffusion model has learned inherent\nconcepts of objects within its attention layers. 
Specifically, we introduce a\nsimple yet effective iterative merging process based on measuring KL divergence\namong attention maps to merge them into valid segmentation masks. The proposed\nmethod does not require any training or language dependency to extract quality\nsegmentation for any images. On COCO-Stuff-27, our method surpasses the prior\nunsupervised zero-shot SOTA method by an absolute 26% in pixel accuracy and 17%\nin mean IoU. The project page is at\n\\url{https://sites.google.com/view/diffseg/home}.", + "Due to the depth degradation effect in residual connections, many efficient\nVision Transformers models that rely on stacking layers for information\nexchange often fail to form sufficient information mixing, leading to unnatural\nvisual perception. To address this issue, in this paper, we propose Aggregated\nAttention, a biomimetic design-based token mixer that simulates biological\nfoveal vision and continuous eye movement while enabling each token on the\nfeature map to have a global perception. Furthermore, we incorporate learnable\ntokens that interact with conventional queries and keys, which further\ndiversifies the generation of affinity matrices beyond merely relying on the\nsimilarity between queries and keys. Our approach does not rely on stacking for\ninformation exchange, thus effectively avoiding depth degradation and achieving\nnatural visual perception. Additionally, we propose Convolutional GLU, a\nchannel mixer that bridges the gap between GLU and SE mechanism, which empowers\neach token to have channel attention based on its nearest neighbor image\nfeatures, enhancing local modeling capability and model robustness. We combine\naggregated attention and convolutional GLU to create a new visual backbone\ncalled TransNeXt.", + "We combine\naggregated attention and convolutional GLU to create a new visual backbone\ncalled TransNeXt. Extensive experiments demonstrate that our TransNeXt achieves\nstate-of-the-art performance across multiple model sizes. At a resolution of\n$224^2$, TransNeXt-Tiny attains an ImageNet accuracy of 84.0%, surpassing\nConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet\naccuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of\n$384^2$, a COCO object detection mAP of 57.1, and an ADE20K semantic\nsegmentation mIoU of 54.7.", + "Visible-Infrared Person Re-identification (VI-ReID) is a challenging\ncross-modal pedestrian retrieval task, due to significant intra-class\nvariations and cross-modal discrepancies among different cameras. Existing\nworks mainly focus on embedding images of different modalities into a unified\nspace to mine modality-shared features. They only seek distinctive information\nwithin these shared features, while ignoring the identity-aware useful\ninformation that is implicit in the modality-specific features. To address this\nissue, we propose a novel Implicit Discriminative Knowledge Learning (IDKL)\nnetwork to uncover and leverage the implicit discriminative information\ncontained within the modality-specific. First, we extract modality-specific and\nmodality-shared features using a novel dual-stream network. Then, the\nmodality-specific features undergo purification to reduce their modality style\ndiscrepancies while preserving identity-aware discriminative knowledge.\nSubsequently, this kind of implicit knowledge is distilled into the\nmodality-shared feature to enhance its distinctiveness. 
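A common way to realize such a distillation step is a temperature-scaled KL divergence from the purified modality-specific branch to the modality-shared branch; the sketch below assumes that form, which may differ from IDKL's exact objective:

```python
# Hedged sketch: distilling knowledge from purified modality-specific features into the
# modality-shared branch via KL divergence between their classifier logits (a common
# distillation form; the exact IDKL objective may differ).
import torch
import torch.nn.functional as F

def distill_specific_to_shared(shared_logits, specific_logits, T: float = 4.0):
    p_teacher = F.softmax(specific_logits.detach() / T, dim=1)  # purified specific branch as teacher
    log_p_student = F.log_softmax(shared_logits / T, dim=1)     # shared branch as student
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

print(float(distill_specific_to_shared(torch.randn(8, 395), torch.randn(8, 395))))
```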
Finally, an alignment\nloss is proposed to minimize modality discrepancy on enhanced modality-shared\nfeatures. Extensive experiments on multiple public datasets demonstrate the\nsuperiority of IDKL network over the state-of-the-art methods.", + "Finally, an alignment\nloss is proposed to minimize modality discrepancy on enhanced modality-shared\nfeatures. Extensive experiments on multiple public datasets demonstrate the\nsuperiority of IDKL network over the state-of-the-art methods. Code is\navailable at https://github.com/1KK077/IDKL.", + "Integrating whole-slide images (WSIs) and bulk transcriptomics for predicting\npatient survival can improve our understanding of patient prognosis. However,\nthis multimodal task is particularly challenging due to the different nature of\nthese data: WSIs represent a very high-dimensional spatial description of a\ntumor, while bulk transcriptomics represent a global description of gene\nexpression levels within that tumor. In this context, our work aims to address\ntwo key challenges: (1) how can we tokenize transcriptomics in a semantically\nmeaningful and interpretable way?, and (2) how can we capture dense multimodal\ninteractions between these two modalities? Specifically, we propose to learn\nbiological pathway tokens from transcriptomics that can encode specific\ncellular functions. Together with histology patch tokens that encode the\ndifferent morphological patterns in the WSI, we argue that they form\nappropriate reasoning units for downstream interpretability analyses. We\npropose fusing both modalities using a memory-efficient multimodal Transformer\nthat can model interactions between pathway and histology patch tokens.", + "We\npropose fusing both modalities using a memory-efficient multimodal Transformer\nthat can model interactions between pathway and histology patch tokens. Our\nproposed model, SURVPATH, achieves state-of-the-art performance when evaluated\nagainst both unimodal and multimodal baselines on five datasets from The Cancer\nGenome Atlas. Our interpretability framework identifies key multimodal\nprognostic factors, and, as such, can provide valuable insights into the\ninteraction between genotype and phenotype, enabling a deeper understanding of\nthe underlying biological mechanisms at play. We make our code public at:\nhttps://github.com/ajv012/SurvPath.", + "This paper focuses on self-supervised monocular depth estimation in dynamic\nscenes trained on monocular videos. Existing methods jointly estimate\npixel-wise depth and motion, relying mainly on an image reconstruction loss.\nDynamic regions1 remain a critical challenge for these methods due to the\ninherent ambiguity in depth and motion estimation, resulting in inaccurate\ndepth estimation. This paper proposes a self-supervised training framework\nexploiting pseudo depth labels for dynamic regions from training data. The key\ncontribution of our framework is to decouple depth estimation for static and\ndynamic regions of images in the training data. We start with an unsupervised\ndepth estimation approach, which provides reliable depth estimates for static\nregions and motion cues for dynamic regions and allows us to extract moving\nobject information at the instance level. In the next stage, we use an object\nnetwork to estimate the depth of those moving objects assuming rigid motions.\nThen, we propose a new scale alignment module to address the scale ambiguity\nbetween estimated depths for static and dynamic regions. 
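The scale alignment module above is learned; as a hedged illustration of the ambiguity it resolves, a classical median-ratio alignment between the two depth estimates looks like this:

```python
# Illustrative scale alignment (one standard choice: median-ratio scaling). The paper's scale
# alignment module is learned; this only shows the ambiguity being resolved.
import numpy as np

def align_scale(dyn_depth, static_depth, overlap_mask):
    """Rescale the dynamic-object depth so it is consistent with the static-region depth, using
    pixels (e.g. around the object boundary) where both estimates are trusted."""
    ratio = np.median(static_depth[overlap_mask] / np.clip(dyn_depth[overlap_mask], 1e-6, None))
    return dyn_depth * ratio

rng = np.random.default_rng(1)
static = rng.uniform(2.0, 10.0, size=(64, 64))
dynamic = static / 3.0 + rng.normal(0, 0.05, size=(64, 64))   # same scene, wrong scale
mask = np.zeros((64, 64), dtype=bool); mask[20:30, 20:30] = True
aligned = align_scale(dynamic, static, mask)
print(round(float(np.median(static[mask] / aligned[mask])), 3))  # close to 1.0 after alignment
```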
We can then use the\ngenerated depth labels to train an end-to-end depth estimation network and\nimprove its performance. Extensive experiments on the Cityscapes and KITTI\ndatasets show that our self-training strategy consistently outperforms existing\nself/unsupervised depth estimation methods.", + "Recent advancements in domain generalization (DG) for face anti-spoofing\n(FAS) have garnered considerable attention. Traditional methods have focused on\ndesigning learning objectives and additional modules to isolate domain-specific\nfeatures while retaining domain-invariant characteristics in their\nrepresentations. However, such approaches often lack guarantees of consistent\nmaintenance of domain-invariant features or the complete removal of\ndomain-specific features. Furthermore, most prior works on DG for FAS do not\nensure convergence to a local flat minimum, which has been shown to be\nadvantageous for DG. In this paper, we introduce GAC-FAS, a novel learning\nobjective that encourages the model to converge towards an optimal flat minimum\nwithout necessitating additional learning modules. Unlike conventional\nsharpness-aware minimizers, GAC-FAS identifies ascending points for each domain\nand regulates the generalization gradient updates at these points to align\ncoherently with empirical risk minimization (ERM) gradient updates. This unique\napproach specifically guides the model to be robust against domain shifts. We\ndemonstrate the efficacy of GAC-FAS through rigorous testing on challenging\ncross-domain FAS datasets, where it establishes state-of-the-art performance.", + "This unique\napproach specifically guides the model to be robust against domain shifts. We\ndemonstrate the efficacy of GAC-FAS through rigorous testing on challenging\ncross-domain FAS datasets, where it establishes state-of-the-art performance.\nThe code is available at https://github.com/leminhbinh0209/CVPR24-FAS.", + "Depression Recognition (DR) poses a considerable challenge, especially in the\ncontext of the growing concerns surrounding privacy. Traditional automatic\nDR diagnosis technology necessitates the use of facial images, which inevitably\nexposes patient identity features and poses privacy risks. In order to\nmitigate the potential risks associated with the inappropriate disclosure of\npatient facial images, we design a new imaging system that erases the identity\ninformation of captured facial images while retaining disease-relevant features.\nThe transformation is irreversible with respect to identity recovery while preserving the\nessential disease-related characteristics necessary for accurate DR. More specifically,\nwe record a de-identified facial image (erasing the identifiable\nfeatures as much as possible) through a learnable lens, which is optimized in\nconjunction with the downstream DR task as well as a range of face-analysis-related\nauxiliary tasks in an end-to-end manner. These\nstrategies form our final Optical deep Depression Recognition network\n(OpticalDR).
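The competing goals of suppressing identity while retaining depression cues can be written, in a simplified form that omits the optical lens simulation entirely, as a task loss plus a gradient-reversed identity loss (an assumed formulation, not OpticalDR's published objective):

```python
# Hedged sketch of the joint objective: keep depression-recognition accuracy while making
# identity unrecoverable from the captured image. A gradient-reversal identity head is one
# standard way to express this; the learnable-lens simulation itself is not modeled here.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -ctx.lam * g, None

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU())  # stands in for lens + CNN
dr_head = nn.Linear(128, 1)        # depression score regression
id_head = nn.Linear(128, 100)      # identity classifier (to be fooled)

def joint_loss(img, dr_target, id_target, lam=1.0):
    feat = encoder(img)
    dr_loss = nn.functional.mse_loss(dr_head(feat).squeeze(1), dr_target)
    id_logits = id_head(GradReverse.apply(feat, lam))
    id_loss = nn.functional.cross_entropy(id_logits, id_target)
    return dr_loss + id_loss       # encoder descends the DR loss but ascends the identity loss

print(float(joint_loss(torch.randn(4, 1, 32, 32), torch.rand(4), torch.randint(0, 100, (4,)))))
```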
Experiments on CelebA, AVEC 2013, and AVEC 2014 datasets\ndemonstrate that our OpticalDR has achieved state-of-the-art privacy protection\nperformance with an average AUC of 0.51 on popular facial recognition models,\nand competitive results for DR with MAE/RMSE of 7.53/8.48 on AVEC 2013 and\n7.89/8.82 on AVEC 2014, respectively.", + "We propose a novel diffusion-based image generation method called the\nobservation-guided diffusion probabilistic model (OGDM), which effectively\naddresses the tradeoff between quality control and fast sampling. Our approach\nreestablishes the training objective by integrating the guidance of the\nobservation process with the Markov chain in a principled way. This is achieved\nby introducing an additional loss term derived from the observation based on a\nconditional discriminator on noise level, which employs a Bernoulli\ndistribution indicating whether its input lies on the (noisy) real manifold or\nnot. This strategy allows us to optimize the more accurate negative\nlog-likelihood induced in the inference stage especially when the number of\nfunction evaluations is limited. The proposed training scheme is also\nadvantageous even when incorporated only into the fine-tuning process, and it\nis compatible with various fast inference strategies since our method yields\nbetter denoising networks using the exactly the same inference procedure\nwithout incurring extra computational cost. We demonstrate the effectiveness of\nour training algorithm using diverse inference techniques on strong diffusion\nmodel baselines. Our implementation is available at\nhttps://github.com/Junoh-Kang/OGDM_edm.", + "The portrait matting task aims to extract an alpha matte with complete\nsemantics and finely-detailed contours. In comparison to CNN-based approaches,\ntransformers with self-attention module have a better capacity to capture\nlong-range dependencies and low-frequency semantic information of a portrait.\nHowever, the recent research shows that self-attention mechanism struggles with\nmodeling high-frequency contour information and capturing fine contour details,\nwhich can lead to bias while predicting the portrait's contours. To deal with\nthis issue, we propose EFormer to enhance the model's attention towards both of\nthe low-frequency semantic and high-frequency contour features. For the\nhigh-frequency contours, our research demonstrates that cross-attention module\nbetween different resolutions can guide our model to allocate attention\nappropriately to these contour regions. Supported on this, we can successfully\nextract the high-frequency detail information around the portrait's contours,\nwhich are previously ignored by self-attention. Based on cross-attention\nmodule, we further build a semantic and contour detector (SCD) to accurately\ncapture both of the low-frequency semantic and high-frequency contour features.", + "Based on cross-attention\nmodule, we further build a semantic and contour detector (SCD) to accurately\ncapture both of the low-frequency semantic and high-frequency contour features.\nAnd we design contour-edge extraction branch and semantic extraction branch to\nextract refined high-frequency contour features and complete low-frequency\nsemantic information, respectively. 
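One plausible reading of the cross-resolution cross-attention described above is that high-resolution contour tokens query low-resolution semantic tokens; the minimal sketch below assumes that wiring and hypothetical dimensions:

```python
# Minimal sketch of cross-attention between feature maps of different resolutions: high-resolution
# tokens query low-resolution semantic tokens (an assumed reading; EFormer's exact wiring may differ).
import torch
import torch.nn as nn

class CrossResolutionAttention(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hi_feat, lo_feat):
        """hi_feat: (B, C, H, W) high-res contour features; lo_feat: (B, C, h, w) semantics."""
        b, c, h, w = hi_feat.shape
        q = hi_feat.flatten(2).transpose(1, 2)          # (B, H*W, C)
        kv = lo_feat.flatten(2).transpose(1, 2)         # (B, h*w, C)
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, h, w)  # back to a feature map

y = CrossResolutionAttention()(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 8, 8))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```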
Finally, we fuse the two kinds of features\nand leverage a segmentation head to generate a predicted portrait matte.\nExperiments on VideoMatte240K (JPEG SD Format) and Adobe Image Matting (AIM)\ndatasets demonstrate that EFormer outperforms previous portrait matting methods.", + "Recently, lightweight Vision Transformers (ViTs) have demonstrated superior\nperformance and lower latency, compared with lightweight Convolutional Neural\nNetworks (CNNs), on resource-constrained mobile devices. Researchers have\ndiscovered many structural connections between lightweight ViTs and lightweight\nCNNs. However, the notable architectural disparities in the block structure,\nmacro, and micro designs between them have not been adequately examined. In\nthis study, we revisit the efficient design of lightweight CNNs from the ViT\nperspective and emphasize their promising prospect for mobile devices.\nSpecifically, we incrementally enhance the mobile-friendliness of a standard\nlightweight CNN, \\ie, MobileNetV3, by integrating the efficient architectural\ndesigns of lightweight ViTs. This ends up with a new family of pure lightweight\nCNNs, namely RepViT. Extensive experiments show that RepViT outperforms\nexisting state-of-the-art lightweight ViTs and exhibits favorable latency in\nvarious vision tasks.", + "This ends up with a new family of pure lightweight\nCNNs, namely RepViT. Extensive experiments show that RepViT outperforms\nexisting state-of-the-art lightweight ViTs and exhibits favorable latency in\nvarious vision tasks. Notably, on ImageNet, RepViT achieves over 80\\% top-1\naccuracy with 1.0 ms latency on an iPhone 12, which, to the best of our knowledge,\nis a first for a lightweight model. Besides, when RepViT meets\nSAM, our RepViT-SAM can achieve nearly 10$\\times$ faster inference than the\nadvanced MobileSAM. Codes and models are available at\n\\url{https://github.com/THU-MIG/RepViT}.", + "We present Monocular Neural Parametric Head Models (MonoNPHM) for dynamic 3D\nhead reconstruction from monocular RGB videos. To this end, we propose a\nlatent appearance space that parameterizes a texture field on top of a neural\nparametric model. We constrain predicted color values to be correlated with the\nunderlying geometry such that gradients from RGB effectively influence latent\ngeometry codes during inverse rendering. To increase the representational\ncapacity of our expression space, we augment our backward deformation field\nwith hyper-dimensions, thus improving color and geometry representation in\ntopologically challenging expressions. Using MonoNPHM as a learned prior, we\napproach the task of 3D head reconstruction using signed distance field based\nvolumetric rendering. By numerically inverting our backward deformation field,\nwe incorporate a landmark loss using facial anchor points that are closely\ntied to our canonical geometry representation. To evaluate the task of dynamic\nface reconstruction from monocular RGB videos, we record 20 challenging Kinect\nsequences under casual conditions. MonoNPHM outperforms all baselines by a\nsignificant margin, and makes an important step towards easily accessible\nneural parametric face models through RGB tracking.", + "Given a single image of a 3D object, this paper proposes a novel method\n(named ConsistNet) that is able to generate multiple images of the same object,\nas if they were captured from different viewpoints, while the 3D\n(multi-view) consistencies among those multiple generated images are\neffectively exploited.
Central to our method is a multi-view consistency block\nwhich enables information exchange across multiple single-view diffusion\nprocesses based on the underlying multi-view geometry principles. ConsistNet is\nan extension to the standard latent diffusion model, and consists of two\nsub-modules: (a) a view aggregation module that unprojects multi-view features\ninto global 3D volumes and infer consistency, and (b) a ray aggregation module\nthat samples and aggregate 3D consistent features back to each view to enforce\nconsistency. Our approach departs from previous methods in multi-view image\ngeneration, in that it can be easily dropped-in pre-trained LDMs without\nrequiring explicit pixel correspondences or depth prediction.", + "Our approach departs from previous methods in multi-view image\ngeneration, in that it can be easily dropped-in pre-trained LDMs without\nrequiring explicit pixel correspondences or depth prediction. Experiments show\nthat our method effectively learns 3D consistency over a frozen Zero123\nbackbone and can generate 16 surrounding views of the object within 40 seconds\non a single A100 GPU. Our code will be made available on\nhttps://github.com/JiayuYANG/ConsistNet", + "We present GenN2N, a unified NeRF-to-NeRF translation framework for various\nNeRF translation tasks such as text-driven NeRF editing, colorization,\nsuper-resolution, inpainting, etc. Unlike previous methods designed for\nindividual translation tasks with task-specific schemes, GenN2N achieves all\nthese NeRF editing tasks by employing a plug-and-play image-to-image translator\nto perform editing in the 2D domain and lifting 2D edits into the 3D NeRF\nspace. Since the 3D consistency of 2D edits may not be assured, we propose to\nmodel the distribution of the underlying 3D edits through a generative model\nthat can cover all possible edited NeRFs. To model the distribution of 3D\nedited NeRFs from 2D edited images, we carefully design a VAE-GAN that encodes\nimages while decoding NeRFs. The latent space is trained to align with a\nGaussian distribution and the NeRFs are supervised through an adversarial loss\non its renderings.", + "The latent space is trained to align with a\nGaussian distribution and the NeRFs are supervised through an adversarial loss\non its renderings. To ensure the latent code does not depend on 2D viewpoints\nbut truly reflects the 3D edits, we also regularize the latent code through a\ncontrastive learning scheme. Extensive experiments on various editing tasks\nshow GenN2N, as a universal framework, performs as well or better than\ntask-specific specialists while possessing flexible generative power. More\nresults on our project page: https://xiangyueliu.github.io/GenN2N/", + "Considerable efforts have been devoted to Oriented Object Detection (OOD).\nHowever, one lasting issue regarding the discontinuity in Oriented Bounding Box\n(OBB) representation remains unresolved, which is an inherent bottleneck for\nextant OOD methods. This paper endeavors to completely solve this issue in a\ntheoretically guaranteed manner and puts an end to the ad-hoc efforts in this\ndirection. Prior studies typically can only address one of the two cases of\ndiscontinuity: rotation and aspect ratio, and often inadvertently introduce\ndecoding discontinuity, e.g. Decoding Incompleteness (DI) and Decoding\nAmbiguity (DA) as discussed in literature. Specifically, we propose a novel\nrepresentation method called Continuous OBB (COBB), which can be readily\nintegrated into existing detectors e.g. 
Faster-RCNN as a plugin. It can\ntheoretically ensure continuity in bounding box regression which to our best\nknowledge, has not been achieved in literature for rectangle-based object\nrepresentation.", + "Faster-RCNN as a plugin. It can\ntheoretically ensure continuity in bounding box regression which to our best\nknowledge, has not been achieved in literature for rectangle-based object\nrepresentation. For fairness and transparency of experiments, we have developed\na modularized benchmark based on the open-source deep learning framework\nJittor's detection toolbox JDet for OOD evaluation. On the popular DOTA\ndataset, by integrating Faster-RCNN as the same baseline model, our new method\noutperforms the peer method Gliding Vertex by 1.13% mAP50 (relative improvement\n1.54%), and 2.46% mAP75 (relative improvement 5.91%), without any tricks.", + "Most of the recent literature on image Super-Resolution (SR) can be\nclassified into two main approaches. The first one involves learning a\ncorruption model tailored to a specific dataset, aiming to mimic the noise and\ncorruption in low-resolution images, such as sensor noise. However, this\napproach is data-specific, tends to lack adaptability, and its accuracy\ndiminishes when faced with unseen types of image corruptions. A second and more\nrecent approach, referred to as Robust Super-Resolution (RSR), proposes to\nimprove real-world SR by harnessing the generalization capabilities of a model\nby making it robust to adversarial attacks. To delve further into this second\napproach, our paper explores the universality of various methods for enhancing\nthe robustness of deep learning SR models. In other words, we inquire: \"Which\nrobustness method exhibits the highest degree of adaptability when dealing with\na wide range of adversarial attacks ?\".", + "In other words, we inquire: \"Which\nrobustness method exhibits the highest degree of adaptability when dealing with\na wide range of adversarial attacks ?\". Our extensive experimentation on both\nsynthetic and real-world images empirically demonstrates that median randomized\nsmoothing (MRS) is more general in terms of robustness compared to adversarial\nlearning techniques, which tend to focus on specific types of attacks.\nFurthermore, as expected, we also illustrate that the proposed universal robust\nmethod enables the SR model to handle standard corruptions more effectively,\nsuch as blur and Gaussian noise, and notably, corruptions naturally present in\nreal-world images. These results support the significance of shifting the\nparadigm in the development of real-world SR methods towards RSR, especially\nvia MRS.", + "The prevalent use of commercial and open-source diffusion models (DMs) for\ntext-to-image generation prompts risk mitigation to prevent undesired\nbehaviors. Existing concept erasing methods in academia are all based on full\nparameter or specification-based fine-tuning, from which we observe the\nfollowing issues: 1) Generation alternation towards erosion: Parameter drift\nduring target elimination causes alternations and potential deformations across\nall generations, even eroding other concepts at varying degrees, which is more\nevident with multi-concept erased; 2) Transfer inability & deployment\ninefficiency: Previous model-specific erasure impedes the flexible combination\nof concepts and the training-free transfer towards other models, resulting in\nlinear cost growth as the deployment scenarios increase. 
To achieve\nnon-invasive, precise, customizable, and transferable elimination, we ground\nour erasing framework on one-dimensional adapters to erase multiple concepts\nfrom most DMs at once across versatile erasing applications.", + "To achieve\nnon-invasive, precise, customizable, and transferable elimination, we ground\nour erasing framework on one-dimensional adapters to erase multiple concepts\nfrom most DMs at once across versatile erasing applications. The\nconcept-SemiPermeable structure is injected as a Membrane (SPM) into any DM to\nlearn targeted erasing, while the alteration and erosion phenomena are\neffectively mitigated via a novel Latent Anchoring fine-tuning strategy. Once\nobtained, SPMs can be flexibly combined and plug-and-play for other DMs without\nspecific re-tuning, enabling timely and efficient adaptation to diverse\nscenarios. During generation, our Facilitated Transport mechanism dynamically\nregulates the permeability of each SPM to respond to different input prompts,\nfurther minimizing the impact on other concepts. Quantitative and qualitative\nresults across ~40 concepts, 7 DMs and 4 erasing applications demonstrate the\nsuperior erasing performance of SPM. Our code and pre-tuned SPMs are available on the\nproject page https://lyumengyao.github.io/projects/spm.", + "Recent advances in decentralized deep learning algorithms have demonstrated\ncutting-edge performance on various tasks with large pre-trained models.\nHowever, a pivotal prerequisite for achieving this level of competitiveness is\nthe significant communication and computation overheads when updating these\nmodels, which prohibits their application to real-world scenarios. To\naddress this issue, drawing inspiration from advanced model merging techniques\nwithout requiring additional training, we introduce the Decentralized Iterative\nMerging-And-Training (DIMAT) paradigm--a novel decentralized deep learning\nframework. Within DIMAT, each agent is trained on its local data and\nperiodically merged with its neighboring agents using advanced model merging\ntechniques like activation matching until convergence is achieved. DIMAT\nprovably converges with the best available rate for nonconvex functions with\nvarious first-order methods, while yielding tighter error bounds compared to\nthe popular existing approaches. We conduct a comprehensive empirical analysis\nto validate DIMAT's superiority over baselines across diverse computer vision\ntasks sourced from multiple datasets.", + "We conduct a comprehensive empirical analysis\nto validate DIMAT's superiority over baselines across diverse computer vision\ntasks sourced from multiple datasets. Empirical results validate our\ntheoretical claims by showing that DIMAT attains a faster and higher initial gain\nin accuracy with independent and identically distributed (IID) and non-IID\ndata, while incurring lower communication overhead. This DIMAT paradigm presents a\nnew opportunity for future decentralized learning, enhancing its\nadaptability to real-world scenarios with sparse and lightweight communication and\ncomputation.", + "Image segmentation algorithms can be understood as a collection of pixel\nclassifiers, for which the outcomes of nearby pixels are correlated. Classifier\nmodels can be calibrated using Inductive Conformal Prediction, but this\nrequires holding back a sufficiently large calibration dataset for computing\nthe distribution of non-conformity scores of the model's predictions. 
If one\nonly requires marginal calibration at the image level, this calibration\nset consists of all individual pixels in the images available for calibration.\nHowever, if the goal is to attain proper calibration for each individual pixel\nclassifier, the calibration set consists of individual images. In a scenario\nwhere data are scarce (such as the medical domain), it may not always be\npossible to set aside sufficiently many images for this pixel-level\ncalibration. The method we propose, dubbed ``Kandinsky calibration'', makes use\nof the spatial structure present in the distribution of natural images to\nsimultaneously calibrate the classifiers of ``similar'' pixels. This can be\nseen as an intermediate approach between marginal (imagewise) and conditional\n(pixelwise) calibration, where non-conformity scores are aggregated over\nsimilar image regions, thereby making more efficient use of the images\navailable for calibration.", + "This can be\nseen as an intermediate approach between marginal (imagewise) and conditional\n(pixelwise) calibration, where non-conformity scores are aggregated over\nsimilar image regions, thereby making more efficient use of the images\navailable for calibration. We run experiments on segmentation algorithms\ntrained and calibrated on subsets of the public MS-COCO and Medical Decathlon\ndatasets, demonstrating that the Kandinsky calibration method can significantly\nimprove coverage. When compared to both pixelwise and imagewise calibration\non little data, the Kandinsky method achieves much lower coverage errors,\nindicating the data efficiency of the Kandinsky calibration.", + "StyleGAN has shown remarkable performance in unconditional image generation.\nHowever, its high computational cost poses a significant challenge for\npractical applications. Although recent efforts have been made to compress\nStyleGAN while preserving its performance, existing compressed models still lag\nbehind the original model, particularly in terms of sample diversity. To\novercome this, we propose a novel channel pruning method that leverages varying\nsensitivities of channels to latent vectors, which is a key factor in sample\ndiversity. Specifically, by assessing channel importance based on their\nsensitivities to latent vector perturbations, our method enhances the diversity\nof samples in the compressed model. Since our method solely focuses on the\nchannel pruning stage, it has complementary benefits with prior training\nschemes without additional training cost. Extensive experiments demonstrate\nthat our method significantly enhances sample diversity across various\ndatasets. Moreover, in terms of FID scores, our method not only surpasses the\nstate-of-the-art by a large margin but also achieves comparable scores with\nonly half the training iterations.", + "Images of the natural world, collected by a variety of cameras, from drones\nto individual phones, are increasingly abundant sources of biological\ninformation. There is an explosion of computational methods and tools,\nparticularly computer vision, for extracting biologically relevant information\nfrom images for science and conservation. Yet most of these are bespoke\napproaches designed for a specific task and are not easily adaptable or\nextendable to new questions, contexts, and datasets. A vision model for general\norganismal biology questions on images is a timely need. To approach this, we\ncurate and release TreeOfLife-10M, the largest and most diverse ML-ready\ndataset of biology images. 
We then develop BioCLIP, a foundation model for the\ntree of life, leveraging the unique properties of biology captured by\nTreeOfLife-10M, namely the abundance and variety of images of plants, animals,\nand fungi, together with the availability of rich structured biological\nknowledge. We rigorously benchmark our approach on diverse fine-grained biology\nclassification tasks and find that BioCLIP consistently and substantially\noutperforms existing baselines (by 16% to 17% absolute).", + "We rigorously benchmark our approach on diverse fine-grained biology\nclassification tasks and find that BioCLIP consistently and substantially\noutperforms existing baselines (by 16% to 17% absolute). Intrinsic evaluation\nreveals that BioCLIP has learned a hierarchical representation conforming to\nthe tree of life, shedding light on its strong generalizability.\nhttps://imageomics.github.io/bioclip has models, data and code.", + "Scene graph generation (SGG) aims to parse a visual scene into an\nintermediate graph representation for downstream reasoning tasks. Despite\nrecent advancements, existing methods struggle to generate scene graphs with\nnovel visual relation concepts. To address this challenge, we introduce a new\nopen-vocabulary SGG framework based on sequence generation. Our framework\nleverages vision-language pre-trained models (VLM) by incorporating an\nimage-to-graph generation paradigm. Specifically, we generate scene graph\nsequences via image-to-text generation with VLM and then construct scene graphs\nfrom these sequences. By doing so, we harness the strong capabilities of VLM\nfor open-vocabulary SGG and seamlessly integrate explicit relational modeling\nfor enhancing the VL tasks. Experimental results demonstrate that our design\nnot only achieves superior performance with an open vocabulary but also\nenhances downstream vision-language task performance through explicit relation\nmodeling knowledge.", + "Regression tasks in computer vision, such as age estimation or counting, are\noften formulated into classification by quantizing the target space into\nclasses. Yet real-world data is often imbalanced -- the majority of training\nsamples lie in a head range of target values, while a minority of samples span\na usually larger tail range. By selecting the class quantization, one can\nadjust imbalanced regression targets into balanced classification outputs,\nthough there are trade-offs in balancing classification accuracy and\nquantization error. To improve regression performance over the entire range of\ndata, we propose to construct hierarchical classifiers for solving imbalanced\nregression tasks. The fine-grained classifiers limit the quantization error\nwhile being modulated by the coarse predictions to ensure high accuracy.\nStandard hierarchical classification approaches, however, when applied to the\nregression problem, fail to ensure that predicted ranges remain consistent\nacross the hierarchy. As such, we propose a range-preserving distillation\nprocess that can effectively learn a single classifier from the set of\nhierarchical classifiers. Our novel hierarchical classification adjustment\n(HCA) for imbalanced regression shows superior results on three diverse tasks:\nage estimation, crowd counting and depth estimation. We will release the source\ncode upon acceptance.", + "Multi-view depth estimation has achieved impressive performance over various\nbenchmarks. 
However, almost all current multi-view systems rely on given ideal\ncamera poses, which are unavailable in many real-world scenarios, such as\nautonomous driving. In this work, we propose a new robustness benchmark to\nevaluate the depth estimation system under various noisy pose settings.\nSurprisingly, we find current multi-view depth estimation methods or\nsingle-view and multi-view fusion methods will fail when given noisy pose\nsettings. To address this challenge, we propose a single-view and multi-view\nfused depth estimation system, which adaptively integrates high-confident\nmulti-view and single-view results for both robust and accurate depth\nestimations. The adaptive fusion module performs fusion by dynamically\nselecting high-confidence regions between two branches based on a wrapping\nconfidence map. Thus, the system tends to choose the more reliable branch when\nfacing textureless scenes, inaccurate calibration, dynamic objects, and other\ndegradation or challenging conditions. Our method outperforms state-of-the-art\nmulti-view and fusion methods under robustness testing.", + "Thus, the system tends to choose the more reliable branch when\nfacing textureless scenes, inaccurate calibration, dynamic objects, and other\ndegradation or challenging conditions. Our method outperforms state-of-the-art\nmulti-view and fusion methods under robustness testing. Furthermore, we achieve\nstate-of-the-art performance on challenging benchmarks (KITTI and DDAD) when\ngiven accurate pose estimations. Project website:\nhttps://github.com/Junda24/AFNet/.", + "We investigate a fundamental aspect of machine vision: the measurement of\nfeatures, by revisiting clustering, one of the most classic approaches in\nmachine learning and data analysis. Existing visual feature extractors,\nincluding ConvNets, ViTs, and MLPs, represent an image as rectangular regions.\nThough prevalent, such a grid-style paradigm is built upon engineering practice\nand lacks explicit modeling of data distribution. In this work, we propose\nfeature extraction with clustering (FEC), a conceptually elegant yet\nsurprisingly ad-hoc interpretable neural clustering framework, which views\nfeature extraction as a process of selecting representatives from data and thus\nautomatically captures the underlying data distribution. Given an image, FEC\nalternates between grouping pixels into individual clusters to abstract\nrepresentatives and updating the deep features of pixels with current\nrepresentatives. Such an iterative working mechanism is implemented in the form\nof several neural layers and the final representatives can be used for\ndownstream tasks. The cluster assignments across layers, which can be viewed\nand inspected by humans, make the forward process of FEC fully transparent and\nempower it with promising ad-hoc interpretability.", + "The cluster assignments across layers, which can be viewed\nand inspected by humans, make the forward process of FEC fully transparent and\nempower it with promising ad-hoc interpretability. Extensive experiments on\nvarious visual recognition models and tasks verify the effectiveness,\ngenerality, and interpretability of FEC. We expect this work will provoke a\nrethink of the current de facto grid-style paradigm.", + "Self-supervised learning is an efficient pre-training method for medical\nimage analysis. 
However, current research is mostly confined to\nspecific-modality data pre-training, consuming considerable time and resources\nwithout achieving universality across different modalities. A straightforward\nsolution is combining all modality data for joint self-supervised pre-training,\nwhich poses practical challenges. Firstly, our experiments reveal conflicts in\nrepresentation learning as the number of modalities increases. Secondly,\nmulti-modal data collected in advance cannot cover all real-world scenarios. In\nthis paper, we reconsider versatile self-supervised learning from the\nperspective of continual learning and propose MedCoSS, a continuous\nself-supervised learning approach for multi-modal medical data. Unlike joint\nself-supervised learning, MedCoSS assigns different modality data to different\ntraining stages, forming a multi-stage pre-training process. To balance modal\nconflicts and prevent catastrophic forgetting, we propose a rehearsal-based\ncontinual learning method. We introduce the k-means sampling strategy to retain\ndata from previous modalities and rehearse it when learning new modalities.", + "To balance modal\nconflicts and prevent catastrophic forgetting, we propose a rehearsal-based\ncontinual learning method. We introduce the k-means sampling strategy to retain\ndata from previous modalities and rehearse it when learning new modalities.\nInstead of executing the pretext task on buffer data, a feature distillation\nstrategy and an intra-modal mixup strategy are applied to these data for\nknowledge retention. We conduct continuous self-supervised pre-training on a\nlarge-scale multi-modal unlabeled dataset, including clinical reports, X-rays,\nCT scans, MRI scans, and pathological images. Experimental results demonstrate\nMedCoSS's exceptional generalization ability across nine downstream datasets\nand its significant scalability in integrating new modality data. Code and\npre-trained weight are available at https://github.com/yeerwen/MedCoSS.", + "In Federated Learning (FL), the data in each client is typically assumed\nfixed or static. However, data often comes in an incremental manner in\nreal-world applications, where the data domain may increase dynamically. In\nthis work, we study catastrophic forgetting with data heterogeneity in\nFederated Incremental Learning (FIL) scenarios where edge clients may lack\nenough storage space to retain full data. We propose to employ a simple,\ngeneric framework for FIL named Re-Fed, which can coordinate each client to\ncache important samples for replay. More specifically, when a new task arrives,\neach client first caches selected previous samples based on their global and\nlocal importance. Then, the client trains the local model with both the cached\nsamples and the samples from the new task. Theoretically, we analyze the\nability of Re-Fed to discover important samples for replay thus alleviating the\ncatastrophic forgetting problem. Moreover, we empirically show that Re-Fed\nachieves competitive performance compared to state-of-the-art methods.", + "Despite the success of diffusion-based customization methods on visual\ncontent creation, increasing concerns have been raised about such techniques\nfrom both privacy and political perspectives. To tackle this issue, several\nanti-customization methods have been proposed in very recent months,\npredominantly grounded in adversarial attacks. 
Unfortunately, most of these\nmethods adopt straightforward designs, such as end-to-end optimization with a\nfocus on adversarially maximizing the original training loss, thereby\nneglecting nuanced internal properties intrinsic to the diffusion model, and\neven leading to ineffective optimization in some diffusion time steps. In this\npaper, we strive to bridge this gap by undertaking a comprehensive exploration\nof these inherent properties, to boost the performance of current\nanti-customization approaches. Two aspects of these properties are investigated: 1)\nWe examine the relationship between time step selection and the model's\nperception in the frequency domain of images and find that lower time steps can\ncontribute much more to adversarial noises. This inspires us to propose\nan adaptive greedy search for optimal time steps that seamlessly integrates\nwith existing anti-customization methods.", + "This inspires us to propose\nan adaptive greedy search for optimal time steps that seamlessly integrates\nwith existing anti-customization methods. 2) We scrutinize the roles of\nfeatures at different layers during denoising and devise a sophisticated\nfeature-based optimization framework for anti-customization. Experiments on\nfacial benchmarks demonstrate that our approach significantly increases\nidentity disruption, thereby protecting user privacy and copyright. Our code is\navailable at: https://github.com/somuchtome/SimAC.", + "We present an approach to accelerate Neural Field training by efficiently\nselecting sampling locations. While Neural Fields have recently become popular,\nthey are often trained by uniformly sampling the training domain, or through\nhandcrafted heuristics. We show that improved convergence and final training\nquality can be achieved by a soft mining technique based on importance\nsampling: rather than either considering or ignoring a pixel completely, we\nweigh the corresponding loss by a scalar. To implement our idea, we use Langevin\nMonte-Carlo sampling. We show that by doing so, regions with higher error are\nselected more frequently, leading to more than 2x improvement in\nconvergence speed. The code and related resources for this study are publicly\navailable at https://ubc-vision.github.io/nf-soft-mining/.", + "Unsupervised Domain Adaptation (UDA) can effectively address domain gap\nissues in real-world image Super-Resolution (SR) by accessing both the source\nand target data. Considering privacy policies or transmission restrictions of\nsource data in practical scenarios, we propose a SOurce-free Domain Adaptation\nframework for image SR (SODA-SR) to address this issue, i.e., adapt a\nsource-trained model to a target domain with only unlabeled target data.\nSODA-SR leverages the source-trained model to generate refined pseudo-labels\nfor teacher-student learning. To better utilize pseudo-labels, we propose a\nnovel wavelet-based augmentation method, named Wavelet Augmentation Transformer\n(WAT), which can be flexibly incorporated with existing networks, to implicitly\nproduce useful augmented data. WAT learns low-frequency information of varying\nlevels across diverse samples, which is aggregated efficiently via deformable\nattention. 
Furthermore, an uncertainty-aware self-training mechanism is\nproposed to improve the accuracy of pseudo-labels, with inaccurate predictions\nbeing rectified by uncertainty estimation.", + "WAT learns low-frequency information of varying\nlevels across diverse samples, which is aggregated efficiently via deformable\nattention. Furthermore, an uncertainty-aware self-training mechanism is\nproposed to improve the accuracy of pseudo-labels, with inaccurate predictions\nbeing rectified by uncertainty estimation. To acquire better SR results and\navoid overfitting pseudo-labels, several regularization losses are proposed to\nconstrain target LR and SR images in the frequency domain. Experiments show\nthat without accessing source data, SODA-SR outperforms state-of-the-art UDA\nmethods in both synthetic$\\rightarrow$real and real$\\rightarrow$real adaptation\nsettings, and is not constrained by specific network architectures.", + "Novel view synthesis of dynamic scenes has been an intriguing yet challenging\nproblem. Despite recent advancements, simultaneously achieving high-resolution\nphotorealistic results, real-time rendering, and compact storage remains a\nformidable task. To address these challenges, we propose Spacetime Gaussian\nFeature Splatting as a novel dynamic scene representation, composed of three\npivotal components. First, we formulate expressive Spacetime Gaussians by\nenhancing 3D Gaussians with temporal opacity and parametric motion/rotation.\nThis enables Spacetime Gaussians to capture static, dynamic, as well as\ntransient content within a scene. Second, we introduce splatted feature\nrendering, which replaces spherical harmonics with neural features. These\nfeatures facilitate the modeling of view- and time-dependent appearance while\nmaintaining small size. Third, we leverage the guidance of training error and\ncoarse depth to sample new Gaussians in areas that are challenging to converge\nwith existing pipelines. Experiments on several established real-world datasets\ndemonstrate that our method achieves state-of-the-art rendering quality and\nspeed, while retaining compact storage.", + "Third, we leverage the guidance of training error and\ncoarse depth to sample new Gaussians in areas that are challenging to converge\nwith existing pipelines. Experiments on several established real-world datasets\ndemonstrate that our method achieves state-of-the-art rendering quality and\nspeed, while retaining compact storage. At 8K resolution, our lite-version\nmodel can render at 60 FPS on an Nvidia RTX 4090 GPU. Our code is available at\nhttps://github.com/oppo-us-research/SpacetimeGaussians.", + "This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS),\nwith a focus on two significant issues in the state-of-the-art: foreground\nleakage and sparse point distribution. The former arises from non-uniform point\nsampling, allowing models to distinguish the density disparities between\nforeground and background for easier segmentation. The latter results from\nsampling only 2,048 points, limiting semantic information and deviating from\nthe real-world practice. To address these issues, we introduce a standardized\nFS-PCS setting, upon which a new benchmark is built. Moreover, we propose a\nnovel FS-PCS model. While previous methods are based on feature optimization by\nmainly refining support features to enhance prototypes, our method is based on\ncorrelation optimization, referred to as Correlation Optimization Segmentation\n(COSeg). 
Specifically, we compute Class-specific Multi-prototypical Correlation\n(CMC) for each query point, representing its correlations to category\nprototypes. Then, we propose the Hyper Correlation Augmentation (HCA) module to\nenhance CMC.", + "Specifically, we compute Class-specific Multi-prototypical Correlation\n(CMC) for each query point, representing its correlations to category\nprototypes. Then, we propose the Hyper Correlation Augmentation (HCA) module to\nenhance CMC. Furthermore, tackling the inherent property of few-shot training\nto incur base susceptibility for models, we propose to learn non-parametric\nprototypes for the base classes during training. The learned base prototypes\nare used to calibrate correlations for the background class through a Base\nPrototypes Calibration (BPC) module. Experiments on popular datasets\ndemonstrate the superiority of COSeg over existing methods. The code is\navailable at: https://github.com/ZhaochongAn/COSeg", + "Continual learning has gained substantial attention within the deep learning\ncommunity, offering promising solutions to the challenging problem of\nsequential learning. Yet, a largely unexplored facet of this paradigm is its\nsusceptibility to adversarial attacks, especially with the aim of inducing\nforgetting. In this paper, we introduce \"BrainWash,\" a novel data poisoning\nmethod tailored to impose forgetting on a continual learner. By adding the\nBrainWash noise to a variety of baselines, we demonstrate how a trained\ncontinual learner can be induced to forget its previously learned tasks\ncatastrophically, even when using these continual learning baselines. An\nimportant feature of our approach is that the attacker requires no access to\nprevious tasks' data and is armed merely with the model's current parameters\nand the data belonging to the most recent task. Our extensive experiments\nhighlight the efficacy of BrainWash, showcasing degradation in performance\nacross various regularization-based continual learning methods.", + "Vision graph neural networks (ViG) offer a new avenue for exploration in\ncomputer vision. A major bottleneck in ViGs is the inefficient k-nearest\nneighbor (KNN) operation used for graph construction. To solve this issue, we\npropose a new method for designing ViGs, Dynamic Axial Graph Construction\n(DAGC), which is more efficient than KNN as it limits the number of considered\ngraph connections made within an image. Additionally, we propose a novel\nCNN-GNN architecture, GreedyViG, which uses DAGC. Extensive experiments show\nthat GreedyViG beats existing ViG, CNN, and ViT architectures in terms of\naccuracy, GMACs, and parameters on image classification, object detection,\ninstance segmentation, and semantic segmentation tasks. Our smallest model,\nGreedyViG-S, achieves 81.1% top-1 accuracy on ImageNet-1K, 2.9% higher than\nVision GNN and 2.2% higher than Vision HyperGraph Neural Network (ViHGNN), with\nless GMACs and a similar number of parameters.", + "Our largest model, GreedyViG-B\nobtains 83.9% top-1 accuracy, 0.2% higher than Vision GNN, with a 66.6%\ndecrease in parameters and a 69% decrease in GMACs. GreedyViG-B also obtains\nthe same accuracy as ViHGNN with a 67.3% decrease in parameters and a 71.3%\ndecrease in GMACs. 
Our work shows that hybrid CNN-GNN architectures not only\nprovide a new avenue for designing efficient models, but that they can also\nexceed the performance of current state-of-the-art models.", + "This paper tackles the challenge of creating relightable and animatable\nneural avatars from sparse-view (or even monocular) videos of dynamic humans\nunder unknown illumination. Compared to studio environments, this setting is\nmore practical and accessible but poses an extremely challenging ill-posed\nproblem. Previous neural human reconstruction methods are able to reconstruct\nanimatable avatars from sparse views using deformed Signed Distance Fields\n(SDF) but cannot recover material parameters for relighting. While\ndifferentiable inverse rendering-based methods have succeeded in material\nrecovery of static objects, it is not straightforward to extend them to dynamic\nhumans as it is computationally intensive to compute pixel-surface intersection\nand light visibility on deformed SDFs for inverse rendering. To solve this\nchallenge, we propose a Hierarchical Distance Query (HDQ) algorithm to\napproximate the world space distances under arbitrary human poses.\nSpecifically, we estimate coarse distances based on a parametric human model\nand compute fine distances by exploiting the local deformation invariance of\nSDF. Based on the HDQ algorithm, we leverage sphere tracing to efficiently\nestimate the surface intersection and light visibility.", + "Specifically, we estimate coarse distances based on a parametric human model\nand compute fine distances by exploiting the local deformation invariance of\nSDF. Based on the HDQ algorithm, we leverage sphere tracing to efficiently\nestimate the surface intersection and light visibility. This allows us to\ndevelop the first system to recover animatable and relightable neural avatars\nfrom sparse view (or monocular) inputs. Experiments demonstrate that our\napproach is able to produce superior results compared to state-of-the-art\nmethods. Our code will be released for reproducibility.", + "Instance segmentation of point clouds is a crucial task in 3D field with\nnumerous applications that involve localizing and segmenting objects in a\nscene. However, achieving satisfactory results requires a large number of\nmanual annotations, which is a time-consuming and expensive process. To\nalleviate dependency on annotations, we propose a method, called FreePoint, for\nunderexplored unsupervised class-agnostic instance segmentation on point\nclouds. In detail, we represent the point features by combining coordinates,\ncolors, normals, and self-supervised deep features. Based on the point\nfeatures, we perform a multicut algorithm to segment point clouds into coarse\ninstance masks as pseudo labels, which are used to train a point cloud instance\nsegmentation model. To alleviate the inaccuracy of coarse masks during\ntraining, we propose a weakly-supervised training strategy and corresponding\nloss. Our work can also serve as an unsupervised pre-training pretext for\nsupervised semantic instance segmentation with limited annotations.", + "To alleviate the inaccuracy of coarse masks during\ntraining, we propose a weakly-supervised training strategy and corresponding\nloss. Our work can also serve as an unsupervised pre-training pretext for\nsupervised semantic instance segmentation with limited annotations. 
For\nclass-agnostic instance segmentation on point clouds, FreePoint largely fills\nthe gap with its fully-supervised counterpart based on the state-of-the-art\ninstance segmentation model Mask3D and even surpasses some previous\nfully-supervised methods. When serving as a pretext task and fine-tuning on\nS3DIS, FreePoint outperforms training from scratch by 5.8% AP with only 10%\nmask annotations.", + "Estimating the pose of objects from images is a crucial task of 3D scene\nunderstanding, and recent approaches have shown promising results on very large\nbenchmarks. However, these methods experience a significant performance drop\nwhen dealing with unseen objects. We believe that it results from the limited\ngeneralizability of image features. To address this problem, we have an\nin-depth analysis on the features of diffusion models, e.g. Stable Diffusion,\nwhich hold substantial potential for modeling unseen objects. Based on this\nanalysis, we then innovatively introduce these diffusion features for object\npose estimation. To achieve this, we propose three distinct architectures that\ncan effectively capture and aggregate diffusion features of different\ngranularity, greatly improving the generalizability of object pose estimation.\nOur approach outperforms the state-of-the-art methods by a considerable margin\non three popular benchmark datasets, LM, O-LM, and T-LESS. In particular, our\nmethod achieves higher accuracy than the previous best arts on unseen objects:\n98.2% vs. 93.5% on Unseen LM, 85.9% vs.", + "In particular, our\nmethod achieves higher accuracy than the previous best arts on unseen objects:\n98.2% vs. 93.5% on Unseen LM, 85.9% vs. 76.3% on Unseen O-LM, showing the\nstrong generalizability of our method. Our code is released at\nhttps://github.com/Tianfu18/diff-feats-pose.", + "We describe a method for recovering the irradiance underlying a collection of\nimages corrupted by atmospheric turbulence. Since supervised data is often\ntechnically impossible to obtain, assumptions and biases have to be imposed to\nsolve this inverse problem, and we choose to model them explicitly. Rather than\ninitializing a latent irradiance (\"template\") by heuristics to estimate\ndeformation, we select one of the images as a reference, and model the\ndeformation in this image by the aggregation of the optical flow from it to\nother images, exploiting a prior imposed by Central Limit Theorem. Then with a\nnovel flow inversion module, the model registers each image TO the template but\nWITHOUT the template, avoiding artifacts related to poor template\ninitialization. To illustrate the robustness of the method, we simply (i)\nselect the first frame as the reference and (ii) use the simplest optical flow\nto estimate the warpings, yet the improvement in registration is decisive in\nthe final reconstruction, as we achieve state-of-the-art performance despite\nits simplicity. The method establishes a strong baseline that can be further\nimproved by integrating it seamlessly into more sophisticated pipelines, or\nwith domain-specific methods if so desired.", + "The robust generalization of models to rare, in-distribution (ID) samples\ndrawn from the long tail of the training distribution and to\nout-of-training-distribution (OOD) samples is one of the major challenges of\ncurrent deep learning methods. For image classification, this manifests in the\nexistence of adversarial attacks, the performance drops on distorted images,\nand a lack of generalization to concepts such as sketches. 
The current\nunderstanding of generalization in neural networks is very limited, but some\nbiases that differentiate models from human vision have been identified and\nmight be causing these limitations. Consequently, several attempts with varying\nsuccess have been made to reduce these biases during training to improve\ngeneralization. We take a step back and sanity-check these attempts. Fixing the\narchitecture to the well-established ResNet-50, we perform a large-scale study\non 48 ImageNet models obtained via different training methods to understand how\nand if these biases - including shape bias, spectral biases, and critical bands\n- interact with generalization. Our extensive study results reveal that\ncontrary to previous findings, these biases are insufficient to accurately\npredict the generalization of a model holistically.", + "Our extensive study results reveal that\ncontrary to previous findings, these biases are insufficient to accurately\npredict the generalization of a model holistically. We provide access to all\ncheckpoints and evaluation code at\nhttps://github.com/paulgavrikov/biases_vs_generalization", + "Faithfully modeling the space of articulations is a crucial task that allows\nrecovery and generation of realistic poses, and remains a notorious challenge.\nTo this end, we introduce Neural Riemannian Distance Fields (NRDFs),\ndata-driven priors modeling the space of plausible articulations, represented\nas the zero-level-set of a neural field in a high-dimensional\nproduct-quaternion space. To train NRDFs only on positive examples, we\nintroduce a new sampling algorithm, ensuring that the geodesic distances follow\na desired distribution, yielding a principled distance field learning paradigm.\nWe then devise a projection algorithm to map any random pose onto the level-set\nby an adaptive-step Riemannian optimizer, adhering to the product manifold of\njoint rotations at all times. NRDFs can compute the Riemannian gradient via\nbackpropagation and by mathematical analogy, are related to Riemannian flow\nmatching, a recent generative model. We conduct a comprehensive evaluation of\nNRDF against other pose priors in various downstream tasks, i.e., pose\ngeneration, image-based pose estimation, and solving inverse kinematics,\nhighlighting NRDF's superior performance.", + "We conduct a comprehensive evaluation of\nNRDF against other pose priors in various downstream tasks, i.e., pose\ngeneration, image-based pose estimation, and solving inverse kinematics,\nhighlighting NRDF's superior performance. Besides humans, NRDF's versatility\nextends to hand and animal poses, as it can effectively represent any\narticulation.", + "The astonishing development of single-photon cameras has created an\nunprecedented opportunity for scientific and industrial imaging. However, the\nhigh data throughput generated by these 1-bit sensors creates a significant\nbottleneck for low-power applications. In this paper, we explore the\npossibility of generating a color image from a single binary frame of a\nsingle-photon camera. We evidently find this problem being particularly\ndifficult to standard colorization approaches due to the substantial degree of\nexposure variation. The core innovation of our paper is an exposure synthesis\nmodel framed under a neural ordinary differential equation (Neural ODE) that\nallows us to generate a continuum of exposures from a single observation. 
This\ninnovation ensures consistent exposure in binary images that colorizers take\non, resulting in notably enhanced colorization. We demonstrate applications of\nthe method in single-image and burst colorization and show superior generative\nperformance over baselines. The project website can be found at\nhttps://vishal-s-p.github.io/projects/2023/generative_quanta_color.html.", + "Nowadays, the deployment of deep learning-based applications is an essential\ntask owing to the increasing demand for intelligent services. In this paper, we\ninvestigate latency attacks on deep learning applications. Unlike common\nadversarial attacks for misclassification, the goal of latency attacks is to\nincrease the inference time, which may stop applications from responding to the\nrequests within a reasonable time. This kind of attack is ubiquitous across\nvarious applications, and we use object detection to demonstrate how such\nattacks work. We also design a framework named Overload to generate latency\nattacks at scale. Our method is based on a newly formulated optimization\nproblem and a novel technique, called spatial attention. This attack serves to\nescalate the required computing costs during the inference time, consequently\nleading to an extended inference time for object detection. It presents a\nsignificant threat, especially to systems with limited computing resources. We\nconducted experiments using YOLOv5 models on Nvidia NX. Compared to existing\nmethods, our method is simpler and more effective.", + "It presents a\nsignificant threat, especially to systems with limited computing resources. We\nconducted experiments using YOLOv5 models on Nvidia NX. Compared to existing\nmethods, our method is simpler and more effective. The experimental results\nshow that with latency attacks, the inference time of a single image can be\nincreased to ten times that of the normal setting. Moreover, our\nfindings pose a potential new threat to all object detection tasks requiring\nnon-maximum suppression (NMS), as our attack is NMS-agnostic.", + "3D generation has attracted great attention in recent years. With the success of\ntext-to-image diffusion models, the 2D-lifting technique has become a promising\nroute to controllable 3D generation. However, these methods tend to present\ninconsistent geometry, which is also known as the Janus problem. We observe\nthat the problem is caused mainly by two aspects, i.e., viewpoint bias in 2D\ndiffusion models and overfitting of the optimization objective. To address it,\nwe propose a two-stage 2D-lifting framework, namely DreamControl, which\noptimizes coarse NeRF scenes as a 3D self-prior and then generates fine-grained\nobjects with control-based score distillation. Specifically, adaptive viewpoint\nsampling and a boundary integrity metric are proposed to ensure the consistency\nof generated priors. The priors are then regarded as input conditions to\nmaintain reasonable geometries, in which conditional LoRA and weighted score\nare further proposed to optimize detailed textures. DreamControl can generate\nhigh-quality 3D content in terms of both geometry consistency and texture\nfidelity. 
Moreover, our control-based optimization guidance is applicable to\nmore downstream tasks, including user-guided generation and 3D animation. The\nproject page is available at https://github.com/tyhuang0428/DreamControl.", + "Recently, infrared small target detection (IRSTD) has been dominated by\ndeep-learning-based methods. However, these methods mainly focus on the design\nof complex model structures to extract discriminative features, leaving the\nloss functions for IRSTD under-explored. For example, the widely used\nIntersection over Union (IoU) and Dice losses lack sensitivity to the scales\nand locations of targets, limiting the detection performance of detectors. In\nthis paper, we focus on boosting detection performance with a more effective\nloss but a simpler model structure. Specifically, we first propose a novel\nScale and Location Sensitive (SLS) loss to handle the limitations of existing\nlosses: 1) for scale sensitivity, we compute a weight for the IoU loss based on\ntarget scales to help the detector distinguish targets with different scales:\n2) for location sensitivity, we introduce a penalty term based on the center\npoints of targets to help the detector localize targets more precisely. Then,\nwe design a simple Multi-Scale Head to the plain U-Net (MSHNet).", + "Then,\nwe design a simple Multi-Scale Head to the plain U-Net (MSHNet). By applying\nSLS loss to each scale of the predictions, our MSHNet outperforms existing\nstate-of-the-art methods by a large margin. In addition, the detection\nperformance of existing detectors can be further improved when trained with our\nSLS loss, demonstrating the effectiveness and generalization of our SLS loss.\nThe code is available at https://github.com/ying-fu/MSHNet.", + "Spurious correlations can cause strong biases in deep neural networks,\nimpairing generalization ability. While most existing debiasing methods require\nfull supervision on either spurious attributes or target labels, training a\ndebiased model from a limited amount of both annotations is still an open\nquestion. To address this issue, we investigate an interesting phenomenon using\nthe spectral analysis of latent representations: spuriously correlated\nattributes make neural networks inductively biased towards encoding lower\neffective rank representations. We also show that a rank regularization can\namplify this bias in a way that encourages highly correlated features.\nLeveraging these findings, we propose a self-supervised debiasing framework\npotentially compatible with unlabeled samples. Specifically, we first pretrain\na biased encoder in a self-supervised manner with the rank regularization,\nserving as a semantic bottleneck to enforce the encoder to learn the spuriously\ncorrelated attributes. This biased encoder is then used to discover and\nupweight bias-conflicting samples in a downstream task, serving as a boosting\nto effectively debias the main model.", + "This biased encoder is then used to discover and\nupweight bias-conflicting samples in a downstream task, serving as a boosting\nto effectively debias the main model. Remarkably, the proposed debiasing\nframework significantly improves the generalization performance of\nself-supervised learning baselines and, in some cases, even outperforms\nstate-of-the-art supervised debiasing approaches.", + "State-of-the-art models on contemporary 3D segmentation benchmarks like\nScanNet consume and label dataset-provided 3D point clouds, obtained through\npost processing of sensed multiview RGB-D images. 
They are typically trained\nin-domain, forego large-scale 2D pre-training and outperform alternatives that\nfeaturize the posed RGB-D multiview images instead. The gap in performance\nbetween methods that consume posed images versus post-processed 3D point clouds\nhas fueled the belief that 2D and 3D perception require distinct model\narchitectures. In this paper, we challenge this view and propose ODIN\n(Omni-Dimensional INstance segmentation), a model that can segment and label\nboth 2D RGB images and 3D point clouds, using a transformer architecture that\nalternates between 2D within-view and 3D cross-view information fusion. Our\nmodel differentiates 2D and 3D feature operations through the positional\nencodings of the tokens involved, which capture pixel coordinates for 2D patch\ntokens and 3D coordinates for 3D feature tokens.", + "Our\nmodel differentiates 2D and 3D feature operations through the positional\nencodings of the tokens involved, which capture pixel coordinates for 2D patch\ntokens and 3D coordinates for 3D feature tokens. ODIN achieves state-of-the-art\nperformance on ScanNet200, Matterport3D and AI2THOR 3D instance segmentation\nbenchmarks, and competitive performance on ScanNet, S3DIS and COCO. It\noutperforms all previous works by a wide margin when the sensed 3D point cloud\nis used in place of the point cloud sampled from 3D mesh. When used as the 3D\nperception engine in an instructable embodied agent architecture, it sets a new\nstate-of-the-art on the TEACh action-from-dialogue benchmark. Our code and\ncheckpoints can be found at the project website (https://odin-seg.github.io).", + "In this paper, we address the challenge of matching semantically similar\nkeypoints across image pairs. Existing research indicates that the intermediate\noutput of the UNet within the Stable Diffusion (SD) can serve as robust image\nfeature maps for such a matching task. We demonstrate that by employing a basic\nprompt tuning technique, the inherent potential of Stable Diffusion can be\nharnessed, resulting in a significant enhancement in accuracy over previous\napproaches. We further introduce a novel conditional prompting module that\nconditions the prompt on the local details of the input image pairs, leading to\na further improvement in performance. We designate our approach as SD4Match,\nshort for Stable Diffusion for Semantic Matching. Comprehensive evaluations of\nSD4Match on the PF-Pascal, PF-Willow, and SPair-71k datasets show that it sets\nnew benchmarks in accuracy across all these datasets. Particularly, SD4Match\noutperforms the previous state-of-the-art by a margin of 12 percentage points\non the challenging SPair-71k dataset.", + "Recent strides in the development of diffusion models, exemplified by\nadvancements such as Stable Diffusion, have underscored their remarkable\nprowess in generating visually compelling images. However, the imperative of\nachieving a seamless alignment between the generated image and the provided\nprompt persists as a formidable challenge. This paper traces the root of these\ndifficulties to invalid initial noise, and proposes a solution in the form of\nInitial Noise Optimization (InitNO), a paradigm that refines this noise.\nConsidering text prompts, not all random noises are effective in synthesizing\nsemantically-faithful images. We design the cross-attention response score and\nthe self-attention conflict score to evaluate the initial noise, bifurcating\nthe initial latent space into valid and invalid sectors. 
A strategically\ncrafted noise optimization pipeline is developed to guide the initial noise\ntowards valid regions. Our method, validated through rigorous experimentation,\nshows a commendable proficiency in generating images in strict accordance with\ntext prompts. Our code is available at https://github.com/xiefan-guo/initno.", + "The emerging conditional coding-based neural video codec (NVC) shows\nsuperiority over the commonly-used residual coding-based codec, and the latest NVC\nalready claims to outperform the best traditional codec. However, there still\nexist critical problems blocking the practicality of NVC. In this paper, we\npropose a powerful conditional coding-based NVC that solves two critical\nproblems via feature modulation. The first is how to support a wide quality\nrange in a single model. Previous NVCs with this capability only support about a\n3.8 dB PSNR range on average. To tackle this limitation, we modulate the latent\nfeature of the current frame via the learnable quantization scaler. During the\ntraining, we specially design the uniform quantization parameter sampling\nmechanism to improve the harmonization of encoding and quantization. This\nresults in better learning of the quantization scaler and helps our NVC\nsupport about an 11.4 dB PSNR range. The second is how to make NVC still work\nunder a long prediction chain. We expose that the previous SOTA NVC has an\nobvious quality degradation problem when using a large intra-period setting.", + "The second is how to make NVC still work\nunder a long prediction chain. We expose that the previous SOTA NVC has an\nobvious quality degradation problem when using a large intra-period setting. To\nthis end, we propose modulating the temporal feature with a periodically\nrefreshing mechanism to boost the quality. Besides solving the above two\nproblems, we also design a single model that can support both RGB and YUV\ncolorspaces. Notably, under the single intra-frame setting, our codec can achieve\n29.7\\% bitrate saving over the previous SOTA NVC with a 16\\% MACs reduction. Our\ncodec serves as a notable landmark in the journey of NVC evolution. The code is\navailable at https://github.com/microsoft/DCVC.", + "Contrastive learning (CL) pre-trains general-purpose encoders using an\nunlabeled pre-training dataset, which consists of images or image-text pairs.\nCL is vulnerable to data poisoning based backdoor attacks (DPBAs), in which an\nattacker injects poisoned inputs into the pre-training dataset so the encoder\nis backdoored. However, existing DPBAs achieve limited effectiveness. In this\nwork, we take the first step to analyze the limitations of existing backdoor\nattacks and propose a new DPBA against CL, called CorruptEncoder. CorruptEncoder\nintroduces a new attack strategy to create poisoned inputs and uses a\ntheory-guided method to maximize attack effectiveness. Our experiments show\nthat CorruptEncoder substantially outperforms existing DPBAs. In particular,\nCorruptEncoder is the first DPBA that achieves more than 90% attack success\nrates with only a few (3) reference images and a small poisoning ratio of 0.5%.\nMoreover, we also propose a defense, called localized cropping, to defend\nagainst DPBAs. Our results show that our defense can reduce the effectiveness\nof DPBAs, but it sacrifices the utility of the encoder, highlighting the need\nfor new defenses.", + "The success of a specific neural network architecture is closely tied to the\ndataset and task it tackles; there is no one-size-fits-all solution. 
Thus,\nconsiderable efforts have been made to quickly and accurately estimate the\nperformances of neural architectures, without full training or evaluation, for\ngiven tasks and datasets. Neural architecture encoding has played a crucial\nrole in the estimation, and graph-based methods, which treat an architecture as\na graph, have shown prominent performance. For enhanced representation learning\nof neural architectures, we introduce FlowerFormer, a powerful graph\ntransformer that incorporates the information flows within a neural\narchitecture. FlowerFormer consists of two key components: (a) bidirectional\nasynchronous message passing, inspired by the flows; (b) global attention built\non flow-based masking. Our extensive experiments demonstrate the superiority of\nFlowerFormer over existing neural encoding methods, and its effectiveness\nextends beyond computer vision models to include graph neural networks and automatic\nspeech recognition models. Our code is available at\nhttp://github.com/y0ngjaenius/CVPR2024_FLOWERFormer.", + "Recent years have witnessed remarkable progress in the image generation task,\nwhere users can create visually astonishing images of high quality. However,\nexisting text-to-image diffusion models are proficient in generating concrete\nconcepts (e.g., dogs) but encounter challenges with more abstract ones (e.g., emotions).\nSeveral efforts have been made to modify image emotions with color and style\nadjustments, facing limitations in effectively conveying emotions with fixed\nimage contents. In this work, we introduce Emotional Image Content Generation\n(EICG), a new task to generate semantic-clear and emotion-faithful images given\nemotion categories. Specifically, we propose an emotion space and construct a\nmapping network to align it with the powerful Contrastive Language-Image\nPre-training (CLIP) space, providing a concrete interpretation of abstract\nemotions. Attribute loss and emotion confidence are further proposed to ensure\nthe semantic diversity and emotion fidelity of the generated images. Our method\noutperforms the state-of-the-art text-to-image approaches both quantitatively\nand qualitatively, where we derive three custom metrics, i.e., emotion\naccuracy, semantic clarity and semantic diversity. In addition to generation,\nour method can help emotion understanding and inspire emotional art design.", + "We propose InNeRF360, an automatic system that accurately removes\ntext-specified objects from 360-degree Neural Radiance Fields (NeRF). The\nchallenge is to effectively remove objects while inpainting perceptually\nconsistent content for the missing regions, which is particularly demanding for\nexisting NeRF models due to their implicit volumetric representation. Moreover,\nunbounded scenes are more prone to floater artifacts in the inpainted region\nthan frontal-facing scenes, as the change of object appearance and background\nacross views is more sensitive to inaccurate segmentations and inconsistent\ninpainting. With a trained NeRF and a text description, our method efficiently\nremoves specified objects and inpaints visually consistent content without\nartifacts. We apply depth-space warping to enforce consistency across multiview\ntext-encoded segmentations, and then refine the inpainted NeRF model using\nperceptual priors and 3D diffusion-based geometric priors to ensure visual\nplausibility. 
Through extensive experiments in segmentation and inpainting on\n360-degree and frontal-facing NeRFs, we show that our approach is effective and\nenhances NeRF's editability. Project page: https://ivrl.github.io/InNeRF360.", + "We address the problem of building digital twins of unknown articulated\nobjects from two RGBD scans of the object at different articulation states. We\ndecompose the problem into two stages, each addressing distinct aspects. Our\nmethod first reconstructs object-level shape at each state, then recovers the\nunderlying articulation model including part segmentation and joint\narticulations that associate the two states. By explicitly modeling point-level\ncorrespondences and exploiting cues from images, 3D reconstructions, and\nkinematics, our method yields more accurate and stable results compared to\nprior work. It also handles more than one movable part and does not rely on any\nobject shape or structure priors. Project page:\nhttps://github.com/NVlabs/DigitalTwinArt", + "Zero-shot learning (ZSL) recognizes the unseen classes by conducting\nvisual-semantic interactions to transfer semantic knowledge from seen classes\nto unseen ones, supported by semantic information (e.g., attributes). However,\nexisting ZSL methods simply extract visual features using a pre-trained network\nbackbone (i.e., CNN or ViT), which fail to learn matched visual-semantic\ncorrespondences for representing semantic-related visual features as lacking of\nthe guidance of semantic information, resulting in undesirable visual-semantic\ninteractions. To tackle this issue, we propose a progressive semantic-guided\nvision transformer for zero-shot learning (dubbed ZSLViT). ZSLViT mainly\nconsiders two properties in the whole network: i) discover the semantic-related\nvisual representations explicitly, and ii) discard the semantic-unrelated\nvisual information. Specifically, we first introduce semantic-embedded token\nlearning to improve the visual-semantic correspondences via semantic\nenhancement and discover the semantic-related visual tokens explicitly with\nsemantic-guided token attention. Then, we fuse low semantic-visual\ncorrespondence visual tokens to discard the semantic-unrelated visual\ninformation for visual enhancement.", + "Then, we fuse low semantic-visual\ncorrespondence visual tokens to discard the semantic-unrelated visual\ninformation for visual enhancement. These two operations are integrated into\nvarious encoders to progressively learn semantic-related visual representations\nfor accurate visual-semantic interactions in ZSL. The extensive experiments\nshow that our ZSLViT achieves significant performance gains on three popular\nbenchmark datasets, i.e., CUB, SUN, and AWA2.", + "Reference-based super-resolution (RefSR) has the potential to build bridges\nacross spatial and temporal resolutions of remote sensing images. However,\nexisting RefSR methods are limited by the faithfulness of content\nreconstruction and the effectiveness of texture transfer in large scaling\nfactors. Conditional diffusion models have opened up new opportunities for\ngenerating realistic high-resolution images, but effectively utilizing\nreference images within these models remains an area for further exploration.\nFurthermore, content fidelity is difficult to guarantee in areas without\nrelevant reference information. To solve these issues, we propose a\nchange-aware diffusion model named Ref-Diff for RefSR, using the land cover\nchange priors to guide the denoising process explicitly. 
Specifically, we\ninject the priors into the denoising model to improve the utilization of\nreference information in unchanged areas and regulate the reconstruction of\nsemantically relevant content in changed areas. With this powerful guidance, we\ndecouple the semantics-guided denoising and reference texture-guided denoising\nprocesses to improve the model performance.", + "With this powerful guidance, we\ndecouple the semantics-guided denoising and reference texture-guided denoising\nprocesses to improve the model performance. Extensive experiments demonstrate\nthe superior effectiveness and robustness of the proposed method compared with\nstate-of-the-art RefSR methods in both quantitative and qualitative\nevaluations. The code and data are available at\nhttps://github.com/dongrunmin/RefDiff.", + "The estimation of implicit cross-frame correspondences and the high\ncomputational cost have long been major challenges in video semantic\nsegmentation (VSS) for driving scenes. Prior works utilize keyframes, feature\npropagation, or cross-frame attention to address these issues. By contrast, we\nare the first to harness vanishing point (VP) priors for more effective\nsegmentation. Intuitively, objects near VPs (i.e., away from the vehicle) are\nless discernible. Moreover, they tend to move radially away from the VP over\ntime in the usual case of a forward-facing camera, a straight road, and linear\nforward motion of the vehicle. Our novel, efficient network for VSS, named\nVPSeg, incorporates two modules that utilize exactly this pair of static and\ndynamic VP priors: sparse-to-dense feature mining (DenseVP) and VP-guided\nmotion fusion (MotionVP). MotionVP employs VP-guided motion estimation to\nestablish explicit correspondences across frames and help attend to the most\nrelevant features from neighboring frames, while DenseVP enhances weak dynamic\nfeatures in distant regions around VPs.", + "MotionVP employs VP-guided motion estimation to\nestablish explicit correspondences across frames and help attend to the most\nrelevant features from neighboring frames, while DenseVP enhances weak dynamic\nfeatures in distant regions around VPs. These modules operate within a\ncontext-detail framework, which separates contextual features from\nhigh-resolution local features at different input resolutions to reduce\ncomputational costs. Contextual and local features are integrated through\ncontextualized motion attention (CMA) for the final prediction. Extensive\nexperiments on two popular driving segmentation benchmarks, Cityscapes and\nACDC, demonstrate that VPSeg outperforms previous SOTA methods, with only\nmodest computational overhead.", + "In the image classification task, deep neural networks frequently rely on\nbias attributes that are spuriously correlated with a target class in the\npresence of dataset bias, resulting in degraded performance when applied to\ndata without bias attributes. The task of debiasing aims to compel classifiers\nto learn intrinsic attributes that inherently define a target class rather than\nfocusing on bias attributes. While recent approaches mainly focus on\nemphasizing the learning of data samples without bias attributes (i.e.,\nbias-conflicting samples) compared to samples with bias attributes (i.e.,\nbias-aligned samples), they fall short of directly guiding models where to\nfocus for learning intrinsic features. 
To address this limitation, this paper\nproposes a method that provides the model with explicit spatial guidance that\nindicates the region of intrinsic features. We first identify the intrinsic\nfeatures by investigating the class-discerning common features between a\nbias-aligned (BA) sample and a bias-conflicting (BC) sample (i.e., a\nbias-contrastive pair). Next, we enhance the intrinsic features in the BA\nsample that are relatively under-exploited for prediction compared to the BC\nsample.", + "Next, we enhance the intrinsic features in the BA\nsample that are relatively under-exploited for prediction compared to the BC\nsample. To construct the bias-contrastive pair without using bias information,\nwe introduce a bias-negative score that distinguishes BC samples from BA\nsamples by employing a biased model. The experiments demonstrate that our method\nachieves state-of-the-art performance on synthetic and real-world datasets with\nvarious levels of bias severity.", + "The combination of strong visual backbones and Large Language Model (LLM)\nreasoning has led to Large Multimodal Models (LMMs) becoming the current\nstandard for a wide range of vision and language (VL) tasks. However, recent\nresearch has shown that even the most advanced LMMs still struggle to capture\naspects of compositional visual reasoning, such as attributes and relationships\nbetween objects. One solution is to utilize scene graphs (SGs)--a formalization\nof objects and their relations and attributes that has been extensively used as\na bridge between the visual and textual domains. Yet, scene graph data requires\nscene graph annotations, which are expensive to collect and thus not easily\nscalable. Moreover, finetuning an LMM based on SG data can lead to catastrophic\nforgetting of the pretraining objective. To overcome this, inspired by\nchain-of-thought methods, we propose Compositional Chain-of-Thought (CCoT), a\nnovel zero-shot Chain-of-Thought prompting method that utilizes SG\nrepresentations in order to extract compositional knowledge from an LMM.", + "To overcome this, inspired by\nchain-of-thought methods, we propose Compositional Chain-of-Thought (CCoT), a\nnovel zero-shot Chain-of-Thought prompting method that utilizes SG\nrepresentations in order to extract compositional knowledge from an LMM.\nSpecifically, we first generate an SG using the LMM, and then use that SG in\nthe prompt to produce a response. Through extensive experiments, we find that\nthe proposed CCoT approach not only improves LMM performance on several vision\nand language (VL) compositional benchmarks but also improves the performance of\nseveral popular LMMs on general multimodal benchmarks, without the need for\nfine-tuning or annotated ground-truth SGs. Code:\nhttps://github.com/chancharikmitra/CCoT", + "Score distillation sampling~(SDS) has been widely adopted to overcome the\nabsence of unseen views in reconstructing 3D objects from a \\textbf{single}\nimage. It leverages pre-trained 2D diffusion models as teachers to guide the\nreconstruction of student 3D models. Despite their remarkable success,\nSDS-based methods often encounter geometric artifacts and texture saturation.\nWe find that the crux is the overlooked indiscriminate treatment of diffusion\ntime-steps during optimization: it unreasonably treats the student-teacher\nknowledge distillation as equal at all time-steps and thus entangles\ncoarse-grained and fine-grained modeling.
Therefore, we propose the Diffusion\nTime-step Curriculum one-image-to-3D pipeline (DTC123), which involves both the\nteacher and student models collaborating with the time-step curriculum in a\ncoarse-to-fine manner. Extensive experiments on the NeRF4, RealFusion15, GSO, and\nLevel50 benchmarks demonstrate that DTC123 can produce multi-view consistent,\nhigh-quality, and diverse 3D assets. Code and more generation demos will be\nreleased at https://github.com/yxymessi/DTC123.", + "The neural radiance field is an emerging rendering method that generates\nhigh-quality multi-view consistent images from a neural scene representation\nand volume rendering. Although neural radiance field-based techniques are\nrobust for scene reconstruction, their ability to add or remove objects remains\nlimited. This paper proposes a new language-driven approach for object\nmanipulation with neural radiance fields through dataset updates. Specifically,\nto insert a new foreground object represented by a set of multi-view images\ninto a background radiance field, we use a text-to-image diffusion model to\nlearn and generate combined images that fuse the object of interest into the\ngiven background across views. These combined images are then used for refining\nthe background radiance field so that we can render view-consistent images\ncontaining both the object and the background. To ensure view consistency, we\npropose a dataset update strategy that prioritizes radiance field training\nwith camera views close to the already-trained views prior to propagating the\ntraining to remaining views. We show that under the same dataset update\nstrategy, we can easily adapt our method for object insertion using data from\ntext-to-3D models as well as object removal.", + "We show that under the same dataset update\nstrategy, we can easily adapt our method for object insertion using data from\ntext-to-3D models as well as object removal. Experimental results show that our\nmethod generates photorealistic images of the edited scenes, and outperforms\nstate-of-the-art methods in 3D reconstruction and neural radiance field\nblending.", + "While there has been remarkable progress recently in the fields of\nmanipulation and locomotion, mobile manipulation remains a long-standing\nchallenge. Compared to locomotion or static manipulation, a mobile system must\nmake a diverse range of long-horizon tasks feasible in unstructured and dynamic\nenvironments. While the applications are broad and interesting, there are a\nplethora of challenges in developing these systems, such as coordination between\nthe base and arm, reliance on onboard perception for perceiving and interacting\nwith the environment, and most importantly, simultaneously integrating all\nthese parts together. Prior works approach the problem using disentangled\nmodular skills for mobility and manipulation that are trivially tied together.\nThis causes several limitations such as compounding errors, delays in\ndecision-making, and no whole-body coordination. In this work, we present a\nreactive mobile manipulation framework that uses an active visual system to\nconsciously perceive and react to its environment.
Similar to how humans\nleverage whole-body and hand-eye coordination, we develop a mobile manipulator\nthat exploits its ability to move and see, more specifically -- to move in\norder to see and to see in order to move.", + "Similar to how humans\nleverage whole-body and hand-eye coordination, we develop a mobile manipulator\nthat exploits its ability to move and see, more specifically -- to move in\norder to see and to see in order to move. This allows it to not only move\naround and interact with its environment but also choose \"when\" to perceive\n\"what\" using an active visual system. We observe that such an agent learns to\nnavigate around complex cluttered scenarios while displaying agile whole-body\ncoordination using only ego-vision, without needing to create environment maps.\nResult visualizations and videos are available at https://spin-robot.github.io/", + "We present DREAM, a novel training framework representing Diffusion\nRectification and Estimation Adaptive Models, requiring minimal code changes\n(just three lines) yet significantly enhancing the alignment of training with\nsampling in diffusion models. DREAM features two components: diffusion\nrectification, which adjusts training to reflect the sampling process, and\nestimation adaptation, which balances perception against distortion. When\napplied to image super-resolution (SR), DREAM adeptly navigates the tradeoff\nbetween minimizing distortion and preserving high image quality. Experiments\ndemonstrate DREAM's superiority over standard diffusion-based SR methods,\nshowing a $2$ to\n$3\\times$ faster training convergence and a $10$ to\n$20\\times$ reduction in sampling steps to achieve comparable results. We hope\nDREAM will inspire a rethinking of diffusion model training paradigms.", + "Open-vocabulary human-object interaction (HOI) detection, which is concerned\nwith the problem of detecting novel HOIs guided by natural language, is crucial\nfor understanding human-centric scenes. However, prior zero-shot HOI detectors\noften employ the same levels of feature maps to model HOIs with varying\ndistances, leading to suboptimal performance in scenes containing human-object\npairs with a wide range of distances. In addition, these detectors primarily\nrely on category names and overlook the rich contextual information that\nlanguage can provide, which is essential for capturing open vocabulary concepts\nthat are typically rare and not well-represented by category names alone. In\nthis paper, we introduce a novel end-to-end open vocabulary HOI detection\nframework with conditional multi-level decoding and fine-grained semantic\nenhancement (CMD-SE), harnessing the potential of Visual-Language Models\n(VLMs). Specifically, we propose to model human-object pairs at different\ndistances with different levels of feature maps by incorporating a soft\nconstraint during the bipartite matching process.", + "Specifically, we propose to model human-object pairs at different\ndistances with different levels of feature maps by incorporating a soft\nconstraint during the bipartite matching process. Furthermore, by leveraging\nlarge language models (LLMs) such as GPT models, we exploit their extensive\nworld knowledge to generate descriptions of human body part states for various\ninteractions. Then we integrate the generalizable and fine-grained semantics of\nhuman body parts to improve interaction recognition.
Experimental results on\ntwo datasets, SWIG-HOI and HICO-DET, demonstrate that our proposed method\nachieves state-of-the-art results in open vocabulary HOI detection. The code\nand models are available at https://github.com/ltttpku/CMD-SE-release." +] \ No newline at end of file