One row per submission, with the following fields:

| Field | Type | Range in this split |
|---|---|---|
| `id` | string | 10 chars |
| `url` | string | 42 chars |
| `title` | string | 5–214 chars |
| `average_rating` | float64 | −1 to 8.5 |
| `average_confidence` | float64 | −1 to 5 |
| `ratings` | list | 0–9 items |
| `confidences` | list | 0–9 items |
| `reviewers_num` | int64 | 0–9 |
| `keywords` | list | 1–42 items |
| `abstract` | string | 26–4.31k chars |
| `tldr` | string | 0–250 chars |
| `primary_area` | string | 21 classes |
| `pdf_url` | string | 40 chars |
| `submission_date` | timestamp[s] | 2025-09-01 19:59:51 to 2025-09-20 20:18:08 |
| `total_reviews` | int64 | 0–18 |
| `reviews` | list | 0–9 items |
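The aggregate columns can be recomputed from the per-review lists. A minimal sketch in plain Python, using the values of the first preview record below; the `-1.0` fallback for review-less submissions is an assumption inferred from the column minima shown in the schema, not documented behavior:

```python
# Recompute a row's aggregate rating fields from its per-review lists.
# The example row mirrors the first record in the preview.
row = {
    "ratings": [4, 6, 8, 4],
    "confidences": [3, 3, 2, 4],
}

def mean(values):
    # Assumption: -1.0 is the sentinel for submissions with no reviews,
    # matching the -1 minima shown for the average_* columns.
    return sum(values) / len(values) if values else -1.0

average_rating = mean(row["ratings"])          # 5.5, as in the record
average_confidence = mean(row["confidences"])  # 3.0, as in the record
reviewers_num = len(row["ratings"])            # 4
```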
**A universal compression theory: Lottery ticket hypothesis and superpolynomial scaling laws** (`vxkzW4ljeX`)

- Forum: https://openreview.net/forum?id=vxkzW4ljeX | PDF: https://openreview.net/pdf?id=vxkzW4ljeX
- Ratings: [4, 6, 8, 4] (average 5.5) | Confidences: [3, 3, 2, 4] (average 3) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-19T05:07:02
- Primary area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
- Keywords: Neural scaling law, model compression, lottery ticket hypothesis, deep learning theory
- TL;DR: We prove that permutation symmetry enables polylogarithmic compression of neural networks and datasets, thus establishing the dynamical lottery ticket hypothesis and boosting neural scaling laws
- Abstract: When training large-scale models, the performance typically scales with the number of parameters and the dataset size according to a slow power law. A fundamental theoretical and practical question is whether comparable performance can be achieved with significantly smaller models and substantially less data. In this work, we provide a positive and constructive answer. We prove that a generic permutation-invariant function of $d$ objects can be asymptotically compressed into a function of $\operatorname{polylog} d$ objects with vanishing error. This theorem yields two key implications: (Ia) a large neural network can be compressed to polylogarithmic width while preserving its learning dynamics; (Ib) a large dataset can be compressed to polylogarithmic size while leaving the loss landscape of the corresponding model unchanged. (Ia) directly establishes a proof of the \textit{dynamical} lottery ticket hypothesis, which states that any ordinary network can be strongly compressed such that the learning dynamics and result remain unchanged. (Ib) shows that a neural scaling law of the form $L\sim d^{-\alpha}$ can be boosted to an arbitrarily fast power law decay, and ultimately to $\exp(-\alpha' \sqrt[m]{d})$.
- First review in preview (truncated): id `vvIZ8RIzRX`, review #4 by Reviewer_YxjE (`ICLR.cc/2026/Conference/Submission14172/Reviewer_YxjE`), rating 4, confidence 3, soundness 3, contribution 2, presentation 3, summary: "This ...
**InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU** (`fwCoRzh0Dw`)

- Forum: https://openreview.net/forum?id=fwCoRzh0Dw | PDF: https://openreview.net/pdf?id=fwCoRzh0Dw
- Ratings: [6, 4, 2] (average 4) | Confidences: [2, 3, 4] (average 3) | Reviewers: 3 | Total reviews: 3 | Submitted: 2025-09-17T09:29:23
- Primary area: infrastructure, software libraries, hardware, systems, etc.
- Keywords: Sparse Attention, Efficient Attention, Context Extrapolation, KV Cache Offloading
- TL;DR: InfiniteHiP extends the servable model context length beyond VRAM and pretrained model context limitation.
- Abstract: In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce \textit{InfiniteHiP}, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
- First review in preview (truncated): id `1VQ0xZHvLL`, review #3 by Reviewer_SD7R (`ICLR.cc/2026/Conference/Submission8178/Reviewer_SD7R`), rating 6, confidence 2, soundness 3, contribution 3, presentation 2, summary: "Infini...
**FedSumUp: Secure Federated Learning Without Client-Side Training for Resource-Constrained Edge Devices** (`5rjSeZCM6l`)

- Forum: https://openreview.net/forum?id=5rjSeZCM6l | PDF: https://openreview.net/pdf?id=5rjSeZCM6l
- Ratings: [4, 2, 4, 4] (average 3.5) | Confidences: [3, 3, 3, 4] (average 3.25) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-20T12:40:47
- Primary area: alignment, fairness, safety, privacy, and societal considerations
- Keywords: Federated Learning, Data Condensation, Server-Side Optimization, Privacy-Preserving, Edge Devices, Variational Autoencoder
- Abstract: Horizontal Federated Learning (HFL) enables multiple clients with private data to collaboratively train a global model without sharing their local data. As a research branch of HFL, Federated Data Condensation with Distribution Matching (FDCDM) introduces a novel collaborative paradigm where clients upload small synthetic datasets instead of gradients and parameters. FDCDM faces two key challenges: privacy leakage risk, where synthetic data may leak the privacy of real data; and high computational cost on the client side, which limits the deployment capability of FDCDM on resource-constrained devices. To address these challenges, we propose FedSumUp, an improved FDCDM method. The core designs of FedSumUp include: generating initial data templates based on a Variational Autoencoder (VAE); and migrating the entire synthetic data optimization process to the server side, requiring clients only to upload distilled synthetic data and the mean of raw data features without exposing the original data itself. Experimental results on multiple real-world datasets demonstrate that FedSumUp achieves notable advantages in the following aspects: drastically reducing the visual similarity between synthetic and real data, and effectively resisting membership inference attacks; significantly lowering client-side computational overhead, making it deployable on edge devices. FedSumUp is the first work to systematically analyze privacy risks in FDCDM from the perspective of data similarity, providing a new direction for building efficient and privacy-preserving federated learning frameworks.
- First review in preview (truncated): id `GcXZTsH254`, review #4 by Reviewer_VTkQ (`ICLR.cc/2026/Conference/Submission23402/Reviewer_VTkQ`), rating 4, confidence 3, soundness 3, contribution 3, presentation 3, summary: "This ...
**HARMAP: Hierarchical Atomic Representation for Materials Property Prediction** (`qN0Il4dtGg`)

- Forum: https://openreview.net/forum?id=qN0Il4dtGg | PDF: https://openreview.net/pdf?id=qN0Il4dtGg
- Ratings: [2, 2, 4, 6] (average 3.5) | Confidences: [4, 3, 3, 2] (average 3) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-10T21:25:01
- Primary area: applications to physical sciences (physics, chemistry, biology, etc.)
- Keywords: AI for Materials, Atomic Representation, Material Property Prediction
- TL;DR: A Hierarchical Atomic Representation for Materials Property prediction.
- Abstract: Accurate prediction of material properties is a key step toward rapid materials discovery and cost-effective exploration of vast chemical spaces. Recent advances in machine learning (ML) offer a data-driven alternative that enables fast and scalable property estimation. However, prevailing graph-based pipelines use one-hot or shallow element embeddings and simple distance-based edges, which under-encode element-specific characteristics and cannot faithfully capture bond relations. Thus, we develop HARMAP, a Hierarchical Atomic Representation for Materials Property prediction. First, we build a chemistry-informed Hierarchical Element Knowledge Tree (HEK-Tree) that classifies elements from coarse to fine (e.g., metal vs. non-metal, subgroupings), producing atomic embeddings that preserve unique identities and inter-atomic relations. Second, we map these features into hyperbolic spaces that preserve hierarchical structure, enabling compact separation of levels and smooth similarity across related elements. Finally, we construct a compound graph whose nodes use the learned atomic embeddings and whose edges combine geometric proximity, providing bond-aware connectivity. Across three large public datasets, HARMAP consistently improves over formula-only, structure-only, and standard graph baselines, indicating the effectiveness of HARMAP's unique atomic and bond representations.
- First review in preview (truncated): id `Kr0LTtqs14`, review #4 by Reviewer_CDzq (`ICLR.cc/2026/Conference/Submission3745/Reviewer_CDzq`), rating 2, confidence 4, soundness 3, contribution 2, presentation 3, summary: "This p...
**Universal Image Immunization against Diffusion-based Image Editing via Semantic Injection** (`0hLuQAT3fV`)

- Forum: https://openreview.net/forum?id=0hLuQAT3fV | PDF: https://openreview.net/pdf?id=0hLuQAT3fV
- Ratings: [4, 4, 4, 8] (average 5) | Confidences: [3, 4, 4, 3] (average 3.5) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-12T19:50:27
- Primary area: alignment, fairness, safety, privacy, and societal considerations
- Keywords: Diffusion Model, AI Safety, Image Immunization, Adversarial Attack, Image Editing
- Abstract: Recent advances in diffusion models have enabled powerful image editing capabilities guided by natural language prompts, unlocking new creative possibilities. However, they introduce significant ethical and legal risks, such as deepfakes and unauthorized use of copyrighted visual content. To address these risks, image immunization has emerged as a promising defense against AI-driven semantic manipulation. Yet, most existing approaches rely on image-specific adversarial perturbations that require individual optimization for each image, thereby limiting scalability and practicality. In this paper, we propose the first universal image immunization framework that generates a single, broadly applicable adversarial perturbation specifically designed for diffusion-based editing pipelines. Inspired by universal adversarial perturbation (UAP) techniques used in targeted attacks, our method generates a UAP that embeds a semantic target into images to be protected. Simultaneously, it suppresses original content to effectively misdirect the model’s attention during editing. As a result, our approach effectively blocks malicious editing attempts by overwriting the original semantic content in the image via the UAP. Moreover, our method operates effectively even in data-free settings without requiring access to training data or domain knowledge, further enhancing its practicality and broad applicability in real-world scenarios. Extensive experiments show that our method, as the first universal immunization approach, significantly outperforms several baselines in the UAP setting. In addition, despite the inherent difficulty of universal perturbations, our method also achieves performance on par with image-specific methods under a more restricted perturbation budget, while also exhibiting strong black-box transferability across different diffusion models.
- First review in preview (truncated): id `Cp6SNqZd08`, review #4 by Reviewer_nGCo (`ICLR.cc/2026/Conference/Submission4421/Reviewer_nGCo`), rating 4, confidence 3, soundness 2, contribution 2, presentation 3, summary: "This p...
**Consistent Low-Rank Approximation** (`3sJ4zKToW6`)

- Forum: https://openreview.net/forum?id=3sJ4zKToW6 | PDF: https://openreview.net/pdf?id=3sJ4zKToW6
- Ratings: [4, 8, 8] (average 6.666667) | Confidences: [3, 2, 5] (average 3.333333) | Reviewers: 3 | Total reviews: 3 | Submitted: 2025-09-19T05:52:21
- Primary area: optimization
- Keywords: low-rank approximation, online algorithms, consistency, recourse
- Abstract: We introduce and study the problem of consistent low-rank approximation, in which rows of an input matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ arrive sequentially and the goal is to provide a sequence of subspaces that well-approximate the optimal rank-$k$ approximation to the submatrix $\mathbf{A}^{(t)}$ that has arrived at each time $t$, while minimizing the recourse, i.e., the overall change in the sequence of solutions. We first show that when the goal is to achieve a low-rank cost within an additive $\varepsilon\cdot||\mathbf{A}^{(t)}||_F^2$ factor of the optimal cost, roughly $\mathcal{O}\left(\frac{k}{\varepsilon}\log(nd)\right)$ recourse is feasible. For the more challenging goal of achieving a relative $(1+\varepsilon)$-multiplicative approximation of the optimal rank-$k$ cost, we show that a simple upper bound in this setting is $\frac{k^2}{\varepsilon^2}\cdot\text{poly}\log(nd)$ recourse, which we further improve to $\frac{k^{3/2}}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for integer-bounded matrices and $\frac{k}{\varepsilon^2}\cdot\text{poly}\log(nd)$ for data streams with polynomial online condition number. We also show that $\Omega\left(\frac{k}{\varepsilon}\log\frac{n}{k}\right)$ recourse is necessary for any algorithm that maintains a multiplicative $(1+\varepsilon)$-approximation to the optimal low-rank cost, even if the full input is known in advance. Finally, we perform a number of empirical evaluations to complement our theoretical guarantees, demonstrating the efficacy of our algorithms in practice.
- First review in preview (truncated): id `G9M6d2dYmo`, review #3 by Reviewer_ex4U (`ICLR.cc/2026/Conference/Submission14297/Reviewer_ex4U`), rating 4, confidence 3, soundness 3, contribution 2, presentation 2, summary: "This ...
**LLM2Fx-Tools: Tool Calling for Music Post-Production** (`OyIJvyyB3R`)

- Forum: https://openreview.net/forum?id=OyIJvyyB3R | PDF: https://openreview.net/pdf?id=OyIJvyyB3R
- Ratings: [4, 8, 6, 4] (average 5.5) | Confidences: [3, 3, 4, 4] (average 3.5) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-19T13:42:11
- Primary area: applications to computer vision, audio, language, and other modalities
- Keywords: Music Post Production, Fx Chain Generation, Tool Calling
- TL;DR: LLM2Fx-Tools is a framework that uses a multimodal LLM to automatically generate executable audio effect chains (as tools), chain-of-thought reasoning, and natural language responses.
- Abstract: This paper introduces LLM2Fx-Tools, a multimodal tool-calling framework that generates executable sequences of audio effects (Fx-chain) for music post-production. LLM2Fx-Tools uses a large language model (LLM) to understand audio inputs, select audio effects types, determine their order, and estimate parameters, guided by chain-of-thought (CoT) planning. We also present LP-Fx, a new instruction-following dataset with structured CoT annotations and tool calls for audio effects modules. Experiments show that LLM2Fx-Tools can infer an Fx-chain and its parameters from pairs of unprocessed and processed audio, enabled by autoregressive sequence modeling, tool calling, and CoT reasoning. We further validate the system in a style transfer setting, where audio effects information is transferred from a reference source and applied to new content. Finally, LLM-as-a-judge evaluation demonstrates that our approach generates appropriate CoT reasoning and responses for music production queries. To our knowledge, this is the first work to apply LLM-based tool calling to audio effects modules, enabling interpretable and controllable music production where users can incorporate their own audio plugins.
- First review in preview (truncated): id `B7fQjc5nan`, review #4 by Reviewer_Rbd9 (`ICLR.cc/2026/Conference/Submission16141/Reviewer_Rbd9`), rating 4, confidence 3, soundness 3, contribution 2, presentation 3, summary: "This ...
**Flash Multi-Head Feed-Forward Network** (`rcsZNV9A5j`)

- Forum: https://openreview.net/forum?id=rcsZNV9A5j | PDF: https://openreview.net/pdf?id=rcsZNV9A5j
- Ratings: [6, 4, 4, 6] (average 5) | Confidences: [3, 4, 4, 4] (average 3.75) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-16T16:13:44
- Primary area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
- Keywords: Machine Learning Systems, Machine Learning, Software-Hardware Codesign, Natural Language Processing, Transformer, Deep Learning, Model Architecture
- TL;DR: We propose a novel multi-head FFN that achieves better transformer model performance while using 3-5x less memory and running 1.00-1.08x faster than standard SwiGLU FFNs.
- Abstract: We explore Multi-Head FFN (MH-FFN) as a replacement of FFN in the Transformer architecture, motivated by the structural similarity between single-head attention and FFN. While multi-head mechanisms enhance expressivity in attention, naively applying them to FFNs faces two challenges: memory consumption scaling with the head count, and an imbalanced ratio between the growing intermediate size and the fixed head dimension as models scale, which degrades scalability and expressive power. To address these challenges, we propose Flash Multi-Head FFN (FlashMHF), with two key innovations: an I/O-aware fused kernel computing outputs online in SRAM akin to FlashAttention, and a design using dynamically weighted parallel sub-networks to maintain a balanced ratio between intermediate and head dimensions. Validated on models from 128M to 1.3B parameters, FlashMHF consistently improves perplexity and downstream task accuracy over SwiGLU FFNs, while reducing peak memory usage by 3-5x and accelerating inference by up to 1.08x. Our work establishes the multi-head design as a superior architectural principle for FFNs, presenting FlashMHF as a powerful, efficient, and scalable alternative to FFNs in Transformers.
- First review in preview (truncated): id `TygVX9zSRX`, review #4 by Reviewer_i2pJ (`ICLR.cc/2026/Conference/Submission7175/Reviewer_i2pJ`), rating 6, confidence 3, soundness 3, contribution 3, presentation 2, summary: "This p...
**PEL-NAS: Search Space Partitioned Architecture Prompt Co-evolutionary LLM-driven Hardware-Aware Neural Architecture Search** (`eS4MAmmCHy`)

- Forum: https://openreview.net/forum?id=eS4MAmmCHy | PDF: https://openreview.net/pdf?id=eS4MAmmCHy
- Ratings: [4, 4, 4, 2] (average 3.5) | Confidences: [4, 4, 4, 4] (average 4) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-18T03:16:21
- Primary area: infrastructure, software libraries, hardware, systems, etc.
- Keywords: Large Language Model, Hardware-aware, Neural Architecture Search
- Abstract: Hardware-Aware Neural Architecture Search (HW-NAS) requires joint optimization of accuracy and latency under device constraints. Traditional supernet-based methods require multiple GPU days per dataset. Large Language Model (LLM)-driven approaches avoid training a large supernet and can provide quick feedback, but we observe an exploration bias: the LLM repeatedly proposes neural network designs within limited search space and fails to discover architectures across different latency ranges in the whole search space. To address this issue, we propose PEL-NAS: a search space Partitioned, architecture prompt co-Evolutionary and LLM-driven Neural Architecture Search that can generate neural networks with high accuracy and low latency with reduced search cost. Our proposed PEL-NAS has three key components: 1) a complexity-driven partitioning engine that divides the search space by complexity to enforce diversity and mitigate exploration bias; 2) an LLM-powered architecture prompt co-evolution operator, in which the LLM first updates a knowledge base of design heuristics based on results from the previous round, then performs a guided evolution algorithm on architectures with prompts that incorporate this knowledge base. Prompts and designs improve together across rounds which avoid random guesswork and improve efficiency; 3) a zero-cost predictor to avoid training a large number of candidates from scratch. Experimental results show that on HW-NAS-Bench, PEL-NAS can achieve overall higher HV, lower IGD, and up to 54% lower latency than baselines at similar accuracy. Meanwhile, the search cost drops from days to minutes compared with traditional supernet baselines.
- First review in preview (truncated): id `r5WN4tP0vh`, review #4 by Reviewer_ygWA (`ICLR.cc/2026/Conference/Submission9721/Reviewer_ygWA`), rating 4, confidence 4, soundness 3, contribution 2, presentation 2, summary: "This p...
**ATOM-Bench: From Atoms to Conclusions in Objective Evaluation of Large Multimodal Models Reasoning** (`MgVNhx5uaa`)

- Forum: https://openreview.net/forum?id=MgVNhx5uaa | PDF: https://openreview.net/pdf?id=MgVNhx5uaa
- Ratings: [2, 4, 4, 2] (average 3) | Confidences: [4, 4, 3, 4] (average 3.75) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-18T21:58:39
- Primary area: datasets and benchmarks
- Keywords: multimodal Large Language Models, benchmark, chain of thought
- TL;DR: We introduce ATOM-Bench, a diagnostic benchmark for evaluating Chain-of-Thought reasoning in Large Multimodal Models via objective atomic questions, spanning 2,920 QAs over 570 real-world images, to address challenges of reasoning reliability.
- Abstract: Chain-of-Thought (CoT) reasoning has significantly enhanced the ability of Large Multimodal Models (LMMs) to tackle complex image–text tasks, establishing itself as a cornerstone of multimodal learning. Despite significant progress, the impact of CoT on LMMs still lacks objective evaluation and in-depth research. Current CoT evaluation paradigms rely on powerful LLMs as judges of free-form text, but this introduces bias and hallucination from the evaluator itself. Moreover, it may penalize models for stylistic variations rather than genuine reasoning failures, thereby undermining the fairness and reliability of the assessment. To address this gap, we introduce ATOM-Bench, a CoT evaluation framework built on objective atomic questions. ATOM-Bench decomposes complex reasoning tasks into a series of atomic nodes, covering 570 high-resolution real-world images and 2,920 questions across 4 cognitive dimensions, and 12 domains, including architecture, text, transportation, culture, climate, and geology. Our benchmark introduces three novel quantitative metrics to objectively analyze reasoning faithfulness, consistency, and robustness. Extensive experiments with 22 LMMs validate the effectiveness of our framework. The results reveal that even the strongest models often exhibit a mismatch between surface-level correctness of final answers and their underlying evidence comprehension, while also exposing cognitive rigidity when faced with objective facts. We believe that ATOM-Bench, as a more objective and diagnostic tool, will advance LMMs toward more reliable and faithful reasoning.
- First review in preview (truncated): id `qyea8A8FPG`, review #4 by Reviewer_sAqG (`ICLR.cc/2026/Conference/Submission11801/Reviewer_sAqG`), rating 2, confidence 4, soundness 2, contribution 2, presentation 3, summary: "The p...
**TopoCore: Unifying Topology Manifolds and Persistent Homology for Data Pruning** (`wztR0XcNW9`)

- Forum: https://openreview.net/forum?id=wztR0XcNW9 | PDF: https://openreview.net/pdf?id=wztR0XcNW9
- Ratings: [4, 2, 6] (average 4) | Confidences: [3, 3, 3] (average 3) | Reviewers: 3 | Total reviews: 3 | Submitted: 2025-09-18T02:54:05
- Primary area: learning on graphs and other geometries & topologies
- Keywords: Coreset Selection, Topological Data Analysis, Persistent Homology, Architectural Transferability, Data-Efficient Learning, Manifold Learning, Pretrained Models
- Abstract: Geometric coreset selection methods, while practical for leveraging pretrained models, are fundamentally unstable. Their reliance on extrinsic geometric metrics makes them highly sensitive to variations in feature embeddings, leading to poor performance when transferring across different network architectures or when dealing with noisy features. We introduce TopoCore, a novel framework that resolves this challenge by leveraging the principles of topology to capture the intrinsic, stable structure of data. TopoCore operates in two stages, (1) utilizing a _topology-aware manifold approximation_ to establish a global low-dimensional embedding of the dataset. Subsequently, (2) it employs _differentiable persistent homology_ to perform a local topological optimization on the manifold embeddings, scoring samples based on their structural complexity. We show that at high pruning rates (e.g., 90\%), our _dual-scale topological approach_ yields a coreset selection method that boosts accuracy with up to 4$\times$ better precision than existing methods. Furthermore, through the inherent stability properties of topology, TopoCore is (a) exceptionally robust to noise perturbations of the feature embeddings and (b) demonstrates superior architecture transferability, improving both accuracy and stability across diverse network architectures. This study demonstrates a promising avenue towards stable and principled topology-based frameworks for robust data-efficient learning.
- First review in preview (truncated): id `p1cclI53pH`, review #3 by Reviewer_Sq9q (`ICLR.cc/2026/Conference/Submission9698/Reviewer_Sq9q`), rating 4, confidence 3, soundness 2, contribution 2, presentation 3, summary: "The pa...
**WIMFRIS: WIndow Mamba Fusion and Parameter Efficient Tuning for Referring Image Segmentation** (`WnRzN4U8Y8`)

- Forum: https://openreview.net/forum?id=WnRzN4U8Y8 | PDF: https://openreview.net/pdf?id=WnRzN4U8Y8
- Ratings: [4, 6, 4, 6] (average 5) | Confidences: [5, 5, 3, 5] (average 4.5) | Reviewers: 4 | Total reviews: 4 | Submitted: 2025-09-20T14:00:25
- Primary area: applications to computer vision, audio, language, and other modalities
- Keywords: Referring image segmentation, parameter efficient tuning, computer vision
- TL;DR: This paper introduces WIMFRIS, a framework that achieves state-of-the-art in referring image segmentation by proposing a novel HMF neck module to efficiently fuse text with visual features, overcoming a key performance bottleneck in prior methods.
- Abstract: Existing Parameter-Efficient Tuning (PET) methods for Referring Image Segmentation (RIS) primarily focus on layer-wise feature alignment, often neglecting the crucial role of a neck module for the intermediate fusion of aggregated multi-scale features, which creates a significant performance bottleneck. To address this limitation, we introduce WIMFRIS, a novel framework that establishes a powerful neck architecture alongside a simple yet effective PET strategy. At its core is our proposed HMF block, which first aggregates multi-scale features and then employs a novel WMF module to perform effective intermediate fusion. This WMF module leverages non-overlapping window partitioning to mitigate the information decay problem inherent in SSMs while ensuring rich local-global context interaction. Furthermore, our PET strategy enhances primary alignment with a MTA for robust textual priors, a MSA for precise vision-language fusion, and learnable emphasis parameters for adaptive stage-wise feature weighting. Extensive experiments demonstrate that WIMFRIS achieves new state-of-the-art performance across all public RIS benchmarks.
- First review in preview (truncated): id `l3NeqmvthW`, review #4 by Reviewer_N61Y (`ICLR.cc/2026/Conference/Submission23757/Reviewer_N61Y`), rating 4, confidence 5, soundness 2, contribution 2, presentation 2, summary: "The p...