Differentiable latent structure discovery for interpretable forecasting in clinical time series
Ivan Lerner, Jean Feydy, Alexandre Kalimouttou et al.
cs.LG · Apr 30, 2026
Background: Timely, uncertainty-aware forecasting from irregular electronic health records (EHR) can support critical-care decisions, yet most approaches either impute to a grid or sacrifice interpret…
Strait: Perceiving Priority and Interference in ML Inference Serving
Haidong Zhao, Nikolaos Georgantas
cs.LG · Apr 30, 2026
Machine learning (ML) inference serving systems host deep neural network (DNN) models and schedule incoming inference requests across deployed GPUs. However, limited support for task prioritization an…
TwinGate: Stateful Defense against Decompositional Jailbreaks in Untraceable Traffic via Asymmetric Contrastive Learning
Bowen Sun, Chaozhuo Li, Yaodong Yang et al.
cs.CR · cs.CL · cs.LG · Apr 30, 2026
Decompositional jailbreaks pose a critical threat to large language models (LLMs) by allowing adversaries to fragment a malicious objective into a sequence of individually benign queries that collecti…
MIFair: A Mutual-Information Framework for Intersectionality and Multiclass Fairness
Jeanne Monnier, Thomas George, Frédéric Guyard et al.
cs.LG · cs.AI · cs.CY · cs.IT · Apr 30, 2026
Fairness in machine learning remains challenging due to its ethical complexity, the absence of a universal definition, and the need for context-specific bias metrics. Existing methods still struggle w…
VibroML: an automated toolkit for high-throughput vibrational analysis and dynamic instability remediation of crystalline materials using machine-learned potentials
Rogério Almeida Gouvêa, Gian-Marco Rignanese
cond-mat.mtrl-sci · cs.AI · cs.LG · physics.comp-ph · Apr 30, 2026
While machine-learned interatomic potentials (MLIPs) accelerate phonon dispersion calculations, merely identifying dynamical instabilities in computationally predicted materials is insufficient; autom…
Heterogeneous Scientific Foundation Model Collaboration
Zihao Li, Jiaru Zou, Feihao Fang et al.
cs.AI · cs.CL · cs.LG · Apr 30, 2026
Agentic large language model systems have demonstrated strong capabilities. However, their reliance on language as the universal interface fundamentally limits their applicability to many real-world p…
Synthetic Computers at Scale for Long-Horizon Productivity Simulation
Tao Ge, Baolin Peng, Hao Cheng et al.
cs.AI · cs.CL · cs.LG · Apr 30, 2026
Realistic long-horizon productivity work is strongly conditioned on user-specific computer environments, where much of the work context is stored and organized through directory structures and content…
In-Context Prompting Obsoletes Agent Orchestration for Procedural Tasks
Simon Dennis, Michael Diamond, Rivaan Patil et al.
cs.AI · cs.LG · Apr 30, 2026
Agent orchestration frameworks -- LangGraph, CrewAI, Google ADK, OpenAI Agents SDK, and others -- place an external orchestrator above the LLM, tracking state and injecting routing instructions at eve…
Post-Optimization Adaptive Rank Allocation for LoRA
Vishnuprasadh Kumaravelu, Sunil Gupta, P. K. Srijith
cs.AI · Apr 30, 2026
Exponential growth in the scale of modern foundation models has led to the widespread adoption of Low-Rank Adaptation (LoRA) as a parameter-efficient fine-tuning technique. However, standard LoRA impl…
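As background for this entry, the standard LoRA parameterization can be sketched in a few lines: a frozen weight W is adapted as W + (alpha / r) * B @ A, where A and B are trainable low-rank factors. The shapes, scaling, and zero initialization of B below are the commonly used conventions, not details taken from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 4, 8  # illustrative dimensions and rank

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight shown explicitly for clarity; in practice the
    # low-rank update is applied without materializing W + BA.
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)
y = lora_forward(x)
# With B initialized to zero, the adapted layer equals the frozen layer.
assert np.allclose(y, W @ x)
```

Only A and B (r * (d_in + d_out) parameters) are trained, which is the parameter-efficiency argument the abstract refers to.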
Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders
Emma Andrews, Sahan Sanjaya, Prabhat Mishra
quant-ph · cs.LG · Apr 30, 2026
Machine learning models can learn from data samples to carry out various tasks efficiently. When data samples are adversarially manipulated, such as by insertion of carefully crafted noise, it can cau…
FlexiTac: A Low-Cost, Open-Source, Scalable Tactile Sensing Solution for Robotic Systems
Binghao Huang, Yunzhu Li
cs.RO · cs.AI · cs.LG · Apr 30, 2026
We present FlexiTac, a low-cost, open-source, and scalable piezoresistive tactile sensing solution designed for robotic end-effectors. FlexiTac is a practical "plug-in" module consisting of (i) thin, …
Efficient Multivector Retrieval with Token-Aware Clustering and Hierarchical Indexing
Silvio Martinico, Franco Maria Nardini, Cosimo Rulli et al.
cs.IR · cs.LG · Apr 30, 2026
Multivector retrieval models achieve state-of-the-art effectiveness through fine-grained token-level representations, but their deployment incurs substantial computational and memory costs. Current so…
Sequential Inference for Gaussian Processes: A Signal Processing Perspective
Daniel Waxman, Fernando Llorente, Petar M. Djurić
eess.SP · cs.LG · stat.CO · stat.ML · Apr 30, 2026
The proliferation of capable and efficient machine learning (ML) models marks one of the strongest methodological shifts in signal processing (SP) in its nearly 100-year history. ML models support the…
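For context, sequential GP inference means updating the posterior as observations arrive one at a time. The minimal sketch below recomputes the exact GP regression posterior after each new sample; the RBF kernel and the noise level are illustrative choices, not this paper's method (which may use more efficient recursive updates):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_posterior(X, y, Xq, noise=1e-2):
    # Exact GP regression posterior mean and covariance at query points Xq.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    cov = rbf(Xq, Xq) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# Stream three (toy) observations and refresh the posterior each step.
X, y = np.empty(0), np.empty(0)
for xt, yt in [(0.0, 0.1), (1.0, 0.9), (2.0, 0.2)]:
    X, y = np.append(X, xt), np.append(y, yt)
    mean, cov = gp_posterior(X, y, np.array([0.5]))
# Posterior variance at the query shrinks as observations accumulate.
```

A truly sequential implementation would instead apply rank-1 updates to the Cholesky factor of K, avoiding the cubic recomputation done here for clarity.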
Differential Subgroup Discovery: Characterizing Where Two Populations Differ, and Why
Sascha Xu, Jilles Vreeken
cs.LG · Apr 30, 2026
We study the problem of understanding where two populations differ within a feature space, which we formalize in the concept of a differential subgroup: a subset of individuals from both populations w…
Explainable Load Forecasting with Covariate-Informed Time Series Foundation Models
Matthias Hertel, Alexandra Nikoltchovska, Sebastian Pütz et al.
cs.LG · Apr 30, 2026
Time Series Foundation Models (TSFMs) have recently emerged as general-purpose forecasting models and show considerable potential for applications in energy systems. However, applications in critical …
Mind the Gap: Structure-Aware Consistency in Preference Learning
Mehryar Mohri, Yutao Zhong
cs.LG · stat.ML · Apr 30, 2026
Preference learning has become the foundation of aligning Large Language Models (LLMs) with human intent. Popular methods, such as Direct Preference Optimization (DPO), minimize surrogate losses as pr…
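To make the DPO surrogate concrete: for one preference pair, the loss is the negative log-sigmoid of a beta-scaled margin between the policy's and a reference model's log-probability gaps on the chosen versus rejected response. The log-probabilities below are illustrative numbers, not outputs of a real model:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Margin: how much more the policy prefers the winner over the loser,
    # relative to the reference model.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When policy and reference agree, the margin is 0 and the loss is log(2).
loss = dpo_loss(-5.0, -6.0, ref_logp_w=-5.0, ref_logp_l=-6.0)
assert abs(loss - math.log(2.0)) < 1e-12
```

This is the surrogate whose consistency properties (how minimizing it relates to recovering the true preference ordering) papers like this one analyze.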
Linear-Core Surrogates: Smooth Loss Functions with Linear Rates for Classification and Structured Prediction
Mehryar Mohri, Yutao Zhong
cs.LG · stat.ML · Apr 30, 2026
The choice of loss function in classification involves a fundamental trade-off: smooth losses (like Cross-Entropy) enable fast optimization rates but yield slow square-root consistency bounds, while p…
Exploration Hacking: Can LLMs Learn to Resist RL Training?
Eyon Jang, Damon Falck, Joschka Braun et al.
cs.LG · cs.CL · Apr 30, 2026
Reinforcement learning (RL) has become essential to the post-training of large language models (LLMs) for reasoning, agentic capabilities and alignment. Successful RL relies on sufficient exploration …
Why Self-Supervised Encoders Want to Be Normal
Yuval Domb
cs.IT · cs.AI · cs.LG · Apr 30, 2026
We develop a geometric and information-theoretic framework for encoder-decoder learning built on the Information Bottleneck (IB) principle. Recasting IB as a rate-distortion problem with Kullback-Leib…
Learning When to Remember: Risk-Sensitive Contextual Bandits for Abstention-Aware Memory Retrieval in LLM-Based Coding Agents
Mehmet Iscan
cs.CL · cs.AI · cs.LG · Apr 30, 2026
Large language model (LLM)-based coding agents increasingly rely on external memory to reuse prior debugging experience, repair traces, and repository-local operational knowledge. However, retrieved m…
FedHarmony: Harmonizing Heterogeneous Label Correlations in Federated Multi-Label Learning
Zhiqiang Kou, Junxiang Wu, Wenke Huang et al.
cs.LG · Apr 30, 2026
Federated Multi-Label Learning is a distributed paradigm where multiple clients possess heterogeneous multi-label data and perform collaborative learning under privacy constraints without sharing raw …
ITS-Mina: A Harris Hawks Optimization-Based All-MLP Framework with Iterative Refinement and External Attention for Multivariate Time Series Forecasting
Pourya Zamanvaziri, Amirhossein Sadr, Aida Pakniyat et al.
cs.LG · cs.AI · Apr 30, 2026
Multivariate time series forecasting plays a pivotal role in numerous real-world applications, including financial analysis, energy management, and traffic planning. While Transformer-based architectu…
Kernelized Advantage Estimation: From Nonparametric Statistics to LLM Reasoning
Shijin Gong, Kai Ye, Jin Zhu et al.
cs.LG · stat.ML · Apr 30, 2026
Recent advances in large language models (LLMs) have increasingly relied on reinforcement learning (RL) to improve their reasoning capabilities. Three approaches have been widely adopted: (i) Proximal…
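As a reference point for the advantage-estimation methods this abstract surveys, one widely used baseline-free scheme (GRPO-style) samples several responses per prompt and normalizes each reward against the group's mean and standard deviation. The rewards below are illustrative numbers, and this sketch is the generic scheme, not this paper's kernelized estimator:

```python
import numpy as np

def group_advantages(rewards, eps=1e-8):
    # Group-relative advantage: center and scale rewards within the
    # group of responses sampled for the same prompt.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

adv = group_advantages([1.0, 0.0, 0.5, 0.5])
# Advantages are centered, so they sum to (approximately) zero.
assert abs(adv.sum()) < 1e-6
```

A kernelized variant would replace the flat group mean with a nonparametric regression estimate of the expected reward, which is the direction the title suggests.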
Sampling two-dimensional spin systems with transformers
Piotr Białas, Piotr Korcyl, Tomasz Stebel et al.
cond-mat.dis-nn · cond-mat.stat-mech · cs.LG · hep-lat · Apr 30, 2026
Autoregressive Neural Networks based on dense or convolutional layers have recently been shown to be a viable strategy for generating classical spin systems. Unlike these methods, sampling with transf…
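The core idea behind autoregressive spin sampling can be shown in miniature: each spin is drawn from a conditional distribution given the spins sampled so far, and the sampler also returns the exact log-probability of the configuration (useful for reweighting). The fixed nearest-neighbor bias below is a toy stand-in for a learned network's conditional, and a 1-D chain stands in for the 2-D lattice:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_chain(n, coupling=0.5):
    # Sample spins s_i in {-1, +1} autoregressively, left to right.
    spins = np.empty(n, dtype=int)
    logq = 0.0  # exact log-probability of the sampled configuration
    for i in range(n):
        # Toy conditional: probability of +1 biased by the previous spin.
        bias = coupling * spins[i - 1] if i > 0 else 0.0
        p_up = 1.0 / (1.0 + np.exp(-2.0 * bias))
        spins[i] = 1 if rng.random() < p_up else -1
        logq += np.log(p_up if spins[i] == 1 else 1.0 - p_up)
    return spins, logq

s, logq = sample_chain(16)
```

Transformer-based samplers follow the same recipe but let every conditional attend to all previously generated spins rather than only the nearest neighbor.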
A Unified Framework of Hyperbolic Graph Representation Learning Methods
Sofía Pérez Casulo, Marcelo Fiori, Bernardo Marenco et al.
cs.LG · Apr 30, 2026
Hyperbolic geometry has emerged as an effective latent space for representing complex networks, owing to its ability to capture hierarchical organization and heterogeneous connectivity patterns using …
PROMISE-AD: Progression-aware Multi-horizon Survival Estimation for Alzheimer's Disease Progression and Dynamic Tracking
Qing Lyu, Jeremy Hudson, Mohammad Kawas et al.
cs.LG · cs.AI · eess.IV · Apr 30, 2026
Individualized Alzheimer's disease (AD) progression prediction requires models that use irregular visits, account for censoring, avoid diagnostic leakage, and provide calibrated horizon risks. We prop…
Calibrating Attribution Proxies for Reward Allocation in Participatory Weather Sensing
Mark C. Ballandies, Michael T. C. Chiu, Claudio J. Tessone
cs.LG · cs.CY · cs.GT · physics.ao-ph · Apr 30, 2026
Large-scale IoT weather sensing networks require incentive mechanisms to sustain participation, yet determining how much value individual data contributions bring to the network remains an open proble…
Computing Equilibrium beyond Unilateral Deviation
Mingyang Liu, Gabriele Farina, Asuman Ozdaglar
cs.GT · cs.AI · cs.CC · cs.LG · Apr 30, 2026
Most familiar equilibrium concepts, such as Nash and correlated equilibrium, guarantee only that no single player can improve their utility by deviating unilaterally. They offer no guarantees against …
TopBench: A Benchmark for Implicit Prediction and Reasoning over Tabular Question Answering
An-Yang Ji, Jun-Peng Jiang, De-Chuan Zhan et al.
cs.CL · cs.AI · cs.LG · Apr 30, 2026
Large Language Models (LLMs) have advanced Table Question Answering, where most queries can be answered by extracting information or simple aggregation. However, a common class of real-world queries i…
The TEA Nets framework combines AI and cognitive network science to model targets, events and actors in text
Sebastiano Franchini, Alexis Carrillo, Edoardo Sebastiano De Duro et al.
cs.AI · cs.CY · cs.HC · cs.LG · Apr 30, 2026
We introduce Target-Event-Agent Networks (TEA Nets) as a computational framework to extract subjects ("Agents"), verbs ("Events"), and objects ("Targets") from texts. Grounded in cognitive network …