Advances in Parameter-Efficient Fine-Tuning (PEFT) have bridged the performance gap with Full Fine-Tuning (FFT) through sophisticated analysis of pre-trained parameter spaces. Drawing insight from Neural Engrams (NE) in Biological Neural Networks (BNNs), we establish a connection between the low-rank property observed during PEFT's parameter-space shifts and neurobiological mechanisms. This observation leads to our proposed method, **S**ynapse and **N**euron (**SAN**), which decomposes the scaling component from anterior feature adjustment vectors and propagates it to posterior weight matrices. Our approach is theoretically grounded in Long-Term Potentiation/Depression (LTP/D) phenomena, which govern synapse development through neurotransmitter release modulation. Extensive experiments demonstrate its effectiveness: on **vision tasks** across VTAB, FGVC, and GIC (25 datasets) using ViT, Swin-T, and ConvNeXt architectures, SAN outperforms FFT by up to *8.7%* and LoRA by *3.2%*; on **language tasks** using Commonsense Reasoning (8 datasets) with LLaMA models (all generations), it surpasses ChatGPT by up to *8.5%* and LoRA by *4.7%*; on **vision-language tasks** using Visual Instruction Tuning (7 datasets) with LLaVA models, it exceeds FFT by up to *2.4%* and LoRA by *1.9%*. Our code and W&B logs will be released.
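
To make the core idea of "propagating the scaling component to posterior weight matrices" concrete, here is a minimal sketch, not the authors' implementation: assuming the propagation amounts to folding a learned per-feature scale (applied to an anterior layer's output) into the following weight matrix, the variable names `scale` and `W_next` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 4

x = rng.standard_normal(d_in)                    # anterior layer's output features
scale = 1.0 + 0.1 * rng.standard_normal(d_in)    # learned per-feature adjustment vector (assumed)
W_next = rng.standard_normal((d_out, d_in))      # posterior weight matrix (assumed dense linear layer)

# Applying the feature-wise scale before the next layer ...
y_scaled_features = W_next @ (scale * x)

# ... is algebraically equivalent to propagating the scale into the posterior
# weights: each input column of W_next absorbs the corresponding scale entry.
W_propagated = W_next * scale
y_propagated = W_propagated @ x

assert np.allclose(y_scaled_features, y_propagated)
```

The sketch only illustrates the linear-algebra identity W(s ⊙ x) = (W diag(s)) x under these assumptions; how SAN decomposes and applies this during training is described in the method section.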