B³-Seg: Camera-Free, Training-Free 3DGS Segmentation via Analytic EIG and Beta–Bernoulli Bayesian Updates
Overview
Interactive 3DGS segmentation without predefined cameras or retraining.
B³-Seg overview with camera-free view selection and Bayesian updates.
Problem
Existing methods rely on preset viewpoints, ground-truth labels, or costly retraining, which makes low-latency editing workflows impractical.
Idea
Treat segmentation as Bayesian inference over per-Gaussian foreground probabilities and choose the next view by expected information gain.
Result
Fast, adaptive segmentation that matches supervised baselines while using only a handful of views.
Method
Active view selection with analytic EIG and fast Bayesian updates for camera-free 3DGS segmentation.
Initialize Beta Priors
Assign a Beta distribution to each Gaussian’s foreground probability and estimate an initial mask.
Sample Candidate Views
Uniformly sample camera candidates around the estimated object center to evaluate informativeness.
Compute EIG
Render once per view to estimate responsibilities and score Expected Information Gain.
Select + Update
Pick the best view, run Grounding DINO + SAM2 + CLIP, then update Beta parameters.
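The candidate-sampling step can be sketched as follows. `fibonacci_sphere` is an illustrative helper (the exact sampling scheme is not specified here): it places candidate camera positions near-uniformly on a sphere around the estimated object center using a Fibonacci lattice, a common heuristic for this purpose.

```python
import numpy as np

def fibonacci_sphere(n, radius, center):
    """Sample n candidate camera positions near-uniformly on a sphere
    of the given radius around the estimated object center.
    (Illustrative sampling heuristic; the paper's scheme may differ.)"""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0))        # golden angle
    y = 1.0 - 2.0 * (i + 0.5) / n             # evenly spaced heights in (-1, 1)
    r = np.sqrt(1.0 - y * y)                  # ring radius at each height
    pts = np.stack([np.cos(phi * i) * r, y, np.sin(phi * i) * r], axis=1)
    return center + radius * pts
```

Each candidate is then scored by EIG, and only the winning view is passed through the Grounding DINO + SAM2 + CLIP pipeline.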
Theoretical Foundations
Bayesian reformulation and analytic EIG with greedy view selection guarantees.
Bayesian Reformulation
Each Gaussian $g_i$ has a latent label $y_i \in \{0,1\}$, with a Beta prior over its foreground probability $p_i$:
$$p_i \sim \mathrm{Beta}(a_i, b_i), \qquad y_i \mid p_i \sim \mathrm{Bernoulli}(p_i).$$
Given a view $v$ and mask $M(v)$, the per-Gaussian evidence is the sum of visibility-weighted responsibilities inside or outside the mask:
$$\Delta a_i(v) = \sum_{u \in M(v)} w_i(u), \qquad \Delta b_i(v) = \sum_{u \notin M(v)} w_i(u),$$
where $w_i(u)$ is the visibility-weighted responsibility of $g_i$ at pixel $u$. By conjugacy, the posterior update is closed-form:
$$a_i \leftarrow a_i + \Delta a_i(v), \qquad b_i \leftarrow b_i + \Delta b_i(v).$$
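A minimal sketch of the conjugate update, assuming a per-Gaussian responsibility vector `w` and a boolean mask-membership vector `inside` (both hypothetical names for illustration):

```python
import numpy as np

def beta_update(a, b, w, inside):
    """Conjugate Beta-Bernoulli update for per-Gaussian foreground
    probabilities. w[i] is the visibility-weighted responsibility of
    Gaussian i in the selected view; inside[i] is True if its
    projection falls inside the 2D mask M(v)."""
    a = a + w * inside      # evidence for foreground (inside the mask)
    b = b + w * (~inside)   # evidence for background (outside the mask)
    return a, b
```

After each update, the posterior foreground estimate is simply the Beta mean `a / (a + b)`, so no optimization or retraining is needed.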
Analytic EIG + Greedy Guarantee
To avoid running SAM2 on every candidate, we approximate the evidence counts using the current posterior mean $m_i = a_i/(a_i+b_i)$ and compute the Expected Information Gain analytically from the Beta entropies.
We select the next view greedily:
$$v^\star = \arg\max_{v \in \mathcal{V}} \mathrm{EIG}(v).$$
EIG is adaptive monotone and submodular, giving a greedy $(1 - 1/e)$ approximation to the optimal view-selection policy.
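One way the analytic score could be computed, sketched under the assumption that EIG is the expected Beta-entropy reduction with the two mask outcomes weighted by the current means (`beta_entropy`, `eig_score`, and the predicted per-Gaussian counts `c` are illustrative names, not necessarily the paper's exact expression):

```python
import math

def _digamma(x):
    # Digamma via recurrence to x >= 6, then an asymptotic expansion.
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def beta_entropy(a, b):
    """Differential entropy of Beta(a, b)."""
    ln_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (ln_beta - (a - 1) * _digamma(a) - (b - 1) * _digamma(b)
            + (a + b - 2) * _digamma(a + b))

def eig_score(a, b, c):
    """Score one candidate view without running SAM2.
    a[i], b[i]: current Beta parameters of Gaussian i;
    c[i]: predicted visibility-weighted count of Gaussian i in the view.
    Each Gaussian's two outcomes (inside/outside the mask) are weighted
    by its current mean m[i] = a/(a+b)."""
    total = 0.0
    for ai, bi, ci in zip(a, b, c):
        m = ai / (ai + bi)
        h0 = beta_entropy(ai, bi)
        h_fg = beta_entropy(ai + ci, bi)   # predicted inside-mask outcome
        h_bg = beta_entropy(ai, bi + ci)   # predicted outside-mask outcome
        total += h0 - m * h_fg - (1 - m) * h_bg
    return total
```

Because the score is a sum over Gaussians of marginal entropy reductions, evaluating all candidates and taking the argmax costs only one cheap render per view, which is what makes the greedy loop fast.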
Information gain visualization used to score candidate views without running SAM2.
Results
Competitive 3D segmentation with far fewer views.
Qualitative segmentation across diverse scenes.
EIG-selected view sequence (left to right: RGB, SAM2 mask, Beta mean map) across iterations.
Quantitative comparison on LERF-Mask dataset.
360° segmentation results.
Qualitative results on 3DOVS dataset.
Qualitative results on LERF-Mask dataset.
Downloads
Swap in final links when ready.
BibTeX
Update with the final citation entry.
@misc{kamata2026b3segcamerafreetrainingfree3dgs,
title={B$^3$-Seg: Camera-Free, Training-Free 3DGS Segmentation via Analytic EIG and Beta-Bernoulli Bayesian Updates},
author={Hiromichi Kamata and Samuel Arthur Munro and Fuminori Homma},
year={2026},
eprint={2602.17134},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2602.17134},
}