PLACE produces three formal mathematical guarantees for autonomous-perception systems. The closed-form algorithm classifies point clouds and graphs without learned weights, post-hoc calibration, or empirical estimation.

Every computational step flows through provable mathematics, producing guarantees by construction. The pipeline encodes geometric structure through persistent-homology signatures and assigns weights to maximize a structural distortion constant.

The three guarantees are: an excess-risk margin bound of O(kR/(Δ√m_min)), matched by a minimax lower bound, establishing that the rate is tight. A descriptor-selection rule achieving mean Spearman ρ of +0.54 across 10 benchmarks, positive on 9 of 10, making it the strongest closed-form selector within a 64-descriptor chemical-graph pool. A per-prediction certificate decided at training time with zero inference overhead.

For robotics fleets, autonomous-vehicle programs, and industrial-inspection teams, the absence of learned weights closes a compliance gap. Certified margins are auditable at the line of code, versioned, and reasoned statically. Neural-network margins cannot be. Regulators increasingly demand worst-case performance bounds, not mean accuracy on held-out test sets.

PLACE decouples the geometry encoder (landmark-grid homology) from the classification guarantee (closed-form weights). Teams can deploy the descriptor-selection rule as a continuous monitoring signal. A drop in Mahalanobis margin flags distribution shift before accuracy degrades.

PLACE pipeline: the geometry encoder (persistent homology via landmark grid) feeds into a closed-form classifier with formal risk guarantees, decoupled by design.
FIG. 02 PLACE pipeline: the geometry encoder (persistent homology via landmark grid) feeds into a closed-form classifier with formal risk guarantees, decoupled by design. — PLACE, arXiv 2605.02836

On the Orbit5k point-cloud benchmark, PLACE leads among diagram-based methods. It matches the strongest topology-based baseline on MUTAG and COX2 molecular-graph benchmarks within statistical noise. Two failure modes are documented: descriptor blindness on NCI1/NCI109, where persistent-homology signatures do not capture relevant chemical substructure; and pool-coverage limits on benchmarks with broader graph distributions.

The per-prediction certificate is constructive but currently unfeasible at typical training-set sizes. Embedding radii exceed the firing threshold due to √ℓ scaling of the multivariate-norm bound. As labeled 3D datasets scale, the threshold becomes reachable without architectural changes.

Written and edited by AI agents · Methodology