Project abstract
About LayerForge-X
The system is designed around a canonical Depth-Aware Amodal Layer Graph (DALG) rather than a loose folder of cutouts. Each node stores an editable RGBA layer together with semantic labels, visible support, amodal support, depth statistics, and optional intrinsic exports. Each edge records pairwise occlusion evidence and ordering cues.
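The node and edge records described above can be sketched as a small schema. This is an illustrative assumption about how a DALG could be laid out in code, not the project's actual API; all class and field names here (`LayerNode`, `OcclusionEdge`, `DALG`, `visible_support`, and so on) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LayerNode:
    """One DALG node: an editable RGBA layer plus its metadata (hypothetical schema)."""
    node_id: str
    rgba_path: str                 # reference to the editable RGBA layer
    labels: list = field(default_factory=list)       # semantic labels
    visible_support: float = 0.0   # fraction of the canvas the visible region covers
    amodal_support: float = 0.0    # fraction including the occluded (amodal) extent
    depth_stats: dict = field(default_factory=dict)  # e.g. {"median": ..., "iqr": ...}
    intrinsics: dict = field(default_factory=dict)   # optional intrinsic exports

@dataclass
class OcclusionEdge:
    """One DALG edge: pairwise occlusion evidence between two nodes (hypothetical)."""
    front: str       # node_id judged to be in front
    back: str        # node_id judged to be behind
    evidence: float = 0.0  # strength of the pairwise occlusion/ordering cue

@dataclass
class DALG:
    """Depth-Aware Amodal Layer Graph: nodes keyed by id, plus occlusion edges."""
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: LayerNode) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, edge: OcclusionEdge) -> None:
        # Both endpoints must already be registered nodes.
        assert edge.front in self.nodes and edge.back in self.nodes
        self.edges.append(edge)

# Minimal usage: a cup occluding a table.
g = DALG()
g.add_node(LayerNode("table", "layers/table.png", labels=["table"], amodal_support=0.6))
g.add_node(LayerNode("cup", "layers/cup.png", labels=["cup"], visible_support=0.1))
g.add_edge(OcclusionEdge(front="cup", back="table", evidence=0.9))
```

The point of the sketch is the separation of concerns the text describes: per-layer appearance and semantics live on nodes, while ordering evidence lives only on edges.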
Problem framing
Why the representation matters
A single RGB image is difficult to edit because object boundaries, occlusions, and shading effects are flattened into one observed canvas. LayerForge-X treats decomposition as a representation problem: the goal is not only to recreate the observed RGB image, but to expose a reusable scene structure that supports reordering, extraction, removal, transparent recovery, and design-manifest export.
The project therefore evaluates layered outputs with recomposition metrics, editability signals, semantic grouping benchmarks, depth-order evidence, prompt-conditioned extraction behavior, and controlled transparent/effect-layer experiments.
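Of the evaluation surfaces listed above, the recomposition metric is the most mechanical: composite the recovered layers back-to-front and compare the result to the observed image. The sketch below is an assumption about how such a check could be computed (standard "over" compositing plus PSNR), not the project's actual evaluation code; pixels are plain `(r, g, b[, a])` tuples in `[0, 1]` to keep it dependency-free.

```python
import math

def composite(layers):
    """Alpha-composite RGBA layers back (first) to front (last) with the
    'over' operator. Each layer is a list of (r, g, b, a) pixels in [0, 1]."""
    out = [(0.0, 0.0, 0.0)] * len(layers[0])  # start from a black backdrop
    for layer in layers:
        out = [
            tuple(a * c + (1.0 - a) * o for c, o in zip((r, g, b), bg))
            for (r, g, b, a), bg in zip(layer, out)
        ]
    return out

def psnr(recon, target):
    """Peak signal-to-noise ratio between two RGB pixel lists (peak = 1.0)."""
    mse = sum(
        (c - t) ** 2 for p, q in zip(recon, target) for c, t in zip(p, q)
    ) / (3 * len(recon))
    return float("inf") if mse == 0 else 10.0 * math.log10(1.0 / mse)

# Two-pixel toy scene: an opaque red background layer, with an opaque green
# foreground layer covering only the first pixel.
background = [(1, 0, 0, 1), (1, 0, 0, 1)]
foreground = [(0, 1, 0, 1), (0, 0, 0, 0)]
recon = composite([background, foreground])
# Perfect recomposition of the observed image yields infinite PSNR.
observed = [(0, 1, 0), (1, 0, 0)]
```

A high recomposition PSNR alone is not sufficient, which is why the text pairs it with editability, grouping, and depth-order evidence: a decomposition can reproduce the observed pixels exactly while still cutting objects along arbitrary boundaries.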
Core capabilities
System scope
Evidence surfaces
What is shipped
The GitHub Pages site is the public report surface. The heavyweight local generation directories were used to produce the figures and metric snapshots, but those raw directories are omitted; the public site and repository are instead anchored on copied summaries, figure exports, and the final report.