Zero-Shot Learning for the Primitives of 3D Affordance in General Objects

1Seoul National University, 2Naver Webtoon AI
*Indicates Equal Contribution

We present a method to extract 3D affordance from general objects in a zero-shot manner, using our novel representation called primitives of 3D affordance, which not only represents physical interactions but also captures implicit effects such as orientational tendency and spatial relation.

Abstract

One of the major challenges in AI is teaching machines to precisely respond to and utilize environmental functionalities, thereby achieving the affordance awareness that humans possess. Despite its importance, the field has lagged in learning affordance, especially in 3D, since annotating affordance is laborious due to the numerous variations of human-object interaction. The low availability of affordance data limits generalization across object categories and forces simplified representations that capture only a fraction of the full affordance. To overcome these challenges, we propose a novel, self-supervised method to "generate" 3D affordance examples given only a 3D object as input, without any manual annotation. The method first captures the 3D object in images and creates 2D affordance examples by inserting humans into the images via inpainting diffusion models, where our Adaptive Mask algorithm enables human insertion without altering the original details of the object. It then lifts the inserted humans back to 3D to create 3D human-object pairs, resolving the depth ambiguity within a depth optimization framework that exploits pre-generated human postures from multiple viewpoints. We also provide a novel affordance representation defined on relative orientations and proximity between dense human and object points, which can be easily aggregated from any 3D HOI dataset. The proposed representation serves as a primitive that can be transformed into conventional affordance representations via simple operations, ranging from physically exerted affordances (e.g., contact) to nonphysical ones (e.g., orientation tendency, spatial relations). We demonstrate the efficacy of our method and representation by generating 3D affordance samples and deriving high-quality affordance examples from the representation, including contact, orientation, and spatial occupancy.
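The sketch below is our reading of the Adaptive Mask idea described above: at each denoising step of an inpainting diffusion model, the inpainting mask is re-estimated from the current human prediction, so pixels that do not belong to the inserted human are restored from the original rendering and the object's details are preserved. The `denoise_step` and `segment_human` functions are hypothetical stand-ins for the real diffusion step and human segmenter, not the paper's code or any specific library API.

```python
import numpy as np

# --- Hypothetical stand-ins for the real components (illustration only) ---
def denoise_step(x, image, mask, t, num_steps):
    """Placeholder for one reverse step of an inpainting diffusion model."""
    blend = 1.0 - t / num_steps                # crude schedule, sketch only
    return np.where(mask[..., None], blend * image + (1.0 - blend) * x, image)

def segment_human(x):
    """Placeholder for a human segmenter run on the current estimate."""
    return x.mean(axis=-1) > 0.5               # dummy threshold, not a real model

def adaptive_mask_inpainting(image, init_mask, num_steps=50, seed=0):
    """Adaptive-mask inpainting sketch: shrink the mask at every step so the
    object's original pixels are never overwritten by the inserted human.

    image:     (H, W, 3) float array, rendered object view to keep intact
    init_mask: (H, W) bool array, region where a human may be inserted
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=image.shape)           # start from noise inside the mask
    mask = init_mask.copy()

    for t in reversed(range(num_steps)):
        x = denoise_step(x, image, mask, t, num_steps)

        # Re-estimate which pixels actually belong to the inserted human and
        # restrict the inpainting region to them; everything else is copied
        # back from the original rendering, preserving the object's details.
        mask = np.logical_and(mask, segment_human(x))
        x = np.where(mask[..., None], x, image)

    return x, mask
```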

Method


Our method consists of two parts: (1) generating 3D affordance samples (red box), and (2) learning the primitives of 3D affordance (blue box). We introduce Adaptive Mask Inpainting and Depth Optimization via a Weak Auxiliary Cue to generate diverse and precise 3D affordance samples. From the generated samples, we learn the pointwise distribution of relative orientation and proximity, from which various forms of 3D affordance can be derived, including contact, orientational tendency, and spatial relation; a minimal illustration of this aggregation step is sketched below.
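As a rough illustration of the primitive representation, the sketch below collects, for each object point, the distances and unit directions to the nearest human point across generated 3D human-object samples, and then derives a simple contact map by thresholding proximity. The parameterization and the 0.05 threshold are assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def aggregate_primitives(object_pts, human_pts_per_sample):
    """Collect per-object-point proximities and relative orientations to the
    nearest human point across all generated 3D human-object samples.

    object_pts:           (N, 3) array of object surface points
    human_pts_per_sample: list of (M_i, 3) arrays, one per generated sample
    """
    prox = [[] for _ in range(len(object_pts))]
    orient = [[] for _ in range(len(object_pts))]
    for human_pts in human_pts_per_sample:
        # Pairwise offsets (N, M, 3) and distances (N, M) between points.
        diff = human_pts[None, :, :] - object_pts[:, None, :]
        dist = np.linalg.norm(diff, axis=-1)
        nn = dist.argmin(axis=1)               # nearest human point per object point
        for i, j in enumerate(nn):
            prox[i].append(dist[i, j])
            orient[i].append(diff[i, j] / (dist[i, j] + 1e-8))
    return prox, orient

def contact_map(prox, threshold=0.05):
    """Contact score per object point: fraction of samples in which the nearest
    human point lies within `threshold` (assumed units: meters)."""
    return np.array([np.mean(np.array(d) < threshold) for d in prox])
```

The same aggregated distributions could be queried in other ways, e.g., averaging the stored orientations to visualize orientational tendency, or accumulating the offset vectors into a voxel grid to obtain spatial occupancy around the object.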

Video Presentation

Results

Generated 3D Affordance Samples



Contactual Affordance


Motorcycle

Keyboard

Skateboard


Soccer Ball

Suitcase

Tennis Racket



Orientational Affordance


Stool

Chair



Spatial Affordance


Input

Full Body

Hand

Face

BibTeX

@misc{kim2024zeroshot,
      title={Zero-Shot Learning for the Primitives of 3D Affordance in General Objects},
      author={Hyeonwoo Kim and Sookwan Han and Patrick Kwon and Hanbyul Joo},
      year={2024},
      eprint={2401.12978},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}