
liblaf.flame_pytorch ¤

Modules:

Classes:

Attributes:

__version__ module-attribute ¤

__version__: str = '0.1.dev33+g0152325c2'

__version_tuple__ module-attribute ¤

__version_tuple__: tuple[int | str, ...] = (
    0,
    1,
    "dev33",
    "g0152325c2",
)

FLAME ¤

FLAME(config: FlameConfig | None = None)

Bases: FLAME


flowchart TD
    liblaf.flame_pytorch.upstream.flame.FLAME[FLAME]
    liblaf.flame_pytorch.FLAME[FLAME]
    liblaf.flame_pytorch.upstream.flame.FLAME --> liblaf.flame_pytorch.FLAME

Methods:

Attributes:

Source code in src/liblaf/flame_pytorch/flame.py
def __init__(self, config: FlameConfig | None = None) -> None:
    if config is None:
        config = FlameConfig()
    super().__init__(config)
    self.config = config
    if torch.cuda.is_available():
        self.cuda()

NECK_IDX instance-attribute ¤

NECK_IDX = 1

batch_size instance-attribute ¤

batch_size: int

config instance-attribute ¤

config: FlameConfig = config

dtype instance-attribute ¤

dtype: dtype

faces instance-attribute ¤

faces: Integer[ndarray, 'faces 3']

flame_model instance-attribute ¤

flame_model = Struct(**(load(f, encoding='latin1')))

shapedirs instance-attribute ¤

shapedirs: Tensor

use_3D_translation instance-attribute ¤

use_3D_translation: bool

use_face_contour instance-attribute ¤

use_face_contour: bool

__call__ ¤

__call__(
    shape: Float[Tensor, "batch shape"] | None = None,
    expression: Float[Tensor, "batch expression"]
    | None = None,
    pose: Float[Tensor, "batch pose"] | None = None,
    neck_pose: Float[Tensor, "batch 3"] | None = None,
    eye_pose: Float[Tensor, "batch 6"] | None = None,
    translation: Float[Tensor, "batch 3"] | None = None,
) -> tuple[
    Float[Tensor, "batch vertices 3"],
    Float[Tensor, "batch landmarks 3"],
]
Source code in src/liblaf/flame_pytorch/flame.py
    self.config = config
    if torch.cuda.is_available():
        self.cuda()

def forward(  # pyright: ignore[reportIncompatibleMethodOverride]
    self,
    shape: Float[Tensor, "#batch shape"] | None = None,
    expression: Float[Tensor, "#batch expression"] | None = None,
    pose: Float[Tensor, "#batch pose"] | None = None,

forward ¤

forward(
    shape: Float[Tensor, "batch shape"] | None = None,
    expression: Float[Tensor, "batch expression"]
    | None = None,
    pose: Float[Tensor, "batch pose"] | None = None,
    neck_pose: Float[Tensor, "batch 3"] | None = None,
    eye_pose: Float[Tensor, "batch 6"] | None = None,
    translation: Float[Tensor, "batch 3"] | None = None,
) -> tuple[
    Float[Tensor, "batch vertices 3"],
    Float[Tensor, "batch landmarks 3"],
]
Input:
    shape_params: N X number of shape parameters
    expression_params: N X number of expression parameters
    pose_params: N X number of pose parameters

Return:
    vertices: N X V X 3
    landmarks: N X number of landmarks X 3
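
The shapes in the docstring can be sketched concretely; the sizes below (N=2, 5023 vertices, 68 landmarks) are the commonly cited FLAME numbers, used here only for illustration:

```python
# Illustrative only: the (vertices, landmarks) shapes forward() returns
# for a batch of N meshes with V vertices and L landmarks.
def forward_output_shapes(n: int, n_vertices: int, n_landmarks: int):
    """Return the shape tuples (N, V, 3) and (N, L, 3)."""
    return (n, n_vertices, 3), (n, n_landmarks, 3)


verts_shape, lmk_shape = forward_output_shapes(2, 5023, 68)
print(verts_shape)  # (2, 5023, 3)
print(lmk_shape)    # (2, 68, 3)
```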

Source code in src/liblaf/flame_pytorch/flame.py
def forward(  # pyright: ignore[reportIncompatibleMethodOverride]
    self,
    shape: Float[Tensor, "#batch shape"] | None = None,
    expression: Float[Tensor, "#batch expression"] | None = None,
    pose: Float[Tensor, "#batch pose"] | None = None,
    neck_pose: Float[Tensor, "#batch 3"] | None = None,
    eye_pose: Float[Tensor, "#batch 6"] | None = None,
    translation: Float[Tensor, "#batch 3"] | None = None,
) -> tuple[Float[Tensor, "#batch vertices 3"], Float[Tensor, "#batch landmarks 3"]]:
    if shape is None:
        shape = torch.zeros(
            (self.config.batch_size, self.config.shape_params),
            device=self.shapedirs.device,
            requires_grad=False,
        )
    if expression is None:
        expression = torch.zeros(
            (self.config.batch_size, self.config.expression_params),
            device=self.shapedirs.device,
            requires_grad=False,
        )
    if pose is None:
        pose = torch.zeros(
            (self.config.batch_size, self.config.pose_params),
            device=self.shapedirs.device,
            requires_grad=False,
        )
    return super().forward(
        shape_params=shape,
        expression_params=expression,
        pose_params=pose,
        neck_pose=neck_pose,
        eye_pose=eye_pose,
        transl=translation,
    )
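
The `None` handling above substitutes zero-filled tensors of shape `(batch_size, n_params)` for any omitted parameter. A dependency-free sketch of that behavior (plain nested lists stand in for tensors; `fill_default` is an illustrative helper, not part of the library):

```python
# Mimic forward()'s None handling: an omitted parameter becomes a
# zero-filled batch of shape (batch_size, n_params).
def fill_default(param, batch_size: int, n_params: int):
    if param is None:
        return [[0.0] * n_params for _ in range(batch_size)]
    return param


shape = fill_default(None, batch_size=1, n_params=100)
print(len(shape), len(shape[0]))  # 1 100
```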

FlameConfig pydantic-model ¤

Bases: BaseModel

Parameters:

  • flame_model_path ¤

    (Path, default: PosixPath('model/generic_model.pkl') ) –

Path to the FLAME model file.

  • static_landmark_embedding_path ¤

    (Path, default: PosixPath('/home/runner/.cache/liblaf/flame-pytorch/flame_static_embedding.pkl') ) –

    Static landmark embeddings path for FLAME

  • dynamic_landmark_embedding_path ¤

    (Path, default: PosixPath('/home/runner/.cache/liblaf/flame-pytorch/flame_dynamic_embedding.npy') ) –

    Dynamic contour embedding path for FLAME

  • shape_params ¤

    (int, default: 100 ) –

Number of shape parameters.

  • expression_params ¤

    (int, default: 50 ) –

Number of expression parameters.

  • pose_params ¤

    (int, default: 6 ) –

Number of pose parameters.

  • use_face_contour ¤

    (bool, default: True ) –

If true, also apply the landmark loss on the face contour.

  • use_3d_translation ¤

    (bool, default: True ) –

If true, apply 3D translation.

  • optimize_eyeballpose ¤

    (bool, default: True ) –

    If true optimize for the eyeball pose.

  • optimize_neckpose ¤

    (bool, default: True ) –

    If true optimize for the neck pose.

  • num_worker ¤

    (int, default: 4 ) –

Number of PyTorch workers.

  • batch_size ¤

    (int, default: 1 ) –

    Training batch size.

  • ring_margin ¤

    (float, default: 0.5 ) –

Margin for the ring loss.

  • ring_loss_weight ¤

    (float, default: 1.0 ) –

Weight of the ring loss.

Show JSON schema:
{
  "properties": {
    "flame_model_path": {
      "format": "path",
      "title": "Flame Model Path",
      "type": "string"
    },
    "static_landmark_embedding_path": {
      "format": "path",
      "title": "Static Landmark Embedding Path",
      "type": "string"
    },
    "dynamic_landmark_embedding_path": {
      "format": "path",
      "title": "Dynamic Landmark Embedding Path",
      "type": "string"
    },
    "shape_params": {
      "default": 100,
      "title": "Shape Params",
      "type": "integer"
    },
    "expression_params": {
      "default": 50,
      "title": "Expression Params",
      "type": "integer"
    },
    "pose_params": {
      "default": 6,
      "title": "Pose Params",
      "type": "integer"
    },
    "use_face_contour": {
      "default": true,
      "title": "Use Face Contour",
      "type": "boolean"
    },
    "use_3d_translation": {
      "default": true,
      "title": "Use 3D Translation",
      "type": "boolean"
    },
    "optimize_eyeballpose": {
      "default": true,
      "title": "Optimize Eyeballpose",
      "type": "boolean"
    },
    "optimize_neckpose": {
      "default": true,
      "title": "Optimize Neckpose",
      "type": "boolean"
    },
    "num_worker": {
      "default": 4,
      "title": "Num Worker",
      "type": "integer"
    },
    "batch_size": {
      "default": 1,
      "title": "Batch Size",
      "type": "integer"
    },
    "ring_margin": {
      "default": 0.5,
      "title": "Ring Margin",
      "type": "number"
    },
    "ring_loss_weight": {
      "default": 1.0,
      "title": "Ring Loss Weight",
      "type": "number"
    }
  },
  "title": "FlameConfig",
  "type": "object"
}
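
The defaults in the schema above can be mirrored for experimentation with a plain dataclass; `FlameConfigSketch` is only an illustrative stand-in (the real `FlameConfig` is a pydantic `BaseModel`, and the path defaults shown here come from the documented values):

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class FlameConfigSketch:  # illustrative, not the actual pydantic model
    flame_model_path: Path = Path("model/generic_model.pkl")
    shape_params: int = 100
    expression_params: int = 50
    pose_params: int = 6
    use_face_contour: bool = True
    use_3d_translation: bool = True
    optimize_eyeballpose: bool = True
    optimize_neckpose: bool = True
    num_worker: int = 4
    batch_size: int = 1
    ring_margin: float = 0.5
    ring_loss_weight: float = 1.0


# Override any field at construction time, as with the pydantic model.
cfg = FlameConfigSketch(shape_params=300)
print(cfg.shape_params, cfg.batch_size)  # 300 1
```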

Fields:

batch_size pydantic-field ¤

batch_size: int = 1

Training batch size.

dynamic_landmark_embedding_path pydantic-field ¤

dynamic_landmark_embedding_path: Path

Dynamic contour embedding path for FLAME

expression_params pydantic-field ¤

expression_params: int = 50

Number of expression parameters.

flame_model_path pydantic-field ¤

flame_model_path: Path

Path to the FLAME model file.

num_worker pydantic-field ¤

num_worker: int = 4

Number of PyTorch workers.

optimize_eyeballpose pydantic-field ¤

optimize_eyeballpose: bool = True

If true optimize for the eyeball pose.

optimize_neckpose pydantic-field ¤

optimize_neckpose: bool = True

If true optimize for the neck pose.

pose_params pydantic-field ¤

pose_params: int = 6

Number of pose parameters.

ring_loss_weight pydantic-field ¤

ring_loss_weight: float = 1.0

Weight of the ring loss.

ring_margin pydantic-field ¤

ring_margin: float = 0.5

Margin for the ring loss.

shape_params pydantic-field ¤

shape_params: int = 100

Number of shape parameters.

static_landmark_embedding_path pydantic-field ¤

static_landmark_embedding_path: Path

Static landmark embeddings path for FLAME

use_3D_translation property writable ¤

use_3D_translation: bool

use_3d_translation pydantic-field ¤

use_3d_translation: bool = True

If true, apply 3D translation.

use_face_contour pydantic-field ¤

use_face_contour: bool = True

If true, also apply the landmark loss on the face contour.