
Amro Abu-Saleh     Alexander Tsvetkov

Supervised By: Tom Zahavy, Oron Anchel


Abstract

Imagine you could film a short video of yourself and make yourself move like any character you see on TV. Alternatively, you could film yourself dancing and generate a character of your choice that mimics your moves. And that is only the beginning.

We propose a method to transfer motion between two human subjects, realistic or even avatars, appearing in two different videos.

Generative Adversarial Networks (GANs) are a class of neural networks used to generate images. Many GAN variants exist, as these networks have gained growing popularity over the last couple of years.

One of the most famous variants is CycleGAN.

Although this approach achieves impressive results in transferring images between domains, it struggles with structural data such as human poses.

In our approach, we tackle this problem by adding a state-of-the-art body-pose-tracking neural network that constrains the transformation to preserve the body structure between the domains, enabling us to transfer body movement from one character to another.


[Figure: original frame (left) and generated fake (right)]

Project Goals

Given an input video, the goal is to alter the body movements of the people in it to any other desired movement pattern (dancing, jumping, running, etc.).

To that end, we use two state-of-the-art machine-learning-based methods: real-time human pose estimation and CycleGAN.


Method

The main architecture our implementation is based on is CycleGAN, illustrated in the figure below.

[Figure: CycleGAN architecture]
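The core idea behind CycleGAN is cycle consistency: translating an image to the other domain and back should reconstruct the original. A minimal NumPy sketch of that term (the generator functions and the weight `lam` are placeholders, not our actual trained models):

```python
import numpy as np

def l1(a, b):
    # Mean absolute error (L1), the distance CycleGAN uses for cycle consistency.
    return float(np.mean(np.abs(a - b)))

def cycle_consistency_loss(x, y, G, F, lam=10.0):
    # G maps domain X -> Y, F maps domain Y -> X.
    # Translating to the other domain and back should reconstruct the
    # input: F(G(x)) ~ x and G(F(y)) ~ y. lam weighs this term against
    # the adversarial losses.
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))
```

With identity generators the loss is exactly zero, which is the behavior a perfectly cycle-consistent pair of generators would exhibit.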

To this architecture we added a pretrained real-time human pose estimation model, labelled H in the figure. For each transformation from domain X to domain Y, we extract the keypoints of the human body (the skeleton) from the source image, transform the source image with F to the target domain, and extract the skeleton of the fake image F(X). We then add an L1 loss between the two skeletons.
The same procedure is applied from domain Y to X, closing the cycle.

[Figure: our implementation, CycleGAN with the pose network H]
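The skeleton term described above can be sketched as follows. This is a simplified illustration, not our training code: `H` stands for the frozen pose network from the figure, and the assumption that it returns an array of (x, y) joint coordinates is ours.

```python
import numpy as np

def skeleton_loss(src_img, fake_img, H):
    # H is a frozen, pretrained pose-estimation network, assumed here to
    # return an array of (x, y) body-keypoint coordinates.
    # Penalising the L1 distance between the keypoints of the source
    # image and of its translated version pushes the generator to
    # preserve the body pose across domains.
    return float(np.mean(np.abs(H(src_img) - H(fake_img))))
```

During training this term is simply added (with a weight) to the usual CycleGAN adversarial and cycle-consistency losses.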

Experiments

Here are a few examples of domain transformations.


Final Results

We demonstrate two different videos of two different ballerinas dancing. From these videos, we alter one ballerina's moves to mimic the other's.

Original videos side by side


The generated fake video is shown on the left side.


Authors


Amro Abu-Saleh


Alexander Tsvetkov


©2018 by SkeletonGAN
