Fast-evolving technologies make it easier than ever to create personalized 3D experiences. In the past, assets in games and other 3D experiences had to be meticulously created and designed using a specific 3D engine. Now, you can import a unique asset into a 3D experience in a matter of minutes using photogrammetry technology, which constructs 3D models from photos of a real-world object. As we’ve worked with beginner creators, one of the things they’ve been most excited about when introduced to photogrammetry is the ability to make 3D models of themselves and their friends and import them into a 3D experience.
Making photogrammetry models of people works the same way as making models of objects, but there are some tips and tricks that can help generate high-quality models of people. At CreateAccess, we’ve used the Polycam mobile app to help new creators make photogrammetry models, as we’ve found it intuitive and easy to use.
To create a 3D model, photogrammetry uses multiple overlapping pictures of an object, taken from different angles. The software stitches the pictures together into a 3D model by determining where consecutive pictures overlap. Areas with very similar textures or colors, as well as small, thin objects, are harder to scan, which means hair and fingers can present challenges when making models of people. To help mitigate these challenges, it can be a good idea to wear hats, rings, or watches so the photogrammetry software can more effectively differentiate between different areas on the object.
The texture of a person’s clothing may also affect the quality of the scan. Clothes that are smooth and monochrome, like a basic solid t-shirt or tight black leggings, may result in a broken and fragmented model. Textured clothing, such as a flannel shirt or wrinkled pants with pockets, may result in a sharper model, as the photogrammetry software can more easily piece those pictures together.
In much the same way that monochrome clothes are harder to scan than textured ones, the background where the person is standing or sitting can also make a difference. It may be best to scan someone in a ‘busy’ room, since the variation in the background helps the software differentiate the photos and piece them together more effectively. Also try to scan people where there is neutral, even lighting and minimal reflective surfaces, since a person’s hair and eyes may reflect direct light, which changes how they look from different angles.
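If you’re curious why texture matters so much, here is a minimal toy sketch in Python with NumPy. It is not what photogrammetry software actually runs (real tools match detected image features across photos), but it illustrates the underlying idea: a patch cut from a textured surface matches back to exactly one spot, while a patch from a uniform surface matches everywhere, leaving the software no way to tell where two pictures overlap.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_match_offsets(strip, patch, tol=1e-9):
    """Slide the patch along the strip and return every offset where it
    matches (near-)perfectly, scored by sum of squared differences."""
    scores = np.array([
        np.sum((strip[i:i + len(patch)] - patch) ** 2)
        for i in range(len(strip) - len(patch) + 1)
    ])
    return np.flatnonzero(scores <= scores.min() + tol)

# "Textured" surface: values vary, like a flannel shirt.
textured = rng.random(100)
print(best_match_offsets(textured, textured[40:50]))  # unique match: [40]

# Uniform surface: constant values, like a plain solid t-shirt.
uniform = np.full(100, 0.5)
print(len(best_match_offsets(uniform, uniform[40:50])))  # 91 equally good matches
```

The textured patch pins down a single location, which is exactly the kind of unambiguous correspondence the software needs to stitch photos together; the uniform patch is ambiguous, which is why plain clothes and blank backgrounds produce broken models.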
When taking photogrammetry scans of people, it’s important that the person stay as still as possible. If the person moves significantly while being scanned, the software won’t be able to piece the photos together, since the “object” (the person) has changed between shots. A neutral face often scans better than a smile, as it’s hard to hold a smile unchanged for the length of a scan.
When you are scanning someone and want to capture their entire body, it can be a good idea for them to stand in a T-pose or A-pose. Many animation software platforms allow creators to rig avatars to move by identifying different joints (e.g., elbows, knees), and these poses allow photogrammetry models to be imported into those platforms and rigged the same way a digital avatar would be.
Once you’ve scanned a person using photogrammetry, you can do all sorts of fun things! You can add animations so that the model of the person can walk, dance, and move in a 3D experience. You can also add audio of the person's voice so it seems that the person is guiding a user through the 3D experience. Since a 3D world is not limited in the same ways as the physical world, you can experiment with the size of the person in a 3D space, making them gigantic or very tiny. We’ve had creators make themselves into mountains and shrines. The possibilities are endless!
Learn how you can make photogrammetry models of people and objects in your own life and import them into a 3D experience with CreateAccess' “Bring the Real World to Fortnite” microcourse on the Microcourses Hub!