Episode 04 - Building Our Mesh

Hello once again!

In the last blog entry, we discussed how to create both a sparse point cloud and a dense one, with the ultimate goal of building a mesh out of them. More specifically, we left our computers working on the dense point cloud, and after more than 24 hours of nonstop number crunching, we can finally see the results. Check them out below!

[Slideshow: the finished dense point cloud]

Let’s get started once again! After what we did last week, the next step is building our mesh out of the DPC (Dense Point Cloud, for future reference). Remember how long that took? Luckily, this won’t take as long as some of the other steps we’ve gone through - we’ll be done in about 10 minutes. To do so, we only have to go to the ‘Workflow’ tab and head over to ‘Build Mesh’. We can go for the highest settings here without thinking too much about computing time, since it’s fairly low. We can even build a texture as well if we want to, although that will become much more important later down the road. For the time being, our only aim is to export the mesh created in Agisoft into ZBrush and refine our geometry a little bit.
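
For anyone who prefers to script these steps, PhotoScan Pro also exposes them through a Python console. The sketch below is just an illustration of that route, not the exact settings I used - the enum and argument names (`DenseCloudData`, `HighFaceCount`, and so on) come from the 1.x API and may differ in your version, and `raw_scan.obj` is a placeholder path:

```python
# Hedged sketch: building a mesh (and optional texture) from the dense
# point cloud via PhotoScan Pro's Python console. Enum/argument names
# are from the 1.x API and may vary between versions.
import PhotoScan

doc = PhotoScan.app.document      # the currently open project
chunk = doc.chunk                 # the active chunk holding our dense cloud

# Equivalent of Workflow > Build Mesh at high settings
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=PhotoScan.HighFaceCount)

# Optional: Workflow > Build Texture (this matters more later down the road)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

# Export the RAW mesh as an OBJ so we can bring it into ZBrush
chunk.exportModel("raw_scan.obj")  # placeholder path

doc.save()
```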

So why do we need to refine our model? The geometry generated inside PhotoScan is usually very heavy, containing millions of triangles and some rough areas that the software couldn’t correctly solve. We need to make a couple of models with fewer polygons than our original mesh, and refine the geometry wherever we are not completely happy with it. Take, for example, the next set of images: one is a real-life picture, and the other is the model straight out of Agisoft. Even though the program did a great job at capturing depth - a great job overall, by the way - it doesn’t give us the same kind of quality in the lines between each tile. We want to improve on those areas.

[Slideshow: real-life photo vs. the raw model out of Agisoft]

In order to do so, we should step into our modelling application of choice - in this case, ZBrush - and sculpt in the missing details using the brushes available there. You can choose another program if you like, such as Mudbox, but make sure it can handle lots of polygons, since we’ll be working with our RAW meshes, which contain millions of them. Just as an example, the scan I’m using for this blog post originally contained 2,776,735 polys.
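
If you want a quick sanity check of how heavy a RAW scan actually is before committing to a sculpting app, a few lines of Python will do. This uses the `trimesh` library - my choice here, any mesh library works - and assumes the placeholder file name from the export sketch above:

```python
# Quick sanity check of a RAW scan's polygon count before opening it
# in ZBrush/Mudbox. Uses the trimesh library; the file name is just a
# placeholder for your exported mesh.
import trimesh

mesh = trimesh.load("raw_scan.obj")          # placeholder path
print(f"faces:    {len(mesh.faces):,}")      # e.g. 2,776,735 for this scan
print(f"vertices: {len(mesh.vertices):,}")
```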

Once we are working in ZBrush we can decide what comes next: we’ll eventually need a lower-poly mesh that we can work with in another program like Maya, and a much lower resolution model that we can import into a game engine like Unreal. Plus, we must fix our mesh in those areas that didn’t come out clean from Agisoft. There are also some other important steps, like generating clean UVs and transferring them between the different models, but we’ll cover those in the near future once we start working with Maya.

Now, let’s take a look at how to generate lower-poly versions of our models while still maintaining the detail! This is a nice part of working with ZBrush: its ability to decimate any mesh is something that makes this piece of software very useful. To do this, we can start with any given model, head over to Zplugin > Decimation Master and start playing with the values. First, you’ll have to click on `Pre-process Current` - so that the program knows what it’s working with - and after that you can tweak the percentage of decimation, the desired poly count or the point count. The real magic comes after you press the `Decimate Current` button: even though the model is now much lighter, it still retains most of the previous detail. Just take a look at the next set of images - there’s almost no difference between the first two, even though the second one has a sixth of the polygons of the first. From 600,000 polys down to 100,000. Wow.
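
Decimation Master itself is GUI-only, but if you ever need this kind of reduction inside a scriptable pipeline, quadric edge-collapse decimation gives comparable results. Here’s a minimal sketch using `pymeshlab` - my substitution, not part of the ZBrush workflow above, and note that filter names change between pymeshlab releases:

```python
# Hedged sketch: decimating a heavy scan to a target face count with
# quadric edge collapse via pymeshlab - a scriptable stand-in for
# ZBrush's Decimation Master, not the plugin itself.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("raw_scan.obj")             # placeholder path

# Collapse edges until we hit ~100,000 faces, preserving normals so the
# silhouette and surface detail survive the reduction
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=100_000,
                                            preservenormal=True)

ms.save_current_mesh("scan_decimated.obj")   # placeholder path
```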

[Slideshow: original vs. decimated mesh]

So hurray, we finally have our different meshes! With all the steps we’ve covered today, we have a good foundation for the rest of the project. In the next blog entry we’ll cover reprojecting the colour information back onto the models, as well as creating many more textures for our meshes. There are still plenty of things ahead of us, but we are definitely getting there. See you in the next post!