I understand that Unity and Blender have different notions of what a vertex is, so the vertex count should be higher in Unity, but shouldn't it at least be the same in both models?
If those are two different models, why should they be? Do you know why the GPU requires vertices to be split in the first place? This has nothing to do with Unity; it's about what a vertex means to your GPU. Reasons why a vertex may be split are:
- different normals
- different UV coordinates
- different vertex color
- any other per-vertex attribute
Modelling software may still treat a vertex as one, but because of the data stored in a single vertex it may need to be split into several. The prime example is a cube. Everybody knows a cube has 8 "logical" vertices. However, a cube mesh requires 24 vertices, because the vertex normal needs to be unique per face. Since there are always 3 faces meeting at a corner, each corner vertex needs to be split into 3. Applying UV mapping to your model can also cause vertices to split.
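To make the cube count concrete, here is a small sketch (in Python, just to illustrate the principle) that treats a GPU vertex as a unique (position, normal) pair and counts how many a flat-shaded cube really needs:

```python
# Count the vertices a GPU actually needs for a flat-shaded cube.
# A GPU vertex is one fixed bundle of attributes; here we model it
# as a (position, normal) pair. Corners shared by faces with
# different normals cannot be merged into one vertex.

from itertools import product

# The 8 "logical" corner positions of a unit cube
corners = list(product((0, 1), repeat=3))

# One outward normal per face, plus the 4 corners lying on that face
faces = []
for axis in range(3):        # x, y, z
    for side in (0, 1):      # min / max face along that axis
        normal = tuple((-1 if side == 0 else 1) if i == axis else 0
                       for i in range(3))
        face_corners = [c for c in corners if c[axis] == side]
        faces.append((normal, face_corners))

# Each unique (position, normal) combination is one GPU vertex
gpu_vertices = {(pos, normal) for normal, cs in faces for pos in cs}

print(len(corners))       # 8 logical positions
print(len(gpu_vertices))  # 24 GPU vertices: each corner split 3 ways
```

Every corner appears on 3 faces with 3 different normals, so 8 × 3 = 24; the same counting applies when UVs or vertex colors differ across a shared corner.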
If the two models are based on the same initial model, there are several reasons why they may turn out different. First of all, as @Gentoo mentioned, mesh optimisation / compression can be one reason. Another may be that you didn't export/import normals but let Unity recalculate them. Unity decides which edges should be hard, and therefore require a vertex split, based on the smoothing-angle threshold. Mesh optimisation can also remove vertices that are coplanar and therefore add no information to the model.
My best bet would be normal smoothing. If he only changed vertex positions in the second mesh, then the smoothing groups recompute the normals and create new vertices.
– Disputation