This all started with a deficiency (an Easter egg? A gag?) I found in the DirectX documentation:
If you have ever worked with hardware tessellation in any graphics API (GAPI), you know that you cannot achieve the tessellation of the triangle above (or below right) with ANY combination of tessellation factors. Instead, if you tessellate a triangle with a tess factor of 5 (both edge and inside), you get an abomination like this:
Why is this bad? Compared to the former topology, the output triangles are no longer equilateral, even though the input triangle is. Moreover, even for fairly uniform geometry you get a 'spider web' effect at the vertices of the input triangles:
Is there any rationale behind this? I think the main reason to keep the latter tessellation is the ability to generate a meaningful result for an arbitrary inside tessellation factor: any route across the triangle's interior, from an input vertex to the opposite edge, should hop through exactly inside_tess_factor - 1 output vertices.
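For contrast, the 'former' uniform topology argued for here is easy to describe: split each edge into n segments and connect the resulting barycentric grid, which yields n² congruent sub-triangles. A minimal sketch (my own illustration, not any GAPI's actual pattern):

```python
def uniform_tessellation(n):
    """Split a triangle into n^2 congruent sub-triangles.

    Returns (vertices, triangles): vertices are barycentric triples
    (i, j, k) with i + j + k == n; triangles index into the vertex list.
    """
    verts, index = [], {}
    for i in range(n + 1):              # rows of the barycentric grid
        for j in range(n + 1 - i):
            index[(i, j)] = len(verts)
            verts.append((i, j, n - i - j))
    tris = []
    for i in range(n):
        for j in range(n - i):
            # "upward-pointing" sub-triangle
            tris.append((index[(i, j)], index[(i + 1, j)], index[(i, j + 1)]))
            # "downward-pointing" sub-triangle (absent on the last diagonal)
            if j < n - i - 1:
                tris.append((index[(i + 1, j)], index[(i + 1, j + 1)],
                             index[(i, j + 1)]))
    return verts, tris

verts, tris = uniform_tessellation(5)
assert len(verts) == (5 + 1) * (5 + 2) // 2   # 21 grid vertices
assert len(tris) == 5 * 5                      # 25 equilateral pieces
```

Note that this topology keeps every output triangle similar to the input one, which is exactly what the hardware pattern fails to do.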
Is that really needed? I think not. None of the tessellation algorithms I've seen so far actually care that much about the inside tess factor; usually the max or average of the edge tess factors is used.
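A hedged sketch of the common adaptive scheme this refers to: per-edge factors derived from, say, projected edge length in pixels, with the inside factor simply taken as the max of the edges. The `pixels_per_segment` constant is a made-up tuning knob, not from any API:

```python
def edge_factors(edge_lengths_px, pixels_per_segment=16.0,
                 min_factor=1.0, max_factor=64.0):
    """One tess factor per edge, clamped to the usual hardware range [1, 64]."""
    return [max(min_factor, min(max_factor, length / pixels_per_segment))
            for length in edge_lengths_px]

def inside_factor(edges):
    """Derive the inside factor the way most algorithms do: just the max."""
    return max(edges)

edges = edge_factors([80.0, 160.0, 40.0])
assert edges == [5.0, 10.0, 2.5]
assert inside_factor(edges) == 10.0
```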
The only problem with the former tessellation would be handling differing edge tess factors when adaptive tessellation is used. Well, one could simply eliminate the surplus output vertices on such an edge (along with the incident output edges) and split each resulting quad into two triangles (sorry, no image for that). Yes, the tessellation would be imperfect, but only locally, in the transition area.
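One way to sketch the vertex-elimination idea (my interpretation, not a proven scheme): when an edge is tessellated to m < n segments, keep m + 1 of the n + 1 uniform edge vertices and snap each surplus vertex to its nearest kept slot, so the triangles incident to collapsed vertices degenerate and can be dropped:

```python
def collapse_edge(n, m):
    """Map each of the n+1 uniform edge vertices onto one of m+1 kept slots.

    n is the uniform (inside) factor, m < n the reduced edge factor.
    The mapping is monotone, so collapsed vertices never cross each other.
    """
    assert 1 <= m <= n
    return [round(i * m / n) for i in range(n + 1)]

mapping = collapse_edge(5, 3)
assert mapping == [0, 1, 1, 2, 2, 3]
assert mapping[0] == 0 and mapping[-1] == 3   # corner vertices stay fixed
assert sorted(set(mapping)) == [0, 1, 2, 3]   # every kept slot is hit
```

The quads left over between the collapsed edge row and the next interior row would then each be split into two triangles, as described above.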
What are your ideas, guys? Does it sound worth trying?