Hardware Benchmark for 3D: Recommendations


Hi everyone!! I’m sharing this test. This one uses a 3D model of an iconic Star Wars spaceship. (I’ve never seen those movies, but maybe you have watched them all.)

Of course I will also share benchmarks with assets much more closely related to industrial design. I started with this one because one of the growing specialization niches for ID seems to be the design and development of assets for the entertainment industry.

The Test


This test is useful for defining the basic hardware requirements for handling, refining, and using high-quality 3D models. In this case there is no shading work or rendering set-up… that remains to be done.


Software used: Autodesk 3ds Max
Modeling Technique used: Polygon modeling.

The results are:

- The scene size on disk is about 724 MB. It is worth mentioning that a moderately complex realistic interior scene usually sits between 30 and 70 MB.


- Just having the scene open requires at least 16 GB of RAM, with the model loaded and 3ds Max idle.
(screenshot: 06.JPG)
- The model uses 4.2 GB of VRAM… and there are no textures, materials, or other assets in the scene yet that would need more VRAM.
(screenshot: 08_edit.jpg)
- This particular file has 7 million polygons, which is what accounts for those 4.2 GB of VRAM (roughly 600 bytes of VRAM per polygon). The geometry is relatively optimized.


Conclusion:

If you are interested in 3D, or your goal is building complex scenes… maybe this test can help you with the strange process of defining hardware requirements. So, handling assets of this kind would require:
- A dedicated GPU with a minimum of 6 GB of VRAM; 8 GB recommended.
- A CPU with 12 logical cores (for example, 6 physical cores plus 6 more from hyper-threading) at 3 GHz as a minimum, so the scene can be handled with a smooth experience.
- 16 GB of RAM if your focus is on modeling; 32 GB of RAM if you also want to build shaders, test them with any kind of IBL set-up, etc. (A quick way to check your own machine against these numbers is sketched right below.)
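As a quick sanity check against those minimums, here is a minimal sketch in Python. It assumes the third-party psutil package for the RAM and core counts; the VRAM figure is not checked here and still has to be verified by hand (GPU-Z, nvidia-smi, and similar tools report it).

```python
# Minimal sketch: compare this machine against the minimums listed above.
# Assumes the third-party psutil package (pip install psutil); VRAM is not
# checked here and must be verified manually (GPU-Z, nvidia-smi, etc.).
import psutil

MIN_RAM_GB = 16          # modeling-only workflow
RECOMMENDED_RAM_GB = 32  # shading / IBL testing
MIN_LOGICAL_CORES = 12

ram_gb = psutil.virtual_memory().total / (1024 ** 3)
logical_cores = psutil.cpu_count(logical=True)

print(f"RAM: {ram_gb:.1f} GB -> "
      f"{'OK for modeling' if ram_gb >= MIN_RAM_GB else 'below the 16 GB minimum'}")
print(f"Logical cores: {logical_cores} -> "
      f"{'OK' if logical_cores >= MIN_LOGICAL_CORES else 'below the 12-core minimum'}")
if ram_gb < RECOMMENDED_RAM_GB:
    print("Note: 32 GB is recommended if you also plan to build and test shaders.")
```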



Thank you for your time!! I truly hope this helps you visualize the relationship between the assets you want to deliver and the hardware required to do so. With this in mind it becomes easier to target certain hardware components, get them quoted, and have a clear reference for the time and effort you will need in order to buy that computer you have always wanted!!

Adding more screenshots…



For this to be more useful, I would suggest a way to incrementally build a scene from less complexity to more. As it is now (and as a 3DS Max user for almost 20 years) I can immediately see aspects of this model that almost deliberately defy optimal practices.

For such a large file size it almost certainly doesn’t use “reference” objects but instead uses straight “copies” for duplication. Moreover it seems just as likely that quite a few areas are needlessly “heavy” in terms of edge fillets and poly counts in general.
So the file can’t rightly be called efficient or optimized.

But anyway, setting that aspect aside for a minute, any test or benchmark file needs to have parameters that can be reliably adjusted to accurately measure what’s happening.

For example: is there a point at which the file size and RAM requirements suddenly go way up? Or is it basically linear, with a certain amount of RAM used per 100 polys?
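One rough way to probe that, sketched below under the assumption that it runs inside 3ds Max's own Python interpreter (pymxs is only available there): clone a dense stand-in object step by step and log the scene's polygon count against the 3dsmax.exe working set. The TurboSmoothed teapot, the ten-step loop, and the helper names are placeholders, not part of the original test.

```python
# Rough sketch, assuming 3ds Max's Python interpreter (pymxs is only available there).
# Clones a dense stand-in object and logs polygons vs. process RAM at each step,
# which shows whether memory use grows roughly linearly with polygon count.
from pymxs import runtime as rt

def working_set_gb():
    # Working set of the current 3dsmax.exe process, read through .NET.
    proc = rt.dotNetClass("System.Diagnostics.Process").GetCurrentProcess()
    return proc.WorkingSet64 / (1024 ** 3)

def scene_polys():
    # Sum face counts over every geometry node in the open scene.
    total = 0
    for node in rt.geometry:
        total += rt.getPolygonCount(node)[0]  # [0] = faces, [1] = vertices
    return total

# Dense stand-in asset: a TurboSmoothed teapot (placeholder for a real model).
src = rt.Teapot(radius=20.0)
rt.addModifier(src, rt.TurboSmooth(iterations=2))

for step in range(10):
    rt.copy(src)            # plain copies, as in the original file (not instances)
    rt.completeRedraw()
    print("step {:2d}  polys: {:>12,}  RAM: {:.2f} GB".format(
        step + 1, scene_polys(), working_set_gb()))
```

If the RAM column grows by a roughly constant amount per step, the relationship is close to linear for that kind of geometry; a sudden jump would show up clearly in the log.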

Thank you for the insights!!

The process of “incrementally building a scene from less complexity to more” sounds quite interesting.

You’re right that there are needlessly heavy parts in the model, but what I can tell you is that the asset is not the direct output of a ZBrush / Mudbox scene file… My mistake for not describing the kind of model the benchmark was based on and the technique used to build the geometry. Certainly this is not a model that could enter the pre-production phase, and I wasn’t claiming that.

The original goal was to describe the minimum hardware requirements for people who are considering building this kind of asset. I never thought that advanced users would be interested in more technical benchmarks… Or maybe I should swap the word “benchmark” for the word “test”… that would be fair.

And as for the point at which the RAM use goes up… there are several stages, but these are a few:

1.- After the app (3ds Max) allows the user to modify the asset…
2.- During the “geometry compilation” phase before rendering. This stage, in which the render engine gets ready to apply the global illumination parameters, is quite heavy for this kind of file (V-Ray in this case).
3.- All the stages related to voxelization also push the limits of several components at the same time, even though these stages sometimes last only short periods of time.

Again, thank you so much for your insights. Have a nice day !!