AN IMPORTANT UPDATE: As of April 2013, Shapeways no longer offers this high polygon workaround. If your model has over a million polygons, use this handy tutorial to reduce them so you can upload your model in the normal way. Thank you for your understanding!
OK ZBrush users, organic algorithm artists and character modelers: Shapeways has developed a workaround so that your models can now exceed the 1,000,000 polygon limit currently in place.
It is not elegant and takes some negotiating, but if we get enough demand we will try to automate the process to make it easier for everyone. Let us know what you think: is this something you would like to see as standard?
Order Process for High Polygon Models:
Send your zipped High Polygon file to Shapeways service(at)shapeways.com
Shapeways will manually check the file and inform you if it can be printed.
Once you have confirmation from Shapeways that it can be printed, upload a dummy file of a cube with the same volume as your high polygon model to your Shapeways account with the file name 'POLYGONENFILE', then order and pay for it.
You then need to email your order number to service(at)shapeways.com.
Your High Polygon model will then be 3D printed and delivered to your door.
We warned you it was not pretty, but if there is enough uptake we can automate and speed up the process. Until then, these manual steps will increase the time it takes to process the model. Update: the cube must have the same actual material volume as your model, not its bounding box. The price is the same as for a "low" poly model.
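The equal-volume requirement in the update above can be checked before uploading. Here is a minimal sketch in pure Python (not a Shapeways tool) that sums signed tetrahedron volumes to get the material volume of a closed, consistently wound triangle mesh, then derives the edge length of a cube with the same volume:

```python
# Volume check for the dummy-cube step: sum the signed volumes of the
# tetrahedra formed by each triangle and the origin (divergence theorem),
# then derive the edge length of a cube with the same material volume.
# Assumes a closed mesh with consistent outward-facing winding.

def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh via signed tetrahedra."""
    total = 0.0
    for i, j, k in faces:
        (x0, y0, z0) = vertices[i]
        (x1, y1, z1) = vertices[j]
        (x2, y2, z2) = vertices[k]
        # v0 . (v1 x v2) / 6 is the signed volume of the tetrahedron
        # spanned by this triangle and the origin.
        total += (x0 * (y1 * z2 - z1 * y2)
                  - y0 * (x1 * z2 - z1 * x2)
                  + z0 * (x1 * y2 - y1 * x2)) / 6.0
    return abs(total)

def equivalent_cube_edge(volume):
    """Edge length of a cube with the same volume."""
    return volume ** (1.0 / 3.0)

# Example: a unit right tetrahedron, whose volume is 1/6.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, tris)
edge = equivalent_cube_edge(vol)
```

In practice a mesh tool (MeshLab, ZBrush, etc.) will report the volume for you; the point is only that the dummy cube's edge length is the cube root of that volume.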
Once the resolution of models is no longer capped, Shapeways can work on increasing the resolution of the 3D printing processes to make the most of high polygon models.
You might wonder why we have not fixed this problem already. Processing those big polycount files takes a lot of memory, and we process multiple files at the same time. We can improve our algorithms (rocket scientists are welcome to help) or add memory. We already have around 8GB of processing memory, so this is also not that easy.
I haven't "hit the ceiling" yet since my models are very easily optimized in MeshLab (that quadric edge collapse thingy with planar simplification on), so I'm not in the target group for this proposed solution... but I just thought of an alternative idea...
I'm guessing you have several servers clustered to do the automatic verification and my idea is based on that assumption.
If that is the case, why not create a separate "pipeline" for high polycount files?
If the polycount is below 500K, queue it for verification as normal.
If the polycount is above 500K, put it in a separate "high polycount" queue with a dedicated server that runs fewer parallel tasks, e.g. 1 or 2 high polycount models in parallel.
That way, you would only have to inform the uploading user that files above the threshold may take longer to verify, since they are handled in a separate queue.
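The routing described above could be sketched roughly like this. The threshold, queue names, and worker counts are illustrative assumptions from the comment, not Shapeways' actual pipeline:

```python
# Hypothetical sketch of the two-queue idea: models above a polycount
# threshold go to a dedicated low-parallelism queue; everything else is
# verified as normal. All names and numbers are illustrative only.
from queue import Queue

POLYCOUNT_THRESHOLD = 500_000  # assumed cutoff from the comment

normal_queue = Queue()     # served by many parallel verification workers
high_poly_queue = Queue()  # dedicated server, 1-2 models at a time

def enqueue_for_verification(model_id, polycount):
    """Route an uploaded model to the appropriate verification queue."""
    if polycount > POLYCOUNT_THRESHOLD:
        high_poly_queue.put(model_id)
        return "high polycount queue (may take longer to verify)"
    normal_queue.put(model_id)
    return "normal queue"

# Example routing:
status_a = enqueue_for_verification("model-a", 120_000)
status_b = enqueue_for_verification("model-b", 2_500_000)
```

The design choice is simply back-pressure isolation: a couple of multi-million-polygon jobs can no longer starve the many small jobs of memory.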
I always thought that it was the print job preparation stage that imposed these memory issues.
Sure, the automated verification takes longer on a larger file, but I really thought they were problematic to handle when you had several objects at the same time (prepping a print job).
Good idea, and we thought of it too; we might still do it. However, in reducing polygons we might also damage the model, so we need to give detailed visual feedback to prevent disappointment. It is also more work than the current bare-bones process, so let's start like this.
I have many models that I would upload if we could upload models over 500,000 polygons. The method mentioned above seems a bit too much work for right now, so if there were a more straightforward way to upload them, I'd be interested.
Right now I don't see the need to upload a large polygon model: with your current printing resolution, is there any point? (Yes, that's a question, not a statement.) I use ZBrush and just decimate the polygons to make the file upload quicker... Certainly if you introduce a jeweller's wax material then this would be a different story.
Build prep would be the main reason. Most ZBrush users can relate to multi-million-triangle models slowing their system down to a crawl. Now take that model, duplicate it 50 times and try to use the program to array the models manually.
In my experience, anything in excess of 500,000 polygons is a waste. The printing technology is likely to be unable to capture details quite that fine. I frequently work with Zbrush files consisting of anything up to about 12 million polygons but have not had any trouble using Decimation Master to reduce this to under the 500K limit.
I would worry that removing the cap would pave the way for lazy artists not optimising and therefore greatly increasing the processing time for a mesh when it is not necessary.
If a super-high-detail material is introduced, I can see an argument for the cap being raised.
Ben, you are mostly right, but in some cases there are valid reasons to go beyond 500K polygons even with today's resolution. Remember our post on Digi-Fabrics? Those had around 500K with a small model, so in that case you could definitely go beyond.
Perhaps 12M is a bit rich, but it depends a little on the geometries.
Having just checked the Digi-Fabrics post, I can see how the polys could start to add up. Perhaps polygon-density is the more relevant aspect?
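One way to make the polygon-density suggestion concrete is to measure triangles per unit of surface area rather than raw polycount. A rough sketch in pure Python (illustrative only, not an established Shapeways metric):

```python
# Rough polygon-density estimate: triangles per unit of surface area,
# as a proxy for whether mesh detail outruns printer resolution.
# A tiny model with 500K triangles is far denser than a large one.
import math

def triangle_area(a, b, c):
    """Area of a 3D triangle via the cross-product magnitude."""
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    cx = uy * vz - uz * vy
    cy = uz * vx - ux * vz
    cz = ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def polygon_density(vertices, faces):
    """Faces per unit of total surface area for a triangle mesh."""
    area = sum(triangle_area(vertices[i], vertices[j], vertices[k])
               for i, j, k in faces)
    return len(faces) / area

# Example: a unit square split into two triangles -> 2 faces per unit area.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
density = polygon_density(verts, tris)
```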
I should add that's 12M spread across multiple sub-tools, not simultaneously active.
I think what would also be way cool is if users could upload the cage mesh and it would print out the limit surface for you without users having to subdivide before uploading. Of course, STL doesn't support quads, but if there was a way to do that, I see it helping a lot of people who use a subdivision workflow.