Masked SLA projects UV light onto photopolymer resin through a 2D pixel array. Object features rarely align neatly with pixel boundaries, so the mismatch between the intended geometry and the pixel grid shows up as spatial aliasing on the finished part.
It turns out that partially curing a resin voxel with a “grey” pixel, instead of a full-brightness white one, really does cure a smaller volume of material, which grows off of any adjacent cured surface. By taking advantage of this, we can achieve higher effective spatial resolution on the finished object, the same way computer screens use the technique to make text appear smoother.
Autodesk’s Ember team (RIP) put out a great video showing exactly how this works, with a bunch of pretty micrographs and illustrations.
Since AA support was one of the features I was specifically holding out for while avoiding the dive into SLA printing, I was eager to try it on my new Elegoo Mars to see how effective it is. I printed this aliasing torture test by MakerMatrix three times, sliced with three different AA settings: none, 4x, and 8x.
|             | No AA      | 4x AA | 8x AA |
| ----------- | ---------- | ----- | ----- |
| Spherical   |            |       |       |
| Linear      |            |       |       |
| Layer lines | *No image* |       |       |
Conclusions
- AA does actually help surface quality significantly
- AA doesn’t really impact layer resolution at all – you still get layer lines on Z features identical to those of, say, a laser SLA print. It should be possible to get sub-layer-height Z resolution, per this section of the Ember research, but the current algorithm doesn’t seem to support it. The layer lines in the 4x AA sample image do look a bit less pronounced, but I promise that’s just because I did a bad job taking the photo (and because Y aliasing on the lower part of the fillet IS improved, which makes the layer lines on the upper part of the fillet less visible).
- AA doesn’t result in VISUALLY smooth surfaces, but it does produce substantial mechanical smoothing. For complex models, where aliasing isn’t broad and uniform like on this torture test, that distinction isn’t very important. It’s unclear to me whether the visual non-uniformity of the technique is due to a failure to correct for the curing nonlinearities described here in the Ember research.
- The only penalties to high AA values are processing time (negligible) and output file size. Unless you like to keep every sliced model you’ve ever printed on one small flash drive, there’s no compelling reason to use anything less than the maximum AA setting.
Further Analysis/Next Steps
That last point is actually a little interesting – the analysis I’ll embed below disagrees with my conclusion, finding that enabling AA does result in decreased visual detail in the final print. If the implemented algorithm is truly antialiasing in the 2D image-processing sense, filtering the input geometry’s spatial frequency to below the Nyquist limit of the LCD panel’s pixel array, then AA would indeed be expected to reduce fine detail. On the other hand, while we’ve all been referring to this technique as AA, perhaps Ember had it more correct in the beginning, calling it sub-pixel rendering.
In the 2D image-processing case, we simply can’t represent black-white-black on two adjacent pixels, so we filter the input image to reduce its spatial frequency, resulting in two adjacent grey pixels.
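To make that concrete, here’s a toy sketch of the filtering step (the 3x supersampling and box filter are just illustrative choices, not what any particular slicer does): a white feature one third of the width wide, averaged down to two pixels, becomes two identical grey pixels, and its position within them is lost.

```python
# Toy box-filter sketch of the 2D AA case: a white feature 1/3 of the
# total width, supersampled at 3x per pixel, averages down to two
# identical grey pixels - the feature's position inside them is gone.

def box_downsample(samples, n_pixels):
    """Average a high-resolution row of 0.0-1.0 samples into n_pixels bins."""
    per_pixel = len(samples) // n_pixels
    return [sum(samples[i * per_pixel:(i + 1) * per_pixel]) / per_pixel
            for i in range(n_pixels)]

row = [0, 0, 1, 1, 0, 0]       # black-white-black thirds over two pixels
grey = box_downsample(row, 2)  # two identical 1/3-brightness grey pixels
```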
In the sub-pixel rendering case, depending on adjacent geometry, we sometimes can. Imagine the black-white-black pixels above represent a trench one third of the width wide, cut into the surface of our part, and we have only two pixels with which to “cut” it. Assuming the layer above us has a wall on the left and a wall on the right, we could potentially grow cured plastic inward from the left and right, leaving a gap in the middle, by only partially illuminating the two adjacent voxels.
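Here’s the same trench treated the sub-pixel-rendering way, as a hypothetical sketch (the interval arithmetic and the assumption that intensity should simply equal the solid fraction are mine): each pixel is lit in proportion to how much of it is solid, in hopes the cured material grows inward off the adjacent walls and leaves the gap open.

```python
# Illustrative sketch of the sub-pixel-rendering alternative: light each
# pixel in proportion to its SOLID fraction, hoping resin grows inward off
# the adjacent cured walls and leaves the trench open. Coordinates are in
# pixel units; the trench below is the middle third of a two-pixel row.

def solid_fraction(pixel_idx, trench_start, trench_end):
    """Fraction of pixel [pixel_idx, pixel_idx + 1) NOT covered by the trench."""
    overlap = max(0.0, min(pixel_idx + 1, trench_end) - max(pixel_idx, trench_start))
    return 1.0 - overlap

# Trench spans [2/3, 4/3) of a two-pixel row: each pixel is 2/3 solid,
# so both are lit at 2/3 intensity instead of a detail-free filtered grey.
intensities = [solid_fraction(i, 2 / 3, 4 / 3) for i in range(2)]
```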
I propose that a naive approach to pixel intensity calculation might achieve this: rather than doing any image processing on a given layer, calculate each voxel’s illumination intensity as the ratio (volume of model present in voxel) / (total voxel volume). For instance, suppose a 45° plane bisects the voxel, passing through 4 of its 8 corners. The occupied volume is then 50% of the voxel’s total volume, so the pixel intensity should be 0.5. This would need to be corrected for the nonlinearity of curing mentioned in conclusion point 3. Additionally, some locations on the model would be served poorly by this algorithm, but I think only locations where accuracy couldn’t be improved by more robust logic anyway.
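A minimal sketch of that proposal, under loudly stated assumptions: a unit voxel, a hypothetical `inside` predicate standing in for a real point-in-model query, and brute-force grid supersampling instead of exact geometric clipping.

```python
# Brute-force sketch of the proposed rule: intensity = occupied fraction
# of the voxel. `inside` is a hypothetical stand-in for a real
# point-in-model query; here it's a diagonal plane through 4 of the
# voxel's 8 corners, whose exact occupied fraction is 0.5.

def occupancy(inside, n=40):
    """Estimate the fraction of the unit voxel [0,1)^3 where inside() holds,
    by sampling an n x n x n grid of subcell centers."""
    hits = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if inside((i + 0.5) / n, (j + 0.5) / n, (k + 0.5) / n):
                    hits += 1
    return hits / n ** 3

plane = lambda x, y, z: z <= x   # half-space under the diagonal plane z = x
intensity = occupancy(plane)     # converges to the exact 0.5 as n grows
```

In a real slicer this per-voxel fraction could be clipped exactly against the mesh rather than sampled, and would then be passed through the cure-nonlinearity correction before becoming a pixel value.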
Follow-up to that thought: I’ve thought about this a bit more, and there are actually localized areas where the proposed algorithm would result in worse model accuracy. Consider separately a 45° slope and a 45° overhang. In the case of the 45° slope, I think it’s plausible that, with full voxels in place above to grow on, a 50% voxel cured into the corner of two 100% voxels may well cure as a positive fillet or even a chamfer, achieving the desired effect. In that case, the algorithm might be optimized by curing the 100% voxels first, then the partial voxels in a second, subsequent exposure, so the partial voxels grow as fillets rather than linearly up in Z. In the case of the overhang, however, each successive layer cured steps out OVER the layer already in place. Here the partial voxel will attach only to the adjacent full voxel, growing laterally away from the layer – there isn’t yet a corner to grow into, only an adjacent wall to grow off of. So the 45° overhang won’t smooth over so much as grow laterally out from the model by half a pixel: no smoother, just less accurate.
In the latter case, I do wonder whether it’s possible to reach THROUGH the most recently cured layer to the layer ABOVE it and grow the fillet that way. For instance: cure my layer, then light up only the partial pixels from the PREVIOUS layer and expose those for another layer period, in hopes of growing fillets THROUGH the most recent layer. Of course, this only works where fillets need to exist in voxels directly behind recently cured 100% voxels.
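Both scheduling ideas above amount to splitting one greyscale layer image into separate exposure masks: fully cured pixels first, partial pixels in a later pass. A toy Python sketch (the 8-bit greyscale encoding and the two-pass scheme are my assumptions, not any slicer’s or firmware’s actual behavior):

```python
# Toy sketch of the two-pass idea: split one 8-bit greyscale layer into a
# full-intensity mask (exposed first, so corners exist to grow into) and a
# partial mask (exposed in a second, later pass). Purely illustrative.

def split_passes(layer, full=255):
    """Return (full_mask, partial_mask) for one greyscale layer (list of rows)."""
    full_mask = [[v if v == full else 0 for v in row] for row in layer]
    partial_mask = [[v if 0 < v < full else 0 for v in row] for row in layer]
    return full_mask, partial_mask

layer = [[255, 128, 0],
         [255, 255, 64]]
first, second = split_passes(layer)
# first  -> [[255, 0, 0], [255, 255, 0]]
# second -> [[0, 128, 0], [0, 0, 64]]
```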
I made a feature request to this effect in the ChiTuBox forum.
Hopefully sometime I’ll get around to making some illustrations for these concepts.
How about NO layer lines? See the red rhino post:
https://forums.cgsociety.org/t/zbrush-zcron-uv-resin-slicer-ultra-high-resolution-video/2052405/4
Thanks for the link! Yeah, you can get pretty far with post-processing, it just doesn’t leave you with dimensional accuracy.
No layer lines over accuracy? I think I’ll take no layer lines – it’s a model of a rhino, not a machined part. Plus, the accuracy on the rhino is pretty good. Did you see the post above it? Try here:
https://forums.cgsociety.org/t/zbrush-zcron-uv-resin-slicer-ultra-high-resolution-video/2052405/3
If you think about it, layer lines mess up accuracy. If you fill the layer lines, at least you’ve got a start on making it accurate: all you need to know is how thick the filled resin is when it dries to get rid of the layer lines, then make your model that much smaller to compensate.
Hello,
You have done a great test, thank you for that!
It would be great if you could do the following test: lower the print resolution, then print with and without antialiasing.
It should be possible to get faster prints (due to the lower resolution) with only a little quality loss (thanks to antialiasing).
Hey Greenbat! The biggest issue with such a test is that, at the time of writing (and still, as far as I know), the mainstream slicers (at least ChiTuBox) don’t support calculating antialiasing in the Z direction, even though it should be possible – they ONLY perform AA on a 2D slice, using traditional image-processing techniques that do a much worse job of achieving dimensional accuracy on the final print than they could. Traditional 2D AA effectively infers detail about image contents that the image samples themselves (pixels) no longer contain, whereas while slicing, you have the whole, infinite-resolution 3D model as ground truth to calculate from.
In a best-case scenario, you’d expect an optimal superresolution algorithm to adequately smooth slopes but do much more poorly on overhangs, since partial voxels have to grow attached to existing cured voxels. Either way, current slicers are far from this ideal.
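A toy numerical illustration of that point (the edge position and the 2-tap box filter are arbitrary choices of mine): once geometry has been rasterized to binary pixels, no filter can recover where an edge actually fell, whereas a slicer could have computed the exact coverage from the model.

```python
# Toy comparison: exact coverage from geometry vs. filtering a binary
# raster. The edge position and 2-tap box filter are arbitrary examples.

edge = 0.25                      # true edge position inside pixel 0, in px

# (a) Slicer with ground truth: exact solid coverage of pixel [0, 1).
exact = 1.0 - edge               # 0.75

# (b) Post-hoc 2D AA: the binary raster only recorded "pixel 0 is solid";
# any filter of it is blind to where the edge really was.
raster = [1, 0, 0]
filtered = sum(raster[:2]) / 2   # 0.5, regardless of the true edge position
```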
Hey Alex,
great post of yours. I know I’m late to the party, but everything you describe makes perfect sense, and the only reason I can imagine for slicers not implementing slice extraction from a full voxel set is the sheer amount of memory it may require.
I’ve experimented with this myself and built a full voxel set in Houdini with OpenVDB, and it looks very promising. I’ll print some tests in the coming days. I’ve also created a version of this using a pure 2D method, which should perform much better.
Sadly, I cannot attach a video here. If you’re still into this subject, shoot me an email.
Thank you
Andreas