I’m having trouble achieving a particular rendering behavior with VTK. I would like to render the glyphs of my point cloud at a fixed screen size until the camera is zoomed in close enough for each glyph to appear at its actual size. In other words, I want the glyphs to be scaled up with distance, but never scaled down.
I can fake this behavior by rendering two sets of glyphs on top of each other: one set unscaled and the other scaled by distance to camera. But that is not ideal for huge data sets, and I’d like to learn the correct way to solve this with a single set.
At first I thought the scale-clamping functionality of vtkGlyph3DMapper could do this, but setting the clamp range seems merely to rescale the 'DistanceToCamera' values into that range rather than capping them.
One idea I had was to create a programmable filter that takes the vtkDistanceToCamera output as input and produces the same object with one additional array: a copy of 'DistanceToCamera' with values capped below a certain threshold, e.g. any value under 1 set to 1.
This is my failed attempt; it crashes Python immediately, without any error message.
I’d be grateful for any ideas or hints on how to accomplish this idea.
I’ve attached a demonstration. The red spheres are not scaled, the green spheres are scaled by distance to camera, and the purple spheres show the visual behavior that I want. The demonstration uses the “fake” method of rendering two sets of spheres on top of each other, one set scaled and the other not.