Hey… a question here. Is there a way to render in Alias so the isoparms are visible? I poked around today a bit and couldn’t figure it out. So far the only way I can think of doing it is two separate renderings, one “normal” and one hidden-line, and then overlaying or multiplying ‘em in photoshop. But it seems there’s gotta be a better way of doing it… Could you guys please help me out?
no, the technique you described is pretty much it.
you could assign a grid uv texture to every surface… but that wouldn’t correspond to the isoparm locations per surface unless every surface had the same number of subdivisions and was parameterized uniformly, or you had loads of time on your hands to set up many shaders with different grid repetitions.
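for what it’s worth, the mismatch is easy to see in numbers. a grid texture repeated once per span lines up with the isoparms only when the knots are uniform. a little python sketch (nothing Alias-specific, just the parameter math; the function names are made up for illustration):

```python
# A grid texture repeated N times across UV puts its lines at evenly
# spaced parameter values. Isoparms sit at the surface's knot values.
# The two coincide only for uniform parameterization.

def grid_line_params(repeats):
    """UV positions of the texture grid lines for a given repeat count."""
    return [i / repeats for i in range(repeats + 1)]

def isoparm_params(knots):
    """Normalized parameter values of the isoparms (the distinct knots)."""
    lo, hi = knots[0], knots[-1]
    return [(k - lo) / (hi - lo) for k in sorted(set(knots))]

# uniform surface with 4 spans: grid lines and isoparms coincide
print(grid_line_params(4) == isoparm_params([0, 1, 2, 3, 4]))      # True

# non-uniform knots, same span count: the lines no longer match
print(grid_line_params(4) == isoparm_params([0, 0.5, 1, 3, 4]))    # False
```

so even with the “right” repeat count per surface, a non-uniform surface puts the grid lines in the wrong places.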
one tweak to the hidden line/render method you described: render one hidden line with white lines/black fill and background, render another with colored lines and fills. use the first line render as the mask channel to remove the fill and background from the second render. then you can overlay just the colored hidden lines on top of the rgb raycast/trace in photoshop.
clear as mud?
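in case the comp order isn’t clear: the mask step is just a per-pixel mix between the colored-line render and the rgb render, weighted by the white-line mask. a minimal python sketch, with nested lists standing in for the three images (a hypothetical helper, not any Alias or photoshop API):

```python
# Per-pixel comp of the colored hidden-line render over the rgb
# raycast/trace, keyed by the white-on-black line render as a mask:
#   out = mask * lines + (1 - mask) * rgb
# Plain nested lists stand in for images here; a real comp applies the
# same math per channel over the whole frame.

def comp_lines_over_render(rgb, colored_lines, mask):
    out = []
    for row_rgb, row_lines, row_mask in zip(rgb, colored_lines, mask):
        out.append([
            tuple(m * l + (1 - m) * b for l, b in zip(lp, bp))
            for bp, lp, m in zip(row_rgb, row_lines, row_mask)
        ])
    return out

# 1x2 "image": left pixel is on a line (mask 1), right is not (mask 0)
rgb   = [[(0.2, 0.2, 0.2), (0.2, 0.2, 0.2)]]
lines = [[(1.0, 0.5, 0.0), (0.0, 0.0, 0.0)]]
mask  = [[1.0, 0.0]]
print(comp_lines_over_render(rgb, lines, mask))
# the line pixel takes the line color, the other keeps the render
```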
oh yeah, you could instead sweep or extrude a profile along EVERY isoparm to give dimensionality to each iso, like making a tubular cage over each model line. tedious to say the least. maybe you don’t do this to every line.
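the geometry behind that cage is simple enough to sketch: sample each curve, then drop a circular cross-section perpendicular to the tangent at each sample. a toy python version (pure math, no Alias or Maya API, and the framing is cruder than what a real sweep tool does):

```python
import math

# Toy version of the "tubular cage" idea: sample a curve(t), t in [0,1],
# and place a ring of points perpendicular to the tangent at each sample.

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def tube_rings(curve, n_samples, radius, ring_pts=8, eps=1e-4):
    """Rings of points forming a tube of the given radius around the curve."""
    rings = []
    for i in range(n_samples):
        t = i / (n_samples - 1)
        center = curve(t)
        # finite-difference tangent, clamped to the curve's domain
        t0, t1 = max(0.0, t - eps), min(1.0, t + eps)
        tangent = _normalize([b - a for a, b in zip(curve(t0), curve(t1))])
        # two axes perpendicular to the tangent span the ring's plane
        up = [0.0, 0.0, 1.0] if abs(tangent[2]) < 0.9 else [1.0, 0.0, 0.0]
        u = _normalize(_cross(tangent, up))
        v = _cross(tangent, u)
        ring = []
        for j in range(ring_pts):
            a = 2.0 * math.pi * j / ring_pts
            ring.append([center[k] + radius * (math.cos(a) * u[k] + math.sin(a) * v[k])
                         for k in range(3)])
        rings.append(ring)
    return rings

# a tube of radius 0.1 along a straight line from (0,0,0) to (1,0,0)
cage = tube_rings(lambda t: [t, 0.0, 0.0], 5, 0.1)
```

loft or skin across the rings and you have the tube. doing that per isoparm is exactly why it’s tedious by hand.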
i should think before i hit “submit”. but that’d kill half the fun.
if the render is lower than screen res then you could also change your colors in studio to black bg, white model lines, toggle antialiasing on, and dump the screen to photoshop to use for your overlay. then you’d have all the lines, not just the unhidden ones.
or you could print to a postscript file, open the file in illustrator, copy the stroked lines, and paste into photoshop rasterized.
alright, i’ll shut up now.
if capturing the screen you might get distortion between model view and rendering. a rendering will use a camera with lens properties, no?
i don’t have tremendous experience with the more obscure camera settings such as film backs and focal distance, etc. i do know that when i direct render over a modeling window i see the model in the same place as the wire.
so that scenario works for renders with res no greater than screen res. i assume the behaviour is the same even if the wire capture on screen has to be scaled in photoshop to fit a 2048 wide render, for example. just blur or otherwise tweak the screen capture so it ain’t terribly aliased.
depth of field will, however, screw up the match. the lines in studio aren’t depth cued, that is, the lines further away aren’t more faint, blurrier, etc. so that’ll ruin the look when comped over an rgb render with dramatic depth of field.
also, rendering nonsquare pixels, for example ntsc video output to a device with .9 pixel aspect ratio, will also screw up the alignment with a square pixel screen dump. and field rendering will also break this technique. but now i’m getting too esoteric for product designers…
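to put numbers on the nonsquare-pixel mismatch: with the usual ntsc figures (720x486 stored pixels at a .9 pixel aspect, which are standard video numbers, not anything Alias-specific), the horizontal scale works out like this:

```python
# A 720-wide render at 0.9 pixel aspect covers 720 * 0.9 = 648 square
# pixels of displayed width. So a square-pixel screen dump of the same
# framing is effectively 648 wide and must be stretched horizontally by
# 1 / 0.9 before it registers with the render's pixel grid.

def square_pixel_width(stored_width, pixel_aspect):
    """Equivalent width in square pixels of a nonsquare-pixel image."""
    return round(stored_width * pixel_aspect)

def capture_scale_to_render(pixel_aspect):
    """Horizontal scale a square-pixel capture needs to match the render."""
    return 1.0 / pixel_aspect

print(square_pixel_width(720, 0.9))            # 648
print(round(capture_scale_to_render(0.9), 3))  # 1.111
```

an 11% horizontal stretch is plenty to wreck the line-up if you forget it.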
i mention it b/c i’ve tried it. got some vignetting in the corners. but it’s been years since i tried. would make a nice test, both for Studio and Maya. might test this after my Maya to Pro WIP.
I don’t know how to answer the original question other than the techniques mentioned, but I do have to add that there is a distortion between the perspective window and rendering. It seems when I direct render, the render appears to be .2 cm smaller than the wires.
interesting. so when you render to a window somewhere between 640x480 and 1280x1024, for example, and screen dump the same view, your comparison in photoshop yields some real, visible difference between the position of the model in the two views?
do the perspective views have different convergence, i.e. is the model more “off” in some areas of the render than in others? or is the shift identical for all pixels? or do all the wireframe elements seem “bigger” than the rendered version?
I haven’t thoroughly investigated the subject; just from eyeballing it, it seems slightly different. I’m not sure if it’s the lens of the rendering or just the image size.
vignetting. center was okay. mismatch increased toward edges. never knew why. but it was so long ago i don’t remember the specific settings, only that i was stumped. Photoshopping two screen captures was supposed to work!
will look into this on Maya soon and post results. may be similar to Studio.
not an issue for Maya. found a script that creates nurbs tubes on the isoparms. wonder if there’s a plugin for Studio that does this. Sam?
not to my knowledge, but then i never really got into studio’s plugins beyond center pivot (which was later rolled into the app as part of the default toolset).