Has anyone seen this yet? Gizmag.com has an article on it. Unbelievable:
Wow! I’ve had some projects where this would have been very handy.
Now THAT is stunning. Was the process the film studios used really
that complex? This new bit of code will be a good help for anyone
trying to composite new products into existing shots (which many
of us will have done manually till now). What is striking is that it is
emulating the manual process and doesn’t rely on complicated data
models from the camera.
It also seems to provide HDRI (or something similar) from the scene as well.
Amazing. Great work, UIUC team…
Yes, compositing is an art in and of itself. It’s very easy to watch movies with bad CG and spot it instantly, because the CG doesn’t sit over the image naturally. You need to take into account the exact properties of the lenses used (remembering that some lenses have very distinctive distortion characteristics), build some pretty complex HDRI environments (these days often animated ones) to simulate the lighting, and account for any physical occlusions or light interactions.
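To make the lens point concrete: one common way compositors match CG to a distorted plate is to warp rendered points with a radial (Brown-Conrady style) distortion model whose coefficients come from a lens calibration. This is a minimal sketch of that idea, not anything from the demo itself; the function name and coefficient values here are purely illustrative.

```python
def apply_radial_distortion(x, y, k1, k2):
    """Warp one undistorted, normalized image point (x, y) by a
    Brown-Conrady style radial distortion model.

    k1, k2 are radial coefficients you would get from calibrating
    the real lens; k1 < 0 gives barrel distortion, k1 > 0 pincushion.
    """
    r2 = x * x + y * y                    # squared distance from the optical center
    factor = 1.0 + k1 * r2 + k2 * r2 * r2 # radial scaling applied to the point
    return x * factor, y * factor

# A point halfway toward the image edge, under mild barrel distortion:
xd, yd = apply_radial_distortion(0.5, 0.0, k1=-0.1, k2=0.0)
# xd is pulled slightly inward (0.4875), which is why straight CG lines
# rendered without this warp look subtly "off" over real footage.
```

Real pipelines also include tangential terms and undistort/redistort round trips, but even this two-coefficient version shows why CG rendered with an ideal pinhole camera won’t line up with a real lens.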
It’s tough doing it with stills, but super hard when you have motion of both the camera AND the objects.
The demo is pretty slick, but I think this will need to be implemented in some existing piece of software, since it doesn’t appear they’re writing the 3D modeling or rendering package themselves; just the algorithms for estimating the lighting and shader properties within an existing package.