BorisFX founder Boris Yamnitsky shares his vision of the future of AI
Jonathan Moser
January 15, 2024

His name is legendary in the post world and synonymous with pioneering, creative and innovative implementations of a family of effects and 3D products. Continuum, Sapphire, Mocha, Silhouette, Optics, Particle Illusion and SynthEyes are used in a vast range of television and film productions today.


Photo: Author, Jonathan Moser

Boris Yamnitsky and his company BorisFX own much of the 3D tracking and visual effects post landscape. The company’s tools and their results are seen worldwide in eye-grabbing, dynamic visual pyrotechnics and startling special effects too numerous to name in top-grossing movies and TV shows.

BorisFX’s catalogue of products is already immense and continually expanding. And now, with artificial intelligence being implemented across the product line, the company’s future is even more robust.

The CEO is deeply involved in BorisFX’s product development and in the creative visualization of what AI can (and will) bring to the editor’s table and to the post production landscape. The term AI has been bandied about willy-nilly over the last few years, and in some ways has lost its true meaning. In order to understand what AI really is and how it is used in the post production ecosphere, I went to Boston to spend some time with Boris himself, not only to learn how it is being implemented in the product line, but to get his vision of what AI will mean to our workflows.

Current status

BorisFX has already implemented AI in several products. The audio suite CrumplePop uses AI to reduce noise, eliminate echo and other intrusive sounds, enhance voice quality and much more. AI was also introduced into the Silhouette effects, rotoscoping and compositing package. And generative AI is being used to create realistic faces and fix text within images, as well as in numerous other applications, such as rotoscoping and image replacement.


Photo (right): Boris Yamnitsky

“It's a quite common task, where you have something on the side of a building that you want to remove,” Yamnitsky explains. “You can either clone paint it from the same image, or it may actually be better to just take pixels from a completely unrelated image. Before AI, you could go out on the internet, start looking for building walls that look similar, bring it in…that’s the traditional way. The new way is that you can tell your model, ‘Okay, make me a building wall similar to this building wall.’ And it'll give you a different building wall. And then you can even paint using that generated building wall.”
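
To make that workflow concrete, here is a minimal sketch of generative inpainting using the open-source Stable Diffusion inpainting pipeline from the diffusers library. It illustrates the general technique Yamnitsky describes, not BorisFX’s implementation; the model ID, file names and prompt are assumptions for the example.

```python
# Sketch of generative inpainting: remove an object from a wall by asking a
# diffusion model to synthesize replacement pixels. Illustration only.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("building_plate.png").convert("RGB")  # original frame (hypothetical file)
mask = Image.open("signage_mask.png").convert("RGB")     # white where pixels should be replaced

# The text prompt plays the role of "make me a building wall similar to this one."
result = pipe(
    prompt="a plain brick building wall, matching lighting and perspective",
    image=image,
    mask_image=mask,
).images[0]
result.save("building_plate_cleaned.png")
```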

In Continuum, AI has been used to create new particle sprites based on existing samples and a text prompt from the user to simulate rain, stars, fire and other fully customizable objects, enhancing the creative workflow with looks that are limited only by imagination.

Yamnitsky says that generative AI within the product line will allow enhanced up-res’ing of low-resolution video, even SD footage, to 4K, 6K and higher resolutions, along with the ability to expand images beyond the original frame boundaries while maintaining the resolution and colorimetry of the original graded image.
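
As a rough illustration of learned up-res’ing (separate from whatever BorisFX ships), OpenCV’s contrib dnn_superres module can run a pretrained super-resolution network such as EDSR. The weights file, input frame and scale factor below are assumptions.

```python
# Sketch of learned super-resolution with OpenCV's dnn_superres module
# (requires opencv-contrib-python and a separately downloaded EDSR model).
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")     # pretrained 4x EDSR weights (assumed local file)
sr.setModel("edsr", 4)         # algorithm name and scale factor

frame = cv2.imread("sd_frame.png")
upscaled = sr.upsample(frame)  # e.g. 720x480 -> 2880x1920
cv2.imwrite("frame_4x.png", upscaled)
```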

“Basically doing things with the image and removing noise, adding noise, film grain, removing motion blur, adding motion blur, scaling up, scaling down,” he continues. “Removing flash photography. You can enhance details. You can sharpen. You can bring in details that were not there before. So image restoration is a very, very large field that is traditionally solved by DSP (digital signal processing) filtering or algorithms that go back to the ’50s and ’60s. But now, better results can be achieved by machine learning and AI.”
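
For contrast, here is what the traditional DSP side can look like: a hand-tuned non-local means denoise in OpenCV, with filter strengths chosen by the user rather than learned from data. The parameter values and file names are illustrative guesses.

```python
# Classical, hand-tuned denoising (non-local means), the kind of DSP filtering
# Yamnitsky contrasts with learned models.
import cv2

noisy = cv2.imread("noisy_frame.png")
denoised = cv2.fastNlMeansDenoisingColored(
    noisy, None,
    h=10, hColor=10,                         # filter strength for luma / chroma
    templateWindowSize=7, searchWindowSize=21,
)
cv2.imwrite("denoised_classic.png", denoised)
```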

How does it work?

I asked how AI actually works these miracles. The answer lies in a multitude of models and the training behind them.

“So the model basically is trained on an original image and thousands of those model pairs,” Yamnitsky explains.

The AI examines thousands of these training pairs (a degraded image alongside its clean counterpart) and learns the correction needed to reach the cleanest iteration. But doesn’t this require memory capacity beyond the reach of desktop users?
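
A minimal sketch of that training process, assuming a toy convolutional network and a hypothetical training_pairs iterator of (degraded, clean) image tensors, might look like this. It illustrates supervised restoration training in general, not BorisFX’s models.

```python
# Sketch: a restoration model sees a degraded frame and learns to reproduce
# the clean one. The network and data loading are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a real restoration network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# training_pairs: hypothetical iterable of (degraded, clean) tensors, shape (N, 3, H, W)
for degraded, clean in training_pairs:
    optimizer.zero_grad()
    restored = model(degraded)               # the model's attempt at the clean image
    loss = loss_fn(restored, clean)          # penalize the remaining difference
    loss.backward()
    optimizer.step()
```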

“This is where the most challenge comes in, because you see, all those models require a very large footprint on your computer,” says Yamnitsky. “If you install all the software that's necessary to run that, it's like gigabytes and gigabytes, where typically, a lightweight plug-in product, which is easily downloaded, installed within seconds on your machine is much smaller…Our goal is to have everything on your editing machine right there, isolated. So we streamline the data points in each filter to enough memory to handle a limited amount of AI computations for each particular filter.” 
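
One common way to shrink such models so they fit inside a lightweight plug-in is weight quantization. The sketch below uses PyTorch’s dynamic int8 quantization on a placeholder network; it illustrates the general idea, not how BorisFX actually packages its filters.

```python
# Sketch of reducing a model's footprint with dynamic int8 quantization.
import torch
import torch.nn as nn

# Placeholder network standing in for one filter's model.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

# Dynamic quantization: linear-layer weights stored at roughly 1/4 the size of fp32.
model_int8 = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
torch.save(model_int8.state_dict(), "filter_model_small.pt")
```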

Looking toward the future

“In the future, many of the image restoration and cleanup tasks will basically be integrated,” says Yamnitsky. “It can do things like beauty shots. It can do things like color correction, de-noising, sharpening, anything that makes your clips look better. For color grading, you will be able to put in a model of a look you want from a film, or choose from a preselected group of images, and have AI process it, like a LUT. I cannot say it will take the place of a good colorist, but it will be pretty good. They may come from different cameras. They may come from different formats, shot at different times of the day, with different lighting. It’s happening. It’s just a matter of development.”
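
A crude approximation of “grade to a reference look” exists today in the form of histogram matching, which transfers the color statistics of a reference still onto a shot. The sketch below uses scikit-image and hypothetical file names; an AI-driven grade of the kind Yamnitsky describes would learn a far richer mapping.

```python
# Sketch: transfer the color statistics of a reference "look" still onto a frame.
from skimage import io
from skimage.exposure import match_histograms

frame = io.imread("shot_frame.png")           # clip frame to be graded
reference = io.imread("film_look_still.png")  # still carrying the desired look

graded = match_histograms(frame, reference, channel_axis=-1)
io.imsave("shot_frame_graded.png", graded.astype("uint8"))
```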


Photo: AI noise reduction, before and after

How soon?

“I think 2024 will be a very big year for AI and tools,” states the BorisFX founder.  “There are already titles on the market that kind of attempt to do that. And there'll be many more because the technology is out there. The algorithms are out there. It's just a matter of programming them, writing code, implementing them.”

What does this mean for the editor?

“This is the beauty of it,” says Yamnitsky. “If you look at Continuum, the noise filter is basically drag and drop. Noise filters usually require a lot of tweaking. But here, you drag and drop, and it knows everything. There is very little that you can control. You can probably make [it] stronger or weaker. That’s about it. You play it back. If you don't like something, you go back to your project, open that filter, move the slider and render again.”
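
That single stronger/weaker control can be thought of as a mix between the untouched frame and the automatically processed one. The helper below is a hypothetical illustration of that idea, not BorisFX code.

```python
# Sketch of a single "strength" slider: blend the auto-processed frame
# back against the original.
import numpy as np

def apply_with_strength(original: np.ndarray, processed: np.ndarray, strength: float) -> np.ndarray:
    """Blend the processed frame with the original; strength in [0, 1]."""
    strength = float(np.clip(strength, 0.0, 1.0))
    return (1.0 - strength) * original + strength * processed
```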

Text-based effects

“We’re experimenting with text-based prompts to drive the effect,” shares Yamnitsky. “You should be able to describe the desired outcome as text prompts, as opposed to just numerical parameters to mess with. Make the flare wider or glow purple…things like that.”
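
As a toy illustration of the idea (a real system would use a language model to interpret the prompt), mapping a phrase like “make the flare wider and glow purple” onto numeric effect parameters could look like this. Every name and value here is hypothetical.

```python
# Toy sketch: turn a text prompt into parameter adjustments instead of
# asking the user to tweak numbers directly.
def prompt_to_params(prompt: str, params: dict) -> dict:
    p = dict(params)
    text = prompt.lower()
    if "wider" in text:
        p["flare_width"] = p.get("flare_width", 1.0) * 1.5
    if "purple" in text:
        p["glow_color"] = (0.6, 0.2, 0.9)   # arbitrary purple, as RGB
    return p

print(prompt_to_params("make the flare wider and glow purple", {"flare_width": 1.0}))
```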

The future of BorisFX

Over the years, BorisFX has grown through the acquisitions of companies like Mocha, SynthEyes, Silhouette and Optics, corralling the market for 3D camera tracking, effects creation, rotoscoping and more. I had to ask Boris whether there was a grand plan, and how he chose these companies. His answer surprised me, as I thought it was the technology that attracted him.

“The people,” he explains. “People who are talented. People who are capable, knowledgeable, cool and driven. Yeah. People who want their products to succeed. I never buy to destroy. Okay? I always want the new company to flourish and the software to do extremely well.”


Stable Diffusion within BorisFX's Silhouette 

What’s next?

“In the future, I envision that you should be able to just go around with a camera, shoot random things, shoot some people…I want a movie about something like this. And then AI will actually make that movie for you. Not entirely. Not without supervision. Right? But it will help create something that is your vision as the creator.

“I always hear this argument: AI replacing people. Is AI creativity replacing human creativity? No, absolutely not. Actually, it's a powerful technology (that) will allow us to make better and more advanced visuals as humanity advances forward.

“The company has grown. (It) started in 1995, and has grown from one person to about 70 people worldwide. It's a different challenge and a different kind of excitement to build the company of that size, and to serve so many customers in so many different markets and so many different workflows. I'm extremely excited about what I'm doing now. And like I said, 2024 is going to be a big year for everybody, and for AI.”

Jonathan Moser is a New York-based video editor and writer, who can be reached online at www.remoteediting.tv, and via email at flashcutter@gmail.com.