
TR-X Keynote Speakers In Their Own Words


This year’s TR-X will examine how the most disruptive changes in technology will fundamentally alter the way each of us practices our craft. Hear from our keynote speakers on what they are each considering as they prepare for the event.


Rodney Grubbs
NASA Imaging Experts Program Manager
Marshall Space Flight Center
Huntsville, Alabama

As humans venture deeper into space, the role of human interaction in collecting images will undergo significant changes.

“The latency between ground control and the spacecraft is two to three seconds now, so it’s still practical for humans to interact with the camera system,” says Grubbs.  “But as we go further out – to the moon or Mars – camera systems have to be able to function and track on their own.  Having autonomy and AI embedded into the camera system becomes more and more important.”
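Grubbs’s latency point is easy to quantify.  The following back-of-the-envelope sketch (Python, with approximate average distances) computes the one-way signal delay to a few destinations; beyond the Moon, waiting out a round trip to frame a shot quickly stops being practical, which is why autonomy has to move into the camera itself.

```python
# Back-of-the-envelope one-way signal delay at the speed of light.
# Distances are approximate and vary with orbital positions.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

destinations_km = {
    "ISS (low Earth orbit)": 400,
    "Moon": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Mars (farthest)": 401_000_000,
}

for name, distance_km in destinations_km.items():
    delay_s = distance_km / C_KM_PER_S
    print(f"{name}: one-way delay ~ {delay_s:,.2f} s")
```

At the Moon the one-way delay is about 1.3 seconds – awkward but workable – while at Mars it ranges from roughly 3 to 22 minutes, far too long for a human to steer a camera.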

Grubbs believes that NASA will be able to exploit the frontiers that the gaming and security industries are already pushing.  “We should be able to use the built-in intelligence in camera security systems and facial recognition to a certain degree,” he says. “And we should be able to take advantage of the interaction with AI engines that cloud companies already have.”

The “tricky bit” in moving ahead with these systems, he says, is Internet Protocol communications.  IP was designed for short, earthbound links and won’t work in deep space because of the time lag between transmission and reception and disruptions in transmission.  So, over the last decade, interested parties worldwide have banded together to develop a successor suited to space, Delay-Tolerant Networking, that will enable internet-based systems to keep functioning in deep space.  “Several protocols have been standardized, but we’re still working on a full suite,” Grubbs reports.
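To make the store-and-forward idea behind Delay-Tolerant Networking concrete, here is a toy sketch.  It is not the standardized Bundle Protocol – the node and bundle structures are invented for illustration – but it shows how a node can hold data until an intermittent deep-space link becomes available, rather than assuming the always-on, end-to-end path that classic IP expects.

```python
# Toy illustration of DTN-style store-and-forward (not the real Bundle Protocol).
from collections import deque

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.stored_bundles = deque()  # bundles held until a contact window opens

    def receive(self, bundle):
        self.stored_bundles.append(bundle)  # take custody of the bundle

    def forward_all(self, next_hop, link_is_up):
        # Forward only while the intermittent link is available; otherwise keep custody.
        while link_is_up and self.stored_bundles:
            next_hop.receive(self.stored_bundles.popleft())

relay = DtnNode("lunar-relay")
ground = DtnNode("ground-station")
relay.receive({"id": 1, "payload": "camera telemetry"})
relay.forward_all(ground, link_is_up=False)  # window closed: bundle stays stored
relay.forward_all(ground, link_is_up=True)   # window open: bundle is delivered
print(len(ground.stored_bundles))  # -> 1
```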

He sees a lot of practical applications for VR and AR as humans move to lunar outposts or Mars.  “If a crew member pointed a camera system or tablet at a part of the vehicle that needed repair, AR would know what’s behind the panel, find out what the problem is and maybe troubleshoot the wiring,” he muses.
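As a purely hypothetical illustration of that AR scenario, the sketch below maps a recognized panel to what sits behind it and to suggested checks.  The panel IDs, contents, and procedures are all invented; in a real system the ID would come from computer-vision detection of a marker or of the panel itself.

```python
# Hypothetical lookup behind an AR repair overlay; all data here is invented.
PANEL_DATABASE = {
    "panel_A7": {
        "behind": ["coolant pump", "wiring harness W-12"],
        "checks": ["verify W-12 continuity", "inspect pump connector"],
    },
}

def lookup_panel(panel_id: str) -> dict:
    # In practice, panel_id would come from detecting a fiducial marker on camera.
    return PANEL_DATABASE.get(panel_id, {"behind": [], "checks": ["unknown panel"]})

print(lookup_panel("panel_A7")["checks"])
```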

“In the past cameras on spacecraft were capable of a variable field of view thanks to a mechanical pan-tilt device and controller for the crew or ground control to use,” Grubbs continues.  “Now, with VR, we wouldn’t need to fly big, heavy mechanical devices.  We could stitch together multiple images and put intelligence in the system to track an approaching vehicle, for instance.  We could produce data showing where that vehicle is in three-dimensional space.

“Video used to be video – very straightforward.  Now video is data so it dramatically expands the power and complexity of cameras and imaging systems.”
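The stitching Grubbs describes can be sketched with off-the-shelf tools.  The example below uses OpenCV’s high-level stitcher to composite several camera views and then crops an arbitrary “virtual pan-tilt” window from the result; the file names and crop coordinates are placeholders, and nothing here reflects NASA’s actual pipeline.

```python
# Sketch of a software "virtual pan-tilt": stitch fixed cameras, then crop a view.
import cv2

frames = [cv2.imread(p) for p in ("cam_left.jpg", "cam_center.jpg", "cam_right.jpg")]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    view = panorama[:, 1000:2920]  # pan by choosing a different horizontal window
    cv2.imwrite("virtual_view.jpg", view)
else:
    print(f"Stitching failed with status {status}")
```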


Josh Stinehour
Principal Analyst
Devoncroft Partners, LLC
Coronado, California

Research firm Devoncroft Partners follows the B2B segments of the media technology sector.  “We measure things: the size of the market, market segments, what customers think of suppliers, where we are with new technologies,” explains Josh Stinehour.  “Next-generation technologies offer the opportunity to measure things that drive a lot of value for media companies, such as the efficiency of technical operations and the budgets required to deploy them.  We’re always looking to define metrics and create benchmarks to describe efficient technical operations.”

But he notes that over the years, “a lot of the discussion has been about technology for technology’s sake, and absent from it have been the business models and business considerations that might accelerate – or pose an obstacle to – adopting next-generation technologies.  Now, there’s an opportunity to rethink business processes and bring more financial discipline to technology.”

He gives kudos to the recent White Paper from MovieLabs and its member studios, The Evolution of Media Creation, which outlines a vision for the future of media creation in 2030.  “Everything will sit in a cloud and all processes will be virtualized,” says Stinehour.  “Much of what is talked about is already technically possible.  The big obstacle is the business model and how to justify the investment so we can move ahead and build the architecture.”

He sees moving technologies to virtual environments as the most impactful trend.  “I believe virtualization will unlock budgets from a business standpoint.  We’ll finally be able to measure and quantify what is good and what is better.  Engineering teams who go to internal finance teams and say they need to make investments simply because it’s the future do not present a compelling case to a financial person.  It’s been exceptionally difficult to demonstrate ROI with legacy infrastructure.  But virtualization and moving to cloud environments will give almost real-time information to help decisionmakers better optimize and communicate value.  It’s still early, but we can see the future taking shape.”

Apart from amortizing the cost of the machines, the current appliance world offers “no visibility into the cost of operations,” Stinehour continues.  “In the cloud you’re hit in the face with that information.  The decisionmaker gets timely information to make better decisions instead of guessing about operations.

“We – as an industry – also need to change the culture of secrecy.  Where we’ve had no information because of a history of secrecy, we can now shine a light and talk about these things out loud.”
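The cost visibility Stinehour points to is already concrete in public clouds, where operational spend can be queried programmatically.  A minimal sketch, assuming AWS and its Cost Explorer API via boto3 (other clouds expose similar billing APIs; the dates are placeholders):

```python
# Query daily spend from AWS Cost Explorer - the kind of per-operation cost
# data that an on-premises appliance world never surfaces.
import boto3

ce = boto3.client("ce")
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-01-01", "End": "2020-02-01"},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
)
for day in response["ResultsByTime"]:
    cost = day["Total"]["UnblendedCost"]
    print(day["TimePeriod"]["Start"], cost["Amount"], cost["Unit"])
```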


Mark Turner
President
Entertainment Technologists Inc.
Los Angeles, California

At Entertainment Technologists, a boutique consulting company specializing in the M&E pipeline and the opportunities that change creates in that space, Mark Turner is looking at real-time engine technologies and how they will merge with film and television production workflows.

“This will happen; there are early signs of it now,” he says.  “There is real-time rendering using game engines in different parts of the pipeline, but we don’t have a whole pipeline yet.  This change has implications where it becomes mainstream for the way we make content and the way audiences view content since there’s the opportunity for content to adapt to the audience.”

Turner is convinced that “we can’t keep doing things as we always used to – it’s a fundamental issue of efficiency.  We can’t keep making more and more content because there are fundamental resource constraints, for example there isn’t an endless pool of VFX artists.  So the only opportunity is to make the process more efficient: to make more with the same resources or make those resources more efficient.”

He’s looking forward to harnessing real-time engines and GPUs to develop emerging pipelines.  “GPUs have just become powerful enough to do real-time ray tracing,” he notes.  “‘The Lion King’ used ray tracing for complex digital environments, but it took 24 hours per frame to render with CPUs.  The new Microsoft Xbox, with a ray-tracing GPU, will render at 60fps at a quality close to what those 24-hour-per-frame CPU-farm renders achieved.  Within five years we’ll be able to do cinema-level fidelity in a real-time engine – that’s the fun thing for me!”
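Turner’s comparison implies an enormous gap between offline and real-time render budgets; the arithmetic below puts it at roughly five million to one.

```python
# Offline render budget (24 hours per frame) vs. a real-time budget at 60fps.
OFFLINE_SECONDS_PER_FRAME = 24 * 60 * 60   # 86,400 s per frame on a CPU farm
REALTIME_SECONDS_PER_FRAME = 1 / 60        # ~0.0167 s per frame at 60fps

speedup = OFFLINE_SECONDS_PER_FRAME / REALTIME_SECONDS_PER_FRAME
print(f"Real-time budget: {REALTIME_SECONDS_PER_FRAME * 1000:.1f} ms per frame")
print(f"Budget ratio: {speedup:,.0f}x")  # about 5,184,000x
```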

Turner says simple animated television content is made in real-time engines now, and VFX scenes for motion pictures, such as “Rogue One: A Star Wars Story,” have been rendered with real-time engines.  “It’s happening in pieces, but the entire pipeline is a ways off,” he says.  “The good news is that technologies are routinely swapped out by companies anyway, so adopting real-time engines will just be the next step.”

