
2025-2026 HPA Awards Innovation & Technology Nominee Descriptions

 

INNOVATION IN PRE-PRODUCTION

Bria AI – GenAI Attribution Technology

The core feature of the presented technology is Bria’s patented GenAI Attribution Technology. This system traces the training data influencing each AI-generated output and automatically compensates contributors based on their real impact. This transforms generative AI from an ethical liability into a transparent, auditable, and economically sustainable foundation for creative industries.
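
Bria’s attribution models are proprietary, so the following is only a rough sketch of the underlying idea: given per-contributor influence scores estimated for a single generated output, split a usage fee pro rata. All names and numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    contributor_id: str
    influence: float  # model-estimated influence of this contributor's data on one output

def allocate_royalties(contributions: list[Contribution], usage_fee: float) -> dict[str, float]:
    """Split a per-generation fee across data contributors, pro rata to estimated influence."""
    total = sum(c.influence for c in contributions)
    if total <= 0:
        return {c.contributor_id: 0.0 for c in contributions}
    return {c.contributor_id: usage_fee * (c.influence / total) for c in contributions}

# Example: three hypothetical data partners, one $0.02 generation fee.
payouts = allocate_royalties(
    [Contribution("studio_a", 0.5), Contribution("archive_b", 0.3), Contribution("artist_c", 0.2)],
    usage_fee=0.02,
)
print(payouts)  # {'studio_a': 0.01, 'archive_b': 0.006, 'artist_c': 0.004}
```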

The impact is multifaceted: it resolves the ethical and legal barriers preventing the media industry from embracing AI at scale, offers full indemnification for customers, and creates a sustainable creator economy where data partners receive transparent, usage-based payments. This approach ensures fairness, consistency, and transparency, positioning adopters ahead of compliance requirements and fostering a creative renaissance built on partnership and fair compensation.

RivetAI, Inc. – RivetAI

RivetAI is transforming how studios manage pre-production by bringing intelligence, transparency, and precision to one of the most complex phases of filmmaking—so creatives can focus on the art. Built by filmmakers and engineers, RivetAI acts like a Copilot for production operations: it does not generate scripts or replace creative judgment. Instead, it automates the administrative heavy-lift: scheduling, budgeting, script-breakdown logistics, and coverage workflows, freeing teams to spend more time on story, performances, and craft.

The platform integrates data-driven insights directly into day-to-day decisions and links real-time information across the production ecosystem. Actor availability feeds shooting calendars with instant re-optimization when schedules change. Tax-incentive data across U.S. states and international jurisdictions is embedded into budgeting tools to guide smarter location planning. Dynamic scheduling and cost modeling allow rapid adaptation to creative or logistical shifts, reducing uncertainty and protecting both creative intent and financial resources. For enterprise clients, RivetAI supports on-premise and private-cloud deployments that meet stringent security and compliance requirements, ensuring studios maintain full ownership of their content and processes.

Under the hood, RivetAI’s technology is powered by its patent portfolio. RivetAI is the only company applying autoencoders to pre-production budgeting and scheduling, using these models to surface constraints and recommend cost- and time-efficient scenarios without compromising creative choices. By unifying these capabilities into a single, intelligent platform, RivetAI represents “AI for Good” in film: elevating human creativity, eliminating wasteful workflows, and setting a new standard for a faster, fairer, and more resilient path from script to shoot.
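
RivetAI’s patented models are not public; purely as an illustration of the general technique named above, the sketch below trains a small autoencoder (PyTorch) on typical budget line-item vectors and uses reconstruction error to flag items that break the learned pattern. The feature layout and values are invented for the example.

```python
import torch
import torch.nn as nn

# Toy feature vectors: one row per budget line item, e.g. [days, crew_size,
# equipment_cost, location_cost, overtime_hours], pre-normalized. Illustrative only.
items = torch.tensor([
    [0.20, 0.30, 0.10, 0.15, 0.05],
    [0.25, 0.28, 0.12, 0.18, 0.07],
    [0.22, 0.31, 0.09, 0.16, 0.06],
    [0.90, 0.10, 0.80, 0.05, 0.70],  # unusual combination
])

class BudgetAutoencoder(nn.Module):
    def __init__(self, n_features: int, bottleneck: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = BudgetAutoencoder(items.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Train on the "typical" items only.
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(items[:3]), items[:3])
    loss.backward()
    optimizer.step()

# High reconstruction error suggests a line item that deviates from the learned
# pattern and may deserve a closer look (a possible cost or scheduling constraint).
with torch.no_grad():
    errors = ((model(items) - items) ** 2).mean(dim=1)
print(errors)
```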

Yamdu – AI Script Breakdown and Management Add-On

Yamdu moves pre-production to the cloud and fosters interoperability by offering an API that supports MovieLabs’ OMC. Narrative elements are distinguished from production elements, which enables Yamdu to process complex correlations using unique identifiers for all objects.

From script to scheduling, filmmakers collect data. For most departments (e.g. costume, props, …) this means manual script-tagging across many script versions, which requires hours of reading, comparing, and tagging for dozens of crew members each time a new script is published.

Yamdu has already proven in recent case studies (e.g. for ARD Germany) that intelligent automation can reduce pre-production time by 30% or more. Now this process is enhanced by AI, with the goal of cutting prep time in half. Yamdu has integrated AI into key processes such as automated script breakdown across all departments, including customizable breakdown categories and prompts. Auto-generated script change reports provide a scene-by-scene overview of all changes between two script versions. Plus, there is an auto-generated synopsis feature.
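
Yamdu’s prompts and model integration are not disclosed here; the sketch below only illustrates the general pattern of a customizable, per-scene AI breakdown: build a prompt from user-defined categories, send it to a language model (the `call_llm` function is a hypothetical placeholder), and parse a structured result that crew members can then accept or edit.

```python
import json

BREAKDOWN_CATEGORIES = ["cast", "extras", "props", "costume", "vehicles", "sfx"]

def build_breakdown_prompt(scene_text: str, categories: list[str]) -> str:
    """Assemble a per-scene prompt asking for a structured breakdown."""
    return (
        "Break down the following screenplay scene. "
        f"Return JSON with one list per category: {', '.join(categories)}.\n\n"
        f"SCENE:\n{scene_text}"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for whatever model or API a production tool uses."""
    raise NotImplementedError

def breakdown_scene(scene_text: str, categories=BREAKDOWN_CATEGORIES) -> dict[str, list[str]]:
    raw = call_llm(build_breakdown_prompt(scene_text, categories))
    tags = json.loads(raw)
    # Keep only the requested categories; every AI suggestion remains editable by the crew.
    return {cat: tags.get(cat, []) for cat in categories}
```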

The real innovation is how Yamdu integrates AI into existing production workflows in a meaningful and respectful way: creative departments, UPMs, LPs, 1st ADs, etc. stay in full control while AI helps them eliminate redundant reading time and manual script breakdowns. Yamdu offers an intuitive interface that clearly distinguishes between AI-based suggestions and human-based decisions.

Yamdu’s AI tools were activated for paying Yamdu customers in summer 2025, and the feature has been publicly available since fall. The feedback so far: Yamdu AI is a game-changer!

INNOVATION IN PRODUCTION & CAPTURE

American Society of Cinematographers – Media Hash List

ASC Media Hash List (ASC MHL)

The ASC Media Hash List (ASC MHL) defines the industry’s first open, verifiable chain-of-custody for production media, ensuring every copy of every file is complete, correct, and accountable from camera through post to archive.

Built by the ASC Motion Imaging Technology Council (MITC), ASC MHL replaces ad-hoc checksum reports with a structured XML-based manifest and linked history that record each copy and verification event. This portable, append-only audit trail travels with the media, enabling any compliant tool to reproduce or extend another’s verification without ambiguity.
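
The authoritative schema and behavior live in the ASC MHL specification and its open-source reference implementation; purely as an illustration of the chain-of-custody idea, this simplified Python sketch hashes a media folder and writes a basic XML manifest (not the official ASC MHL format) that a later tool could re-verify and extend.

```python
import hashlib
import pathlib
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def hash_file(path: pathlib.Path, algorithm: str = "sha1") -> str:
    """Stream a file through the chosen hash (ASC MHL itself also supports other algorithms)."""
    h = hashlib.new(algorithm)
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(media_dir: pathlib.Path, out_file: pathlib.Path, algorithm: str = "sha1") -> None:
    """Write a simplified, illustrative hash manifest (NOT the official ASC MHL schema)."""
    root = ET.Element("hashlist", attrib={"created": datetime.now(timezone.utc).isoformat()})
    for path in sorted(p for p in media_dir.rglob("*") if p.is_file()):
        entry = ET.SubElement(root, "hash", attrib={"algorithm": algorithm})
        ET.SubElement(entry, "path").text = str(path.relative_to(media_dir))
        ET.SubElement(entry, "digest").text = hash_file(path, algorithm)
    ET.ElementTree(root).write(out_file, encoding="utf-8", xml_declaration=True)

# A later copy/verification tool would re-hash each file, compare digests, and append
# its own generation to the history rather than overwriting this one.
```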

Already deployed on hundreds of productions, ASC MHL has dramatically reduced time lost to data-integrity investigations. Major studios such as Netflix and HBO have adopted it as their checksum manifest standard, while leading vendors—including RED, ARRI, Pomfort, YoYotta, OffShoot, and Imagine Products—have implemented native support.

Its open-source reference implementation guarantees consistent behavior across systems and provides an accessible foundation for continued innovation. ASC MHL supports multiple hash algorithms, scales from on-set to cloud to archive, and preserves end-to-end traceability—even across hybrid workflows.

The result is a transparent, interoperable framework that transforms checksum verification from fragmented “plumbing” into a trusted production-infrastructure layer. By uniting camera manufacturers, software developers, and studios around a common integrity model, ASC MHL establishes a new global standard for trustworthy media movement—one that replaces uncertainty with proof, manual re-verification with automation, and isolated reports with an enduring, auditable history of every file’s journey.

Creamsource – Slyyd Lighting Control App

Slyyd: Reinventing Lighting Control for Modern Production

Slyyd is a transformative app-first, multi-manufacturer lighting control platform built for the realities of today’s fast-moving production environments. It reimagines decades-old workflows as a touch-focused, visual interface for cinematographers, content creators, and virtual production teams. Launched in March 2025, Slyyd transforms lighting control from command-line complexity into visual, touch-first creativity.

At its core is a device-independent color engine that translates creative intent into precise, consistent chromatic output across a wide range of fixture types (RGB, RGBW, RGBACL, and more). Users can select colors visually, by CIE xy coordinates, or from gel libraries, and Slyyd automatically matches those looks across mixed fixtures with accuracy and speed.
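
Slyyd’s color engine is proprietary; the sketch below shows only the kind of colorimetry such an engine builds on, converting a CIE xy target plus luminance into clamped linear RGB drive values using the standard sRGB/Rec. 709 matrix. A real engine would substitute each fixture’s measured primaries and emitter set.

```python
def xy_to_XYZ(x: float, y: float, Y: float = 1.0) -> tuple[float, float, float]:
    """CIE xyY chromaticity plus luminance to XYZ tristimulus values."""
    if y == 0:
        return 0.0, 0.0, 0.0
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z

def XYZ_to_linear_rgb(X: float, Y: float, Z: float) -> tuple[float, float, float]:
    """XYZ to linear RGB using sRGB/Rec. 709 primaries (D65). A device-independent
    engine would use each fixture's measured primaries instead of one fixed matrix."""
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return tuple(max(0.0, min(1.0, c)) for c in (r, g, b))

# Example: a warm white around CIE (0.44, 0.40) at 80% intensity (values illustrative).
print(XYZ_to_linear_rgb(*xy_to_XYZ(0.44, 0.40, 0.8)))
```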

The LookBook replaces cue numbers with visual lighting scenes, while the Scratchpad enables real-time experimentation without syntax or programming overhead. Projects can be shared instantly via AirDrop, iCloud, or Dropbox, enabling seamless collaboration on set. Built entirely on Apple’s native SDKs, Slyyd offers low-latency control, secure local data, and frictionless updates, eliminating costly hardware refresh cycles.

In under a year, Slyyd has been adopted in over 60 countries, empowering crews to light faster, smarter, and more creatively. By removing friction between imagination and execution, Slyyd represents a fundamental shift in how professional lighting control is conceived, delivered, and shared — a modern tool for the mobile, connected, creative future of production.

Lighting control has entered the app age. Slyyd defined it.

Méduse Inc. – Safe Guns Phase Synced Flash-Gun System

Safe Guns are specialized non-firing prop guns that mimic intense muzzle flashes with embedded, precision-timed LED strobes, triggered by the actors wielding them.

The flashes overcome the “tearing/banding” artifacts that traditional flashes produce on rolling shutters. All the guns are wirelessly connected to a control box that adjusts the flash timings by milliseconds, phase-aligning them with the rolling shutters of multiple cinema cameras and mimicking timecode-synced genlock.
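
Méduse’s control electronics are proprietary; as an illustration of the timing arithmetic involved, the sketch below computes how long a controller would hold a flash so that it fires at a fixed phase of the camera’s frame cycle. The frame rate, offsets, and timestamps are made up.

```python
def flash_delay_ms(trigger_ms: float, frame_start_ms: float, fps: float,
                   target_offset_ms: float = 0.0) -> float:
    """Delay (ms) to add so a flash triggered at an arbitrary moment fires at a
    fixed phase within the camera's frame cycle, avoiding partial-frame banding."""
    period = 1000.0 / fps                               # e.g. ~41.67 ms at 24 fps
    elapsed = (trigger_ms - frame_start_ms) % period    # trigger's phase within the current frame
    wait = (target_offset_ms - elapsed) % period        # time until the desired phase comes around again
    return wait

# An actor pulls the trigger 10 ms into a 24 fps frame; the controller holds the
# flash ~31.7 ms so it lands at the start of the next frame cycle.
print(round(flash_delay_ms(trigger_ms=1010.0, frame_start_ms=1000.0, fps=24.0), 2))
```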

Additional features include support for 8 channels, haptic recoil action, rapid automatic fire, digital sound effects sent to the sound department, and slaved triggering to off-camera strobe lighting (also phase synced).

First used on the HBO series THE PENGUIN, they allowed us to shoot dozens of action scenes filled with simulated muzzle flashes with no tearing artifacts, even across multiple cameras. Directors were delighted by the realistic reactions of the actors flinching from the intense lighting. DOPs loved the added visual cues, not having to imagine what the scene would look like after VFX. ADs were able to instantly inspect the devices for safety. Props, being responsible for their handling and distribution, integrated them easily into their workflows.

In post, editors were able to cut creatively with visible flashes, and VFX teams were enthralled by the robust footage, now able to comp in muzzle flashes on plates already filled with rich interactive light, achieving high-quality results with substantial reductions in cost and time.

The system is currently being used on other productions, such as Marvel’s DAREDEVIL.

INNOVATION IN VFX, VIRTUAL PRODUCTION & ANIMATION

Foundry – Nuke Stage

Nuke Stage redefines virtual production by unifying in-camera VFX with post-production compositing. It enables VFX artists to drive LED volumes and real-time environments using the same tools and color pipelines they rely on in post, eliminating translation errors and duplicated workflows across departments.

Unlike traditional virtual production systems adapted from gaming or broadcast, Nuke Stage was engineered specifically for filmmaking. It leverages established industry standards (OpenUSD, OpenEXR, and OpenColorIO) to maintain full data fidelity from pre-production through final pixel. Its node-based compositing environment provides granular control of 2.5D image-based environments, depth, and parallax in real time, allowing artists to light, composite, and adjust shots interactively on set.

The system records every frame of camera and scene metadata through the Metadata Vault, ensuring accurate reconstruction of shots downstream and seamless iteration between set and post. This eliminates one of the largest inefficiencies in virtual production: the disconnect between what is captured on stage and what is delivered in VFX.
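
The Metadata Vault’s actual format is not described here; as an assumption-laden illustration, the sketch below logs per-frame camera and scene metadata as append-only JSON Lines, the kind of sidecar record that would let a shot be reconstructed downstream. Field names and values are hypothetical.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FrameRecord:
    """Illustrative per-frame metadata; field names are assumptions, not Nuke Stage's schema."""
    frame: int
    timecode: str
    camera_transform: list[float]   # 4x4 matrix, row-major
    focal_length_mm: float
    scene_version: str

def append_record(vault_path: str, record: FrameRecord) -> None:
    # Append-style JSON Lines log: one self-contained record per captured frame.
    with open(vault_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_record("shot_042_vault.jsonl", FrameRecord(
    frame=1001,
    timecode="10:21:04:12",
    camera_transform=[1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
    focal_length_mm=35.0,
    scene_version="env_city_v07",
))
```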

By merging real-time and offline workflows into a single creative pipeline, Nuke Stage reduces setup complexity, increases predictability, and lowers cost barriers for production. It gives VFX teams direct creative control on set and enables filmmakers to visualize, capture, and deliver final-quality imagery within a unified, standards-based environment.

Nerfstudio and Industrial Light & Magic – Nerfstudio

Nerfstudio is an open-source, modular framework for developing and working with Neural Radiance Fields (NeRFs) and Gaussian Splats (GS). It provides a streamlined, end-to-end workflow for turning real-world images or videos into 3D scene reconstructions. The framework includes tools for data ingestion, real-time visualization in a web viewer, and exporting results as videos, point clouds, or meshes. Its design focuses on flexibility and ease of use, making it simple for researchers, artists, and developers to prototype, evaluate, and integrate radiance field-based methods into their projects.

Our contribution to Nerfstudio includes EXR 16/32-bit support for training and rendering, high-dynamic-range calculation, and world-scale conversion from VFX scale in feet to NeRF scale. Additionally, we convert camera data, including lens distortion, from Nuke to NeRF transforms.json files and vice versa. Thanks to this work, Nerfstudio can now be integrated into any VFX pipeline for a better user experience.
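
As a minimal sketch of the kind of conversion described above (not the actual ILM tooling), the snippet below scales a camera-to-world matrix from feet into an assumed NeRF scene scale and writes a Nerfstudio-style transforms.json; the intrinsics, distortion terms, and scale factor are illustrative.

```python
import json

FEET_TO_SCENE = 1.0 / 100.0   # assumed scale: map a ~100 ft set into roughly unit NeRF extent

def camera_to_frame(file_path: str, matrix_ft: list[list[float]]) -> dict:
    """Convert a camera-to-world matrix whose translation is in feet into NeRF scene units."""
    m = [row[:] for row in matrix_ft]
    for row in m[:3]:
        row[3] *= FEET_TO_SCENE          # scale only the translation column
    return {"file_path": file_path, "transform_matrix": m}

transforms = {
    # Intrinsics in pixels (values here are illustrative).
    "fl_x": 2200.0, "fl_y": 2200.0, "cx": 960.0, "cy": 540.0, "w": 1920, "h": 1080,
    "k1": -0.012, "k2": 0.001, "p1": 0.0, "p2": 0.0,   # lens distortion terms
    "frames": [
        camera_to_frame("images/plate_0001.exr",
                        [[1, 0, 0, 12.0], [0, 1, 0, 5.5], [0, 0, 1, 30.0], [0, 0, 0, 1]]),
    ],
}

with open("transforms.json", "w") as f:
    json.dump(transforms, f, indent=2)
```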

Volinga and XGRIDS – Virtual Production Pipeline

Volinga and XGRIDS have joined forces to streamline the process of bringing real-world environments into Unreal Engine through an advanced, production-ready 3D Gaussian Splatting (3DGS) workflow. The partnership integrates XGRIDS’ SLAM LiDAR capture technology, known for its accuracy, to-scale spatial precision, and ease of use, with the Volinga Plugin for Unreal Engine, which enables direct import, real-time rendering, and advanced creative control of 3DGS assets.

This end-to-end pipeline dramatically reduces the time, cost, and complexity traditionally associated with 3D environment creation. With XGRIDS hardware, users can capture photoreal environments in minutes—no specialized training required. Those 3DGS outputs can then be imported directly into Unreal Engine via Volinga’s plugin, complete with support for re-lighting, full ACES color management, and seamless integration with nDisplay, VCam, Depth of Field, and other virtual production tools.

The result is a fast, flexible, and highly accurate workflow for professionals across virtual production, broadcast, VFX, previs, and immersive experiences. By combining intuitive capture with powerful rendering and integration tools, the Volinga × XGRIDS pipeline bridges the gap between reality capture and real-time storytelling—empowering creators to deliver photoreal environments faster than ever before.

This collaboration marks a major step forward for studios adopting 3D Gaussian Splatting in professional pipelines, transforming how real-world data becomes shoot-ready digital environments in Unreal Engine.

INNOVATION IN POST-PRODUCTION

Adobe – Generative Extend in Adobe Premiere

Generative Extend in Adobe Premiere Pro is the first generative AI feature built directly into a non-linear editor (NLE), designed to solve one of editing’s most common pain points: needing just a few more frames or extra audio to make a cut feel natural. Powered by Adobe Firefly, Generative Extend lets editors seamlessly add new frames or ambient audio to the beginning or end of a clip without leaving the timeline. Instead of duplicating frames, freezing shots, or relying on awkward workarounds that take them out of their creative flow, editors can now simply drag a clip’s edge to generate realistic motion and sound that blend perfectly with the surrounding content.

The generated portion is clearly labeled and supports up to two seconds of visual extension and ten seconds of audio. Generative Extend uses Firefly’s responsibly trained models with embedded Content Credentials to ensure transparency and authenticity. Because the generation happens in the background, editors can keep working without interruption.

This innovation marks a breakthrough moment for professional video editing: bringing generative AI natively into the NLE for the first time and eliminating a long-standing creative friction point, so editors can stay focused on pacing, emotion, and storytelling instead of technical limitations.

Flawless – TrueSync

TrueSync is Flawless’ proprietary facial performance editing system that enables visual dubbing, the ability to translate on-screen dialogue into new languages while maintaining the actor’s original performance, emotion, and timing. The technology combines three core breakthroughs: neural rendering, 3D performance mapping, and audio-driven facial activation (ADFA). Together, these components generate photorealistic, frame-accurate lip synchronization across languages without retraining, reshoots, or 3D scanning.

TrueSync also introduces a dialogue removal process that disentangles emotion from speech, allowing writers and performers to adapt faithfully to on-screen performances without visual constraints. This approach gives creative teams the freedom to write, perform, and mix with greater authenticity while maintaining the integrity of the filmmaker’s intent.

The system operates within ACES-compliant, 16-bit, lossless color pipelines at up to 8K resolution, delivering frame-accurate renders under any lighting or environmental condition. It integrates directly into existing post-production workflows, enabling scalability for global localization. The system has redefined post-production localization pipelines by allowing editorial, sound, and finishing teams to deliver multilingual versions from a single master without creative compromise.

TrueSync powered Watch the Skies (originally UFO Sweden), the first visually dubbed feature to premiere theatrically in the United States and later on Prime Video. With its patented pipeline and collaboration across major studios and streaming partners, TrueSync defines a new era of post-production localization that combines scientific precision with artistic integrity.

Storj – Production Cloud

The Storj Production Cloud redefines post-production by merging globally distributed object storage, real-time file access, and on-demand compute into a single, open platform for the media and entertainment (M&E) industry.

It reimagines media infrastructure as a unified, cloud-native mesh purpose-built for distributed workflows. It enables studios, post facilities, and independent creators to collaborate globally with local-like, production-ready responsiveness, delivering consistently superior performance at a fraction of the cost, without the complexity, lock-in, or unpredictable billing of traditional hyperscalers.

By combining globally consistent performance, high-availability storage, metadata-aware object storage access, and proximity-optimized compute, Storj Production Cloud transforms fragmented workflows into scalable, low-latency pipelines.

Teams can ingest, review, transcode, edit, and collaborate on content without operational overhead, accelerating time to create, minimizing egress, and enabling real-time iteration from anywhere. Designed for 24/7 production environments, the platform offers frictionless scalability, intuitive operation, and predictable economics that eliminate complexity while scaling effortlessly with creative demand.
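
Assuming the platform is reached through an S3-compatible endpoint (as Storj’s object storage has historically offered), a minimal boto3 sketch of ingesting a file from set and pulling it back at a remote facility might look like the following; the endpoint, bucket, object keys, and credentials are placeholders.

```python
import boto3

# Placeholder endpoint and credentials; actual configuration depends on the deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.example-endpoint.invalid",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Ingest a camera original from set...
s3.upload_file("A001_C002_0715.mxf", "dailies", "showname/day01/A001_C002_0715.mxf")

# ...and pull it down at a remote post facility for conform or review.
s3.download_file("dailies", "showname/day01/A001_C002_0715.mxf", "A001_C002_0715.mxf")
```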

INNOVATION IN DISTRIBUTION & AUDIENCE EXPERIENCE

Cineverse – CINESEARCH

CineSearch: The AI-Powered Search & Discovery Guide for Film & Television

Audiences today spend five times more time searching for something to watch than watching. CineSearch changes that.

Built on advanced AI and our proprietary data source CineCore, CineSearch redefines how audiences discover content — delivering intuitive, explainable, and personalized recommendations that go far beyond simple keywords.

The CineCore Advantage
At the heart of CineSearch is CineCore, a proprietary, domain-specific data set encompassing over 75,000 titles with enriched metadata across 500 English-speaking streaming services. CineCore fuses traditional metadata with enriched contextual intelligence — including audience behavior, reviews, box-office performance, and AI-generated thematic insights. The result is a truly multidimensional understanding of content: theme, mood, emotion, narrative structure, and tone.

Rich Contextual Intelligence

CineSearch’s knowledge graph provides unparalleled depth:

  • Historical TV ratings and box-office data
  • Major awards and festival honors
  • Dynamic, real-time ratings and performance trends
  • Hundreds of curated “Best Of” and genre lists

This contextual layer allows users to explore content by creative DNA — not just cast names or genres.

Explainable AI (Q Points)
Each recommendation includes an easy-to-understand “Why this title?” explanation.
This transparent approach builds user trust by connecting every suggestion to measurable factors such as story structure, audience sentiment, or stylistic similarity.
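
CineSearch’s models and knowledge graph are proprietary; the sketch below only illustrates the general pattern behind an explainable recommendation: score a title against a viewer profile across a few labeled dimensions and surface the strongest contributors as the “Why this title?” text. All factors and weights are invented.

```python
def explain_recommendation(query_profile: dict[str, float],
                           title_profile: dict[str, float],
                           top_k: int = 2) -> tuple[float, str]:
    """Score a title against a viewer/query profile and name the strongest matching factors."""
    contributions = {
        factor: query_profile.get(factor, 0.0) * title_profile.get(factor, 0.0)
        for factor in title_profile
    }
    score = sum(contributions.values())
    top = sorted(contributions, key=contributions.get, reverse=True)[:top_k]
    return score, "Why this title? Strong match on " + " and ".join(top) + "."

# Invented viewer and title profiles for illustration.
query = {"slow-burn mystery": 0.9, "melancholic tone": 0.7, "ensemble cast": 0.2}
title = {"slow-burn mystery": 0.8, "melancholic tone": 0.9, "coastal setting": 0.5}
print(explain_recommendation(query, title))
```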

Truly Personalized Discovery
CineSearch recognizes the individuality of every viewer.

SyncWords Inc – LiveCore + Kobe Muxer

The Kobe Muxer is a Java-based MPEG-TS multiplexer that embeds GenAI outputs—captions, translations, and dubbed audio—directly into live transport streams for real-time delivery. This eliminates secondary ingest and maintains frame-accurate synchronization across audio, text, and video.
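
The Kobe Muxer itself is Java-based and its internals are not detailed here; purely for illustration, the Python snippet below builds a single 188-byte MPEG-TS packet carrying a private-data payload on its own PID, the kind of packet a muxer interleaves with audio and video in a live transport stream.

```python
TS_PACKET_SIZE = 188

def build_ts_packet(pid: int, payload: bytes, continuity: int, pusi: bool = True) -> bytes:
    """Build one 188-byte MPEG-TS packet (payload only, no adaptation field)."""
    if len(payload) > TS_PACKET_SIZE - 4:
        raise ValueError("payload too large for a single packet")
    header = bytes([
        0x47,                                              # sync byte
        (0x40 if pusi else 0x00) | ((pid >> 8) & 0x1F),    # PUSI flag + top 5 bits of PID
        pid & 0xFF,                                        # low 8 bits of PID
        0x10 | (continuity & 0x0F),                        # payload present + continuity counter
    ])
    return header + payload + b"\xff" * (TS_PACKET_SIZE - 4 - len(payload))

# e.g. a translated caption blob carried on a private PID, interleaved with A/V packets
packet = build_ts_packet(pid=0x0100, payload=b"ES-419 caption data...", continuity=7)
assert len(packet) == TS_PACKET_SIZE and packet[0] == 0x47
```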

System Performance

  • End-to-End Latency: Reduced to 30 seconds or less, from live speech to translated output.
  • Operational Stability: Verified 24/7 operation over a year of continuous broadcast streaming with >99.99% uptime.
  • Cross-Platform Compatibility: Fully interoperable with AWS Elemental MediaLive, MediaConnect, MediaPackage, and major CDN/OTT players.
  • Multilingual Synchronization: Accurate timing alignment of subtitles, dubbed audio, and sign-language layers across all languages.
  • Optimized Readability: Automated segmentation of text for best readability per language and caption style (pop-on/roll-up).
  • Adaptive Audio Ingest: Voice-separation models isolate commentary in noisy sports venues, enhancing dubbing and translation quality.
  • Elastic Scalability: Kubernetes-based deployment launches hundreds of live channels within minutes, with load balancing and automatic fail-over.

Technical Excellence Summary
This architecture merges broadcast engineering precision with GenAI flexibility, achieving sub-minute latency, standards compliance, and cloud scalability. It proves that multilingual accessibility can be delivered live, reliably, and at global scale.

V-Nova Studios – V-Nova PresenZ

V-Nova PresenZ is a groundbreaking volumetric technology transforming the future of cinematic virtual reality. Its Lumiere Award-winning 6 Degrees of Freedom (6DoF) pre-rendered format enables audiences to step inside and live within photorealistic, Hollywood-grade worlds, delivering true cinematic immersion without motion sickness.

Unlike traditional 3DoF VR formats or real-time 3D engines, V-Nova PresenZ achieves pre-rendered volumetric experiences at over 90fps, combining the fidelity of offline ray-traced visuals with the freedom of natural movement. Seamlessly integrating with existing CG and VFX pipelines, it allows creators to produce, or even remaster, volumetric films using familiar tools—removing the barriers between traditional filmmaking and immersive XR storytelling.

V-Nova PresenZ’s impact goes far beyond technology, reshaping how audiences experience stories. With Sharkarma, Weightless, and Construct VR, its power spans entertainment, music, and education. Already available in PCVR via the ImmersiX App on SteamVR, PresenZ is about to reach a whole new audience, a global movie-lovers market, through pixel streaming — coming soon to major devices like Apple Vision Pro and Meta headsets, directly from their stores, no PC required.

Praised by industry leaders such as Vicki Dobbs Beck (Lucasfilm/ILM Immersive) and validated by media outlets like Road to VR, V-Nova PresenZ stands as a credible, production-ready foundation for the future of immersive cinema. By bridging storytelling and story-living, it empowers creators and viewers alike — establishing a new creative and commercial paradigm for virtual reality entertainment.

