European Research Leaders Join Forces to Define the Next Generation of Video Coding
Europe is taking an early and coordinated step toward defining the next major global video coding standard. Nokia, Ericsson, and Fraunhofer Heinrich Hertz Institute (HHI) have formed a research partnership aimed at addressing the growing technical challenges of video compression — a domain that underpins nearly all digital media and will become even more critical with the rise of immersive communication, cloud gaming, and future 6G-enabled applications.
Fraunhofer HHI, which has played a central role in the development of the H.26x family of international video coding standards, sees this collaboration as both timely and necessary. Today’s compression technologies are approaching the limits of what purely incremental improvements can deliver. Emerging use cases, combined with growing device diversity and bandwidth demands, require a new architectural leap in video coding.
To understand how this collaboration came about and what it aims to achieve, MoveTheNeedle.news spoke with Dr. Detlev Marpe, Head of the Department of Video Communication and Applications and Head of Patents & Licensing at Fraunhofer HHI.
Building on Europe’s Longstanding Leadership in Video Compression
The partnership between Ericsson, Nokia, and Fraunhofer HHI was established around a shared commitment to open, collaborative standardization — a hallmark of successful video codec development and a consistent strength of Europe’s research ecosystem.
“We all share the same vision of creating standards in an open and collaborative way,” Dr. Marpe explains. “This has been proven to lead to very successful video coding standards. International standards like H.264/AVC and H.265/HEVC already make up the backbone of digital video today.”
The organizations involved have each contributed significantly to the H.26x lineage deployed in virtually every digital media device. As Dr. Marpe notes:
“All three parties have a renowned track record of having contributed significantly to the H.26x video coding technology that is currently being used in every digital media device. Joining these forces is a unique opportunity to tackle today’s challenges of video compression.”
By coordinating early — well ahead of the formal standardization cycle — the partners aim to accelerate research into tools, architectures, and complexity-performance trade-offs that could shape the next global standard.
“Nowadays, it is increasingly challenging to advance video compression beyond current state-of-the-art standards like H.266/VVC,” Dr. Marpe says. “This is why we decided to start a collaborative initiative between leading European players in this field at a very early stage.”
Why a New Compression Standard Is Needed
H.266/VVC (Versatile Video Coding), finalized in 2020, offers major efficiency gains over previous generations. Yet global demand for video — especially high-resolution streaming, real-time communication, and emerging interactive media — continues to outpace available bandwidth. The result: a clear need for a next-generation architecture with significantly better performance and manageable complexity.
A recent joint industry workshop held by ITU-T and ISO/IEC highlighted one requirement in particular: reducing encoder complexity.
“One key requirement expressed was manageable encoder complexity,” Dr. Marpe explains. “This means that a new standard should still provide notable compression gains in resource-constrained environments, for example in real-time communications and cloud gaming on mobile devices and networks.”
Early research from the three partners suggests that meaningful improvements are already within reach.
“For instance, at the same encoder runtime of the H.266/VVC reference model, we already showed over 20% bit-rate reduction over H.266/VVC,” Dr. Marpe says. “In addition, we are exploiting more and more data-driven technologies by, e.g., incorporating neural networks.”
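As a rough illustration of the comparison Dr. Marpe describes, the bit-rate reduction is measured against the reference codec at equal quality and equal encoder runtime. The sketch below uses hypothetical numbers, not figures from the project:

```python
# Illustrative sketch (hypothetical numbers): comparing a candidate codec
# against the H.266/VVC reference model at a matched encoder runtime.
def bitrate_reduction(ref_kbps: float, test_kbps: float) -> float:
    """Percent bit-rate saved by the test codec relative to the reference,
    assuming both achieve the same visual quality."""
    return 100.0 * (ref_kbps - test_kbps) / ref_kbps

# Hypothetical rate points at equal quality and equal encoder runtime:
vvc_kbps = 5000.0        # H.266/VVC reference model
candidate_kbps = 3900.0  # next-generation candidate

saving = bitrate_reduction(vvc_kbps, candidate_kbps)
print(f"Bit-rate reduction: {saving:.1f}%")  # Bit-rate reduction: 22.0%
```

In practice, such comparisons average savings over many sequences and rate points (e.g., as Bjøntegaard-delta rate), but the per-point calculation reduces to this ratio.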
These data-driven methods — including neural prediction models and advanced post-processing — are expected to play a far larger role in the next-generation video standard.
Preparing for Immersive and Volumetric Media
Future codecs must support not only traditional video but also a rapidly expanding range of immersive media formats: 360° video, augmented and virtual reality, and new volumetric representations.
“Immersive media has already influenced previous generations of video coding standards,” Dr. Marpe says. “In H.265/HEVC, we introduced tile-based streaming for 360-degree video, while H.266/VVC supports subpictures and several tools for low-latency, efficient processing of VR/AR video.”
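The idea behind tile-based 360° streaming is that the frame is split into independently decodable tiles, and only those overlapping the viewer's current field of view are fetched at high quality. A simplified sketch of the tile-selection step (the grid layout and API are illustrative, not from any standard):

```python
# Illustrative sketch: viewport-dependent tile selection for tile-based
# 360-degree streaming. Only tile columns overlapping the viewer's
# horizontal field of view need to be streamed at high quality.
def visible_tiles(n_cols: int, yaw_deg: float, fov_deg: float) -> set[int]:
    """Return column indices of tiles overlapping a horizontal viewport.

    yaw_deg is the viewing direction in [0, 360); each tile column spans
    360 / n_cols degrees of azimuth.
    """
    tile_span = 360.0 / n_cols
    half = fov_deg / 2.0
    # Viewport edges on the azimuth circle (may wrap around 0 degrees).
    lo, hi = (yaw_deg - half) % 360.0, (yaw_deg + half) % 360.0
    cols = set()
    for c in range(n_cols):
        start, end = c * tile_span, (c + 1) * tile_span
        if lo <= hi:
            overlap = start < hi and end > lo
        else:  # viewport wraps around 0 degrees
            overlap = start < hi or end > lo
        if overlap:
            cols.add(c)
    return cols

# 8 tile columns (45 degrees each), viewer at yaw 0 with a 90-degree FOV:
print(sorted(visible_tiles(8, 0.0, 90.0)))  # [0, 7] (columns at the wrap)
```

Real systems also split vertically, prefetch a low-quality fallback for the full sphere, and switch tiles as the viewer turns, but the selection logic is essentially this overlap test.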
Fraunhofer HHI researchers are now exploring how emerging rendering techniques — including Gaussian splatting — can be integrated into next-generation compression workflows:
“Nowadays, with emerging 3D rendering techniques like Gaussian splatting, we are working to incorporate these promising algorithms into the video compression pipeline.”
This evolution reflects a broader industry shift: video codecs increasingly need to interact with 3D graphics engines and hybrid visual representations, not just compress camera-captured frames.
The 6G Transition: Scaling Video Across Diverse Devices
While 5G introduced bandwidth and latency suitable for cloud gaming, XR, and remote operations, 6G networks are expected to scale these capabilities dramatically — while connecting a far broader range of devices.
“6G is expected to deliver these capabilities at massive scale,” Dr. Marpe says. “At the same time, 6G will dramatically increase the diversity and number of connected devices – from high-end smartphones to lightweight XR wearables and low-cost mobile sensors.”
This diversity increases the need for an adaptable codec that performs efficiently across a spectrum of hardware, including constrained devices.
“Since digital video will remain the dominant driver of bandwidth consumption, the next generation of video compression must support this heterogeneous device landscape and its diverse use cases by striking a balance between high compression efficiency, versatility, support for ultra-low latency, and hardware-friendly design,” Dr. Marpe explains.
Energy efficiency, real-time performance, and scalability will therefore be core design requirements.
AI as a Core Component, Not an Experiment
AI is set to become a foundational element of next-generation video codecs. Machine learning is already present in several components of H.266/VVC, but its influence is expected to expand significantly.
“Even for the development of the current H.266/VVC standard, we already made quite extensive use of machine learning,” Dr. Marpe notes.
Future standards may incorporate neural filters directly into the decoding pipeline:
“For the next generation, enhancing compressed video pictures with neural network-based filters is a promising technique which we also employed in one of our joint submissions,” Dr. Marpe says.
As modern mobile devices increasingly include NPUs (Neural Processing Units), these neural tools can run efficiently:
“Whenever a device is equipped with a neural processing unit (NPU), such a filter can be executed on each picture without increasing the CPU load for software implementations or added circuitry for hardware implementations.”
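Conceptually, such a filter is one extra processing stage applied to every decoded picture before display. The sketch below stands in for that stage with a fixed 3x3 averaging kernel; a real neural post-filter would be a learned CNN running on the device's NPU:

```python
# Minimal sketch of a decoder-side enhancement filter applied to each
# decoded picture. A fixed 3x3 smoothing kernel stands in here for a
# learned neural post-filter executed on an NPU.
def post_filter(picture: list[list[float]]) -> list[list[float]]:
    h, w = len(picture), len(picture[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            # Average over the 3x3 neighbourhood, clipped at picture borders.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += picture[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out

decoded = [[0.0, 100.0], [100.0, 0.0]]  # toy 2x2 "decoded picture"
print(post_filter(decoded))  # every window covers all 4 samples: all 50.0
```

The key architectural point from the quote is where this stage runs: on an NPU it adds neither CPU load in software decoders nor extra circuitry in hardware ones.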
This marks a deeper convergence between video coding and on-device AI acceleration.
Target Applications: Cloud Gaming as a Primary Driver
While improved compression will benefit mobile video, XR, and industrial applications, cloud gaming is emerging as one of the most influential drivers for next-generation video standards due to its high bandwidth demands and ultra-low latency requirements.
“Mobile video, XR and industrial use cases always benefit from greater compression,” Dr. Marpe says. “However, cloud gaming is one area that is a key focus of a next-generation standard.”
He adds:
“Streaming of gaming video is ever increasing, accounting for a huge chunk of internet traffic, also on mobile devices and networks. This requires ultra-low latency processing and transmission, but the game engines that create this type of video content can also provide knowledge about the video which is typically not available for camera-captured content. This knowledge can be exploited for XR applications as well.”
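One concrete form this engine-supplied knowledge can take is motion information: the engine already knows how objects move between frames, so an encoder can refine an engine-supplied motion vector instead of searching from scratch. A simplified 1-D sketch (the API and numbers are hypothetical):

```python
# Illustrative sketch (hypothetical, simplified to 1-D): an encoder's
# motion search, optionally seeded with a motion-vector hint from the
# game engine so only a small refinement window must be searched.
def motion_search(cur, ref, engine_mv=None, search_range=16):
    """Return the displacement minimising the sum of absolute differences
    between a current 'block' and a shifted reference."""
    def sad(mv):
        return sum(abs(c - ref[(i + mv) % len(ref)]) for i, c in enumerate(cur))

    if engine_mv is not None:
        candidates = range(engine_mv - 1, engine_mv + 2)  # refine hint only
    else:
        candidates = range(-search_range, search_range + 1)  # full search
    return min(candidates, key=sad)

ref = list(range(32))
cur = [ref[(i + 5) % 32] for i in range(32)]  # reference shifted by 5
print(motion_search(cur, ref))                # full search over 33 candidates
print(motion_search(cur, ref, engine_mv=5))   # only 3 candidates with a hint
```

Both calls find the same displacement, but the hinted search evaluates 3 candidates instead of 33, which is the kind of complexity saving that matters for real-time, ultra-low-latency encoding.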
This synergy between real-time rendering and compression represents one of the clearest opportunities for efficiency gains.
Timelines: From Research to Standardization
While preliminary research from the partnership is promising, the multi-year standardization process is only beginning.
“Our joint proof-of-concept submissions have shown increased compression efficiency over H.266/VVC at competitive complexity points,” Dr. Marpe says.
A formal phase of international standardization is approaching:
“The agreed timeline of the standardization committee is to start the collaborative standardization phase after launching a Call for Proposals in July 2026 and evaluating the responses in January 2027. Typically, it takes about 3 years to develop and finalize the standard.”
Given that timeline, a finalized standard could arrive around 2030, with commercial implementations following soon after.
“Until then,” Dr. Marpe notes, “H.266/VVC is there to satisfy the need for increased compression in multiple application scenarios.”
So while the next generation of video compression is still several years away, the research that will define it has already begun.