<?xml version='1.0' encoding='UTF-8'?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:georss="http://www.georss.org/georss" xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr="http://purl.org/syndication/thread/1.0" version="2.0"><channel><atom:id>tag:blogger.com,1999:blog-7461761178036895729</atom:id><lastBuildDate>Sat, 04 Apr 2026 09:12:05 +0000</lastBuildDate><category>MPEG</category><category>cdgathena</category><category>dash</category><category>athena</category><category>call for papers</category><category>cdg</category><category>mpeg-dash</category><category>multimedia</category><category>conference</category><category>qomex</category><category>workshop</category><category>qoe</category><category>ACM Multimedia</category><category>mmt</category><category>hevc</category><category>mmsys</category><category>CfP</category><category>bitmovin</category><category>mpeg-v</category><category>p2p</category><category>computingnow</category><category>jobs</category><category>streaming</category><category>p2p-next</category><category>press release</category><category>qualinet</category><category>w3c</category><category>MPEG-21</category><category>Universal Multimedia Access</category><category>dash-if</category><category>qos</category><category>3d video coding</category><category>cmaf</category><category>http streaming of mpeg media</category><category>hvc</category><category>icme</category><category>MXM</category><category>Multimedia Communication</category><category>RoSE</category><category>3d audio</category><category>advanced iptv terminal</category><category>immersive</category><category>svc</category><category>vvc</category><category>adaptive media streaming</category><category>high-performance video coding</category><category>ietf</category><category>omnidirectional video streaming</category><category>special 
session</category><category>tutorial</category><category>Adaptation</category><category>award</category><category>bitdash</category><category>mpeg media transport</category><category>pcc</category><category>video coding</category><category>OMAF</category><category>bitcodin</category><category>future video coding</category><category>high efficiency video coding</category><category>mpeg-i</category><category>packet video workshop</category><category>phd</category><category>video streaming</category><category>Multimedia Grand Challenge</category><category>ait</category><category>av1</category><category>evc</category><category>future internet</category><category>interactive</category><category>isobmff</category><category>mpeg-g</category><category>standardization</category><category>stcsn</category><category>wiamis</category><category>Leonardo Chiariglione</category><category>alpen-adria-universität</category><category>aomedia</category><category>avc</category><category>cdvs</category><category>communication</category><category>interview</category><category>journal</category><category>m2ts</category><category>mpeg-vr</category><category>multimedia metadata</category><category>open source</category><category>sensory information</category><category>special issue</category><category>World Standards Day</category><category>acm</category><category>api</category><category>architecture</category><category>call for contributions</category><category>deadline extension</category><category>emmy</category><category>eumob2008</category><category>green MPEG</category><category>icip</category><category>image analysis</category><category>iptv</category><category>iso base media file format</category><category>middleware</category><category>modern media transport</category><category>mpeg-u</category><category>multimedia management</category><category>nbmp</category><category>quality of experience</category><category>royalty free mpeg 
codec</category><category>semantics</category><category>symposium</category><category>user interface</category><category>video</category><category>video coding standardization</category><category>web</category><category>widget</category><category>3dac</category><category>3dv</category><category>ICT ALICANTE</category><category>ITU-T</category><category>aau</category><category>applications</category><category>araf</category><category>bifs</category><category>blog</category><category>call for evidence</category><category>call for proposals</category><category>cdva</category><category>compact descriptors for visual search</category><category>cross-layer optimization</category><category>database</category><category>dataset</category><category>hdr</category><category>ictalicante</category><category>ieee jsac</category><category>immersive experience</category><category>internet video coding</category><category>jvt</category><category>klagenfurt</category><category>lcevc</category><category>live video streaming</category><category>multimedia networking</category><category>multimedia over wireless</category><category>multimedia semantics</category><category>multimedia-aware networking</category><category>mvc</category><category>networked media</category><category>neural network compression</category><category>nossdav</category><category>pcs</category><category>point cloud</category><category>rich media</category><category>screen content</category><category>social media computing</category><category>social networks</category><category>summer school</category><category>survey</category><category>tv</category><category>usac</category><category>virtual reality</category><category>web video coding</category><category>HAS</category><category>MHV</category><category>Multimedia Framework</category><category>Nokia</category><category>STreaming Day</category><category>alto</category><category>audio</category><category>augmented reality</category><category>call for 
participation</category><category>conext</category><category>demonstration</category><category>digital item</category><category>digital television</category><category>doctoral student</category><category>draft cfp</category><category>enthrone</category><category>eu project</category><category>feedburner</category><category>file format</category><category>future media internet</category><category>genome compression</category><category>guidelines</category><category>ieee computer</category><category>image media quality</category><category>image processing</category><category>imex</category><category>iphone</category><category>ismw2009</category><category>isp-p2p collaboration</category><category>itec</category><category>ivc</category><category>jpeg</category><category>keynote</category><category>master thesis</category><category>media resource</category><category>miaf</category><category>miv</category><category>mmm2012</category><category>mmve</category><category>mobile computing</category><category>mobile services</category><category>mobimedia</category><category>more</category><category>movid</category><category>mpeg modern transport</category><category>mpeg-h</category><category>multimedia modeling</category><category>multimodal interaction</category><category>ndvc</category><category>network-based media processing</category><category>newslet</category><category>ontology</category><category>p4p</category><category>phd student</category><category>reference software</category><category>requirements</category><category>rss filter</category><category>search</category><category>signal processing</category><category>temu2012</category><category>ued</category><category>uri</category><category>vcip2021</category><category>versatile video coding</category><category>visual communication</category><category>visual search</category><category>wcg</category><category>web2.0</category><category>webvc</category><category>white 
paper</category><category>www</category><category>yahoo pipes</category><category>2008</category><category>360-degree video</category><category>3dof+</category><category>3dvc</category><category>4k</category><category>6DoF</category><category>8k</category><category>CeWe</category><category>Google</category><category>HP</category><category>Human-Centered Multimedia</category><category>MOQ</category><category>Overview</category><category>PCC-DASH</category><category>Radvision</category><category>Segments</category><category>UI</category><category>VoD</category><category>Yahoo</category><category>academic track</category><category>adaptation decision-taking</category><category>adaptive progressive transport</category><category>archive</category><category>atsc</category><category>bachelor thesis</category><category>best paper</category><category>best practices</category><category>binary xml</category><category>bookmarks</category><category>china</category><category>cloud computing</category><category>computer architecture</category><category>computer science</category><category>computer vision</category><category>content-centric</category><category>context and objectives</category><category>control information avatar information</category><category>cool</category><category>data compression conference</category><category>dco</category><category>depth maps</category><category>developer&#39;s day</category><category>device independence</category><category>distributed multimedia systems</category><category>distributed systems</category><category>dmp</category><category>dvb</category><category>economics</category><category>emma</category><category>emotion</category><category>end-to-end</category><category>engineering</category><category>essential video coding</category><category>eumob2009</category><category>euro2008</category><category>eusipco2012</category><category>euvip</category><category>exi</category><category>exploration</category><category>extended 
cfp</category><category>football</category><category>ftf</category><category>ftv</category><category>gaia</category><category>gist</category><category>google calender</category><category>green streaming</category><category>hardware optimizations</category><category>hci</category><category>hfr</category><category>high profile</category><category>hls</category><category>holography</category><category>howto</category><category>iTunesLP</category><category>icmr</category><category>icn</category><category>ict fp7 call4</category><category>idms</category><category>ieee mipr</category><category>impact factor</category><category>industry session</category><category>intermedia</category><category>internet-qoe</category><category>iomt</category><category>jct-vc</category><category>jm</category><category>jqvim</category><category>kolloquium</category><category>low-latency</category><category>mane</category><category>markup</category><category>media context and control</category><category>media fragments</category><category>media object</category><category>media orchestration</category><category>media sync</category><category>miot</category><category>mmsy</category><category>mmsys2020</category><category>mmsys2022</category><category>mobile devices</category><category>mobile networks</category><category>mobile storytelling</category><category>mobile visual search</category><category>mobile web</category><category>mp3</category><category>mpaf</category><category>mpdi</category><category>mpeg-2 systems</category><category>mpeg-5</category><category>mpeg-7</category><category>mpeg-m</category><category>mpqf</category><category>multimedia delivery</category><category>multimedia education</category><category>multimedia signal processing</category><category>nem summit</category><category>networking</category><category>next generation network</category><category>nextshare</category><category>nsis</category><category>obs</category><category>off topic</category><category>open 
access</category><category>open issues</category><category>open positions</category><category>packet loss</category><category>panel</category><category>payload format</category><category>personalization</category><category>pervasive communities</category><category>picture coding</category><category>predoc scientist</category><category>processing</category><category>professional archival application format</category><category>program</category><category>programming languages</category><category>protocol</category><category>ps3</category><category>psaf</category><category>publications</category><category>qcman</category><category>quality of sensory experience</category><category>quase</category><category>reconfigurable video coding</category><category>ricoh theta s</category><category>roadmap</category><category>rsvp</category><category>rtp</category><category>scalable video coding</category><category>schematron</category><category>schulzrinne</category><category>scientific publishing 2.0</category><category>self-organizing</category><category>semantic web</category><category>sensory experience</category><category>service modeling language</category><category>sigmm records</category><category>signalling</category><category>slideshare</category><category>sling media</category><category>smil</category><category>soccer</category><category>social computing column</category><category>social network tools</category><category>social sensor</category><category>social signal processing</category><category>software</category><category>supplementary information</category><category>sustainability</category><category>system information</category><category>tag ambiguity</category><category>technik live</category><category>temu2008</category><category>test material</category><category>tewi</category><category>timed text</category><category>tomccap</category><category>top ten</category><category>topics</category><category>tvx</category><category>use cases</category><category>video 
browser showdown</category><category>video complexity</category><category>video developer report</category><category>video signatures</category><category>virtuel environments</category><category>visnet 2</category><category>visual signatures</category><category>vlc</category><category>vp9</category><category>vqeg</category><category>vrif</category><category>wearable</category><category>wimtv</category><category>wireless networks</category><category>wisma</category><category>working draft</category><category>xfruits</category><category>xian</category><category>xml schema</category><category>youtube</category><title>Multimedia Communication</title><description>Topics of this blog are related to multimedia communication. In particular, streaming of multimedia content within heterogeneous environments enabling Universal Multimedia Experience (UME).</description><link>http://multimediacommunication.blogspot.com/</link><managingEditor>noreply@blogger.com (Unknown)</managingEditor><generator>Blogger</generator><openSearch:totalResults>572</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-6039232116016374838</guid><pubDate>Fri, 27 Mar 2026 21:27:00 +0000</pubDate><atom:updated>2026-03-27T22:27:31.440+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">green streaming</category><category domain="http://www.blogger.com/atom/ns#">streaming</category><category domain="http://www.blogger.com/atom/ns#">sustainability</category><category domain="http://www.blogger.com/atom/ns#">video coding</category><title>Sustainability in Video Encoding and Streaming</title><description>&lt;p&gt;&lt;/p&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Sustainability in Video Encoding and Streaming:&lt;/b&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Energy-Efficient Techniques and 
Metrics&lt;/b&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Workshop on Media Energy Consumption Measurement and Exposure&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span style=&quot;text-align: left;&quot;&gt;[&lt;a href=&quot;https://www.5g-mag.com/post/19-03-2026-workshop-on-media-energy-consumption-measurement-and-exposure&quot; target=&quot;_blank&quot;&gt;Workshop URL&lt;/a&gt;] [&lt;a href=&quot;https://www.slideshare.net/slideshow/sustainability-in-video-encoding-and-streaming-energy-efficient-techniques-and-metrics/286597481&quot; target=&quot;_blank&quot;&gt;Slides&lt;/a&gt;] [&lt;a href=&quot;https://drive.google.com/file/d/1uJEBJ1TAtT1i5Wo-DfT1wWxIc2fxYMIG/view?usp=sharing&quot;&gt;PDF&lt;/a&gt;]&lt;/span&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Presenter&lt;/b&gt;: Christian Timmerer (Alpen-Adria-Universität Klagenfurt)&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;&lt;b&gt;Abstract&lt;/b&gt;: The presentation discusses the increasing environmental impact of video streaming and highlights the urgent need for more sustainable approaches across the entire streaming pipeline. Video traffic dominates internet usage and contributes significantly to global greenhouse gas emissions, while the demand for higher quality content continues to drive up computational complexity and energy consumption in encoding, delivery, and playback.&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;A central insight is that there is a strong trade-off between video quality and energy consumption, where small reductions in quality can lead to substantial energy savings. 
By introducing energy as an explicit optimization objective, techniques such as content-aware encoding, energy-aware bitrate ladder construction, and real-time optimization for live streaming can significantly reduce energy usage while maintaining nearly the same perceptual quality.&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;The work also emphasizes the role of adaptive bitrate algorithms that incorporate energy consumption alongside traditional quality and buffer-based metrics. These approaches demonstrate that it is possible to simultaneously improve user experience and reduce energy consumption, indicating that sustainability and performance can be aligned rather than conflicting goals.&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;To enable such optimizations, the presentation introduces a range of metrics and models, including video complexity measures, quality prediction models, and machine learning-based approaches for estimating encoding and decoding energy as well as CO₂ emissions. These tools support more informed, data-driven decisions across the full streaming workflow from encoding to playback.&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;Another important theme is end-to-end optimization, where energy efficiency depends on the combined behavior of encoding strategies, bitrate selection, and client-side adaptation. Industry efforts confirm the practical relevance of these approaches and highlight the importance of collaboration and real-world validation.&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;Despite promising results, several challenges remain, including difficulties in measuring and benchmarking energy consumption, the lack of standardized methodologies, and the limited integration of energy considerations into existing workflows. 
Overall, the presentation argues that energy consumption should become a first-class optimization target in video streaming systems, similar to established quality metrics, to enable truly sustainable media delivery.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Keywords&lt;/b&gt;: sustainable streaming, energy-aware encoding, adaptive bitrate streaming, green multimedia, video compression, bitrate ladder optimization, QoE optimization, energy-quality tradeoff, video complexity analysis, CO2 footprint, energy modeling, machine learning for video, end-to-end optimization, eco-efficient streaming, real-time streaming optimization&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2026/03/sustainability-in-video-encoding-and.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7670059252354899943</guid><pubDate>Fri, 20 Feb 2026 09:34:00 +0000</pubDate><atom:updated>2026-02-20T10:34:13.782+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><title>MPEG news: a report from the 153rd meeting</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;This version of the blog post is also available at &lt;a href=&quot;https://records.sigmm.org/2026/02/02/mpeg-column-153rd-mpeg-meeting/&quot; target=&quot;_blank&quot;&gt;ACM SIGMM Records&lt;/a&gt;&lt;/p&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi154DQ7yxU2ibuU45z55ovtlqWdQTPsAvQDbVt2FB0yW_8j4LvJOgimZGeMYDoYJMm__W0WsIGuZNq85QXqXZDwd_znAmJvJgoH-gok5OF2nIz_RezZReB5RO2bEFJCAYra23HbhX42zV3-IzpF-Z8d_z-ArP8FxvC1ea7KCKe1QpadmI3XlGskOi8S3c/s2048/MPEG_RGB_1000px-2048x711.png&quot;&gt;&lt;img border=&quot;0&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi154DQ7yxU2ibuU45z55ovtlqWdQTPsAvQDbVt2FB0yW_8j4LvJOgimZGeMYDoYJMm__W0WsIGuZNq85QXqXZDwd_znAmJvJgoH-gok5OF2nIz_RezZReB5RO2bEFJCAYra23HbhX42zV3-IzpF-Z8d_z-ArP8FxvC1ea7KCKe1QpadmI3XlGskOi8S3c/s320/MPEG_RGB_1000px-2048x711.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;The 153rd MPEG meeting took place online from January 19-23, 2026. The official MPEG press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-153/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. This report highlights key outcomes from the meeting, with a focus on research directions relevant to the ACM SIGMM community:&lt;div style=&quot;text-align: left;&quot;&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;MPEG Roadmap&lt;/li&gt;&lt;li&gt;Exploration on MPEG Gaussian Splat Coding (GSC)&lt;/li&gt;&lt;li&gt;MPEG Immersive Video 2nd edition (new white paper)&lt;/li&gt;&lt;/ul&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Roadmap&lt;/h2&gt;&lt;/div&gt;&lt;p style=&quot;text-align: left;&quot;&gt;MPEG released an updated roadmap showing continued convergence of immersive and “beyond video” media with deployment-ready systems work. 
Near-term priorities include 6DoF experiences (MPEG Immersive Video v2 and 6DoF audio), volumetric representations (dynamic meshes, solid point clouds, LiDAR, and emerging Gaussian splat coding), and “coding for machines,” which treats visual and audio signals as inputs to downstream analytics rather than only for human consumption.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/a/AVvXsEiQqWrRp3kMtwDAHgy3obCefXMF8BpVoIWK10alviP4KOuUTehoKuO1clmgOGg8TA2eYVps83wGg3rSZGW4SiZYczcvrbLWmASEkt84Ds9WgnfXOuS-Se5lyL0J1Mqr0K2A4v6f8IWoN5fHUq9mPRqTbn8GBXrRwfxfEoQ_AzpsA7T_qucig42ld5DD05s&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img alt=&quot;&quot; data-original-height=&quot;405&quot; data-original-width=&quot;720&quot; height=&quot;225&quot; src=&quot;https://blogger.googleusercontent.com/img/a/AVvXsEiQqWrRp3kMtwDAHgy3obCefXMF8BpVoIWK10alviP4KOuUTehoKuO1clmgOGg8TA2eYVps83wGg3rSZGW4SiZYczcvrbLWmASEkt84Ds9WgnfXOuS-Se5lyL0J1Mqr0K2A4v6f8IWoN5fHUq9mPRqTbn8GBXrRwfxfEoQ_AzpsA7T_qucig42ld5DD05s=w400-h225&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: The most promising research opportunities sit at the intersections: renderer and device-aware rate-distortion-complexity optimization for volumetric content; adaptive streaming and packaging evolution (e.g., MPEG-DASH / CMAF) for interactive 6DoF services under tight latency constraints; and cross-cutting themes such as media authenticity and provenance, green and energy metadata, and exploration threads on neural-network-based compression and compression of neural networks that foreshadow AI-native multimedia pipelines.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Gaussian Splat Coding (GSC)&lt;/h2&gt;&lt;p style=&quot;text-align: 
left;&quot;&gt;Gaussian Splat Coding (GSC) is MPEG’s effort to standardize how 3D Gaussian Splatting content, scenes represented as sparse “Gaussian splats” with geometry plus rich attributes (scale and rotation, opacity, and spherical-harmonics appearance for view-dependent rendering), is encoded, decoded, and evaluated so it can be exchanged and rendered consistently across platforms. The main motivation is interoperability for immersive media pipelines: enabling reproducible results, shared benchmarks, and comparable rate-distortion-complexity trade-offs for use cases spanning telepresence and immersive replay to mobile XR and digital twins, while retaining the visual strengths that made 3DGS attractive compared to heavier neural scene representations.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The work remains in an exploration phase, coordinated across ISO/IEC JTC 1/SC 29 groups WG 4 (MPEG Video Coding) and WG 7 (MPEG Coding for 3D Graphics and Haptics) through Joint Exploration Experiments covering datasets and anchors, new coding tools, software (renderer and metrics), and Common Test Conditions (CTC). A notable systems thread is “lightweight GSC” for resource-constrained devices (single-frame, low-latency tracks using geometry-based and video-based pipelines with explicit time and memory targets), alongside an “early deployment” path via amendments to existing MPEG point-cloud codecs to more natively carry Gaussian-splat parameters. In parallel, MPEG is testing whether splat-specific tools can outperform straightforward mappings in quality, bitrate, and compute for real-time and streaming-centric scenarios.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Relevant SIGMM directions include splat-aware compression tools and rate-distortion-complexity optimization (including tracked vs. 
non-tracked temporal prediction); QoE evaluation for 6DoF navigation (metrics for view and temporal consistency and splat-specific artifacts); decoder and renderer co-design for real-time and mobile lightweight profiles (progressive and LOD-friendly layouts, GPU-friendly decode); and networked delivery problems such as adaptive streaming, ROI and view-dependent transmission, and loss resilience for splat parameters. Additional opportunities include interoperability work on reproducible benchmarking, conformance testing, and practical packaging and signaling for deployment.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Immersive Video 2nd edition (white paper)&lt;/h2&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The second edition of MPEG Immersive Video defines an interoperable bitstream and decoding process for efficient 6DoF immersive scene playback, supporting translational and rotational movement with motion parallax to reduce discomfort often associated with pure 3DoF viewing. The second edition primarily extends functionality (without changing the high-level bitstream structure), adding capabilities such as capture-device information, additional projection types, and support for Simple Multi-Plane Image (MPI), alongside tools that better support geometry and attribute handling and depth-related processing.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Architecturally, MIV ingests multiple (unordered) camera views with geometry (depth and occupancy) and attributes (e.g., texture), then reduces inter-view redundancy by extracting patches and packing them into 2D “atlases” that are compressed using conventional video codecs. MIV-specific metadata signals how to reconstruct views from the atlases. 
The standard is built as an extension of the common Visual Volumetric Video-based Coding (V3C) bitstream framework shared with V-PCC, with profiles that preserve backward compatibility while introducing a new profile for added second-edition functionality and a tailored profile for full-plane MPI delivery.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Key SIGMM topics include systems-efficient 6DoF delivery (better view and patch selection and atlas packing under latency and bandwidth constraints); rate-distortion-complexity-QoE optimization that accounts for decode and render cost (especially on HMD and mobile) and motion-parallax comfort; adaptive delivery strategies (representation ladders, viewport and pose-driven bit allocation, robust packetization and error resilience for atlas video plus metadata); renderer-aware metrics and subjective protocols for multi-view temporal consistency; and deployment-oriented work such as profile and level tuning, codec-group choices (HEVC / VVC), conformance testing, and exploiting second-edition features (capture device info, depth tools, Simple MPI) for more reliable reconstruction and improved user experience.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Concluding Remarks&lt;/h2&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The meeting outcomes highlight a clear shift toward immersive and AI-enabled media systems where compression, rendering, delivery, and evaluation must be co-designed. These developments offer timely opportunities for the ACM SIGMM community to contribute reproducible benchmarks, perceptual metrics, and end-to-end streaming and systems research that can directly influence emerging standards and deployments.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The 154th MPEG meeting will be held in Santa Eulària, Spain, from April 27 to May 1, 2026. 
Click &lt;a href=&quot;https://www.mpeg.org/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and ongoing developments.&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2026/02/mpeg-news-report-from-153rd-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi154DQ7yxU2ibuU45z55ovtlqWdQTPsAvQDbVt2FB0yW_8j4LvJOgimZGeMYDoYJMm__W0WsIGuZNq85QXqXZDwd_znAmJvJgoH-gok5OF2nIz_RezZReB5RO2bEFJCAYra23HbhX42zV3-IzpF-Z8d_z-ArP8FxvC1ea7KCKe1QpadmI3XlGskOi8S3c/s72-c/MPEG_RGB_1000px-2048x711.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-2058971402502553461</guid><pubDate>Wed, 18 Feb 2026 13:04:00 +0000</pubDate><atom:updated>2026-02-18T14:04:01.728+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">aau</category><category domain="http://www.blogger.com/atom/ns#">jobs</category><title>Professor of Information Systems Engineering (all genders welcome)</title><description>&lt;p style=&quot;text-align: center;&quot;&gt;Department of Informatics Systems&amp;nbsp;&amp;nbsp;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Full professorships&amp;nbsp; | Full time&lt;/b&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;Application deadline:&amp;nbsp; 22 March 2026&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;Reference code: 43/02-PERS/26&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;URL: &lt;a href=&quot;https://jobs.aau.at/en/job/professor-of-information-systems-engineering-all-genders-welcome/&quot;&gt;https://jobs.aau.at/en/job/professor-of-information-systems-engineering-all-genders-welcome/&lt;/a&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Announcement&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The University of Klagenfurt wants to attract more women 
for professorships.&lt;/p&gt;&lt;p&gt;We are pleased to announce the following open position at the Department of Informatics Systems, Faculty of Technical Sciences, in compliance with the provisions of § 98 (permanent) or § 98 (fixed-term, max. 6 years) of the Austrian Universities Act:&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Professor of Information Systems Engineering (all genders welcome)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;This is a full-time position available from 1 October 2027. Depending on the candidate’s academic credentials, the employment contract can be concluded either as a permanent employment contract or as a fixed-term employment contract with the option of a permanent extension. The duration of fixed-term contracts is subject to negotiation.&lt;/p&gt;&lt;p&gt;With approximately 13,000 students, the University of Klagenfurt is a young, vibrant and innovative university, located at the intersection of Alpine and Mediterranean culture in an area that offers exceptionally high quality of life. As a public university pursuant to § 6 of the Austrian Universities Act, it receives federal funding. The university operates under the motto “Beyond Boundaries!”.&lt;/p&gt;&lt;p&gt;In accordance with its key strategic road map, the Development Plan, the university’s primary guiding principles and objectives include the pursuit of scientific excellence regarding the appointment of professors, favourable research conditions, a good faculty-student ratio, and the promotion of the development of early career researchers.&lt;/p&gt;&lt;p&gt;Information Systems Engineering focuses on the design, development, and management of large systems that connect people, data, and technology to support organizational goals. 
It combines principles of software engineering, data management, business processes, and emerging digital technologies to create solutions that enhance decision-making, optimize operations, and drive innovation.&lt;/p&gt;&lt;p&gt;We welcome applications addressing the engineering of Information Systems, in particular those focusing on designing, modelling, executing, verifying, and optimizing business processes. We are looking for a highly qualified and internationally visible scientist with high engagement in developing and sustaining an ambitious and innovative research and teaching programme. Candidates should also be interested in developing collaborations in the university’s Areas of Research Strength: Digitalisation and Health, Multiple Perspectives in Optimization, Networked and Autonomous Systems and/or the Cluster of Excellence “Bilateral AI”.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Your responsibilities – what awaits you&lt;/b&gt;&lt;/p&gt;&lt;div&gt;The duties of the position include:&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Representing the field of Information Systems Engineering in research and teaching&lt;/li&gt;&lt;li&gt;Teaching in relevant degree programmes at Bachelor’s, Master’s, and Doctoral level both in English and German, as well as supervision of student projects and academic theses&lt;/li&gt;&lt;li&gt;Advising and mentoring of students and early career researchers&lt;/li&gt;&lt;li&gt;Competitive research grant acquisition and management&lt;/li&gt;&lt;li&gt;Collaboration with academic and industry partners&lt;/li&gt;&lt;li&gt;Participation in university management&lt;/li&gt;&lt;li&gt;Participation in third mission and public relations activities&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Your profile&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Habilitation or equivalent qualification in Computer Science or a relevant neighbouring 
field&lt;/li&gt;&lt;li&gt;Excellent research track record in Information Systems Engineering&lt;/li&gt;&lt;li&gt;Experience in the acquisition and management of competitive third-party funded research projects of a relevant volume&lt;/li&gt;&lt;li&gt;Teaching competence and experience at university level&lt;/li&gt;&lt;li&gt;Experience in the (co-)supervision of academic theses&lt;/li&gt;&lt;li&gt;Fluency in English&lt;/li&gt;&lt;/ul&gt;&lt;b&gt;This distinguishes you additionally&lt;/b&gt;&lt;br /&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Interdisciplinary experience&lt;/li&gt;&lt;li&gt;Scientific dissemination skills&lt;/li&gt;&lt;li&gt;Engagement in academic administrative duties&lt;/li&gt;&lt;li&gt;Competence in leadership and teamwork&lt;/li&gt;&lt;li&gt;Competence in gender mainstreaming and diversity management&lt;/li&gt;&lt;/ul&gt;German language skills are not a formal prerequisite, but proficiency at level B2 is expected within two years.&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Why you will enjoy working with us&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The salary is subject to negotiation. The minimum gross salary for the position at this level (salary group A1 for University Staff according to the Austrian Universities’ Collective Bargaining Agreement) is currently € 93,986 per year.&lt;/p&gt;&lt;p&gt;The university is committed to increasing the number of women among the faculty, particularly in high-level positions, and therefore specifically invites applications from qualified women. Among equally qualified candidates, women will receive preferential consideration.&lt;/p&gt;&lt;p&gt;People with disabilities or chronic diseases who meet the qualification criteria are explicitly invited to apply.&lt;/p&gt;&lt;p&gt;In accordance with the Austrian Income Tax Act, an attractive relocation tax allowance can be granted for the first five years in the case of appointments to professorships in Austria. 
The prerequisites are subject to examination on a case-by-case basis.&lt;/p&gt;&lt;p&gt;Please submit your application in English by e-mail to the University of Klagenfurt, Office of the Senate, attn. Mag.a (FH) Sabine Seebacher via &lt;a href=&quot;mailto:application_professorship@aau.at&quot;&gt;application_professorship@aau.at&lt;/a&gt; no later than 22 March 2026, including:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;a mandatory principal part not exceeding five pages (&lt;a href=&quot;https://jobs.aau.at/wp-content/uploads/specimen_main_part_application_professorship.docx&quot;&gt;https://jobs.aau.at/wp-content/uploads/specimen_main_part_application_professorship.docx&lt;/a&gt;). The submission of the mandatory principal part constitutes a necessary condition for the validity of your application.&lt;/li&gt;&lt;li&gt;one single PDF including:&lt;/li&gt;&lt;ul&gt;&lt;li&gt;a letter of motivation&lt;/li&gt;&lt;li&gt;a detailed scientific CV&lt;/li&gt;&lt;li&gt;a comprehensive list of publications, talks, and all courses taught&lt;/li&gt;&lt;li&gt;a list of projects that you acquired as a PI or co-PI, including the amount of funding that was attributed to you&lt;/li&gt;&lt;li&gt;a research statement&lt;/li&gt;&lt;li&gt;a teaching statement&lt;/li&gt;&lt;li&gt;supplementary documents where applicable (e.g., course evaluations)&lt;/li&gt;&lt;li&gt;links to publicly available versions of your three most important publications within the scope of this professorship&lt;/li&gt;&lt;/ul&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;For general information, please refer to &lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;. For specific information about the position, please contact Prof. Dr. 
Martin Pinzger (Tel.: +43 463 2700 3513; &lt;a href=&quot;mailto:martin.pinzger@aau.at&quot;&gt;martin.pinzger@aau.at&lt;/a&gt;).&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2026/02/professor-of-information-systems.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-1254663095287986347</guid><pubDate>Fri, 28 Nov 2025 12:47:00 +0000</pubDate><atom:updated>2025-11-28T13:58:22.866+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><title>MPEG news: a report from the 152nd meeting</title><description>&lt;p&gt;&amp;nbsp;This version of the blog post is also available at ACM SIGMM Records&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi154DQ7yxU2ibuU45z55ovtlqWdQTPsAvQDbVt2FB0yW_8j4LvJOgimZGeMYDoYJMm__W0WsIGuZNq85QXqXZDwd_znAmJvJgoH-gok5OF2nIz_RezZReB5RO2bEFJCAYra23HbhX42zV3-IzpF-Z8d_z-ArP8FxvC1ea7KCKe1QpadmI3XlGskOi8S3c/s2048/MPEG_RGB_1000px-2048x711.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;711&quot; data-original-width=&quot;2048&quot; height=&quot;111&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi154DQ7yxU2ibuU45z55ovtlqWdQTPsAvQDbVt2FB0yW_8j4LvJOgimZGeMYDoYJMm__W0WsIGuZNq85QXqXZDwd_znAmJvJgoH-gok5OF2nIz_RezZReB5RO2bEFJCAYra23HbhX42zV3-IzpF-Z8d_z-ArP8FxvC1ea7KCKe1QpadmI3XlGskOi8S3c/s320/MPEG_RGB_1000px-2048x711.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;The 152nd MPEG meeting took place in 
Geneva, Switzerland, from October 7 to October 11, 2025. The official MPEG press release can be found &lt;a href=&quot;https://www.mpeg.org/152nd-meeting-of-mpeg/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. This column highlights key points from the meeting, amended with research aspects relevant to the ACM SIGMM community:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;MPEG Systems received an Emmy® Award for the Common Media Application Format (CMAF). A separate press release regarding this achievement is available &lt;a href=&quot;https://www.mpeg.org/mpeg-systems-wins-an-emmy-for-cmaf/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;JVET ratified new editions of VSEI, VVC, and HEVC&lt;/li&gt;&lt;li&gt;The fourth edition of Visual Volumetric Video-based Coding (V3C and V-PCC) has been finalized&lt;/li&gt;&lt;li&gt;Responses to the call for evidence on video compression with capability beyond VVC were successfully evaluated&lt;/li&gt;&lt;/ul&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Systems received an Emmy® Award for the Common Media Application Format (CMAF)&lt;/h2&gt;&lt;p&gt;On September 18, 2025, the National Academy of Television Arts &amp;amp; Sciences (NATAS) announced that the MPEG Systems Working Group (ISO/IEC JTC 1/SC 29/WG 3) had been selected as a recipient of a Technology &amp;amp; Engineering Emmy® Award for standardizing the Common Media Application Format (CMAF). But what is CMAF? CMAF (ISO/IEC 23000-19) is a media format standard designed to simplify and unify video streaming workflows across different delivery protocols and devices. Before CMAF, streaming services often had to produce multiple container formats, i.e., (i) ISO Base Media File Format (ISOBMFF) for MPEG-DASH and (ii) MPEG-2 Transport Stream (TS) for Apple HLS. This duplication resulted in additional encoding, packaging, and storage costs. 
I wrote a blog post about this some time ago &lt;a href=&quot;https://bitmovin.com/blog/what-is-cmaf-threat-opportunity/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. CMAF’s main goal is to define a single, standardized segmented media format usable by both HLS and DASH, enabling “encode once, package once, deliver everywhere.”&lt;/p&gt;&lt;p&gt;The core concept of CMAF is that it is based on ISOBMFF, the foundation for MP4. Each CMAF stream consists of a CMAF header, CMAF media segments, and CMAF track files (a logical sequence of segments for one stream, e.g., video or audio). CMAF enables low-latency streaming by allowing progressive segment transfer, adopting chunked transfer encoding via CMAF chunks. CMAF defines interoperable profiles for codecs and presentation types for video, audio, and subtitles. Thanks to its compatibility with and adoption within existing streaming standards, CMAF bridges the gaps between DASH and HLS, creating a unified ecosystem.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;&amp;nbsp;include – but are not limited to – low-latency tuning (segment/chunk size trade-offs, HTTP/3, QUIC), Quality of Experience (QoE) impact of chunk-based adaptation, synchronization of live and interactive CMAF streams, edge-assisted CMAF caching and prediction, and interoperability testing and compliance tools.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;JVET ratified new editions of VSEI, VVC, and HEVC&lt;/h2&gt;&lt;p&gt;At its 40th meeting, the Joint Video Experts Team (JVET, ISO/IEC JTC 1/SC 29/WG 5) concluded the standardization work on the next editions of three key video coding standards, advancing them to the Final Draft International Standard (FDIS) stage. Corresponding twin-text versions have also been submitted to ITU-T for consent procedures. 
The finalized standards include:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Versatile Supplemental Enhancement Information (VSEI) — ISO/IEC 23002-7 | ITU-T Rec. H.274&lt;/li&gt;&lt;li&gt;Versatile Video Coding (VVC) — ISO/IEC 23090-3 | ITU-T Rec. H.266&lt;/li&gt;&lt;li&gt;High Efficiency Video Coding (HEVC) — ISO/IEC 23008-2 | ITU-T Rec. H.265&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The primary focus of these new editions is the extension and refinement of Supplemental Enhancement Information (SEI) messages, which provide metadata and auxiliary data to support advanced processing, interpretation, and quality management of coded video streams.&lt;/p&gt;&lt;p&gt;The updated VSEI specification introduces both new and refined SEI message types supporting advanced use cases:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;AI-driven processing: Extensions for neural-network-based post-filtering and film grain synthesis offer standardized signalling for machine learning components in decoding and rendering pipelines.&lt;/li&gt;&lt;li&gt;Semantic and multimodal content: New SEI messages describe infrared, X-ray, and other modality indicators, region packing, and object mask encoding; creating interoperability points for multimodal fusion and object-aware compression research.&lt;/li&gt;&lt;li&gt;Pipeline optimization: Messages defining processing order and post-processing nesting support research on joint encoder-decoder optimization and edge-cloud coordination in streaming architectures.&lt;/li&gt;&lt;li&gt;Authenticity and generative media: A new set of messages supports digital signature embedding and generative-AI-based face encoding, raising questions for the SIGMM community about trust, authenticity, and ethical AI in media pipelines.&lt;/li&gt;&lt;li&gt;Metadata and interpretability: New SEIs for text description, image format metadata, and AI usage restriction requests could 
facilitate research into explainable media, human-AI interaction, and regulatory compliance in multimedia systems.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;All VSEI features are fully compatible with the new VVC edition, and most are also supported in HEVC. The new HEVC edition further refines its multi-view profiles, enabling more robust 3D and immersive video use cases.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;&amp;nbsp;of these new editions can be summarized as follows: &lt;i&gt;(i)&lt;/i&gt;&amp;nbsp;Define new standardized interfaces between neural post-processing and conventional video coding, fostering reproducible and interoperable research on learned enhancement models. &lt;i&gt;(ii)&lt;/i&gt;&amp;nbsp;Encourage exploration of metadata-driven adaptation and QoE optimization using SEI-based signals in streaming systems. &lt;i&gt;(iii)&lt;/i&gt;&amp;nbsp;Open possibilities for cross-layer system research, connecting compression, transport, and AI-based decision layers. &lt;i&gt;(iv)&lt;/i&gt;&amp;nbsp;Introduce a formal foundation for authenticity verification, content provenance, and AI-generated media signalling, relevant to current debates on trustworthy multimedia.&lt;/p&gt;&lt;p&gt;These updates highlight how ongoing MPEG/ITU standardization is evolving toward a more AI-aware, multimodal, and semantically rich media ecosystem, providing fertile ground for experimental and applied research in multimedia systems, coding, and intelligent media delivery.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;The fourth edition of Visual Volumetric Video-based Coding (V3C and V-PCC) has been finalized&lt;/h2&gt;&lt;p&gt;MPEG Coding of 3D Graphics and Haptics (ISO/IEC JTC 1/SC 29/WG7) has advanced MPEG-I Part 5 – Visual Volumetric Video-based Coding (V3C and V-PCC) to the Final Draft International Standard (FDIS) stage, marking its fourth edition. 
This revision introduces major updates to the Video-based Coding of Volumetric Content (V3C) framework, particularly enabling support for an additional bitstream instance: V-DMC (Video-based Dynamic Mesh Compression).&lt;/p&gt;&lt;p&gt;Previously, V3C served as the structural foundation for V-PCC (Video-based Point Cloud Compression) and MIV (MPEG Immersive Video). The new edition extends this flexibility by allowing V-DMC integration, reinforcing V3C as a generic, extensible framework for volumetric and 3D video coding. All instances follow a shared principle, i.e., using conventional 2D video codecs (e.g., HEVC, VVC) for projection-based compression, complemented by specialized tools for mapping, geometry, and metadata handling.&lt;/p&gt;&lt;p&gt;While V-PCC remains co-specified within Part 5, MIV (Part 12) and V-DMC (Part 29) are standardized separately. The progression to FDIS confirms the technical maturity and architectural stability of the framework.&lt;/p&gt;&lt;p&gt;This evolution opens &lt;b&gt;new research directions&lt;/b&gt;&amp;nbsp;as follows: &lt;i&gt;(i)&lt;/i&gt;&amp;nbsp;Unified 3D content representation, enabling comparative evaluation of point cloud, mesh, and view-based methods under one coding architecture. &lt;i&gt;(ii)&lt;/i&gt;&amp;nbsp;Efficient use of 2D codecs for 3D media, raising questions on mapping optimization, distortion modeling, and geometry-texture compression. 
&lt;i&gt;(iii)&lt;/i&gt;&amp;nbsp;Dynamic and interactive volumetric streaming, relevant to AR/VR, telepresence, and immersive communication research.&lt;/p&gt;&lt;p&gt;The fourth edition of MPEG-I Part 5 thus positions V3C as a cornerstone for future volumetric, AI-assisted, and immersive video systems, bridging standardization and cutting-edge multimedia research.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Responses to the call for evidence on video compression with capability beyond VVC were successfully evaluated&lt;/h2&gt;&lt;p&gt;The Joint Video Experts Team (JVET, ISO/IEC JTC 1/SC 29/WG 5) has completed the evaluation of submissions to its Call for Evidence (CfE) on video compression with capability beyond VVC. The CfE investigated coding technologies that may surpass the performance of the current Versatile Video Coding (VVC) standard in compression efficiency, computational complexity, and extended functionality.&lt;/p&gt;&lt;p&gt;A total of five submissions were assessed, complemented by ECM16 reference encodings and VTM anchor sequences with multiple runtime variants. The evaluation addressed both compression capability and encoding runtime, as well as low-latency and error-resilience features. All technologies were derived from VTM, ECM, or NNVC frameworks, featuring modified encoder configurations and coding tools rather than entirely new architectures.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Key Findings&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;In the compression capability test, 76 out of 120 test cases showed at least one submission with a non-overlapping confidence interval compared to the VTM anchor. Several methods outperformed ECM16 in visual quality and achieved notable compression gains at lower complexity. 
Neural-network-based approaches demonstrated clear perceptual improvements, particularly for 8K HDR content, while gains were smaller for gaming scenarios.&lt;/li&gt;&lt;li&gt;In the encoding runtime test, significant improvements were observed even under strict complexity constraints: 37 of 60 test points (at both 1× and 0.2× runtime) showed statistically significant benefits over VTM. Some submissions achieved faster encoding than VTM, with only a 35% increase in decoder runtime.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research Relevance and Outlook&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The CfE results illustrate a maturing convergence between model-based and data-driven video coding, raising research questions highly relevant for the ACM SIGMM community:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;How can learned prediction and filtering networks be integrated into standard codecs while preserving interoperability and runtime control?&lt;/li&gt;&lt;li&gt;What methodologies can best evaluate perceptual quality beyond PSNR, especially for HDR and immersive content?&lt;/li&gt;&lt;li&gt;How can complexity-quality trade-offs be optimized for diverse hardware and latency requirements?&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Building on these outcomes, JVET is preparing a Call for Proposals (CfP) for the next-generation video coding standard, with a draft planned for early 2026 and evaluation through 2027. 
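As a side note for readers less familiar with the evaluation methodology: a test case counts as a significant win when the confidence interval around a submission's subjective score does not overlap the interval around the anchor's score. A minimal sketch of that criterion (the MOS values and interval widths below are hypothetical, not taken from the CfE results):

```python
# Sketch of the non-overlapping confidence-interval criterion used when
# comparing a submission's subjective scores against an anchor (e.g., VTM).
# All numbers are hypothetical illustrations, not actual CfE data.

def ci(mean, half_width):
    """Return a (low, high) confidence interval around a mean score."""
    return (mean - half_width, mean + half_width)

def non_overlapping(ci_a, ci_b):
    """True if the intervals do not overlap, i.e., the difference between
    the two conditions is treated as statistically meaningful."""
    return ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0]

# Hypothetical example: anchor MOS 6.1 +/- 0.3, submission MOS 6.9 +/- 0.3
anchor = ci(6.1, 0.3)        # (5.8, 6.4)
submission = ci(6.9, 0.3)    # (6.6, 7.2)
print(non_overlapping(anchor, submission))  # True, since 6.4 < 6.6
```

With wider intervals (e.g., +/- 0.5), the same pair of means would overlap and the case would not be counted as a significant difference, which is why "76 out of 120" is a conservative statistic.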
Upcoming activities include refining test material, adding Reference Picture Resampling (RPR), and forming a new ad hoc group on hardware implementation complexity.&lt;/p&gt;&lt;p&gt;For multimedia researchers, this CfE marks a pivotal step toward AI-assisted, complexity-adaptive, and perceptually optimized compression systems, which are considered a key frontier where codec standardization meets intelligent multimedia research.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;hr /&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The 153rd MPEG meeting will be held online from January 19 to January 23, 2026. Click &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-153&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2025/11/mpeg-news-report-from-152nd-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi154DQ7yxU2ibuU45z55ovtlqWdQTPsAvQDbVt2FB0yW_8j4LvJOgimZGeMYDoYJMm__W0WsIGuZNq85QXqXZDwd_znAmJvJgoH-gok5OF2nIz_RezZReB5RO2bEFJCAYra23HbhX42zV3-IzpF-Z8d_z-ArP8FxvC1ea7KCKe1QpadmI3XlGskOi8S3c/s72-c/MPEG_RGB_1000px-2048x711.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-2344890541534183250</guid><pubDate>Tue, 14 Oct 2025 11:19:00 +0000</pubDate><atom:updated>2025-10-14T13:19:39.728+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><title>Happy World Standards Day 2025!</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;Celebrating innovation, interoperability, and collaboration through international standards.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a 
href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH4ZlrpPOZL6ZvN5uIPn2G3pEYurWPKDgpDh7s0UYona5lSZbRkylDYfhAG0zwFg5xs5_I-vb1BOXqjA2LniTT0K0sZhjAmxLzppMr-MsCrz_FR1hO3-o1eJhrlYh8I_aP4g4HLlpyJ7XZA8GTIjTQJnBTB-jJXAHDsh89H2iejgQXlE9EW6caYN5s_As/s2048/MPEG_RGB_1000px-2048x711.png&quot; imageanchor=&quot;1&quot; style=&quot;clear: right; float: right; margin-bottom: 1em; margin-left: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;711&quot; data-original-width=&quot;2048&quot; height=&quot;69&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH4ZlrpPOZL6ZvN5uIPn2G3pEYurWPKDgpDh7s0UYona5lSZbRkylDYfhAG0zwFg5xs5_I-vb1BOXqjA2LniTT0K0sZhjAmxLzppMr-MsCrz_FR1hO3-o1eJhrlYh8I_aP4g4HLlpyJ7XZA8GTIjTQJnBTB-jJXAHDsh89H2iejgQXlE9EW6caYN5s_As/w200-h69/MPEG_RGB_1000px-2048x711.png&quot; width=&quot;200&quot; /&gt;&lt;/a&gt;&lt;/div&gt;Every year on October 14, we celebrate World Standards Day — honoring the collective efforts of experts and organizations worldwide who develop and maintain the standards that make modern digital life possible. For the Moving Picture Experts Group (MPEG), this day marks decades of work in defining the technologies that power media, streaming, and immersive experiences worldwide.&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;A Year of Progress and New Milestones&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Over the past year, MPEG and its working groups achieved remarkable progress across video, audio, systems, and AI-driven technologies — advancing the future of multimedia communication. 
Hot off the press, MPEG is proud to announce another Emmy® Technology &amp;amp; Engineering Award — this time for the Common Media Application Format (CMAF; ISO/IEC 23000-19), a landmark standard that brought long-awaited harmonization between DASH and HLS streaming formats (among others).&lt;/p&gt;&lt;p&gt;&lt;b&gt;Next Generation Video Coding Beyond VVC&lt;/b&gt;&lt;/p&gt;&lt;p&gt;The Joint Video Experts Team (JVET), a joint effort of ISO/IEC and ITU-T, launched a Call for Evidence exploring technologies that go beyond Versatile Video Coding (VVC).&lt;/p&gt;&lt;p&gt;The goal: to identify breakthroughs that significantly improve compression efficiency, runtime performance, and functionality — from HDR and 8K video to gaming and user-generated content. Depending on the results, a Call for Proposals (CfP) for the next generation of video coding may follow in 2026, opening the door to AI-enhanced compression.&lt;/p&gt;&lt;p&gt;The current plan foresees a draft CfP in January 2026, followed by the final CfP in July 2026 and submissions in November 2026, with evaluations scheduled for January 2027. The first version of the resulting standard is expected to be finalized within three years thereafter.&lt;/p&gt;&lt;p&gt;&lt;b&gt;MPEG-DASH (Sixth Edition)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Adaptive streaming continues to evolve, and the sixth edition of MPEG-DASH (ISO/IEC 23009-1) marks a major step forward. New features include enhanced low-latency streaming, content steering across multiple CDNs, compact signaling for faster playback, and even support for interactive storylines — enabling richer, more dynamic media experiences. MPEG-DASH remains the foundation of scalable, interoperable video streaming used by billions of devices worldwide.&lt;/p&gt;&lt;p&gt;&lt;b&gt;AI and Machine-Oriented Coding&lt;/b&gt;&lt;/p&gt;&lt;p&gt;MPEG’s vision for Audio and Video Coding for Machines continues to take shape. 
The updated Call for Proposals on Audio Coding for Machines (ACoM) invites technologies for efficiently compressing audio and multi-dimensional signals — not only for human listening but also for machine learning and AI-driven analysis. In parallel, Video Coding for Machines (VCM) is being standardized to optimize visual data for computer vision and autonomous systems, reducing bitrate while preserving task-relevant features.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Open Font Format (Fifth Edition)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;MPEG Systems (WG 3) reached the Final Draft International Standard (FDIS) stage for the fifth edition of the Open Font Format (ISO/IEC 14496-22). This major update removes previous technical constraints, supporting over 64K glyphs and the entire Unicode range in a single file — a leap toward more inclusive digital typography across languages and writing systems.&lt;/p&gt;&lt;p&gt;&lt;b&gt;3D and Volumetric Media Innovation&lt;/b&gt;&lt;/p&gt;&lt;p&gt;From Video-Based Dynamic Mesh Coding (V-DMC) to Low Latency Point Cloud Compression (L3C2), MPEG advanced two pivotal 3D graphics standards to final draft status. These technologies support real-time 3D content — from immersive AR/VR experiences to LiDAR-based perception in autonomous vehicles — enabling efficient, low-latency, and interoperable volumetric media.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Ensuring Media Authenticity&lt;/b&gt;&lt;/p&gt;&lt;p&gt;New amendments to MPEG Audio standards introduce mechanisms for Media Authenticity, allowing verification of content integrity and provenance across audio, video, and system layers. This step is essential for a trustworthy digital media ecosystem.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Genomics and AI Meet Multimedia&lt;/b&gt;&lt;/p&gt;&lt;p&gt;MPEG also looked beyond traditional media: the MPEG-G Genomics Hackathon, co-organized with partners such as Stanford Medicine, Philips, and Fudan University, challenges researchers to apply AI to microbiome data encoded in MPEG-G format. 
The goal: uncover new biomedical insights through standard-based, interoperable data compression.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Looking Ahead&lt;/b&gt;&lt;/p&gt;&lt;p&gt;From next-generation video compression and AI-enhanced codecs to trustworthy media and adaptive streaming, MPEG continues to define the building blocks of interoperable multimedia. As new technologies reshape how we experience and analyze content, standards ensure that innovation remains open, efficient, and globally accessible.&lt;/p&gt;&lt;p&gt;On this World Standards Day, we celebrate the dedication of all MPEG experts and contributors for shaping a smarter, more connected multimedia future.&lt;/p&gt;&lt;p&gt;Learn more at &lt;a href=&quot;http://www.mpeg.org&quot;&gt;www.mpeg.org&lt;/a&gt; and stay tuned for updates from the next MPEG meeting in early 2026.&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2025/10/happy-world-standards-day-2025.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH4ZlrpPOZL6ZvN5uIPn2G3pEYurWPKDgpDh7s0UYona5lSZbRkylDYfhAG0zwFg5xs5_I-vb1BOXqjA2LniTT0K0sZhjAmxLzppMr-MsCrz_FR1hO3-o1eJhrlYh8I_aP4g4HLlpyJ7XZA8GTIjTQJnBTB-jJXAHDsh89H2iejgQXlE9EW6caYN5s_As/s72-w200-h69-c/MPEG_RGB_1000px-2048x711.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-3873072818126103347</guid><pubDate>Wed, 16 Jul 2025 13:41:00 +0000</pubDate><atom:updated>2025-07-16T15:42:30.171+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">alpen-adria-universität</category><category domain="http://www.blogger.com/atom/ns#">jobs</category><title>Full Professor of Virtual and Augmented Reality (all genders welcome)</title><description>&lt;div style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;The official 
and legally binding job description is available &lt;a href=&quot;https://www.aau.at/wp-content/uploads/2025/07/Mitteilungsblatt-2024-2025-20.pdf&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: right;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;The University of Klagenfurt wants to attract more qualified women for professorships.&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The University of Klagenfurt is pleased to announce the following open position in the Department of Information Technology (ITEC) within the Faculty of Technical Sciences, in compliance with the provisions of Art. 98 (open-ended) or Art. 99 (limited to 5 years) of the Austrian Universities Act:&lt;/div&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;Full Professor of Virtual and Augmented Reality (all genders welcome)&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;This is a full-time position. Whether the position will be implemented in compliance with the provisions of Art. 98 Austrian Universities Act (open-ended) or Art. 99 of the Austrian Universities Act (limited to 5 years) will be decided in the course of the appointment procedure.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The University of Klagenfurt is a young, vibrant, and innovative university, located at the intersection of Alpine and Mediterranean culture in an area that offers an exceptionally high quality of life. As a public university pursuant to Art. 6 of the Austrian Universities Act, it receives federal funding. 
The Times Higher Education (THE) Young University Rankings 2021 ranked it among the 50 best young universities in the world. The university operates under the motto “Beyond Boundaries!”.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;In accordance with its key strategic road map, the development plan, the university’s primary guiding principles and objectives include the pursuit of scientific excellence in the appointment of professors, favourable research conditions, a good faculty-student ratio, and the promotion of the development of young scientists.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The professorship will be embedded in the Department of Information Technology (ITEC; &lt;a href=&quot;https://itec.aau.at/&quot;&gt;https://itec.aau.at/&lt;/a&gt;) within the Faculty of Technical Sciences (&lt;a href=&quot;https://www.aau.at/en/tewi&quot;&gt;https://www.aau.at/en/tewi&lt;/a&gt;), which focuses on distributed multimedia systems, including multimedia coding, transmission, and quality of experience, AI-based multimedia analysis, game studies and engineering, as well as distributed cloud and edge computing. The department and faculty provide a lively, friendly, and research-oriented environment. We are looking for a highly qualified and internationally recognized scientist with a strong commitment to developing and sustaining an ambitious and innovative research programme.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Virtual and Augmented Reality (VR/AR) are broad research fields addressing both theoretical and application-driven questions. 
This position offers an opportunity to focus on cutting-edge VR/AR research areas including – but not limited to – immersive media (e.g., 360° videos, 3D point clouds), AI for object recognition in VR/AR (e.g., in industry and medicine), educational and training applications, computer graphics, sensor technology, human-computer interaction, and efficient multimedia data transmission and cloud/edge processing.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The professor will be involved in teaching in a variety of degree programmes, including the Bachelor’s programmes “Applied Informatics” and “Robotics and Artificial Intelligence”, and the international Master’s programmes “Informatics” and “Game Studies and Engineering”.&lt;/div&gt;&lt;br /&gt;The duties of the position include:&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Representing the field of Virtual and Augmented Reality in research and teaching&lt;/li&gt;&lt;li&gt;Acquiring and managing competitive research funding&lt;/li&gt;&lt;li&gt;Collaborating with colleagues across the university and with industry partners&lt;/li&gt;&lt;li&gt;Teaching in relevant Bachelor’s, Master’s, and Doctoral programmes&lt;/li&gt;&lt;li&gt;Advising and mentoring students and early career researchers&lt;/li&gt;&lt;li&gt;Contributing to the long-term development of the department and its international standing&lt;/li&gt;&lt;li&gt;Advancing the department’s and faculty’s research priorities, with a commitment to interdisciplinary collaboration&lt;/li&gt;&lt;li&gt;Contributing to university governance and academic self-administration&lt;/li&gt;&lt;li&gt;Engaging in third mission activities and public outreach&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;div&gt;&lt;div&gt;Required qualifications:&lt;/div&gt;&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Habilitation or equivalent qualification in a 
relevant field&lt;/li&gt;&lt;li&gt;Excellent research standing and publication record in Virtual and/or Augmented Reality, including theoretical and technical foundations&lt;/li&gt;&lt;li&gt;Experience in the acquisition of competitive third-party funded research projects of a relevant volume&lt;/li&gt;&lt;li&gt;Teaching experience at university level and didactic competence&lt;/li&gt;&lt;li&gt;Experience in the (co-)supervision of academic theses&lt;/li&gt;&lt;li&gt;Collaboration and social skills&lt;/li&gt;&lt;li&gt;Fluency in English&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;Desired qualifications:&lt;/div&gt;&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Excellent scientific communication and dissemination skills&lt;/li&gt;&lt;li&gt;Interdisciplinary experience&lt;/li&gt;&lt;li&gt;Experience with academic management duties&lt;/li&gt;&lt;li&gt;Competence in leadership and management of teams&lt;/li&gt;&lt;li&gt;Competence in gender mainstreaming and diversity management&lt;/li&gt;&lt;li&gt;Fluency in German&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;German language skills are not a formal prerequisite, but proficiency at level B2 is expected within two years. The remit of the professorship requires that the successful candidate will establish Klagenfurt as primary place of work.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The university is committed to increasing the number of women among the faculty, particularly in high-level positions, and therefore specifically invites applications from qualified women. Among equally qualified candidates, women will receive preferential consideration. 
People with disabilities or chronic diseases who meet the qualification criteria are explicitly invited to apply.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The salary is subject to negotiation. The minimum gross salary for the position at this level (salary group A1 for faculty according to the Austrian Universities’ Collective Bargaining Agreement) is currently € 92,500 per year.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;In accordance with the Austrian Income Tax Act, an attractive relocation tax allowance can be granted for the first five years in the case of appointments to professorships in Austria. The prerequisites are subject to examination on a case-by-case basis.&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;div&gt;Please submit your application in English by e-mail to the University of Klagenfurt, Office of the Senate, attn. Mag.a (FH) Sabine Seebacher via &lt;a href=&quot;mailto:application_professorship@aau.at&quot;&gt;application_professorship@aau.at&lt;/a&gt; no later than September 28, 2025, including:&lt;/div&gt;&lt;div&gt;&lt;ul&gt;&lt;li style=&quot;text-align: left;&quot;&gt;a mandatory principal part not exceeding five pages (see &lt;a href=&quot;https://jobs.aau.at/wp-content/uploads/specimen_main_part_application_professorship.doc&quot;&gt;https://jobs.aau.at/wp-content/uploads/specimen_main_part_application_professorship.doc&lt;/a&gt;). 
The submission of the mandatory principal part mentioned above constitutes a necessary condition for the validity of your application.&lt;/li&gt;&lt;li&gt;one single PDF including:&lt;/li&gt;&lt;ul&gt;&lt;li style=&quot;text-align: left;&quot;&gt;a letter of motivation&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;a detailed scientific CV&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;a comprehensive list of publications, talks, and courses taught&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;a list of acquired third-party funded research projects, including role, funding organization, and amount of funding (in case of funding acquired within a consortium, please specify the amount attributed to you)&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;a compact research statement of up to two pages&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;supplementary documents, where applicable (e.g., course evaluations)&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;links to publicly available versions of your three most important publications within the scope of this professorship&lt;/li&gt;&lt;/ul&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;div&gt;For general information, please refer to our website at &lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;. For specific information about the position, please contact Prof. Dr. 
Christian Timmerer (&lt;a href=&quot;mailto:christian.timmerer@aau.at&quot;&gt;christian.timmerer@aau.at&lt;/a&gt;).&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2025/07/full-professor-of-virtual-and-augmented.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-8347057281192022277</guid><pubDate>Wed, 18 Jun 2025 19:17:00 +0000</pubDate><atom:updated>2025-06-18T21:17:51.258+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">alpen-adria-universität</category><category domain="http://www.blogger.com/atom/ns#">jobs</category><title>Up to 4 Predoc Scientist Positions (all genders welcome)</title><description>&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;The University of Klagenfurt, with approximately 1,700 employees and over 13,000 students, is located in the Alps-Adriatic region and consistently achieves excellent placements in rankings. The motto “per aspera ad astra” underscores our firm commitment to the pursuit of excellence in all activities in research, teaching, and university management. 
The principles of equality, diversity, health, sustainability, and compatibility of work and family life serve as the foundation for our work at the university.&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;The University of Klagenfurt is in the process of establishing a&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Karl Popper Kolleg&lt;/strong&gt;&amp;nbsp;(graduate school) entitled “&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;FruitScope: A DroneScope for Smart Agriculture&lt;/strong&gt;”. The following positions are open for applicants at this school with an anticipated starting date of&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;October 1, 2025&lt;/strong&gt;:&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: center; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Up to 4 Predoc Scientist Positions (all genders welcome)&lt;/strong&gt;&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li 
style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Level of employment&lt;/strong&gt;: 75 % (30 hours per week) each&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Minimum salary&lt;/strong&gt;: € 39,005.40 per annum (gross); classification according to collective bargaining agreement: B1&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Limited to&lt;/strong&gt;: 3 years&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Application deadline&lt;/strong&gt;: August 20, 2025&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Reference code&lt;/strong&gt;: 338/25&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 
0px; padding: 0px; vertical-align: baseline;&quot;&gt;Tasks and responsibilities&lt;/strong&gt;:&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Independent research and scientific qualification within the Karl Popper Kolleg FruitScope with the aim of acquiring the Doctoral Degree in Technical Sciences&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Peer-reviewed publication of scientific results in journals and at conferences&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Teamwork and student mentoring&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Active participation in public relations activities&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;This graduate school seeks to push the state of the art in navigation, coordination, sensing, and communication of multi-agent unmanned aerial vehicles (UAVs). The groups of the involved faculty publish in top international journals and conference proceedings. 
Successful applicants will be encouraged and supported to publish and present their work in such journals and proceedings and will have the opportunity to cooperate with our world-renowned international partners in science and industry. We currently cooperate with partners worldwide, mainly in the USA/Canada and Europe. We specifically encourage close and open collaboration with our peers both internationally and at the University and support international exchanges with the universities and research institutions affiliated with the graduate school (e.g., ETH Zurich, MIT, CMU, NASA, UofT, U-Mich, UPenn, Georgia Tech). Our young research groups offer a dynamic, welcoming, and friendly atmosphere and thus a collaborative and inspiring work environment with very modern infrastructure (e.g., one of the largest indoor drone halls in Europe), which is continuously updated and upgraded (e.g., soon, with one of the largest outdoor drone test fields in the world).&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Prerequisites for the appointment&lt;/strong&gt;:&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Completed Master’s or Diploma degree in electrical engineering, information and communication engineering, mechanical engineering, computer science or related fields. 
This requirement has an extended deadline and must be fulfilled two weeks before the starting date at the latest; hence, the last possible deadline for meeting this requirement is&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;September 17, 2025&lt;/strong&gt;.&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Proven knowledge and experience in at least one of the following areas: mobile robotics, wireless communications or sensing, multimedia communication, signal processing for communications, or machine learning&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Proven programming skills in at least one of the following languages: Matlab, C/C++, Java, Python, ROS or similar&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Fluency in English (both written and spoken)&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Additional desired qualifications&lt;/strong&gt;:&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Good knowledge 
of cooperative software development (e.g., with Git)&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;First scientific publication (apart from Master’s or Diploma thesis) in the area of mobile robotics, wireless sensing, or multimedia communication technology&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Relevant international or practical experience&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Good scientific communication and presentation skills&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;German language skills or willingness to acquire them within the first two years of service&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Social skills and ability to work independently&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Our offer&lt;/strong&gt;:&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;The employment contract is concluded for the position as predoc scientist and stipulates a starting salary of € 2,786.10 gross per month (14 times a year; previous experience deemed relevant to 
the job can be recognized).&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;The University of Klagenfurt also offers&lt;/strong&gt;:&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Personal and professional advanced training courses, management and career coaching, including bespoke training for women in science&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Numerous attractive additional benefits, see also&amp;nbsp;&lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Diversity- and family-friendly university culture&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature and sports&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 
255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;The application&lt;/strong&gt;:&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;If you are interested in this position, please apply in English providing the following documents:&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Letter of application explaining the motivation and including a statement of interest in research (indicating an idea for the research for your own doctoral degree)&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Curriculum vitae (please do not include a photo)&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Copies of degree certificates (Bachelor and Master)&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Copies of official transcripts (Bachelor and Master) containing a list of all courses and grades&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: 
baseline;&quot;&gt;Master’s thesis. If the thesis is not available, the candidate should provide a draft or an explanation.&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;If an applicant has not received the Master’s degree by the application deadline, the applicant should provide a declaration, written either by a supervisor or by the candidate themselves, on the feasibility of finishing the Master’s degree before September 17, 2025.&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;To apply, please select the position with the reference code 338/25 in the category “Scientific Staff” using the link “Apply for this position” in the job portal at&lt;/strong&gt;&amp;nbsp;&lt;a href=&quot;https://jobs.aau.at/en/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;https://jobs.aau.at/en/&lt;/a&gt;.&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;Candidates must provide proof that they meet the required qualifications by&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;August 20, 2025, at the latest&lt;/strong&gt;. 
However, candidates who fulfil the required qualifications&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;but do not yet possess the required Master’s degree can apply&lt;/strong&gt;, provided they are able to meet this requirement at least&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;two weeks before the starting date&lt;/strong&gt;. Therefore, the latest possible deadline for meeting this requirement is&amp;nbsp;&lt;strong style=&quot;background: transparent; border: 0px; font-weight: bold; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;September 17, 2025&lt;/strong&gt;.&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;General information about the university as an employer can be found at&amp;nbsp;&lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;. 
At the University of Klagenfurt, recruitment and staff matters are accompanied not only by the authority responsible for the recruitment procedure but also by the&amp;nbsp;&lt;a href=&quot;https://www.aau.at/en/university/organisation/representations-commissioners/equal-opportunities-working-group/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Equal Opportunities Working Group&lt;/a&gt;&amp;nbsp;and, if applicable, by the&amp;nbsp;&lt;a href=&quot;https://www.aau.at/en/university/organisation/administration-and-management/integrated-study/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Representative for Disabled Persons&lt;/a&gt;.&lt;/p&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;For further information on this specific vacancy, please contact:&lt;/p&gt;&lt;ul style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Prof Dr. Stephan Weiss, +43 463 2700 3571,&amp;nbsp;&lt;a href=&quot;mailto:Stephan.Weiss@aau.at&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Stephan.Weiss@aau.at&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Prof Dr. 
Christian Bettstetter, +43 463 2700 3640,&amp;nbsp;&lt;a href=&quot;mailto:Christian.Bettstetter@aau.at&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Christian.Bettstetter@aau.at&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Prof Dr. Bernhard Rinner, +43 463 2700 3671,&amp;nbsp;&lt;a href=&quot;mailto:Bernhard.Rinner@aau.at&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Bernhard.Rinner@aau.at&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Prof Dr. Christian Timmerer, +43 463 2700 3621,&amp;nbsp;&lt;a href=&quot;mailto:Christian.Timmerer@aau.at&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Christian.Timmerer@aau.at&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;The University of Klagenfurt aims to increase the proportion of women and therefore specifically invites qualified women to apply for the position. Where the qualification is equivalent, women will be given preferential consideration.&lt;/p&gt;&lt;p&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; text-align: justify;&quot;&gt;People with disabilities or chronic diseases who fulfil the requirements are particularly encouraged to apply. 
Travel and accommodation costs incurred during the application process will not be refunded. Under exceptional circumstances, online hearings may be possible. Translations into other languages serve informational purposes only. Solely the version advertised in the University Bulletin (&lt;/span&gt;&lt;a href=&quot;https://www.aau.at/en/university/services-contact/university-bulletin/&quot; style=&quot;background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; border: 0px; color: #0066cc; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px; padding: 0px; text-align: justify; vertical-align: baseline;&quot;&gt;Mitteilungsblatt&lt;/a&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; text-align: justify;&quot;&gt;) shall be legally binding.&lt;/span&gt;&amp;nbsp;&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2025/06/up-to-4-predoc-scientist-positions-all.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-6683959076946221809</guid><pubDate>Fri, 09 May 2025 12:42:00 +0000</pubDate><atom:updated>2025-05-12T09:29:50.939+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 150th meeting</title><description>&lt;div style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;This version of the blog post is also available at &lt;a href=&quot;https://records.sigmm.org/2025/05/07/mpeg-column-150th-mpeg-meeting-virtual-online/&quot; target=&quot;_blank&quot;&gt;ACM SIGMM 
Records&lt;/a&gt;.&lt;/span&gt;&lt;/div&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s1200/MPEG-Logo-1.png&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s320/MPEG-Logo-1.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/p&gt; The 150th MPEG meeting was held online from 31 March to 04 April 2025. The official press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-150/&quot;&gt;here&lt;/a&gt;. 
This blog post provides the following highlights:&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;b&gt;Requirements&lt;/b&gt;: MPEG-AI strategy and white paper on MPEG technologies for metaverse&lt;/li&gt;&lt;li&gt;&lt;b&gt;JVET&lt;/b&gt;: Draft Joint Call for Evidence on video compression with capability beyond Versatile Video Coding (VVC)&lt;/li&gt;&lt;li&gt;&lt;b&gt;Video&lt;/b&gt;: Gaussian splat coding and video coding for machines&lt;/li&gt;&lt;li&gt;&lt;b&gt;Audio&lt;/b&gt;: Audio coding for machines&lt;/li&gt;&lt;li&gt;&lt;b&gt;3DGH&lt;/b&gt;: 3D Gaussian splat coding&lt;/li&gt;&lt;/ul&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-AI Strategy &lt;/h2&gt;The MPEG-AI strategy envisions a future where AI and neural networks are deeply integrated into multimedia coding and processing, enabling transformative improvements in how digital content is created, compressed, analyzed, and delivered. By positioning AI at the core of multimedia systems, MPEG-AI seeks to enhance both content representation and intelligent analysis. This approach supports applications ranging from adaptive streaming and immersive media to machine-centric use cases like autonomous vehicles and smart cities. AI is employed to optimize coding efficiency, generate intelligent descriptors, and facilitate seamless interaction between content and AI systems. The strategy builds on foundational standards such as ISO/IEC 15938-13 (CDVS), 15938-15 (CDVA), and 15938-17 (Neural Network Coding), which collectively laid the groundwork for integrating AI into multimedia frameworks. &lt;br /&gt;&lt;br /&gt;Currently, MPEG is developing a family of standards under the ISO/IEC 23888 series that includes a vision document, machine-oriented video coding, and encoder optimization for AI analysis. Future work focuses on feature coding for machines and AI-based point cloud compression to support high-efficiency 3D and visual data handling. 
These efforts reflect a paradigm shift from human-centric media consumption to systems that also serve intelligent machine agents. MPEG-AI maintains compatibility with traditional media processing while enabling scalable, secure, and privacy-conscious AI deployments. Through this initiative, MPEG aims to define the future of multimedia as an intelligent, adaptable ecosystem capable of supporting complex, real-time, and immersive digital experiences. &lt;br /&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG White Paper on Metaverse Technologies &lt;/h2&gt;The MPEG white paper on metaverse technologies (cf. &lt;a href=&quot;https://www.mpeg.org/whitepapers/&quot;&gt;MPEG white papers&lt;/a&gt;) outlines the pivotal role of MPEG standards in enabling immersive, interoperable, and high-quality virtual experiences that define the emerging metaverse. It identifies core metaverse parameters – real-time operation, 3D experience, interactivity, persistence, and social engagement – and maps them to MPEG’s longstanding and evolving technical contributions. From early efforts like MPEG-4’s Binary Format for Scenes (BIFS) and Animation Framework eXtension (AFX) to MPEG-V’s sensory integration, and the advanced MPEG-I suite, these standards underpin critical features such as scene representation, dynamic 3D asset compression, immersive audio, avatar animation, and real-time streaming. Key technologies like point cloud compression (V-PCC, G-PCC), immersive video (MIV), and dynamic mesh coding (V-DMC) demonstrate MPEG’s capacity to support realistic, responsive, and adaptive virtual environments. Recent efforts include neural network compression for learned scene representations (e.g., NeRFs), haptic coding formats, and scene description enhancements, all geared toward richer user engagement and broader device interoperability. 
&lt;br /&gt;&lt;br /&gt;The document highlights five major metaverse use cases – virtual environments, immersive entertainment, virtual commerce, remote collaboration, and digital twins – all supported by MPEG innovations. It emphasizes the foundational role of MPEG-I standards (e.g., Parts 12, 14, 29, 39) for synchronizing immersive content, representing avatars, and orchestrating complex 3D scenes across platforms. Future challenges identified include ensuring interoperability across systems, advancing compression methods for AI-assisted scenarios, and embedding security and privacy protections. With decades of multimedia expertise and a future-focused standards roadmap, MPEG positions itself as a key enabler of the metaverse – ensuring that emerging virtual ecosystems are scalable, immersive, and universally accessible​. &lt;br /&gt;&lt;br /&gt;The MPEG white paper on metaverse technologies highlights several research opportunities, including efficient compression of dynamic 3D content (e.g., point clouds, meshes, neural representations), synchronization of immersive audio and haptics, real-time adaptive streaming, and scene orchestration. It also points to challenges in standardizing interoperable avatar formats, AI-enhanced media representation, and ensuring seamless user experiences across devices. Additional research directions include neural network compression, cross-platform media rendering, and developing perceptual metrics for immersive Quality of Experience (QoE).&lt;/div&gt;&lt;div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Draft Joint Call for Evidence (CfE) on Video Compression beyond Versatile Video Coding (VVC) &lt;/h2&gt;The latest JVET AHG report on ECM software development (AHG6), documented as &lt;a href=&quot;https://jvet-experts.org/doc_end_user/current_document.php?id=15389&quot;&gt;JVET-AL0006&lt;/a&gt;, shows promising results. 
Specifically, in the “Overall” row and “Y” column, there is a 27.06% improvement in coding efficiency compared to VVC, as shown in the figure below.&lt;/div&gt;&lt;div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;img height=&quot;200&quot; src=&quot;https://records.sigmm.org/wp-content/uploads/2025/04/ECM16_vs_VTM_Random_Access-1024x513.png&quot; width=&quot;400&quot; /&gt;&lt;/div&gt;The Draft Joint Call for Evidence (CfE) on video compression beyond VVC (Versatile Video Coding), identified as document &lt;a href=&quot;https://jvet-experts.org/doc_end_user/current_document.php?id=15684&quot;&gt;JVET-AL2026 | N 355&lt;/a&gt;, is being developed to explore new advancements in video compression. The CfE seeks evidence in three main areas: (a) improved compression efficiency and associated trade-offs, (b) encoding under runtime constraints, and (c) enhanced performance in additional functionalities. This initiative aims to evaluate whether new techniques can significantly outperform the current state-of-the-art VVC standard in both compression and practical deployment aspects. &lt;br /&gt;&lt;br /&gt;The visual testing will be carried out across seven categories, including various combinations of resolution, dynamic range, and use cases: SDR Random Access UHD/4K, SDR Random Access HD, SDR Low Bitrate HD, HDR Random Access 4K, HDR Random Access Cropped 8K, Gaming Low Bitrate HD, and UGC (User-Generated Content) Random Access HD. Sequences and rate points for testing have already been defined and agreed upon. For a fair comparison, rate-matched anchors using VTM (VVC Test Model) and ECM (Enhanced Compression Model) will be generated, with new configurations to enable reduced run-time evaluations. A dry-run of the visual tests is planned during the upcoming Daejeon meeting, with ECM and VTM as reference anchors, and the CfE welcomes additional submissions. 
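As background on how such coding-efficiency percentages are produced: JVET reports them as Bjøntegaard delta-rate (BD-rate), the average bitrate difference between two rate-distortion curves at equal quality, where negative values mean bitrate savings. A minimal sketch of the classic cubic-fit computation, using hypothetical rate/PSNR points rather than actual ECM/VTM data:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjøntegaard delta-rate: average bitrate difference (in %) between two
    rate-distortion curves at equal quality, using a cubic fit of
    log10(bitrate) as a function of PSNR. Negative values = bitrate savings."""
    p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))   # overlapping quality range
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (10 ** avg_log_diff - 1) * 100

# hypothetical RD points (kbps, dB PSNR); the test codec needs 25% less rate
r_anchor = [1000, 2000, 4000, 8000]; q_anchor = [34.0, 36.5, 39.0, 41.5]
r_test   = [ 750, 1500, 3000, 6000]; q_test   = [34.0, 36.5, 39.0, 41.5]
bd = bd_rate(r_anchor, q_anchor, r_test, q_test)   # ≈ -25.0
```

The CfE's rate-matched anchors serve exactly this purpose: they pin the comparison points so that such curve-based averages (and the visual tests) compare like with like.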
Following this dry-run, the final Call for Evidence is expected to be issued in July, with responses due in October. &lt;br /&gt;&lt;br /&gt;The Draft Joint Call for Evidence (CfE) on video compression beyond VVC invites research into next-generation video coding techniques that offer improved compression efficiency, reduced encoding complexity under runtime constraints, and enhanced functionalities such as scalability or perceptual quality. Key research aspects include optimizing the trade-off between bitrate and visual fidelity, developing fast encoding methods suitable for constrained devices, and advancing performance in emerging use cases like HDR, 8K, gaming, and user-generated content. &lt;br /&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;3D Gaussian Splat Coding &lt;/h2&gt;Gaussian splatting is a real-time radiance field rendering method that represents a scene using 3D Gaussians. Each Gaussian has parameters like position, scale, color, opacity, and orientation, and together they approximate how light interacts with surfaces in a scene. Instead of ray marching (as in NeRF), it renders images by splatting the Gaussians onto a 2D image plane and blending them using a rasterization pipeline, which is GPU-friendly and much faster. Developed by &lt;a href=&quot;https://dl.acm.org/doi/10.1145/3592433&quot;&gt;Kerbl et al. (2023)&lt;/a&gt;, it is capable of real-time rendering (60+ fps) and outperforms previous NeRF-based methods in speed and visual quality. Gaussian splat coding refers to the compression and streaming of 3D Gaussian representations for efficient storage and transmission. It is an active research area and is under standardization consideration in MPEG. &lt;br /&gt;&lt;br /&gt;The MPEG technical requirements working group, together with the MPEG video working group, has started an exploration on Gaussian splat coding, while the MPEG coding of 3D graphics and haptics (3DGH) working group addresses 3D Gaussian splat coding. 
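The splatting-and-blending idea described above can be illustrated with a deliberately simplified sketch: isotropic 2D Gaussians composited front to back. Real 3D Gaussian splatting additionally handles anisotropic covariances, camera projection, and view-dependent color via spherical harmonics; none of that is modeled here.

```python
import numpy as np

def render_splats(splats, H=32, W=32):
    """Front-to-back alpha compositing of isotropic 2D Gaussian 'splats'.
    Each splat: 2D pixel position 'mu', std 'sigma', RGB 'color', scalar
    'opacity', and a 'depth' used only for near-to-far sorting."""
    ys, xs = np.mgrid[0:H, 0:W]
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))                      # light not yet absorbed
    for s in sorted(splats, key=lambda s: s["depth"]):   # near to far
        d2 = (xs - s["mu"][0]) ** 2 + (ys - s["mu"][1]) ** 2
        alpha = s["opacity"] * np.exp(-d2 / (2 * s["sigma"] ** 2))
        image += (transmittance * alpha)[..., None] * np.asarray(s["color"], float)
        transmittance *= 1.0 - alpha                     # occlusion by nearer splats
    return image

splats = [
    {"mu": (16, 16), "sigma": 4.0, "color": (1, 0, 0), "opacity": 0.9, "depth": 1.0},
    {"mu": (20, 16), "sigma": 6.0, "color": (0, 0, 1), "opacity": 0.8, "depth": 2.0},
]
img = render_splats(splats)   # the nearer red splat dominates at pixel (16, 16)
```

The per-splat parameter sets in this sketch (position, scale, color, opacity) are precisely the payload that Gaussian splat coding must compress and stream.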
Draft Gaussian splat coding use cases and requirements are &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-150/&quot;&gt;available&lt;/a&gt; and various joint exploration experiments (JEEs) are conducted between meetings. &lt;br /&gt;&lt;br /&gt;(3D) Gaussian splat coding is actively researched in academia, also in the context of streaming, e.g., in “&lt;a href=&quot;https://arxiv.org/abs/2408.14823&quot;&gt;LapisGS: Layered Progressive 3D Gaussian Splatting for Adaptive Streaming&lt;/a&gt;” or “&lt;a href=&quot;https://dl.acm.org/doi/10.1145/3712676.3714445&quot;&gt;LTS: A DASH Streaming System for Dynamic Multi-Layer 3D Gaussian Splatting Scenes&lt;/a&gt;”. The research aspects of 3D Gaussian splat coding and streaming span a wide range of areas across computer graphics, compression, machine learning, and systems for real-time immersive media, focusing in particular on efficiently representing and transmitting Gaussian-based neural scene representations for real-time rendering. Key areas include compression of Gaussian parameters (position, scale, color, opacity), perceptual and geometry-aware optimizations, and neural compression techniques such as learned latent coding. Streaming challenges involve adaptive, view-dependent delivery, level-of-detail management, and low-latency rendering on edge or mobile devices. Additional research directions include standardizing file formats, integrating with scene graphs, and ensuring interoperability with existing 3D and immersive media frameworks. &lt;br /&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Audio and Video Coding for Machines &lt;/h2&gt;The Call for Proposals on Audio Coding for Machines (ACoM), issued by the MPEG audio coding working group, aims to develop a standard for efficiently compressing audio, multi-dimensional signals (e.g., medical data), or extracted features for use in machine-driven applications. 
The standard targets use cases such as connected vehicles, audio surveillance, diagnostics, health monitoring, and smart cities, where vast data streams must be transmitted, stored, and processed with low latency and high fidelity. The ACoM system is designed in two phases: the first focusing on near-lossless compression of audio and metadata to facilitate training of machine learning models, and the second expanding to lossy compression of features optimized for specific applications. The goal is to support hybrid consumption – by machines and, where needed, humans – while ensuring interoperability, low delay, and efficient use of storage and bandwidth. &lt;br /&gt;&lt;br /&gt;The CfP outlines technical requirements, submission guidelines, and evaluation metrics. Participants must provide decoders compatible with Linux/x86 systems, demonstrate performance through objective metrics like compression ratio, encoder/decoder runtime, and memory usage, and undergo a mandatory cross-checking process. Selected proposals will contribute to a reference model and working draft of the standard. Proponents must register by August 1, 2025, with submissions due in September, and evaluation taking place in October. The selection process emphasizes lossless reproduction, metadata fidelity, and significant improvements over a baseline codec, with a path to merge top-performing technologies into a unified solution for standardization. &lt;br /&gt;&lt;br /&gt;Research aspects of Audio Coding for Machines (ACoM) include developing efficient compression techniques for audio and multi-dimensional data that preserve key features for machine learning tasks, optimizing encoding for low-latency and resource-constrained environments, and designing hybrid formats suitable for both machine and human consumption. 
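The objective metrics named in the CfP (compression ratio, encoder/decoder runtime, memory usage) can be made concrete with a toy lossless pipeline; zlib here is only a placeholder codec for illustration, not an ACoM technology:

```python
import zlib
import numpy as np

def evaluate_codec(samples: np.ndarray) -> float:
    """Toy objective evaluation in the spirit of the ACoM CfP: compress a
    16-bit audio buffer losslessly and report the compression ratio.
    zlib is only a placeholder codec; the real CfP additionally measures
    encoder/decoder runtime and memory usage, with mandatory cross-checking."""
    raw = samples.tobytes()
    compressed = zlib.compress(raw, level=9)
    decoded = np.frombuffer(zlib.decompress(compressed), dtype=samples.dtype)
    assert np.array_equal(decoded, samples)        # lossless round trip
    return len(raw) / len(compressed)

ratio = evaluate_codec(np.zeros(48000, dtype=np.int16))  # one second of silence
```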
Additional research areas involve creating interoperable feature representations, enhancing metadata handling for context-aware processing, evaluating trade-offs between lossless and lossy compression, and integrating machine-optimized codecs into real-world applications like surveillance, diagnostics, and smart systems. &lt;br /&gt;&lt;br /&gt;The MPEG video coding working group approved the committee draft (CD) for ISO/IEC 23888-2 video coding for machines (VCM). VCM aims to encode visual content in a way that maximizes machine task performance, such as computer vision, scene understanding, autonomous driving, smart surveillance, robotics and IoT. Instead of preserving photorealistic quality, VCM seeks to retain features and structures important for machines, possibly at much lower bitrates than traditional video codecs. The CD introduces several new tools and enhancements aimed at improving machine-centric video processing efficiency. These include updates to spatial resampling, such as the signaling of the inner decoded picture size to better support scalable inference. For temporal resampling, the CD enables adaptive resampling ratios and introduces pre- and post-filters within the temporal resampler to maintain task-relevant temporal features. In the filtering domain, it adopts bit depth truncation techniques – integrating bit depth shifting, luma enhancement, and chroma reconstruction – to optimize both signaling efficiency and cross-platform interoperability. Luma enhancement is further refined through an integer-based implementation for luma distribution parameters, while chroma reconstruction is stabilized across different hardware platforms. Additionally, the CD proposes removing the neural network-based in-loop filter (NNLF) to simplify the pipeline. 
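The bit depth truncation mentioned above can be illustrated with a toy example: the encoder drops least-significant bits, and the decoder restores the range. This sketch shows only the general principle; the actual CD pairs truncation with luma enhancement and chroma reconstruction stages, which are omitted here.

```python
import numpy as np

def truncate_bit_depth(luma: np.ndarray, shift: int = 2):
    """Toy bit depth truncation: the encoder drops the `shift` least-significant
    bits of each luma sample; the decoder shifts back and adds half a
    quantization step to reduce bias (assumes shift >= 1)."""
    truncated = luma >> shift                          # cheaper to encode
    reconstructed = (truncated << shift) + (1 << (shift - 1))
    return truncated, reconstructed

luma = np.array([0, 17, 128, 255], dtype=np.uint16)    # 8-bit samples
trunc, recon = truncate_bit_depth(luma, shift=2)
# reconstruction error is bounded by half the dropped range (here ±2),
# which machine-vision tasks often tolerate at a notable bitrate saving
```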
Finally, in terms of bitstream structure, it adopts a flattened structure with new signaling methods to support efficient random access and better coordination with system layers, aligning with the low-latency, high-accuracy needs of machine-driven applications. &lt;br /&gt;&lt;br /&gt;Research in VCM focuses on optimizing video representation for downstream machine tasks, exploring task-driven compression techniques that prioritize inference accuracy over perceptual quality. Key areas include joint video and feature coding, adaptive resampling methods tailored to machine perception, learning-based filter design, and bitstream structuring for efficient decoding and random access. Other important directions involve balancing bitrate and task accuracy, enhancing robustness across platforms, and integrating machine-in-the-loop optimization to co-design codecs with AI inference pipelines. &lt;br /&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Concluding Remarks &lt;/h2&gt;The 150th MPEG meeting marks significant progress across AI-enhanced media, immersive technologies, and machine-oriented coding. With ongoing work on MPEG-AI, metaverse standards, next-gen video compression, Gaussian splat representation, and machine-friendly audio and video coding, MPEG continues to shape the future of interoperable, intelligent, and adaptive multimedia systems. The research opportunities and standardization efforts outlined in this meeting provide a strong foundation for innovations that support real-time, efficient, and cross-platform media experiences for both human and machine consumption. &lt;br /&gt;&lt;br /&gt;The 151st MPEG meeting will be held in Daejeon, Korea, from 30 June to 04 July 2025. Click &lt;a href=&quot;https://www.mpeg.org/&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;div&gt;
&lt;!--/wp:list--&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2025/05/mpeg-news-report-from-150th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-255101422885123930</guid><pubDate>Fri, 14 Mar 2025 08:40:00 +0000</pubDate><atom:updated>2025-03-14T09:43:07.439+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 149th meeting</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. 
This version of the blog post will also be posted at ACM SIGMM Records.&lt;/span&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s1200/MPEG-Logo-1.png&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s320/MPEG-Logo-1.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The 149th MPEG meeting took place in Geneva, Switzerland, from January 20 to 24, 2025. The official press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-149/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. 
MPEG promoted three standards (among others) to Final Draft International Standard (FDIS), driving innovation in next-generation immersive audio and video coding, and adaptive streaming:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;b&gt;MPEG-I Immersive Audio&lt;/b&gt; enables realistic 3D audio with six degrees of freedom (6DoF).&lt;/li&gt;&lt;li&gt;&lt;b&gt;MPEG Immersive Video (Second Edition)&lt;/b&gt; introduces advanced coding tools for volumetric video.&lt;/li&gt;&lt;li&gt;&lt;b&gt;MPEG-DASH (Sixth Edition)&lt;/b&gt; enhances low-latency streaming, content steering, and interactive media.&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;div&gt;This blog post focuses on these new standards/editions, based on the press release and amended with research aspects relevant to the ACM SIGMM community.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-I Immersive Audio&lt;/h2&gt;&lt;div&gt;At the 149th MPEG meeting, MPEG Audio Coding (WG 6) promoted ISO/IEC 23090-4 MPEG-I immersive audio to Final Draft International Standard (FDIS), marking a major milestone in the development of next-generation audio technology.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;MPEG-I immersive audio is a groundbreaking standard designed for the compact and highly realistic representation of spatial sound. Tailored for Metaverse applications, including Virtual, Augmented, and Mixed Reality (VR/AR/MR), it enables seamless real-time rendering of interactive 3D audio with six degrees of freedom (6DoF). Users can not only turn their heads in any direction (pitch/yaw/roll) but also move freely through virtual environments (x/y/z), creating an unparalleled sense of immersion.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;True to MPEG’s legacy, this standard is optimized for efficient distribution – even over networks with severe bitrate constraints. 
Unlike proprietary VR/AR audio solutions, MPEG-I Immersive Audio ensures broad interoperability, long-term stability, and suitability for both streaming and downloadable content. It also natively integrates MPEG-H 3D Audio for high-quality compression.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The standard models a wide range of real-world acoustic effects to enhance realism. It captures detailed sound source properties (e.g., level, point sources, extended sources, directivity characteristics, and Doppler effects) as well as complex environmental interactions (e.g., reflections, reverberation, diffraction, and both total and partial occlusion). Additionally, it supports diverse acoustic environments, including outdoor spaces, multiroom scenes with connecting portals, and areas with dynamic openings such as doors and windows. Its rendering engine balances computational efficiency with high-quality output, making it suitable for a variety of applications.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Further reinforcing its impact, the upcoming ISO/IEC 23090-34 Immersive audio reference software will fully implement MPEG-I immersive audio in a real-time framework. This interactive 6DoF experience will facilitate industry adoption and accelerate innovation in immersive audio. The reference software is expected to reach FDIS status by April 2025.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;With MPEG-I immersive audio, MPEG continues to set the standard for the future of interactive and spatial audio, paving the way for more immersive digital experiences.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Research can focus on optimizing the streaming and compression of MPEG-I immersive audio for constrained networks, ensuring efficient delivery without compromising spatial accuracy. 
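To give a flavor of the acoustic modeling involved, consider the Doppler effect, one of the source properties the standard captures: a 6DoF renderer has to evaluate such relations per frame as sources and listeners move. A minimal sketch of the classic relation, with speeds along the source-listener line:

```python
def doppler_shift(f_source: float, v_source: float, v_listener: float,
                  c: float = 343.0) -> float:
    """Classic Doppler relation f' = f * (c + v_listener) / (c - v_source),
    with speeds measured along the source-listener line (positive when
    moving toward each other) and c the speed of sound in air (m/s)."""
    return f_source * (c + v_listener) / (c - v_source)

# a 440 Hz source approaching a static listener at 20 m/s is heard higher
f = doppler_shift(440.0, 20.0, 0.0)
```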
Another key area is improving real-time 6DoF audio rendering by balancing computational efficiency and perceptual realism, particularly in modeling complex acoustic effects like occlusions, reflections, and Doppler shifts for interactive VR/AR/MR applications.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Immersive Video (Second Edition)&lt;/h2&gt;&lt;div&gt;At the 149th MPEG meeting, MPEG Video Coding (WG 4) advanced the second edition of ISO/IEC 23090-12 MPEG immersive video (MIV) to Final Draft International Standard (FDIS), marking a significant step forward in immersive video technology.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;MIV enables the efficient compression, storage, and distribution of immersive video content, where multiple real or virtual cameras capture a 3D scene. Designed for next-generation applications, the standard supports playback with six degrees of freedom (6DoF), allowing users to not only change their viewing orientation (pitch/yaw/roll) but also move freely within the scene (x/y/z). By leveraging strong hardware support for widely used video formats, MPEG immersive video provides a highly flexible framework for multi-view video plus depth (MVD) and multi-plane image (MPI) video coding, making volumetric video more accessible and efficient.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;With the second edition, MPEG continues to expand the capabilities of MPEG immersive video, introducing a range of new technologies to enhance coding efficiency and support more advanced immersive experiences. 
Key additions include:&lt;/div&gt;&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Geometry coding using luma and chroma planes, improving depth representation&lt;/li&gt;&lt;li&gt;Capture device information, enabling better reconstruction of the original scene&lt;/li&gt;&lt;li&gt;Patch margins and background views, optimizing scene composition&lt;/li&gt;&lt;li&gt;Static background atlases, reducing redundant data for stationary elements&lt;/li&gt;&lt;li&gt;Support for decoder-side depth estimation, enhancing depth accuracy&lt;/li&gt;&lt;li&gt;Chroma dynamic range modification, improving color fidelity&lt;/li&gt;&lt;li&gt;Piecewise linear normalized disparity quantization and linear depth quantization, refining depth precision&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;The second edition also introduces two new profiles: (1) MIV Simple MPI profile, allowing MPI content playback with a single 2D video decoder, and (2) MIV 2 profile, a superset of existing profiles that incorporates all newly added tools.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;With these advancements, MPEG immersive video continues to push the boundaries of immersive media, providing a robust and efficient solution for next-generation video applications.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Possible research may explore advancements in MPEG immersive video to improve compression efficiency and real-time streaming while preserving depth accuracy and spatial quality. 
Another key area is enhancing 6DoF video rendering by leveraging new coding tools like decoder-side depth estimation and geometry coding, enabling more precise scene reconstruction and seamless user interaction in volumetric video applications.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-DASH (Sixth Edition)&lt;/h2&gt;&lt;div&gt;At the 149th MPEG meeting, MPEG Systems (WG 3) advanced the sixth edition of MPEG-DASH (ISO/IEC 23009-1 Media presentation description and segment formats) by promoting it to the Final Draft International Standard (FDIS), the final stage of standards development. This milestone underscores MPEG’s ongoing commitment to innovation and responsiveness to evolving market needs.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The sixth edition introduces several key enhancements to improve the flexibility and efficiency of MPEG-DASH:&lt;/div&gt;&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Alternative media presentation support, enabling seamless switching between main and alternative streams&lt;/li&gt;&lt;li&gt;Content steering signaling across multiple CDNs, optimizing content delivery&lt;/li&gt;&lt;li&gt;Enhanced segment sequence addressing, improving low-latency streaming and faster tune-in&lt;/li&gt;&lt;li&gt;Compact duration signaling using patterns, reducing MPD overhead&lt;/li&gt;&lt;li&gt;Support for Common Media Client Data (CMCD), enabling better client-side analytics&lt;/li&gt;&lt;li&gt;Nonlinear playback for interactive storylines, expanding support for next-generation media experiences&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;With these advancements, MPEG-DASH continues to evolve as a robust and scalable solution for adaptive streaming, ensuring greater efficiency, flexibility, and enhanced user experiences across a wide range of applications.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: While advancing MPEG-DASH for more efficient 
and flexible adaptive streaming has been subject to research for a while, optimizing content delivery across multiple CDNs while minimizing latency and optimizing QoE remains an open issue. Another key area is enhancing interactivity and user experiences by leveraging new features like nonlinear playback for interactive storylines and improved client-side analytics through Common Media Client Data (CMCD).&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The 150th MPEG meeting will be held online from March 31 to April 04, 2025. Click &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-150/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/div&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2025/03/mpeg-news-report-from-149th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-2602464759678514459</guid><pubDate>Fri, 06 Dec 2024 14:14:00 +0000</pubDate><atom:updated>2025-03-14T09:35:00.495+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 148th meeting</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. 
This version of the blog post will also be posted at ACM SIGMM Records.&lt;/span&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s1200/MPEG-Logo-1.png&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s320/MPEG-Logo-1.png&quot; /&gt;&lt;/a&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/p&gt;&lt;p&gt;The 148th MPEG meeting took place in Kemer, Türkiye, from November 4 to 8, 2024. The official press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-148/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; and includes the following highlights:&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;!--wp:paragraph--&gt;

&lt;!--/wp:list--&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;strong&gt;Point Cloud Coding&lt;/strong&gt;: AI-based point cloud coding &amp;amp; enhanced G-PCC&lt;/li&gt;&lt;li&gt;&lt;strong&gt;MPEG Systems&lt;/strong&gt;: New Part of MPEG DASH for redundant encoding and packaging, reference software and conformance of ISOBMFF, and a new structural CMAF brand profile&lt;/li&gt;&lt;li&gt;&lt;strong&gt;Video Coding&lt;/strong&gt;: New part of MPEG-AI and 2nd edition of conformance and reference software for MPEG Immersive Video (MIV)&lt;/li&gt;&lt;li&gt;MPEG completes &lt;strong&gt;subjective quality testing for film grain synthesis&lt;/strong&gt; using the Film Grain Characteristics SEI message&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/a/AVvXsEj7c0P8ZRGVTHTR0GMo9Drc-VLMLSlWD4hms7MpsqC9g8NhKvkHvJkPUS4UT4bhqly6V1_HxdFv3OlLPELROCTtMkCGip9KzRrkWZrT5B-TV_aF0J1PID3d8Ptbb2KoXAB65ltsECMsylpWjJUuEeYsj18sqqKYGx0-O7XrPygjU3t8mnrWOe8oi8f2P2M&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img alt=&quot;148th MPEG Meeting, Kemer, Türkiye, November 4-8, 2024.&quot; data-original-height=&quot;576&quot; data-original-width=&quot;1024&quot; height=&quot;225&quot; src=&quot;https://blogger.googleusercontent.com/img/a/AVvXsEj7c0P8ZRGVTHTR0GMo9Drc-VLMLSlWD4hms7MpsqC9g8NhKvkHvJkPUS4UT4bhqly6V1_HxdFv3OlLPELROCTtMkCGip9KzRrkWZrT5B-TV_aF0J1PID3d8Ptbb2KoXAB65ltsECMsylpWjJUuEeYsj18sqqKYGx0-O7XrPygjU3t8mnrWOe8oi8f2P2M=w400-h225&quot; title=&quot;148th MPEG Meeting, Kemer, Türkiye, November 4-8, 2024.&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;span 
style=&quot;background-color: white; color: #555d66; font-family: &amp;quot;Noto Serif&amp;quot;; font-size: 13px; white-space-collapse: preserve;&quot;&gt;148th MPEG Meeting, Kemer, Türkiye, November 4-8, 2024.&lt;/span&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;!--wp:heading--&gt;
&lt;h2&gt;Point Cloud Coding&lt;/h2&gt;
&lt;!--/wp:heading--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;At the 148&lt;sup&gt;th&lt;/sup&gt; MPEG meeting, &lt;em&gt;MPEG Coding of 3D Graphics and Haptics&lt;/em&gt; (WG 7) launched a new AI-based Point Cloud Coding standardization project. MPEG WG 7 reviewed six responses to a Call for Proposals (CfP) issued in April 2024 targeting the full range of point cloud formats, from dense point clouds used in immersive applications to sparse point clouds generated by Light Detection and Ranging (LiDAR) sensors in autonomous driving. With bit depths ranging from 10 to 18 bits, the CfP called for solutions that could meet the precision requirements of these varied use cases.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;Among the six reviewed proposals, the leading proposal distinguished itself with a hybrid coding strategy that integrates end-to-end learning-based geometry coding and traditional attribute coding. This proposal demonstrated exceptional adaptability, capable of efficiently encoding both dense point clouds for immersive experiences and sparse point clouds from LiDAR sensors. With its unified design, the system supports inter-prediction coding using a shared model with intra-coding, applicable across various bitrate requirements without retraining. Furthermore, the proposal offers flexible configurations for both lossy and lossless geometry coding.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;Performance assessments highlighted the leading proposal’s effectiveness, with significant bitrate reductions compared to traditional codecs: a 47% reduction for dense, dynamic sequences in immersive applications and a 35% reduction for sparse dynamic sequences in LiDAR data. For combined geometry and attribute coding, it achieved a 40% bitrate reduction across both dense and sparse dynamic sequences, while subjective evaluations confirmed its superior visual quality over baseline codecs.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;
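&lt;p&gt;The quoted reductions are average bitrate differences between rate-quality curves, i.e., Bjøntegaard-style deltas. As a rough illustration only — the official JVET metric fits a cubic polynomial in the log-rate domain, whereas this sketch (with a hypothetical helper name) interpolates piecewise-linearly — the computation looks like this:&lt;/p&gt;

```python
import math

def bd_rate_pct(anchor, test_curve, samples=100):
    """Average bitrate difference (%) between two rate-quality curves.
    Piecewise-linear stand-in for the Bjoentegaard delta; a negative
    result means test_curve needs fewer bits for the same quality."""
    def log_rate_at(curve, q):
        # linear interpolation of log10(bitrate) over quality
        pts = sorted(curve, key=lambda p: p[1])
        for (r0, q0), (r1, q1) in zip(pts, pts[1:]):
            if q >= q0 and q1 >= q:
                t = (q - q0) / (q1 - q0) if q1 != q0 else 0.0
                return math.log10(r0) + t * (math.log10(r1) - math.log10(r0))
        raise ValueError("quality outside curve range")

    # compare only over the overlapping quality range of both curves
    lo = max(min(q for _, q in anchor), min(q for _, q in test_curve))
    hi = min(max(q for _, q in anchor), max(q for _, q in test_curve))
    diffs = [log_rate_at(test_curve, lo + (hi - lo) * i / samples)
             - log_rate_at(anchor, lo + (hi - lo) * i / samples)
             for i in range(samples + 1)]
    return (10 ** (sum(diffs) / len(diffs)) - 1.0) * 100.0
```

&lt;p&gt;A codec that halves the bitrate at every quality point comes out at −50% under this measure, which is how figures such as the 47% and 35% reductions above should be read.&lt;/p&gt;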

&lt;!--wp:paragraph--&gt;
&lt;p&gt;The leading proposal has been selected as the initial test model, which can be seen as a baseline implementation for future improvements and developments. Additionally, MPEG issued a working draft and common test conditions.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;&lt;strong&gt;Research aspects&lt;/strong&gt;: The initial test model, like those for other codec test models, is typically available as open source. This enables both academia and industry to contribute to refining various elements of the upcoming AI-based Point Cloud Coding standard. Of particular interest is how training data and processes are incorporated into the standardization project and their impact on the final standard.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;Another point cloud-related project is called Enhanced G-PCC, which introduces several advanced features to improve the compression and transmission of 3D point clouds. Notable enhancements include inter-frame coding, refined octree coding techniques, Trisoup surface coding for smoother geometry representation, and dynamic Optimal Binarization with Update On-the-fly (OBUF) modules. These updates provide higher compression efficiency while managing computational complexity and memory usage, making them particularly advantageous for real-time processing and high visual fidelity applications, such as LiDAR data for autonomous driving and dense point clouds for immersive media.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;
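&lt;p&gt;To give a feel for the octree geometry coding that G-PCC builds on, here is a deliberately simplified sketch — not the actual Enhanced G-PCC algorithm, which adds context-based entropy coding, OBUF, inter-frame prediction, and Trisoup on top. Each occupied node emits one occupancy byte marking which of its eight children contain points:&lt;/p&gt;

```python
def encode_octree(points, origin, size, depth):
    """Toy octree geometry coder: recursively partition a cubic volume
    and emit one 8-bit occupancy mask per occupied internal node."""
    if depth == 0 or not points:
        return []
    half = size / 2.0
    children = [[] for _ in range(8)]
    for (x, y, z) in points:
        i = 0
        if x >= origin[0] + half: i += 4
        if y >= origin[1] + half: i += 2
        if z >= origin[2] + half: i += 1
        children[i].append((x, y, z))
    # MSB-first occupancy byte: bit (7 - i) set when child i is occupied
    occupancy = sum(2 ** (7 - i) for i in range(8) if children[i])
    out = [occupancy]
    for i in range(8):
        if children[i]:
            child_origin = (origin[0] + half * (i // 4),
                            origin[1] + half * ((i // 2) % 2),
                            origin[2] + half * (i % 2))
            out.extend(encode_octree(children[i], child_origin, half, depth - 1))
    return out
```

&lt;p&gt;Real codecs gain most of their efficiency not from the tree itself but from how these occupancy symbols are contextualized and entropy-coded — which is exactly where the dynamic OBUF modules mentioned above come in.&lt;/p&gt;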

&lt;!--wp:paragraph--&gt;
&lt;p&gt;By adding this new part to MPEG-I, MPEG addresses the industry&#39;s growing demand for scalable, versatile 3D compression technology capable of handling both dense and sparse point clouds. Enhanced G-PCC provides a robust framework that meets the diverse needs of both current and emerging applications in 3D graphics and multimedia, solidifying its role as a vital component of modern multimedia systems.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:heading--&gt;
&lt;h2&gt;MPEG Systems Updates&lt;/h2&gt;
&lt;!--/wp:heading--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;At its 148&lt;sup&gt;th&lt;/sup&gt; meeting, &lt;em&gt;MPEG Systems&lt;/em&gt; (WG 3) worked on the following aspects, among others:&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:list--&gt;
&lt;ul&gt;&lt;li&gt;New Part of MPEG DASH for redundant encoding and packaging&lt;/li&gt;&lt;li&gt;Reference software and conformance of ISOBMFF&lt;/li&gt;&lt;li&gt;A new structural CMAF brand profile&lt;/li&gt;&lt;/ul&gt;
&lt;!--/wp:list--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;The second edition of ISO/IEC 14496-32 (ISOBMFF) introduces updated reference software and conformance guidelines, and the new CMAF brand profile supports Multi-View High Efficiency Video Coding (MV-HEVC), which is compatible with devices like Apple Vision Pro and Meta Quest 3.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;
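&lt;p&gt;A CMAF brand is ultimately signaled as a four-character code in the ISOBMFF &#39;ftyp&#39; box. The press release does not give the new brand&#39;s code, so the sketch below uses the existing CMAF structural brand &#39;cmf2&#39; as a stand-in while showing how such brands are read:&lt;/p&gt;

```python
import struct

def parse_ftyp(data):
    """Parse an ISOBMFF 'ftyp' box (ISO/IEC 14496-12): 32-bit size,
    'ftyp' type, major brand, minor version, then compatible brands."""
    size, box_type = struct.unpack(">I4s", data[:8])
    if box_type != b"ftyp":
        raise ValueError("not an ftyp box")
    major, minor = struct.unpack(">4sI", data[8:16])
    brands = [data[i:i + 4].decode("ascii") for i in range(16, size, 4)]
    return major.decode("ascii"), minor, brands

# a tiny hand-built ftyp box whose compatible brands include 'cmf2'
# (stand-in for whichever brand the new profile defines)
box = struct.pack(">I4s4sI", 24, b"ftyp", b"isom", 0) + b"isomcmf2"
```

&lt;p&gt;A player checks this brand list to decide whether it can process the track — e.g., whether an MV-HEVC-capable device should attempt stereoscopic playback.&lt;/p&gt;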

&lt;!--wp:paragraph--&gt;
&lt;p&gt;The new part of MPEG DASH, ISO/IEC 23009-9, addresses redundant encoding and packaging for segmented live media (REAP). The standard is designed for scenarios where redundant encoding and packaging are essential, such as 24/7 live media production and distribution in cloud-based workflows. It specifies formats for interchangeable live media ingest and stream announcements, as well as formats for generating interchangeable media presentation descriptions. Additionally, it provides failover support and mechanisms for reintegrating distributed components in the workflow, whether they involve file-based content, live inputs, or a combination of both.&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:paragraph--&gt;
&lt;p&gt;&lt;strong&gt;Research aspects&lt;/strong&gt;: With the FDIS of MPEG DASH REAP available, the following topics offer potential for both academic and industry-driven research aligned with the standard&#39;s objectives (in no particular order or priority):&lt;/p&gt;
&lt;!--/wp:paragraph--&gt;

&lt;!--wp:list--&gt;
&lt;ul&gt;&lt;li&gt;&lt;em&gt;Optimization of redundant encoding and packaging&lt;/em&gt;: Investigate methods to minimize resource usage (e.g., computational power, storage, and bandwidth) in redundant encoding and packaging workflows. Explore trade-offs between redundancy levels and quality of service (QoS) in segmented live media scenarios.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Interoperability of live media Ingest formats&lt;/em&gt;: Evaluate the interoperability of the standard&#39;s formats with existing live media workflows and tools. Develop techniques for seamless integration with legacy systems and emerging cloud-based media workflows.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Failover mechanisms for cloud-based workflows&lt;/em&gt;: Study the reliability and latency of failover mechanisms in distributed live media workflows. Propose enhancements to the reintegration of failed components to maintain uninterrupted service.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Standardized stream announcements and descriptions&lt;/em&gt;: Analyze the efficiency and scalability of stream announcement formats in large-scale live streaming scenarios. Research methods for dynamically updating media presentation descriptions during live events.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Hybrid workflow support&lt;/em&gt;: Investigate the challenges and opportunities in combining file-based and live input workflows within the standard. Explore strategies for adaptive workflow transitions between live and on-demand content.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Cloud-based workflow scalability&lt;/em&gt;: Examine the scalability of the REAP standard in high-demand scenarios, such as global live event streaming. Study the impact of cloud-based distributed workflows on latency and synchronization.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Security and resilience&lt;/em&gt;: Research security challenges related to redundant encoding and packaging in cloud environments. 
Develop techniques to enhance the resilience of workflows against cyberattacks or system failures.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Performance metrics and quality assessment&lt;/em&gt;: Define performance metrics for evaluating the effectiveness of REAP in live media workflows. Explore objective and subjective quality assessment methods for media streams delivered using this standard.&lt;/li&gt;&lt;/ul&gt;
&lt;!--/wp:list--&gt;
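&lt;p&gt;The interchangeability at the heart of REAP — redundant packagers emitting segments with identical numbering so that a downstream consumer can fail over between them — can be sketched as follows (a toy model, not the normative ISO/IEC 23009-9 formats):&lt;/p&gt;

```python
def merge_redundant_timelines(sources):
    """Build one playable timeline from redundant packager outputs.
    Each source is {"name": str, "segments": {number: payload}}; because
    numbering is interchangeable across sources, any gap in one source
    is filled from the next. Earlier entries in 'sources' take priority."""
    chosen = {}
    for src in reversed(sources):                     # later writes lose,
        for num, payload in src["segments"].items():  # so first source wins
            chosen[num] = (num, src["name"], payload)
    return [chosen[n] for n in sorted(chosen)]
```

&lt;p&gt;If the primary packager misses segment 3 during an outage, the merged timeline transparently serves segment 3 from the backup and returns to the primary afterwards — the reintegration behavior the standard is designed to support.&lt;/p&gt;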

&lt;!--wp:paragraph--&gt;
&lt;p&gt;The current/updated status of MPEG-DASH is shown in the figure below.&lt;/p&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBig-UhjLw3IlSH9vfK3IaeOjXgr1eGVG8pWlucw_xHr_XzIQwoUKoYQLDg7VkMRBbrDPxv80LLSXeV9oEFa1HjrvF4B_a6fYUwdYX3Yf8L-ZAHOgsXGONXhkV_fRrfKX6tW9cd7aOT189bbf1x5OUTsvqnngrDJ1cc-5DuqVzIqnf1m6-C3wcabXrMrI/s1024/MPEG-DASH-standard-status.png&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;576&quot; data-original-width=&quot;1024&quot; height=&quot;225&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBig-UhjLw3IlSH9vfK3IaeOjXgr1eGVG8pWlucw_xHr_XzIQwoUKoYQLDg7VkMRBbrDPxv80LLSXeV9oEFa1HjrvF4B_a6fYUwdYX3Yf8L-ZAHOgsXGONXhkV_fRrfKX6tW9cd7aOT189bbf1x5OUTsvqnngrDJ1cc-5DuqVzIqnf1m6-C3wcabXrMrI/w400-h225/MPEG-DASH-standard-status.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;MPEG-DASH status, November 2024.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;h2&gt;Video Coding Updates&lt;/h2&gt;&lt;p&gt;In terms of video coding, two noteworthy updates are described here:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;Part 3 of MPEG-AI, ISO/IEC 23888-3 – Optimization of encoders and receiving systems for machine analysis of coded video content, reached Committee Draft Technical Report (CDTR) status&lt;/li&gt;&lt;li&gt;Second edition of conformance and reference software for MPEG Immersive Video (MIV). This draft includes verified and validated conformance bitstreams and encoding and decoding reference software based on version 22 of the Test model for MPEG immersive video (TMIV). 
The test model, objective metrics, and some other tools are publicly available at &lt;a href=&quot;https://gitlab.com/mpeg-i-visual&quot;&gt;https://gitlab.com/mpeg-i-visual&lt;/a&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;strong&gt;Part 3 of MPEG-AI, ISO/IEC 23888-3&lt;/strong&gt;: This new technical report on &quot;optimization of encoders and receiving systems for machine analysis of coded video content&quot; is based on software experiments conducted by JVET, focusing on optimizing non-normative elements such as preprocessing, encoder settings, and postprocessing. The research explored scenarios where video signals, decoded from bitstreams compliant with the latest video compression standard, ISO/IEC 23090-3 – Versatile Video Coding (VVC), are intended for input into machine vision systems rather than for human viewing. Compared to the JVET VVC reference software encoder, which was originally optimized for human consumption, significant bit rate reductions were achieved when machine vision task precision was used as the performance criterion.&lt;/p&gt;&lt;p&gt;The report will include an annex with example software implementations of these non-normative algorithmic elements, applicable to VVC or other video compression standards. Additionally, it will explore the potential use of existing supplemental enhancement information messages from ISO/IEC 23002-7 – Versatile supplemental enhancement information messages for coded video bitstreams – for embedding metadata useful in these contexts.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Research aspects&lt;/strong&gt;: (1) Focus on optimizing video encoding for machine vision tasks by refining preprocessing, encoder settings, and postprocessing to improve bit rate efficiency and task precision, compared to traditional approaches for human viewing. 
(2) Examine the use of metadata, specifically SEI messages from ISO/IEC 23002-7, to enhance machine analysis of compressed video, improving adaptability, performance, and interoperability.&lt;/p&gt;&lt;h2&gt;Subjective Quality Testing for Film Grain Synthesis&lt;/h2&gt;&lt;p&gt;At the 148&lt;sup&gt;th&lt;/sup&gt; MPEG meeting, the &lt;em&gt;MPEG Joint Video Experts Team (JVET) with ITU-T SG 16 (WG 5 / JVET)&lt;/em&gt; and &lt;em&gt;MPEG Visual Quality Assessment (AG 5)&lt;/em&gt; conducted a formal expert viewing experiment to assess the impact of film grain synthesis on the subjective quality of video content. This evaluation specifically focused on film grain synthesis controlled by the Film Grain Characteristics (FGC) supplemental enhancement information (SEI) message. The study aimed to demonstrate the capability of film grain synthesis to mask compression artifacts introduced by the underlying video coding schemes.&lt;/p&gt;&lt;p&gt;For the evaluation, FGC SEI messages were adapted to a diverse set of video sequences, including scans of original film material, digital camera noise, and synthetic film grain artificially applied to digitally captured video. The subjective performance of video reconstructed from VVC and HEVC bitstreams was compared with and without film grain synthesis. 
The results highlighted the effectiveness of film grain synthesis, showing a significant improvement in subjective quality and enabling bitrate savings of up to a factor of 10 for certain test points.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;This study opens several avenues for further research&lt;/strong&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;em&gt;Optimization of film grain synthesis techniques&lt;/em&gt;: Investigating how different grain synthesis methods affect the perceptual quality of video across a broader range of content and compression levels.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Compression artifact mitigation&lt;/em&gt;: Exploring the interaction between film grain synthesis and specific types of compression artifacts, with a focus on improving masking efficiency.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Adaptation of FGC SEI messages&lt;/em&gt;: Developing advanced algorithms for tailoring FGC SEI messages to dynamically adapt to diverse video characteristics, including real-time encoding scenarios.&lt;/li&gt;&lt;li&gt;&lt;em&gt;Bitrate savings analysis&lt;/em&gt;: Examining the trade-offs between bitrate savings and subjective quality across various coding standards and network conditions.&lt;/li&gt;&lt;/ul&gt;&lt;hr class=&quot;wp-block-separator&quot; /&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;!--wp:heading--&gt;
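&lt;p&gt;For readers unfamiliar with the mechanism: the FGC SEI carries parameters from which the decoder synthesizes grain after decoding, rather than spending bits coding the grain itself. A toy sketch of that idea — the real parametric model in the SEI message is considerably more elaborate, and the interval/strength names here are illustrative only:&lt;/p&gt;

```python
import random

def add_film_grain(frame, intensity_intervals, seed=0):
    """Toy post-decoding grain synthesis: add zero-mean noise whose
    strength is looked up from FGC-like (lower, upper, strength) luma
    intervals. Illustrates the concept behind the FGC SEI only."""
    rng = random.Random(seed)  # deterministic so output is repeatable
    out = []
    for row in frame:
        new_row = []
        for pixel in row:
            strength = 0.0
            for lower, upper, s in intensity_intervals:
                if pixel >= lower and upper >= pixel:
                    strength = s
                    break
            noisy = pixel + rng.gauss(0.0, strength)
            new_row.append(int(round(max(0.0, min(255.0, noisy)))))
        out.append(new_row)
    return out
```

&lt;p&gt;Because the grain is regenerated at the receiver, the encoder can denoise and code a clean signal at a much lower bitrate — the source of the large savings reported above.&lt;/p&gt;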

&lt;!--/wp:paragraph--&gt;&lt;/p&gt;&lt;p&gt;The 149th MPEG meeting will be held in Geneva, Switzerland from January 20-24, 2025. Click &lt;a href=&quot;https://www.mpeg.org/&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/p&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/12/mpeg-news-report-from-147th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-3996341931554834787</guid><pubDate>Mon, 14 Oct 2024 07:09:00 +0000</pubDate><atom:updated>2024-10-14T09:09:00.116+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">World Standards Day</category><title>Happy World Standards Day 2024</title><description>&lt;p&gt;As we celebrate &lt;b&gt;World Standards Day&lt;/b&gt;, it&#39;s important to recognize the monumental advancements the MPEG community has made over the past year. These achievements continue to influence multimedia standards worldwide, playing a crucial role in ensuring seamless, high-quality digital experiences.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ol style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;b&gt;ISO Base Media File Format (8th Edition)&lt;/b&gt;: This standard has been pivotal for media streaming applications, particularly for formats like &lt;b&gt;DASH&lt;/b&gt;&amp;nbsp;(Dynamic Adaptive Streaming over HTTP) and &lt;b&gt;CMAF&lt;/b&gt;&amp;nbsp;(Common Media Application Format). 
The latest update facilitates more seamless media switching and continuous presentation, optimizing the user experience across different devices.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Neural Network Compression (2nd Edition)&lt;/b&gt;: With AI technologies rapidly evolving, MPEG&#39;s neural network compression standard addresses the need for efficient storage and inference in multimedia systems. The second edition enhances reference software, providing robust tools for handling complex neural networks in applications such as image and video processing.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Low Latency, Low Complexity LiDAR Coding&lt;/b&gt;: As industries like &lt;b&gt;autonomous vehicles&lt;/b&gt;&amp;nbsp;and &lt;b&gt;smart cities&lt;/b&gt;&amp;nbsp;expand, this standard addresses the need for efficient and real-time processing of LiDAR data. The MPEG community has developed compression techniques that maintain low latency and complexity, enabling faster decision-making for autonomous systems.&lt;/li&gt;&lt;li&gt;&lt;b&gt;MPEG-DASH (6th Edition)&lt;/b&gt;: The 6th edition of &lt;b&gt;MPEG-DASH&lt;/b&gt;&amp;nbsp;brings exciting improvements in adaptive streaming. Key updates include support for new &lt;b&gt;CMCD parameters&lt;/b&gt;&amp;nbsp;for better content management and a &lt;b&gt;background mode&lt;/b&gt;&amp;nbsp;that allows players to receive updates without disrupting media playback. These advancements significantly enhance streaming efficiency and flexibility.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Video Coding for Machines (VCM)&lt;/b&gt;: A significant addition this year has been the introduction of &lt;b&gt;Video Coding for Machines&lt;/b&gt;. This emerging standard focuses on &lt;b&gt;machine vision applications&lt;/b&gt;, where efficient encoding and decoding are crucial for machine learning tasks such as object detection and recognition. 
This innovation caters to the increasing integration of machine-based analytics in multimedia systems.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Immersive Media and Volumetric Video&lt;/b&gt;: MPEG’s work on &lt;b&gt;volumetric video coding&lt;/b&gt;&amp;nbsp;and standards for immersive media continues to push the boundaries of AR/VR technologies. This ensures that immersive content can be delivered across various platforms with improved consistency and performance.&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;These highlights exemplify MPEG&#39;s commitment to fostering innovation through multimedia standards, shaping the future of digital content. On this &lt;b&gt;World Standards Day&lt;/b&gt;, let’s celebrate the efforts that keep the digital ecosystem thriving!&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2024/10/happy-world-standards-day-2024.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-5411296302815338867</guid><pubDate>Fri, 27 Sep 2024 11:52:00 +0000</pubDate><atom:updated>2024-10-21T11:50:20.519+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">CfP</category><category domain="http://www.blogger.com/atom/ns#">MHV</category><title>ACM Mile-High Video Conference 2025: Call for Contributions</title><description>&lt;p&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;caret-color: rgb(0, 0, 0); color: #010101; font-size: 18pt; font-weight: 700;&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/a/AVvXsEjFrFFbE_IrzJkGLgcN4XMduEimpZx0ijOAS1O7DjOzXtuWXxQaLLqxuPbIm57sG1GXa8RszlQMMCOdqdyPuWW9Kr4ggDvOd1Mt-heglQmISmv7r-BDcp7xwYBoIHB3Bhwpq6LhwH426Fd61A8gMNuCahu0IV3GyVH_aBK7oH34CMMiaD1mKJUIJIQSH2o&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img alt=&quot;&quot; 
data-original-height=&quot;304&quot; data-original-width=&quot;1090&quot; height=&quot;89&quot; src=&quot;https://blogger.googleusercontent.com/img/a/AVvXsEjFrFFbE_IrzJkGLgcN4XMduEimpZx0ijOAS1O7DjOzXtuWXxQaLLqxuPbIm57sG1GXa8RszlQMMCOdqdyPuWW9Kr4ggDvOd1Mt-heglQmISmv7r-BDcp7xwYBoIHB3Bhwpq6LhwH426Fd61A8gMNuCahu0IV3GyVH_aBK7oH34CMMiaD1mKJUIJIQSH2o&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;MHV 2025: ACM Mile-High Video Conference 2025&lt;br /&gt;Call for Contributions&lt;br /&gt;February 18-20, 2025, The Cable Center, Denver, Colorado&lt;br /&gt;&lt;a href=&quot;https://www.mile-high.video/&quot;&gt;https://www.mile-high.video/&lt;/a&gt;&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;&lt;div class=&quot;ydp105d4dc8yiv1573510755ydp5297be0cpasted-link&quot;&gt;&lt;span id=&quot;ydp105d4dc8yiv1573510755ydp5297be0cdocs-internal-guid-91b7588c-7fff-7b6d-75a8-5cdcc59852cf&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; padding: 9pt 0pt 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;ACM Mile-High Video (MHV) is a&amp;nbsp;&lt;/span&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; font-style: italic; vertical-align: baseline;&quot;&gt;flagship&lt;/span&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; font-style: italic; vertical-align: baseline;&quot;&gt;industry-oriented technical conference&lt;/span&gt;&lt;span face=&quot;Roboto, sans-serif&quot; 
style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;&amp;nbsp;in the area of video technologies, which has been successfully running in Denver, Colorado, starting from 2016. ACM MHV 2025 welcomes contributions from both industry and academia to share real-world problems and solutions as well as novel approaches and innovations from content production to consumption. ACM MHV 2025 will provide a unique opportunity to view the interplay of the industry and academia in the area of video technologies.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;br /&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.44; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; font-weight: 700; vertical-align: baseline;&quot;&gt;ACM MHV contributions are solicited in, but not limited to, the following areas&lt;/span&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;:&lt;/span&gt;&lt;/p&gt;&lt;ul style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Content production, encoding, and 
packaging&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;ul style=&quot;margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Encoding for broadcast, mobile, and OTT (incl. using AI/ML in encoding),&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;New and emerging audio, image, and video codecs (incl. point cloud coding, light field coding, holography coding, etc.)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Edge, network, and cloud-based coding&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Perceptually optimized objective quality metrics&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; 
list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Quality assessment models and tools, and user experience studies&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Storage applications for video processing and streaming&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Accessibility&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;HDR&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Video workflows&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;ul style=&quot;margin-bottom: 0px; margin-top: 
0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Virtualized headends, cloud-based workflows for production and distribution&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Redundancy and resilience in content origination&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Ingest protocols&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Ad insertion&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: 
baseline;&quot;&gt;Content delivery and security&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;ul style=&quot;margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Developments in transport protocols and new delivery paradigms&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Protection for OTT distribution and tools against piracy&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Analytics&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Streaming technologies&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;ul style=&quot;margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; 
vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Adaptive streaming and transcoding&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Low latency&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Player, playback, and QoE developments&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Content discovery, promotion, and recommendation systems&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Protocol and Web API improvements and innovations for streaming video&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li dir=&quot;ltr&quot; 
style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Industry trends&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;ul style=&quot;margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;font-family: Arial, sans-serif; font-size: 11pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;3D/XR video (NeRF, Gaussian splatting, etc.)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Scalable and multi-view video coding deployments&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Video and audio coding for machines&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span 
style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Cloud gaming and gaming streaming&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Provenance, content authentication, and deepfakes&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Energy management in video compression and streaming&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Edge computing tools and applications&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Hardware for content encoding, storage, and distribution&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: 
baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Standards and interoperability&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;ul style=&quot;margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;New and developing standards in the media and delivery space&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: circle; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Interoperability guidelines&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/ul&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;&lt;b&gt;Reasons for academics to submit to and attend the ACM Mile-High Video 2025 event:&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;Networking with Industry Leaders: Engage with top minds from both academia and industry, fostering collaborations and partnerships 
in cutting-edge video technology.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;High Visibility: Present your research to a global audience, including key industry players and decision-makers.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;Innovative Content: Showcase your work in emerging areas such as the metaverse, AI-driven video technologies, and advanced codecs.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;Research Impact: Influence the direction of next-gen video technology and its applications across industries.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 13.3333px;&quot;&gt;Participating in ACM MHV 2025 is a prime opportunity to boost academic careers by connecting theoretical research with real-world innovations.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;&lt;b&gt;Persuasive reasons for industry professionals to submit to and attend ACM Mile-High Video 2025:&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot;
style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;&lt;b&gt;Access to Cutting-Edge Research&lt;/b&gt;: Stay at the forefront of video technology with insights into innovations in streaming, codecs, and content delivery.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;&lt;b&gt;Collaborate with Academia&lt;/b&gt;: Engage with academic researchers to explore potential partnerships that can accelerate R&amp;amp;D and technological breakthroughs.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;&lt;b&gt;Showcase Industry Leadership&lt;/b&gt;: Present your company’s pioneering solutions, fostering brand recognition and thought leadership in the global video tech community.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101;&quot;&gt;&lt;span style=&quot;font-size: 13.3333px;&quot;&gt;&lt;b&gt;Talent and Recruitment&lt;/b&gt;: Discover emerging talent and ideas that could impact your organization’s future.&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 13.3333px;&quot;&gt;Participating drives industry growth through shared knowledge and collaboration!&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 10pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot;
style=&quot;color: #010101; font-size: 10pt; font-weight: 700; vertical-align: baseline;&quot;&gt;Prospective speakers are invited to submit an extended abstract (one page, ~400 words + references) that will be peer-reviewed by the ACM MHV technical program committee (TPC) for relevance, timeliness, and technical correctness.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; padding: 10pt 0pt 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;The authors of the accepted extended abstracts will be invited to optionally submit a full-length paper (up to six pages + references) for possible inclusion in the conference proceedings. These papers must be original work (i.e., not published previously in a journal or conference) and will also be peer-reviewed by the ACM MHV TPC.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; padding: 10pt 0pt 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;Accepted extended abstracts and full-length papers will be presented at the ACM MHV conference, and, optionally, will be published in the conference proceedings in the&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://dl.acm.org/&quot; rel=&quot;nofollow&quot; style=&quot;color: #196ad4;&quot; target=&quot;_blank&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #1155cc; font-size: 10pt; vertical-align: baseline;&quot;&gt;ACM Digital Library&lt;/span&gt;&lt;/a&gt;&lt;span
face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; padding: 10pt 0pt 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;All prospective ACM authors are subject to all&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://www.acm.org/publications/policies/toc&quot; rel=&quot;nofollow&quot; style=&quot;color: #196ad4;&quot; target=&quot;_blank&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;ACM Publications Policies&lt;/span&gt;&lt;/a&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;, including ACM&#39;s new&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects&quot; rel=&quot;nofollow&quot; style=&quot;color: #196ad4;&quot; target=&quot;_blank&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Publications Policy on Research Involving Human Participants and Subjects&lt;/span&gt;&lt;/a&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; padding: 10pt 0pt 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; 
font-size: 10pt; font-style: italic; font-weight: 700; vertical-align: baseline;&quot;&gt;How to Submit an Extended Abstract&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; padding: 10pt 0pt 0pt; text-align: justify;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; vertical-align: baseline;&quot;&gt;Prospective authors are invited to submit one-page extended abstracts&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://mhv25.hotcrp.com/&quot; rel=&quot;nofollow&quot; style=&quot;color: #196ad4;&quot; target=&quot;_blank&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #1155cc; font-size: 10pt; vertical-align: baseline;&quot;&gt;here&lt;/span&gt;&lt;/a&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; font-style: italic; vertical-align: baseline;&quot;&gt;.&lt;/span&gt;&lt;/p&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 6pt; margin-top: 0pt; padding: 10pt 0pt 0pt;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; font-style: italic; font-weight: 700; vertical-align: baseline;&quot;&gt;Important Dates&lt;/span&gt;&lt;/p&gt;&lt;ul style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top:
14pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; font-weight: 700; vertical-align: baseline;&quot;&gt;Extended abstract submission deadline: Oct. 31, 2024 AoE (firm deadline)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;color: #1d2228; font-size: 10pt; vertical-align: baseline;&quot;&gt;Notification of extended abstract acceptance: Nov. 27, 2024 AoE&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;color: #1d2228; font-size: 10pt; vertical-align: baseline;&quot;&gt;Optional: Full-length paper submission deadline: Dec. 16, 2024 AoE&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;color: #1d2228; font-size: 10pt; vertical-align: baseline;&quot;&gt;Notification of full-length paper acceptance: Jan. 
22, 2025 AoE&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 10pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;color: #1d2228; font-size: 10pt; vertical-align: baseline;&quot;&gt;Camera-ready submission (&lt;/span&gt;&lt;span style=&quot;color: #1d2228; font-size: 10pt; font-style: italic; vertical-align: baseline;&quot;&gt;extended abstracts/full-length papers&lt;/span&gt;&lt;span style=&quot;color: #1d2228; font-size: 10pt; vertical-align: baseline;&quot;&gt;) deadline: Jan. 31, 2025 AoE&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p dir=&quot;ltr&quot; style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; line-height: 1.38; margin-bottom: 6pt; margin-top: 12pt;&quot;&gt;&lt;span face=&quot;Roboto, sans-serif&quot; style=&quot;color: #010101; font-size: 10pt; font-style: italic; font-weight: 700; vertical-align: baseline;&quot;&gt;ACM MHV 2025 Program Chairs&lt;/span&gt;&lt;/p&gt;&lt;ul style=&quot;caret-color: rgb(0, 0, 0); font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 13px; margin-bottom: 0px; margin-top: 0px;&quot;&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 14pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Christian Timmerer (AAU; christian.timmerer AT&lt;span class=&quot;Apple-converted-space&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;http://aau.at/&quot;&gt;aau.at&lt;/a&gt;)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; 
font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Dan Grois (AnyAI; dgrois AT&lt;span class=&quot;Apple-converted-space&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;http://acm.org/&quot;&gt;acm.org&lt;/a&gt;)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Gwendal Simon (Synamedia; gsimon AT&lt;span class=&quot;Apple-converted-space&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;http://synamedia.com/&quot;&gt;synamedia.com&lt;/a&gt;)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Jill Boyce (Nokia; jill.boyce AT&lt;span class=&quot;Apple-converted-space&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;http://nokia.com/&quot;&gt;nokia.com&lt;/a&gt;)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;li dir=&quot;ltr&quot; style=&quot;color: #010101; font-family: Roboto, sans-serif; font-size: 10pt; list-style-type: disc; vertical-align: baseline;&quot;&gt;&lt;p dir=&quot;ltr&quot; style=&quot;line-height: 1.2; margin-bottom: 0pt; margin-top: 0pt;&quot;&gt;&lt;span style=&quot;font-size: 10pt; vertical-align: baseline;&quot;&gt;Yuriy Reznik (Brightcove; yreznik AT&lt;span 
class=&quot;Apple-converted-space&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;http://brightcove.com/&quot;&gt;brightcove.com&lt;/a&gt;)&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/span&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/09/acm-mile-high-video-conference-2025.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/a/AVvXsEjFrFFbE_IrzJkGLgcN4XMduEimpZx0ijOAS1O7DjOzXtuWXxQaLLqxuPbIm57sG1GXa8RszlQMMCOdqdyPuWW9Kr4ggDvOd1Mt-heglQmISmv7r-BDcp7xwYBoIHB3Bhwpq6LhwH426Fd61A8gMNuCahu0IV3GyVH_aBK7oH34CMMiaD1mKJUIJIQSH2o=s72-c" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-70533731590871020</guid><pubDate>Mon, 16 Sep 2024 07:10:00 +0000</pubDate><atom:updated>2024-09-16T09:10:45.808+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 147th meeting</title><description>&lt;div style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. 
This version of the blog post will also be posted at ACM SIGMM Records.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s1200/MPEG-Logo-1.png&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s320/MPEG-Logo-1.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;div&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;div&gt;The 147th MPEG meeting was held in Sapporo, Japan from 15-19 July 2024, and the official press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-147/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. 
It comprises the following highlights:&lt;/div&gt;&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;b&gt;ISO Base Media File Format&lt;/b&gt;*: The 8th edition was promoted to Final Draft International Standard, supporting seamless media presentation for DASH and CMAF.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Syntactic Description Language&lt;/b&gt;: Finalized as an independent standard for MPEG-4 syntax.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Low-Overhead Image File Format&lt;/b&gt;*: First milestone achieved for small image handling improvements.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Neural Network Compression&lt;/b&gt;*: Second edition for conformance and reference software promoted.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Internet of Media Things (IoMT)&lt;/b&gt;: Progress made on reference software for distributed media tasks.&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;div&gt;* … covered in this blog post and expanded with possible research aspects.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;8th edition of ISO Base Media File Format&lt;/h2&gt;&lt;div&gt;The ever-growing expansion of the ISO/IEC 14496-12 ISO base media file format (ISOBMFF) application area has continuously brought new technologies to the standard. During the last couple of years, MPEG Systems (WG 3) has received new technologies on ISOBMFF for more seamless support of ISO/IEC 23009 Dynamic Adaptive Streaming over HTTP (DASH) and ISO/IEC 23000-19 Common Media Application Format (CMAF), leading to the development of the 8th edition of ISO/IEC 14496-12.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The new edition of the standard includes new technologies to explicitly indicate the set of tracks representing various versions of the media presentation of a single media item for seamless switching and continuous presentation.
Such technologies will enable more efficient processing of ISOBMFF-formatted files for DASH manifests or CMAF fragments.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: The central research aspect of the 8th edition of ISOBMFF, which “will enable more efficient processing,” will undoubtedly be its evaluation compared to the state-of-the-art. Standards typically define a format, but how to use it is left open to implementers. Therefore, the implementation is a crucial aspect and will allow for a comparison of performance. One such implementation of ISOBMFF is GPAC, which most likely will be among the first to implement these new features.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Low-Overhead Image File Format&lt;/h2&gt;&lt;div&gt;The ISO/IEC 23008-12 image format specification defines generic structures for storing image items and sequences based on the ISO/IEC 14496-12 ISO base media file format (ISOBMFF). As it allows the use of various high-performance video compression standards for a single image or a series of images, it was quickly adopted by the market. However, it was challenging to use for very small images such as icons or emojis. While the initial design of the standard was versatile and useful for a wide range of applications, the size of the headers becomes an overhead for applications with tiny images. Thus, Amendment 3 of ISO/IEC 23008-12 (low-overhead image file format) aims to address this use case by adding a new compact box for storing metadata instead of the ‘Meta’ box to reduce this overhead.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: The overhead of ISOBMFF headers for small files, or for low-bitrate video streaming, has been known for some time.
Therefore, amendments in this direction are welcome, although further performance evaluations are needed to confirm the design choices made at this initial step of standardization.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Neural Network Compression&lt;/h2&gt;&lt;div&gt;An increasing number of artificial intelligence applications based on artificial neural networks, such as edge-based multimedia content processing, content-adaptive video post-processing filters, or federated training, need to exchange updates of neural networks (e.g., after training on additional data or fine-tuning to specific content). For this purpose, MPEG developed a second edition of the standard for coding of neural networks for multimedia content description and analysis (NNC, ISO/IEC 15938-17, published in 2024), adding syntax for differential coding of neural network parameters as well as new coding tools. Trained models can be compressed to 10-20% of their original size for several architectures, and even below 3%, without performance loss. Higher compression rates are possible at moderate performance degradation. In a distributed training scenario, a model update after a training iteration can be represented at 1% or less of the base model size on average without sacrificing the classification performance of the neural network.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;In order to facilitate the implementation of the standard, the accompanying standard ISO/IEC 15938-18 has been updated to cover the second edition of ISO/IEC 15938-17. This standard provides reference software for encoding and decoding NNC bitstreams, as well as a set of conformance guidelines and reference bitstreams for testing decoder implementations.
The software covers the functionalities of both editions of the standard and can be configured to test different combinations of coding tools specified by the standard.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: The reference software for NNC, together with the reference software for audio/video codecs, is a vital tool for building complex multimedia systems and for their (baseline) evaluation with respect to compression efficiency only (not speed). This is because reference software is usually designed for functionality (i.e., compression in this case), not for performance.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The 148th MPEG meeting will be held in Kemer, Türkiye, from November 04-08, 2024. Click &lt;a href=&quot;https://www.mpeg.org/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/09/mpeg-news-report-from-147th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7133750297171936396</guid><pubDate>Wed, 07 Aug 2024 09:20:00 +0000</pubDate><atom:updated>2024-08-07T11:20:00.114+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">athena</category><category domain="http://www.blogger.com/atom/ns#">jobs</category><title>University assistant predoctoral (all genders welcome) (in German: 
Universitätsassistent:in)</title><description>&lt;p&gt;The University of Klagenfurt, with approximately 1,500 employees and over 12,000 students, is located in the Alps-Adriatic region and consistently achieves excellent placements in rankings. The motto “per aspera ad astra” underscores our firm commitment to the pursuit of excellence in all research, teaching, and university management activities. The principles of equality, diversity, health, sustainability, and compatibility of work and family life serve as the foundation for our work at the university.&lt;/p&gt;&lt;p&gt;The University of Klagenfurt is pleased to announce the following open position at the Department of Information Technology at the Faculty of Technical Sciences with an expected starting date of November 4, 2024:&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;University assistant predoctoral (all genders welcome) (in German: Universitätsassistent:in)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;within the Ada Lovelace Programme (project title: &lt;b&gt;Streaming of Holographic Content and its Impact on the Quality of Experience&lt;/b&gt;).&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Level of employment: 100 % (40 hours/week)&lt;/li&gt;&lt;li&gt;Minimum salary: € 50,103.20 per annum (gross); Classification according to collective agreement: B1&amp;nbsp;&lt;/li&gt;&lt;li&gt;Contract duration: 4 years&lt;/li&gt;&lt;li&gt;Application deadline: by September 11, 2024&lt;/li&gt;&lt;li&gt;Reference code: 348/24&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Tasks and responsibilities&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Autonomous scientific work, including the publication of research articles in the fields of coding and streaming of holographic content, Quality of Experience (QoE), and behavioural sciences&lt;/li&gt;&lt;li&gt;Conducting independent scientific research with the aim of 
submitting a dissertation and acquiring a doctoral degree in technical sciences&lt;/li&gt;&lt;li&gt;Teaching exercises and lab courses (e.g., in the computer science Bachelor’s and/or Master’s programmes)&lt;/li&gt;&lt;li&gt;Participating in research projects of the department, especially within the Ada Lovelace Programme (Streaming of Holographic Content and its Impact on the Quality of Experience)&lt;/li&gt;&lt;li&gt;Mentoring students&lt;/li&gt;&lt;li&gt;Assisting in public relations activities, science-to-public communication, and extra-curricular events of the department and the faculty&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Prerequisites for the appointment&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Completed Diploma or Master’s degree from a recognized university in the field of computer science, information and communications engineering, electrical engineering, or related fields. This degree must be completed no later than two weeks before the starting date; hence, the last possible deadline for meeting this requirement is October 20, 2024&lt;/li&gt;&lt;li&gt;Strong background in one or more of the following fields: multimedia systems (i.e., video/holographic content coding/streaming, Quality of Experience) and empirical research methods (i.e., statistical methods, interdisciplinary research with behavioural sciences)&lt;/li&gt;&lt;li&gt;Fluent in written and spoken English&lt;/li&gt;&lt;li&gt;Programming experience in multimedia systems&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Additional desired qualifications&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Experience with scientific publications or presentations&lt;/li&gt;&lt;li&gt;Experience in interdisciplinary research projects, ideally in the behavioural sciences, as the project involves empirical research&lt;/li&gt;&lt;li&gt;Excellent ability to work
with teams&lt;/li&gt;&lt;li&gt;Scientific curiosity and enthusiasm for research in multimedia systems and empirical research&lt;/li&gt;&lt;/ul&gt;The doctoral student will be co-supervised by Christian Timmerer, Heather Foran, and Hadi Amirpour.&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Our offer&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;This position serves the purposes of the vocational and scientific education of graduates of Master’s or Diploma degree programmes and sets the goal of completing a Doctoral degree / a Ph.D. in Technical Sciences. Therefore, applications by persons who have already completed a subject-specific doctoral degree or a subject-relevant Ph.D. program cannot be considered.&amp;nbsp;&lt;/p&gt;&lt;p&gt;The employment contract is concluded for the position of university assistant (predoctoral) and stipulates a starting salary of € 3,578.80 gross per month (14 times a year; previous experience deemed relevant to the job can be recognized in accordance with the collective agreement).&amp;nbsp;&lt;/p&gt;&lt;p&gt;&lt;b&gt;The University of Klagenfurt also offers&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Personal and professional advanced training courses, management, and career coaching&lt;/li&gt;&lt;li&gt;Numerous attractive additional benefits, see also &lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Diversity- and family-friendly university culture&lt;/li&gt;&lt;li&gt;The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature, and sports&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;The application&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;If you are interested in this position, please apply in English by providing the following documents:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: 
left;&quot;&gt;&lt;li&gt;Letter of application/cover letter including motivation statement for the given position&lt;/li&gt;&lt;li&gt;Curriculum vitae (with clear information about the degrees, including date/place/grade, the experience acquired, the thesis title, the list of publications (if any), and any other relevant information)&lt;/li&gt;&lt;li&gt;Copy of the degree certificates and transcripts of the courses&lt;/li&gt;&lt;li&gt;Any certificates that can prove the fulfilment of the required and additional qualifications listed above (e.g., the submission of the final thesis if required by the degree programme, copy of publications, programming skills certificates, language skills certificates, etc.)&lt;/li&gt;&lt;li&gt;Final thesis or other study-related written work (like seminar reports) or excerpts thereof&lt;/li&gt;&lt;li&gt;If an applicant has not received the Diploma or Master’s degree by the application deadline, the applicant should provide a declaration, written either by a supervisor or by the candidate themselves, on the feasibility of finishing the Diploma or Master’s degree by October 30, 2024 at the latest.&amp;nbsp;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To apply, please select the position with the &lt;b&gt;reference code 348/24&lt;/b&gt; in the category “&lt;b&gt;Scientific Staff&lt;/b&gt;” using the link “Apply for this position” in the job portal at &lt;a href=&quot;http://jobs.aau.at/en/&quot;&gt;jobs.aau.at/en/&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Candidates must furnish proof that they meet the required qualifications by October 20, 2024 at the latest.&lt;/p&gt;&lt;p&gt;For further information on this specific vacancy, please contact Univ.-Prof. DI Dr. Christian Timmerer (christian.timmerer@aau.at). General information about the university as an employer can be found at &lt;b&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/b&gt;. 
At the University of Klagenfurt, recruitment and staff matters are accompanied not only by the authority responsible for the recruitment procedure but also by the Equal Opportunities Working Group and, if necessary, by the Representative for Disabled Persons.&lt;/p&gt;&lt;p&gt;The University of Klagenfurt aims to increase the proportion of women and therefore explicitly invites qualified women to apply for the position. Where the qualification is equivalent, women will be given preferential consideration.&amp;nbsp;&lt;/p&gt;&lt;p&gt;People with disabilities or chronic diseases who fulfill the requirements are particularly encouraged to apply.&amp;nbsp;&lt;/p&gt;&lt;p&gt;Travel and accommodation costs incurred during the application process will not be refunded. Translations into other languages shall serve informational purposes only. Solely the version advertised in the University Bulletin (Mitteilungsblatt) shall be legally binding.&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2024/08/university-assistant-predoctoral-all.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7311249151210107719</guid><pubDate>Mon, 15 Jul 2024 06:41:00 +0000</pubDate><atom:updated>2024-07-15T21:02:47.883+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">athena</category><title>Successful 5-year Evaluation of Christian Doppler Laboratory ATHENA</title><description>&lt;p&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;The Christian Doppler (CD) Laboratory&lt;/span&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://athena.itec.aau.at/&quot; 
style=&quot;background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; border: 0px; color: #0066cc; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;ATHENA&lt;/a&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;was established in October 2019 to tackle current and future research and deployment challenges of HTTP Adaptive Streaming (HAS) and emerging streaming methods. The goal of CD laboratories is to conduct application-oriented basic research, promote collaboration between universities and companies, and facilitate technology transfer. 
They are funded through a public-private partnership between companies and the&lt;/span&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://www.cdg.ac.at/en/&quot; style=&quot;background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; border: 0px; color: #0066cc; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Christian Doppler Research Association&lt;/a&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;, which is funded by the Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology, and Development (Nationalstiftung für Forschung, Technologie und Entwicklung (FTE)). 
ATHENA is supported by&lt;/span&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://bitmovin.com/&quot; style=&quot;background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; border: 0px; color: #0066cc; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Bitmovin&lt;/a&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;background-color: white; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px;&quot;&gt;as a company partner.&lt;/span&gt;&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;The CD laboratories have a duration of seven years and undergo rigorous scientific review after two and five years. This spring, the CD lab ATHENA completed its 5-year evaluation, and we have just received official notification from the CDG that we have successfully passed the review. 
Consequently, it is time to briefly outline the main achievements during this second phase (i.e., years 2 to 5) of the CD lab ATHENA.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;Before exploring the achievements, it’s important to highlight the ongoing relevance of research in video streaming, given its dominance in today’s Internet usage. The January 2024&amp;nbsp;&lt;a href=&quot;https://www.sandvine.com/phenomena&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Sandvine Internet Phenomena&lt;/a&gt;&amp;nbsp;report revealed that video streaming accounts for 68% of fixed/wired Internet traffic and 64% of mobile Internet traffic. Specifically, Video on Demand (VoD) represents 54% of fixed/wired and 57% of mobile traffic, while live streaming contributes 14% of fixed/wired and 7% of mobile traffic. 
The major services in this domain include YouTube and Netflix, each commanding more than 10% of the overall Internet traffic, with TikTok, Amazon Prime, and Disney+ also playing significant roles.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;ATHENA is structured into four work packages, each with distinct objectives as detailed below:&lt;/p&gt;&lt;ol class=&quot;ol1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style-image: initial; list-style-position: initial; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Content provisioning: Primarily involves video encoding for HAS, quality-aware encoding, learning-based encoding, and multi-codec HAS.&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Content delivery: Addresses HAS issues by utilizing edge computing, exchanging information between CDN/SDN and clients, providing network assistance for clients, and evaluating corresponding utilities.&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Content consumption: Focuses on bitrate adaptation schemes, playback improvements, context and user awareness, and studies on Quality of Experience (QoE).&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: 
baseline;&quot;&gt;End-to-end aspects: Offers a comprehensive view of application and transport layer enhancements, Quality of Experience (QoE) models, low-latency HAS, and learning-based HAS.&lt;/li&gt;&lt;/ol&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;During the 2nd phase of ATHENA’s work, we achieved significant results, including&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/publications/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;publications&lt;/a&gt;&amp;nbsp;in respected academic journals and conferences. Specifically, our publications were featured in key&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?view_op=top_venues&amp;amp;hl=en&amp;amp;vq=eng_multimedia&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;multimedia&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?view_op=top_venues&amp;amp;hl=en&amp;amp;vq=eng_signalprocessing&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;signal processing&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?view_op=top_venues&amp;amp;hl=en&amp;amp;vq=eng_computernetworkswirelesscommunication&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;computer networks &amp;amp; wireless communication&lt;/a&gt;, and&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?view_op=top_venues&amp;amp;hl=en&amp;amp;vq=eng_computingsystems&quot; style=&quot;background: transparent; border: 0px; color: 
#0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;computing systems&lt;/a&gt;&amp;nbsp;venues, as categorized by Google Scholar under engineering and computer science. Some of the notable publications include IEEE Communications Surveys &amp;amp; Tutorials (impact factor: 35.6), IEEE Transactions on Image Processing (10.6), IEEE Internet of Things Journal (10.6), IEEE Transactions on Circuits and Systems for Video Technology (8.4), and IEEE Transactions on Multimedia (7.3).&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;Furthermore, we focused on technology transfer by submitting 16 invention disclosures, resulting in 13 patent applications (including provisionals). Collaborating with our company partner, we obtained&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/?s=patent&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;6 granted patents&lt;/a&gt;. 
Additionally, we’re pleased to report on the progress of our spin-off projects, as well as the funding secured for two FFG-funded projects named&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/apollo/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;APOLLO&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/gaia/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;GAIA&lt;/a&gt;, and an EU Horizon Europe-funded innovation action called&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/spirit/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;SPIRIT&lt;/a&gt;.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;The ATHENA team was also active in organizing&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/events/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;scientific events&lt;/a&gt;&amp;nbsp;such as workshops, special sessions, and special issues at IEEE ICME, ACM MM, ACM MMSys, ACM CoNEXT, IEEE ICIP, PCS, and IEEE Network. 
We also contributed to&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/reproducibility/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;reproducibility&lt;/a&gt;&amp;nbsp;in research through open source tools (e.g.,&amp;nbsp;&lt;a href=&quot;https://github.com/cd-athena/VCA&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Video Complexity Analyzer&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href=&quot;https://github.com/cd-athena/LLL-CAdViSE&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;LLL-CAdViSE&lt;/a&gt;) and datasets (e.g.,&amp;nbsp;&lt;a href=&quot;http://ftp.itec.aau.at/datasets/mmsys22/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Video Complexity Dataset and Multi-Codec Ultra High Definition 8K MPEG-DASH Dataset&lt;/a&gt;), among others.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;We also note our contributions to the application of AI in video coding &amp;amp; streaming, for example:&lt;/p&gt;&lt;ul class=&quot;ul1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span 
class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2021/05/ieee-oj-sp-fast-multi-resolution-and-multi-rate-encoding-for-http-adaptive-streaming-using-machine-learning/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Fast Multi-Rate Encoding with Machine Learning&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;(using Convolutional Neural Networks (CNNs))&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2022/05/lider-lightweight-dense-residual-network-for-video-super-resolution-on-mobile-devices/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;LiDeR: Lightweight video Super Resolution for mobile devices&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;(using Deep Neural Networks (DNNs))&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2023/10/video-coding-enhancements-for-http-adaptive-streaming-using-machine-learning/&quot; 
style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Blind Visual Quality Assessment Using Vision Transformers&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2022/04/vca-video-complexity-analyzer/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Video Complexity Analysis (VCA)&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;and optimizations for per-title encoding (using Linear Regression, Random Forest, and XGBoost models)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2022/12/3056/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;DeepStream: Video streaming enhancements using compressed deep neural networks&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;(using Deep Neural Networks (DNNs))&lt;/li&gt;&lt;li 
class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2021/11/ecas-ml-edge-computing-assisted-adaptation-scheme-with-machine-learning-for-http-adaptive-streaming/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;ECAS-ML&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;: Edge-assisted adaptive bitrate switching (using Long Short-Term Memory (LSTM))&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;a href=&quot;https://athena.itec.aau.at/2021/12/quality-optimization-of-live-streaming-services-over-http-with-reinforcement-learning/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;span class=&quot;s2&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Quality Optimization of Live Streaming Services over HTTP with Reinforcement Learning&lt;/span&gt;&lt;/a&gt;&lt;/span&gt;&amp;nbsp;(RL)&lt;/li&gt;&lt;/ul&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;A major outcome of the second phase is the 
successful defense of the inaugural cohort of PhD students:&lt;/p&gt;&lt;ul class=&quot;ul1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Alireza Erfanian: “&lt;a href=&quot;https://athena.itec.aau.at/2023/10/optimizing-qoe-and-latency-of-live-video-streaming-using-edge-computing-and-in-network-intelligence/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Optimizing QoE and Latency of Video Streaming using Edge Computing and In-Network Intelligence&lt;/a&gt;”, May 25, 2023&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Ekrem Çetinkaya: “&lt;a href=&quot;https://athena.itec.aau.at/2023/10/video-coding-enhancements-for-http-adaptive-streaming-using-machine-learning/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Video Coding Enhancements for HTTP Adaptive Streaming using Machine Learning&lt;/a&gt;”, June 7, 2023&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. 
Minh Nguyen: “&lt;a href=&quot;https://athena.itec.aau.at/2023/10/policy-driven-dynamic-http-adaptive-streaming-player-environment/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Policy-driven Dynamic HTTP Adaptive Streaming Player Environment&lt;/a&gt;”, June 30, 2023&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Jesús Aguilar Armijo: “&lt;a href=&quot;https://athena.itec.aau.at/2023/10/multi-access-edge-computing-for-adaptive-video-streaming/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Multi-access Edge Computing for Adaptive Video Streaming&lt;/a&gt;”, July 10, 2023&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Reza Farahani: “&lt;a href=&quot;https://athena.itec.aau.at/2023/11/network-assisted-delivery-of-adaptive-video-streaming-services-through-cdn-sdn-and-mec/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Network-Assisted Delivery of Adaptive Video Streaming Services through CDN, SDN, and MEC&lt;/a&gt;”, August 22, 2023&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. 
Vignesh V Menon: “&lt;a href=&quot;https://athena.itec.aau.at/2024/01/content-adaptive-video-coding-for-http-adaptive-streaming/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Content-adaptive Video Coding for HTTP Adaptive Streaming&lt;/a&gt;”, January 15, 2024&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Babak Taraghi: “&lt;a href=&quot;https://athena.itec.aau.at/2024/07/end-to-end-quality-of-experience-evaluation-for-http-adaptive-streaming-2/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;End-to-end Quality of Experience Evaluation for HTTP Adaptive Streaming&lt;/a&gt;”, July 10, 2024&lt;/li&gt;&lt;/ul&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;Two postdoctoral scholars have reached a significant milestone on their path toward habilitation:&lt;/p&gt;&lt;ul class=&quot;ul1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. 
Hadi Amirpour, “&lt;a href=&quot;https://www.ftf.or.at/2024/01/video-coding-for-efficient-http-adaptive-streaming/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Video Coding for Efficient HTTP Adaptive Streaming&lt;/a&gt;”, February 8, 2024&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Farzad Tashtarian, “&lt;a href=&quot;https://www.ftf.or.at/2023/02/how-to-optimize-dynamic-adaptive-video-streaming-challenges-and-solutions/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;How to Optimize Dynamic Adaptive Video Streaming? Challenges and Solutions&lt;/a&gt;”, February 27, 2023 &amp;amp; “&lt;a href=&quot;https://www.ftf.or.at/2024/06/end-to-end-adaptive-video-streaming-optimization/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;End-to-End Adaptive Video Streaming Optimization&lt;/a&gt;”, June 26, 2024&lt;/li&gt;&lt;/ul&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;During the second phase, each work package produced excellent publications in their domain, briefly highlighted in the following. Content provisioning (WP-1) focuses mainly on video coding for HAS (43 papers) and immersive media coding for streaming (4 papers). 
The former can be further subdivided into the following topic areas:&lt;/p&gt;&lt;ul class=&quot;ul1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; box-sizing: border-box; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; list-style: square; margin: 0px 0px 24px 1.5em; padding: 0px; vertical-align: baseline;&quot;&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Video complexity: spatial and temporal feature extraction (4 papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Compression efficiency improvement of individual representations (1 paper)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Encoding parameter prediction for HAS (9 papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Efficient bitrate ladder construction (4 papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Fast multi-rate encoding (3 papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Data security and data hiding (7 papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Energy-efficient video encoding for HAS (4 papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Advancing video quality evaluation (7 
papers)&lt;/li&gt;&lt;li class=&quot;li1&quot; style=&quot;background: transparent; border: 0px; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Datasets (4 papers)&lt;/li&gt;&lt;/ul&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;Content delivery (WP-2) dealt with SDN/CDN assistance for HAS, edge computing support for HAS, and network-embedded media streaming support, resulting in 21 papers. Content consumption (WP-3) worked on QoE enhancement mechanisms at the client side and QoE- and energy-aware content consumption (11 papers). Finally, end-to-end aspects (WP-4) produced 15 papers in the area of end-to-end QoE improvement in multimedia video streaming. We reported 94 papers published/accepted for the ATHENA 5-year evaluation.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;In this context, it is also important to highlight the collaboration within ATHENA, which has resulted in joint publications across various work packages (WPs) and with other&amp;nbsp;&lt;a href=&quot;https://itec.aau.at/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;ITEC&lt;/a&gt;&amp;nbsp;members. 
For example, collaborations with Prof.&amp;nbsp;&lt;a href=&quot;https://klausschoeffmann.com/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Schöffmann&lt;/a&gt;&amp;nbsp;(FWF-funded project OVID), FFG-funded projects&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/apollo/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;APOLLO&lt;/a&gt;/&lt;a href=&quot;https://athena.itec.aau.at/gaia/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;GAIA&lt;/a&gt;, and EU-funded project&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/spirit/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;SPIRIT&lt;/a&gt;. In addition, we would like to acknowledge our&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/team/international-collaborators/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;international collaborators&lt;/a&gt;, such as Prof. 
Hongjie He from&amp;nbsp;&lt;a href=&quot;https://en.swjtu.edu.cn/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Southwest Jiaotong University&lt;/a&gt;, Prof.&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?user=llgwlUgAAAAJ&amp;amp;hl=en&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Patrick Le Callet&lt;/a&gt;&amp;nbsp;from the University of Nantes, Prof.&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?hl=en&amp;amp;user=ywBnUIAAAAAJ&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Wassim Hamidouche&lt;/a&gt;&amp;nbsp;from the Technology Innovation Institute (UAE),&amp;nbsp;&lt;a href=&quot;https://networks.imdea.org/team/imdea-networks-team/people/sergey-gorinsky/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Sergey Gorinsky&lt;/a&gt;&amp;nbsp;from IMDEA,&amp;nbsp;&lt;a href=&quot;https://www.concordia.ca/faculty/abdelhak-bentaleb.html&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Abdelhak Bentaleb&lt;/a&gt;&amp;nbsp;from Concordia University,&amp;nbsp;&lt;a href=&quot;https://www.schatz.cc/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Dr. Raimund Schatz&lt;/a&gt;&amp;nbsp;from AIT, and Prof.&amp;nbsp;&lt;a href=&quot;https://scholar.google.com/citations?hl=en&amp;amp;user=guRMl5IAAAAJ&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Pablo Cesar&lt;/a&gt;&amp;nbsp;from CWI. 
We are also pleased to report the successful technology transfers to Bitmovin, particularly&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/2020/05/cadvise-cloud-based-adaptive-video-streaming-evaluation-framework-for-the-automated-testing-of-media-players/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;CAdViSE&lt;/a&gt;&amp;nbsp;(WP-4) and&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/2021/07/wish-user-centric-bitrate-adaptation-for-http-adaptive-streaming-on-mobile-devices/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;WISH ABR&lt;/a&gt;&amp;nbsp;(WP-3). Regular “Fun with ATHENA” meetups and Break-out Groups are utilized for in-depth discussions about innovations and potential technology transfers.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;Over the next two years, the ATHENA project will prioritize the development of deep neural network/AI-based image and video coding within the context of HAS. 
This includes energy- and cost-aware video coding for HAS, immersive video coding such as volumetric video and holography, as well as Quality of Experience (QoE) and energy-aware content consumption for HAS (including energy-efficient, AI-based live video streaming) and generative AI for HAS.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;Thanks to all&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/team/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;current&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href=&quot;https://athena.itec.aau.at/team/alumni/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;former&lt;/a&gt;&amp;nbsp;ATHENA team members: Samira Afzal, Hadi Amirpour, Jesús Aguilar Armijo, Emanuele Artioli, Christian Bauer, Alexis Boniface, Ekrem Çetinkaya, Reza Ebrahimi, Alireza Erfanian, Reza Farahani, Mohammad Ghanbari (late), Milad Ghanbari, Mohammad Ghasempour, Selina Zoë Haack, Hermann Hellwagner, Manuel Hoi, Andreas Kogler, Gregor Lammer, Armin Lachini, David Langmeier, Sandro Linder, Daniele Lorenzi, Vignesh V Menon, Minh Nguyen, Engin Orhan, Lingfeng Qu, Jameson Steiner, Nina Stiller, Babak Taraghi, Farzad Tashtarian, Yuan Yuan, and Yiying Wei. 
Finally, thanks to ITEC support staff Martina Steinbacher, Nina Stiller, Margit Letter, Marion Taschwer, and Rudolf Messner.&lt;/p&gt;&lt;p class=&quot;p1&quot; style=&quot;background: rgb(255, 255, 255); border: 0px; color: #333333; font-family: Georgia, &amp;quot;Bitstream Charter&amp;quot;, serif; font-size: 16px; margin: 0px 0px 24px; padding: 0px; vertical-align: baseline;&quot;&gt;We also would like to thank the&amp;nbsp;&lt;a href=&quot;https://www.cdg.ac.at/en/&quot; style=&quot;background: transparent; border: 0px; color: #0066cc; margin: 0px; padding: 0px; vertical-align: baseline;&quot;&gt;Christian Doppler Research Association&lt;/a&gt;&amp;nbsp;for continuous support, organizing the review, and the reviewer for constructive feedback!&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2024/07/successful-5-year-evaluation-of.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-3073359993565435523</guid><pubDate>Mon, 01 Jul 2024 15:05:00 +0000</pubDate><atom:updated>2024-07-01T17:05:14.148+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">adaptive media streaming</category><category domain="http://www.blogger.com/atom/ns#">athena</category><category domain="http://www.blogger.com/atom/ns#">dash</category><title>HTTP Adaptive Streaming – Quo Vadis? (2024)</title><description>&lt;p style=&quot;text-align: center;&quot;&gt;Telecom Seminar Series at TII,&amp;nbsp;Jun 27, 2024, 04:00 PM Dubai&lt;/p&gt;&lt;p&gt;&lt;b&gt;Abstract&lt;/b&gt;: Video traffic on the Internet is constantly growing; networked multimedia applications consume a predominant share of the available Internet bandwidth. A major technical breakthrough and enabler in multimedia systems research and of industrial networked multimedia services certainly was the HTTP Adaptive Streaming (HAS) technique. 
This resulted in the standardization of MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) which, together with HTTP Live Streaming (HLS), is widely used for multimedia delivery in today’s networks. Existing challenges in multimedia systems research deal with the trade-off between (i) the ever-increasing content complexity, (ii) various requirements with respect to time (most importantly, latency), and (iii) quality of experience (QoE). Optimizing towards one aspect usually negatively impacts at least one of the other two aspects if not both. This situation sets the stage for our research work in the ATHENA Christian Doppler (CD) Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services; https://athena.itec.aau.at/), jointly funded by public sources and industry. In this talk, we will present selected novel approaches and research results of the first year of the ATHENA CD Lab’s operation. We will highlight HAS-related research on (i) multimedia content provisioning (machine learning for video encoding); (ii) multimedia content delivery (support of edge processing and virtualized network functions for video networking); (iii) multimedia content consumption and end-to-end aspects (player-triggered segment retransmissions to improve video playout quality); and (iv) novel QoE investigations (adaptive point cloud streaming). We will also put the work into the context of international multimedia systems research.&amp;nbsp;&lt;/p&gt;

&lt;iframe src=&quot;https://www.slideshare.net/slideshow/embed_code/key/wGMXXr9TZXMbN1?startSlide=1&quot; width=&quot;597&quot; height=&quot;486&quot; frameborder=&quot;0&quot; marginwidth=&quot;0&quot; marginheight=&quot;0&quot; scrolling=&quot;no&quot; style=&quot;border:1px solid #CCC; border-width:1px; margin-bottom:5px;max-width: 100%;&quot; allowfullscreen&gt;&lt;/iframe&gt;&lt;div style=&quot;margin-bottom:5px&quot;&gt;&lt;strong&gt;&lt;a href=&quot;https://www.slideshare.net/slideshow/http-adaptive-streaming-quo-vadis-2024/269997956&quot; title=&quot;HTTP Adaptive Streaming – Quo Vadis (2024)&quot; target=&quot;_blank&quot;&gt;HTTP Adaptive Streaming – Quo Vadis (2024)&lt;/a&gt;&lt;/strong&gt; from &lt;strong&gt;&lt;a href=&quot;https://www.slideshare.net/christian.timmerer&quot; target=&quot;_blank&quot;&gt;Alpen-Adria-Universität&lt;/a&gt;&lt;/strong&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/07/http-adaptive-streaming-quo-vadis-2024.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-3059383765640504315</guid><pubDate>Thu, 06 Jun 2024 16:35:00 +0000</pubDate><atom:updated>2024-06-06T18:35:28.107+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">video streaming</category><title>Video Streaming: Then, Now, Future</title><description>&lt;p&gt;I&#39;m happy to share my slides from my public/inaugural lecture at the University of Klagenfurt on June 5, 2024.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Title: &quot;&lt;b&gt;Video Streaming: Then, Now, Future&lt;/b&gt;&quot;&lt;/li&gt;&lt;li&gt;June 5, 2024, 17:00, University of Klagenfurt, Hörsaal 2&lt;/li&gt;&lt;/ul&gt;&lt;div&gt;In my public lecture, I provide insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking 
technologies that now dominate platforms like Netflix and ORF ON. I&#39;m also presenting provocative contributions of my own that have significantly influenced the industry. I conclude by looking at future challenges and invite the audience to join in a discussion (e.g., in the comments below).&lt;/div&gt;&lt;p&gt;&lt;/p&gt;

&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;486&quot; marginheight=&quot;0&quot; marginwidth=&quot;0&quot; scrolling=&quot;no&quot; src=&quot;https://www.slideshare.net/slideshow/embed_code/key/p2ydABvEvojtxK?startSlide=1&quot; style=&quot;border-width: 1px; border: 1px solid #CCC; margin-bottom: 5px; max-width: 100%;&quot; width=&quot;597&quot;&gt;&lt;/iframe&gt;&lt;div style=&quot;margin-bottom: 5px;&quot;&gt;&lt;strong&gt;&lt;a href=&quot;https://www.slideshare.net/slideshow/video-streaming-then-now-and-in-the-future/269539283&quot; target=&quot;_blank&quot; title=&quot;Video Streaming: Then, Now, and in the Future&quot;&gt;Video Streaming: Then, Now, and in the Future&lt;/a&gt;&lt;/strong&gt; from &lt;strong&gt;&lt;a href=&quot;https://www.slideshare.net/christian.timmerer&quot; target=&quot;_blank&quot;&gt;Alpen-Adria-Universität&lt;/a&gt;&lt;/strong&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/06/video-streaming-then-now-future.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-211752488545399175</guid><pubDate>Sat, 18 May 2024 19:45:00 +0000</pubDate><atom:updated>2024-05-18T21:45:07.340+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 146th meeting</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;&amp;nbsp;&lt;span style=&quot;font-size: x-small;&quot;&gt;This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. 
This version of the blog post will also be posted at ACM SIGMM Records.&lt;/span&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s1200/MPEG-Logo-1.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;416&quot; data-original-width=&quot;1200&quot; height=&quot;111&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s320/MPEG-Logo-1.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;The 146th MPEG meeting was held in Rennes, France from 22-26 April 2024, and the official press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-146/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. 
It comprises the following highlights:&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;b&gt;AI-based Point Cloud Coding&lt;/b&gt;*: Call for proposals focusing on AI-driven point cloud encoding for applications such as immersive experiences and autonomous driving.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Object Wave Compression&lt;/b&gt;*: Call for interest in object wave compression for enhancing computer holography transmission.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Open Font Format&lt;/b&gt;: Committee Draft of the fifth edition, overcoming previous limitations like the 64K glyph encoding constraint.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Scene Description&lt;/b&gt;: Ratified second edition, integrating immersive media objects and extending support for various data types.&lt;/li&gt;&lt;li&gt;&lt;b&gt;MPEG Immersive Video (MIV)&lt;/b&gt;: New features in the second edition, enhancing the compression of immersive video content.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Video Coding Standards&lt;/b&gt;: New editions of AVC, HEVC, and Video CICP, incorporating additional SEI messages and extended multiview profiles.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Machine-Optimized Video Compression&lt;/b&gt;*: Advancement in optimizing video encoders for machine analysis.&lt;/li&gt;&lt;li&gt;&lt;b&gt;MPEG-I Immersive Audio&lt;/b&gt;*: Reached Committee Draft stage, supporting high-quality, real-time interactive audio rendering for VR/AR/MR.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Video-based Dynamic Mesh Coding (V-DMC)&lt;/b&gt;*: Committee Draft status for efficiently storing and transmitting dynamic 3D content.&lt;/li&gt;&lt;li&gt;&lt;b&gt;LiDAR Coding&lt;/b&gt;*: Enhanced efficiency and responsiveness in LiDAR data processing with the new standard reaching Committee Draft status.&lt;/li&gt;&lt;/ul&gt;* ... 
covered in this column.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;AI-based Point Cloud Coding&lt;/h2&gt;&lt;div style=&quot;text-align: left;&quot;&gt;MPEG issued a Call for Proposals (CfP) on AI-based point cloud coding technologies as a result of ongoing explorations regarding use cases, requirements, and the capabilities of AI-driven point cloud encoding, particularly for dynamic point clouds.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;With recent significant progress in AI-based point cloud compression technologies, MPEG is keen on studying and adopting AI methodologies. MPEG is specifically looking for learning-based codecs capable of handling a broad spectrum of dynamic point clouds, which are crucial for applications ranging from immersive experiences to autonomous driving and navigation. As the field evolves rapidly, MPEG expects to receive multiple innovative proposals. These may include a unified codec, capable of addressing multiple types of point clouds, or specialized codecs tailored to meet specific requirements, contingent upon demonstrating clear advantages. MPEG has therefore publicly called for submissions of AI-based point cloud codecs, aimed at deepening the understanding of the various options available and their respective impacts. Submissions that meet the requirements outlined in the call will be invited to provide source code for further analysis, potentially laying the groundwork for a new standard in AI-based point cloud coding. 
MPEG welcomes all relevant contributions and looks forward to evaluating the responses.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: In-depth analysis of algorithms, techniques, and methodologies, including a comparative study of various AI-driven point cloud compression techniques to identify the most effective approaches. Other aspects include creating or improving learning-based codecs that can handle dynamic point clouds as well as metrics for evaluating the performance of these codecs in terms of compression efficiency, reconstruction quality, computational complexity, and scalability. Finally, the assessment of how improved point cloud compression can enhance user experiences would be worthwhile to consider here also.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Object Wave Compression&lt;/h2&gt;&lt;div style=&quot;text-align: left;&quot;&gt;A Call for Interest (CfI) in object wave compression has been issued by MPEG. Computer holography, a 3D display technology, utilizes a digital fringe pattern called a computer-generated hologram (CGH) to reconstruct 3D images from input 3D models. Holographic near-eye displays (HNEDs) reduce the need for extensive pixel counts due to their wearable design, positioning the display near the eye. This positions HNEDs as frontrunners for the early commercialization of computer holography, with significant research underway for product development. Innovative approaches facilitate the transmission of object wave data, crucial for CGH calculations, over networks. Object wave transmission offers several advantages, including independent treatment from playback device optics, lower computational complexity, and compatibility with video coding technology. 
These advancements open doors for diverse applications, ranging from entertainment experiences to real-time two-way spatial transmissions, revolutionizing fields such as remote surgery and virtual collaboration. As MPEG explores object wave compression for computer holography transmission, a Call for Interest seeks contributions to address market needs in this field.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Apart from compression efficiency, lower computational complexity, and compatibility with video coding technology, there is a range of research aspects, including the design, implementation, and evaluation of coding algorithms within the scope of this CfI. The QoE of computer-generated holograms (CGHs) together with holographic near-eye displays (HNEDs) is yet another dimension to be explored.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Machine-Optimized Video Compression&lt;/h2&gt;&lt;div style=&quot;text-align: left;&quot;&gt;MPEG started working on a technical report regarding the &quot;Optimization of Encoders and Receiving Systems for Machine Analysis of Coded Video Content&quot;. In recent years, the efficacy of machine learning-based algorithms in video content analysis has steadily improved. However, an encoder designed for human consumption does not always produce compressed video conducive to effective machine analysis. This challenge lies not in the compression standard but in optimizing the encoder or receiving system. 
The forthcoming technical report addresses this gap by showcasing technologies and methods that optimize encoders or receiving systems to enhance machine analysis performance.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Video (and audio) coding for machines has recently been addressed by the MPEG Video and Audio working groups, respectively. The Joint Video Experts Team (JVET) of MPEG and ITU-T SG16 has now joined this space with a technical report, but the research aspects remain unchanged, i.e., coding efficiency, metrics, and quality aspects for machine analysis of compressed/coded video content.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-I Immersive Audio&lt;/h2&gt;&lt;div style=&quot;text-align: left;&quot;&gt;MPEG Audio Coding enters the &quot;immersive space&quot; with MPEG-I immersive audio and its corresponding reference software. The MPEG-I immersive audio standard sets a new benchmark for compact and lifelike audio representation in virtual and physical spaces, catering to Virtual, Augmented, and Mixed Reality (VR/AR/MR) applications. By enabling high-quality, real-time interactive rendering of audio content with six degrees of freedom (6DoF), users can experience immersion, freely exploring 3D environments while enjoying dynamic audio. Designed in accordance with MPEG&#39;s rigorous standards, MPEG-I immersive audio ensures efficient distribution across bandwidth-constrained networks without compromising on quality. Unlike proprietary frameworks, this standard prioritizes interoperability, stability, and versatility, supporting both streaming and downloadable content while seamlessly integrating with MPEG-H 3D audio compression. MPEG-I&#39;s comprehensive modeling of real-world acoustic effects, including sound source properties and environmental characteristics, guarantees an authentic auditory experience. 
Moreover, its efficient rendering algorithms balance computational complexity with accuracy, empowering users to finely tune scene characteristics for desired outcomes.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Evaluating QoE of MPEG-I immersive audio-enabled environments as well as the efficient audio distribution across bandwidth-constrained networks without compromising on audio quality are two important research aspects to be addressed by the research community.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Video-based Dynamic Mesh Coding (V-DMC)&lt;/h2&gt;&lt;div style=&quot;text-align: left;&quot;&gt;Video-based Dynamic Mesh Compression (V-DMC) represents a significant advancement in 3D content compression, catering to the ever-increasing complexity of dynamic meshes used across various applications, including real-time communications, storage, free-viewpoint video, augmented reality (AR), and virtual reality (VR). The standard addresses the challenges associated with dynamic meshes that exhibit time-varying connectivity and attribute maps, which were not sufficiently supported by previous standards. Video-based Dynamic Mesh Compression promises to revolutionize how dynamic 3D content is stored and transmitted, allowing more efficient and realistic interactions with 3D content globally.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: V-DMC aims to allow &quot;more efficient and realistic interactions with 3D content&quot;, which are subject to research, i.e., compression efficiency vs. 
QoE in constrained networked environments.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Low Latency, Low Complexity LiDAR Coding&lt;/h2&gt;&lt;div style=&quot;text-align: left;&quot;&gt;Low Latency, Low Complexity LiDAR Coding underscores MPEG&#39;s commitment to advancing coding technologies required by modern LiDAR applications across diverse sectors. The new standard addresses critical needs in the processing and compression of LiDAR-acquired point clouds, which are integral to applications ranging from automated driving to smart city management. It provides an optimized solution for scenarios requiring high efficiency in both compression and real-time delivery, responding to the increasingly complex demands of LiDAR data handling. LiDAR technology has become essential for various applications that require detailed environmental scanning, from autonomous vehicles navigating roads to robots mapping indoor spaces. The Low Latency, Low Complexity LiDAR Coding standard will facilitate a new level of efficiency and responsiveness in LiDAR data processing, which is critical for the real-time decision-making capabilities needed in these applications. This standard builds on comprehensive analysis and industry feedback to address specific challenges such as noise reduction, temporal data redundancy, and the need for region-based quality of compression. The standard also emphasizes the importance of low latency coding to support real-time applications, essential for operational safety and efficiency in dynamic environments.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: This standard effectively tackles the challenge of balancing high compression efficiency with real-time capabilities, addressing these often conflicting goals. 
Researchers may carefully consider these aspects and make meaningful contributions.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;The 147th MPEG meeting will be held in Sapporo, Japan, from July 15-19, 2024. Click &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-147/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/div&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/05/mpeg-news-report-from-146th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHeax1p_CysQZDHK7D4Zm2y3RrdBy46D5lJ-_4klCc5jFAeEJMLJ5PDeP3VFCbfmJJsQdTmz6PJL65rddcqFYFsR3xFmRe14tg9uH4CdCjd-rZ_hTX1YS399SZpghF6qpAzpV8oZI1nHM24EUbIMmhKPIHrG8gz-qQfY-WyW8E2g6mOu2aTmKyHuBiNnY/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-4637438423235318892</guid><pubDate>Fri, 17 May 2024 19:26:00 +0000</pubDate><atom:updated>2024-05-18T21:26:33.305+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 145th meeting</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;This blog post is based on the MPEG press release and has been modified/updated here to focus on and highlight research aspects. 
This version of the blog post will also be posted at &lt;a href=&quot;http://records.sigmm.org/&quot; target=&quot;_blank&quot;&gt;ACM SIGMM Records&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju266lkAN2rcumZDn4AHpdUgrDZWI5pq74AGS3YPmb0SLHzLL9GSJe8j4Cb0tRJlIWTy831504O9jvq7q7PiVXSJ185n7dD2mmZrRCceaKlH48PRKbRvOzfjGdCg1xa2duC4qJ19FQcOdbtAKnd1lgqlPDDqSurvGTlzoqjNnAgdxt8WqMyfhLbABpJas/s1200/MPEG-Logo-1.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;416&quot; data-original-width=&quot;1200&quot; height=&quot;111&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju266lkAN2rcumZDn4AHpdUgrDZWI5pq74AGS3YPmb0SLHzLL9GSJe8j4Cb0tRJlIWTy831504O9jvq7q7PiVXSJ185n7dD2mmZrRCceaKlH48PRKbRvOzfjGdCg1xa2duC4qJ19FQcOdbtAKnd1lgqlPDDqSurvGTlzoqjNnAgdxt8WqMyfhLbABpJas/s320/MPEG-Logo-1.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;The 145th MPEG meeting was held online from 22-26 January 2024, and the official press release can be found &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-145/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. 
It comprises the following highlights:&lt;/div&gt;&lt;div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Latest Edition of the High Efficiency Image Format Standard Unveils Cutting-Edge Features for Enhanced Image Decoding and Annotation&lt;/li&gt;&lt;li&gt;MPEG Systems finalizes Standards supporting Interoperability Testing&lt;/li&gt;&lt;li&gt;MPEG finalizes the Third Edition of MPEG-D Dynamic Range Control&lt;/li&gt;&lt;li&gt;MPEG finalizes the Second Edition of MPEG-4 Audio Conformance&lt;/li&gt;&lt;li&gt;MPEG Genomic Coding extended to support Transport and File Format for Genomic Annotations&lt;/li&gt;&lt;li&gt;MPEG White Paper: Neural Network Coding (NNC) – Efficient Storage and Inference of Neural Networks for Multimedia Applications&lt;/li&gt;&lt;/ul&gt;This column will focus on the High Efficiency Image Format (HEIF) and interoperability testing. As usual, a brief update on MPEG-DASH et al. will be provided.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;High Efficiency Image Format (HEIF)&lt;/h2&gt;&lt;div&gt;The High Efficiency Image Format (HEIF) is a widely adopted standard in the imaging industry that continues to grow in popularity. At the 145th MPEG meeting, MPEG Systems (WG 3) ratified its third edition, which introduces exciting new features, such as progressive decoding capabilities that enhance image quality through a sequential, single-decoder instance process. With this enhancement, users can decode bitstreams in successive steps, with each phase delivering perceptible improvements in image quality compared to the preceding step. Additionally, the new edition introduces a sophisticated data structure that describes the spatial configuration of the camera and outlines the unique characteristics responsible for generating the image content. The update also includes innovative tools for annotating specific areas in diverse shapes, adding a layer of creativity and customization to image content manipulation. 
These annotation features cater to the diverse needs of users across various industries.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Progressive coding has been a part of modern image coding formats for some time now. However, the inclusion of supplementary metadata provides an opportunity to explore new use cases that can benefit both user experience (UX) and quality of experience (QoE) in academic settings.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Interoperability Testing&lt;/h2&gt;&lt;div&gt;MPEG standards typically comprise format definitions (or specifications) to enable interoperability among products and services from different vendors. Interestingly, MPEG goes beyond these format specifications and provides reference software and conformance bitstreams, allowing conformance testing.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;At the 145th MPEG meeting, MPEG Systems (WG 3) finalized two standards comprising conformance and reference software by promoting them to the Final Draft International Standard (FDIS), the final stage of standards development. The finalized standards, ISO/IEC 23090-24 and ISO/IEC 23090-25, showcase the pinnacle of conformance and reference software for scene description and visual volumetric video-based coding data, respectively.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;ISO/IEC 23090-24 focuses on conformance and reference software for scene description, providing a comprehensive reference implementation and bitstream tailored for conformance testing related to ISO/IEC 23090-14, scene description. This standard opens new avenues for advancements in scene depiction technologies, setting a new standard for conformance and software reference in this domain.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Similarly, ISO/IEC 23090-25 targets conformance and reference software for the carriage of visual volumetric video-based coding data. 
With a dedicated reference implementation and bitstream, this standard is poised to elevate the conformance testing standards for ISO/IEC 23090-10, the carriage of visual volumetric video-based coding data. The introduction of this standard is expected to have a transformative impact on the visualization of volumetric video data.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;At the same 145th MPEG meeting, MPEG Audio Coding (WG6) celebrated the completion of the second edition of ISO/IEC 14496-26, audio conformance, elevating it to the Final Draft International Standard (FDIS) stage. This significant update incorporates seven corrigenda and five amendments into the initial edition, originally published in 2010.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;ISO/IEC 14496-26 serves as a pivotal standard, providing a framework for designing tests to ensure the compliance of compressed data and decoders with the requirements outlined in ISO/IEC 14496-3 (MPEG-4 Audio). The second edition reflects an evolution of the original, addressing key updates and enhancements through diligent amendments and corrigenda. This latest edition, now at the FDIS stage, marks a notable stride in MPEG Audio Coding&#39;s commitment to refining audio conformance standards and ensuring the seamless integration of compressed data within the MPEG-4 Audio framework.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;These standards will be made freely accessible for download on the official ISO website, ensuring widespread availability for industry professionals, researchers, and enthusiasts alike.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Reference software and conformance bitstreams often serve as the basis for further research (and development) activities and, thus, are highly appreciated. 
For example, reference software of video coding formats (e.g., HM for HEVC, VTM for VVC) can be used as a baseline when improving coding efficiency or other aspects of the coding format.&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-DASH Updates&lt;/h2&gt;&lt;div&gt;The current status of MPEG-DASH is shown in the figure below.&lt;/div&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhN-XpIEXXyxLj11_x8LTE_dLi5J7YpLjKFKRTZVaa_OiocPxtbD0PEyyBIGUkhoh9I_u-3MX36K1Ss7TZNX3ifRwo7O3h6euC7GIIYOYLT4tqr1ecvy7neUQrQMI6cU3EUlOZeEZvaZJH4qPC18Ef5YvXQ9zADXBvpJhxVMTFlS_6xpw4Tt53QIlRM_Vk/s1024/MPEG-DASH-standard-status.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;576&quot; data-original-width=&quot;1024&quot; height=&quot;360&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhN-XpIEXXyxLj11_x8LTE_dLi5J7YpLjKFKRTZVaa_OiocPxtbD0PEyyBIGUkhoh9I_u-3MX36K1Ss7TZNX3ifRwo7O3h6euC7GIIYOYLT4tqr1ecvy7neUQrQMI6cU3EUlOZeEZvaZJH4qPC18Ef5YvXQ9zADXBvpJhxVMTFlS_6xpw4Tt53QIlRM_Vk/w640-h360/MPEG-DASH-standard-status.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;MPEG-DASH Status, January 2024.&lt;br /&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;The most notable aspects discussed at the 145th MPEG meeting and adopted into ISO/IEC 23009-1, which will eventually become the 6th edition of the MPEG-DASH standard, are as follows:&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;ul&gt;&lt;li&gt;It is now possible to pass the CMCD parameters sid and cid via the MPD URL.&lt;/li&gt;&lt;li&gt;Segment duration patterns can be signaled using SegmentTimeline.&lt;/li&gt;&lt;li&gt;Definition of a background mode of operation, which allows a DASH player to receive MPD updates and listen to events without necessarily decrypting or rendering any media.&lt;/li&gt;&lt;/ul&gt;Additionally, the technologies under consideration (TuC) document has been updated with means to signal the maximum segment rate, extend copyright license signaling, and improve haptics signaling in DASH. Finally, REAP is progressing towards FDIS but has not yet reached that stage; most details will be discussed in the upcoming AhG period.&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;The 146th MPEG meeting will be held in Rennes, France, from April 22-26, 2024. Click &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-146/&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;br /&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/05/mpeg-news-report-from-145th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju266lkAN2rcumZDn4AHpdUgrDZWI5pq74AGS3YPmb0SLHzLL9GSJe8j4Cb0tRJlIWTy831504O9jvq7q7PiVXSJ185n7dD2mmZrRCceaKlH48PRKbRvOzfjGdCg1xa2duC4qJ19FQcOdbtAKnd1lgqlPDDqSurvGTlzoqjNnAgdxt8WqMyfhLbABpJas/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7372011409388284520</guid><pubDate>Thu, 16 May 2024 20:41:00 
+0000</pubDate><atom:updated>2024-05-16T22:41:41.942+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">jobs</category><title>Assistant Professor (postdoc) with QA option (tenure track) (all genders welcome)</title><description>&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Department of Information Technology&amp;nbsp;&lt;/b&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;Scientific Staff&amp;nbsp; | Full time&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Application deadline: 12 June 2024&lt;/b&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;Reference code: 673/23 [&lt;a href=&quot;https://jobs.aau.at/en/job/assistant-professor-postdoc-with-qa-option-tenure-track-all-genders-welcome-3/&quot;&gt;URL&lt;/a&gt;]&lt;/p&gt;&lt;p&gt;&lt;span&gt;&lt;/span&gt;&lt;/p&gt;&lt;p&gt;The University of Klagenfurt, with approximately 1,500 employees and over 12,000 students, is located in the Alps-Adriatic region and consistently achieves excellent placements in rankings. The motto “per aspera ad astra” underscores our firm commitment to the pursuit of excellence in all activities in research, teaching, and university management. 
The principles of equality, diversity, health, sustainability, and compatibility of work and family life serve as the foundation for our work at the university.&lt;/p&gt;&lt;p&gt;The University of Klagenfurt is pleased to announce the following open position at the Department of Information Technology at the Faculty of Technical Sciences with an expected starting date of 7 January 2025:&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Assistant Professor (postdoc) with QA option (tenure track) (all genders welcome)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Level of employment: 100 % (40 hours/week)&lt;/p&gt;&lt;p&gt;Minimum salary: € 66,532.20 per annum (gross), Classification according to collective agreement: B1 lit.b&lt;/p&gt;&lt;p&gt;Limited to: 6 years (with the option of transitioning to a permanent contract)&lt;/p&gt;&lt;p&gt;Application deadline: 12 June 2024&lt;/p&gt;&lt;p&gt;Reference code: 673/23&lt;/p&gt;&lt;p&gt;&lt;b&gt;Area of responsibility&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Independent research in computer science and communication technologies with the aim of habilitation&lt;/li&gt;&lt;li&gt;Independent delivery of courses in English and German using established and innovative methods&lt;/li&gt;&lt;li&gt;Participation in the research and teaching projects run by the organisational unit&lt;/li&gt;&lt;li&gt;Acquisition and management of third-party funded projects&lt;/li&gt;&lt;li&gt;Supervision of students at Bachelor, Master, and doctoral levels&lt;/li&gt;&lt;li&gt;Participation in organisational and administrative tasks and in quality assurance measures&lt;/li&gt;&lt;li&gt;Contribution to expanding the international scientific and cultural contacts of the organisational unit&lt;/li&gt;&lt;li&gt;Participation in public relations activities including third mission&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Requirements&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul 
style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Doctoral degree in the field of computer science, information and communications engineering, electrical engineering or related fields completed at a domestic or foreign higher education institution&lt;/li&gt;&lt;li&gt;Relevant and good publication record in the field of multimedia systems&lt;/li&gt;&lt;li&gt;A strong background in one or both fields&lt;/li&gt;&lt;ul&gt;&lt;li&gt;(Distributed) multimedia systems, preferably covering video in the context of video coding, communication, streaming, and quality of experience (QoE);&lt;/li&gt;&lt;li&gt;Machine learning, preferably in the context of (distributed) multimedia systems or/and computer vision&lt;/li&gt;&lt;/ul&gt;&lt;li&gt;Very good scientific communication and dissemination skills (scientific writing and oral presentations)&lt;/li&gt;&lt;li&gt;Excellent programming skills in multimedia systems or/and machine learning&lt;/li&gt;&lt;li&gt;Excellent spoken and written English skills&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Desired skills&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Experience in the acquisition and running of third-party funded projects and readiness to play an active role in third-party funded projects and their acquisition&lt;/li&gt;&lt;li&gt;Didactic competence and proven successful teaching experience&lt;/li&gt;&lt;li&gt;Willingness to actively participate in research, teaching, and administration&lt;/li&gt;&lt;li&gt;Scientific curiosity and enthusiasm for imparting knowledge&lt;/li&gt;&lt;li&gt;Gender mainstreaming and diversity management skills&lt;/li&gt;&lt;li&gt;Leadership and teamwork skills&lt;/li&gt;&lt;li&gt;Good spoken and written German skills&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Additional information&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Our offer:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;This tenure track position includes the option of negotiating a qualification agreement 
in accordance with Section 27 of the collective agreement for university staff for the areas of research, independent teaching, management and administrative tasks, and experience gained externally (QA). The employment contract is concluded for the position as Assistant Professor (postdoc) with QA option and stipulates a starting salary of € 4,752.30 gross per month (14 times a year; previous experience deemed relevant to the job can be recognised in accordance with the collective agreement). Upon entering into the qualification agreement, the position shall be classified as an Assistant Professorship with a minimum gross salary of € 5,595.60 per month. Upon fulfilling the stipulations of the qualification agreement, the post-holder shall be promoted to tenured Associate Professor with a minimum gross salary of € 6,055.70 per month.&lt;/p&gt;&lt;p&gt;&lt;i&gt;The University of Klagenfurt also offers:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Personal and professional advanced training courses, management, and career coaching&lt;/li&gt;&lt;li&gt;Numerous attractive additional benefits, see also &lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;&lt;/li&gt;&lt;li&gt;Diversity- and family-friendly university culture&lt;/li&gt;&lt;li&gt;The opportunity to live and work in the attractive Alps-Adriatic region with a wide range of leisure activities in the spheres of culture, nature, and sports&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The application:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;If you are interested in this position, please apply in German or English, providing a convincing application including the following:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Letter of application, including – but not limited to – motivation as well as a concise research and teaching statement, 
respectively&lt;/li&gt;&lt;li&gt;Curriculum vitae, including publication and lecture lists, as well as details and an explanation of research and teaching activities (please do not include a photo)&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Furthermore:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Proof of all completed higher education programmes (certificates, supplements, if applicable)&lt;/li&gt;&lt;li&gt;Outline of the content of the doctoral programme (listing academic achievements, intermediate examinations, etc.) as well as the content of the thesis (summary)&lt;/li&gt;&lt;li&gt;Other documentary evidence that may be relevant to this announcement (see prerequisites and desired qualifications)&lt;/li&gt;&lt;li&gt;Please provide three references (contact details of persons who the university may contact by telephone for information purposes)&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;To apply, please select the position with the reference code 673/23 in the category “Scientific Staff” using the link “Apply for this position” in the job portal at &lt;a href=&quot;https://jobs.aau.at/en/&quot;&gt;https://jobs.aau.at/en/&lt;/a&gt;. &amp;gt;&amp;gt;&amp;gt; &lt;a href=&quot;https://jobs.aau.at/en/job/assistant-professor-postdoc-with-qa-option-tenure-track-all-genders-welcome-3/&quot;&gt;LINK&lt;/a&gt; &amp;lt;&amp;lt;&amp;lt;&lt;/p&gt;&lt;p&gt;Candidates must furnish proof that they meet the required qualifications by 12 June 2024 at the latest.&lt;/p&gt;&lt;p&gt;For further information on this specific vacancy, please contact Prof. Christian Timmerer (christian.timmerer@aau.at). General information about the university as an employer can be found at &lt;a href=&quot;https://jobs.aau.at/en/the-university-as-employer/&quot;&gt;https://jobs.aau.at/en/the-university-as-employer/&lt;/a&gt;. 
At the University of Klagenfurt, recruitment and staff matters are accompanied not only by the authority responsible for the recruitment procedure but also by the &lt;a href=&quot;https://www.aau.at/en/university/organisation/representations-commissioners/equal-opportunities-working-group/&quot; target=&quot;_blank&quot;&gt;Equal Opportunities Working Group&lt;/a&gt; and, if necessary, by the &lt;a href=&quot;https://www.aau.at/en/university/organisation/administration-and-management/integrated-study/&quot; target=&quot;_blank&quot;&gt;Representative for Disabled Persons&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The University of Klagenfurt aims to increase the proportion of women and therefore specifically invites qualified women to apply for the position. Where the qualification is equivalent, women will be given preferential consideration.&lt;/p&gt;&lt;p&gt;As part of its human resources policy, the University of Klagenfurt places particular emphasis on anti-discrimination, equal opportunities, and diversity.&lt;/p&gt;&lt;p&gt;People with disabilities or chronic diseases, who fulfil the requirements, are particularly encouraged to apply.&lt;/p&gt;&lt;p&gt;Travel and accommodation costs incurred during the application process will not be refunded.&lt;/p&gt;&lt;p&gt;Translations into other languages shall serve informational purposes only. 
Solely the version advertised in the University Bulletin (&lt;a href=&quot;https://www.aau.at/universitaet/service-kontakt/mitteilungsblaetter/mitteilungsblaetter-2023-2024/&quot; target=&quot;_blank&quot;&gt;Mitteilungsblatt&lt;/a&gt;) shall be legally binding.&amp;nbsp;&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2024/05/assistant-professor-postdoc-with-qa.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7956322987097607677</guid><pubDate>Wed, 17 Jan 2024 10:29:00 +0000</pubDate><atom:updated>2024-01-17T11:29:49.550+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">conference</category><category domain="http://www.blogger.com/atom/ns#">MHV</category><category domain="http://www.blogger.com/atom/ns#">MOQ</category><category domain="http://www.blogger.com/atom/ns#">Segments</category><category domain="http://www.blogger.com/atom/ns#">streaming</category><title>Streaming week in Denver: MOQ interim + Mile-High Video + SVTA Segments</title><description>&lt;p&gt;&lt;span style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;The next &lt;b&gt;Media over QUIC (MOQ)&lt;/b&gt; interim meeting will be hosted by Comcast in Denver (Feb. 6-8). It is open to public participation and it is free. Details are here: &lt;a href=&quot;https://github.com/moq-wg/wg-materials/blob/main/interim-24-02/arrangements.md&quot;&gt;https://github.com/moq-wg/wg-materials/blob/main/interim-24-02/arrangements.md&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;Then, the&amp;nbsp;&lt;b&gt;ACM Mile-High Video conference&lt;/b&gt; will be just a few miles away (including a Latency Party during the Super Bowl) between Feb. 11-14. 
Details are here: &lt;a href=&quot;https://www.mile-high.video/technical-program&quot;&gt;https://www.mile-high.video/technical-program&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;&lt;br /&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj158ZM-9H0hJc-vvhBnwsGozEv6eRv6lpmYHmkIQ9kMi5hT0L6FeOsg0bkNJwsEk8-X62Gx-Ofy6BdmTEPZv_JkPaCd6EtSFhyL06pB_0iEyE8Yj7vhxE4LUY057xOxH5PMKdoFnyew8QR4WiN_N9KRjLSiyUxfTyQBMo_SGfW8HwsMkQMg1uUBL4pgt8/s1134/MHV24_2.webp&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;216&quot; data-original-width=&quot;1134&quot; height=&quot;61&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj158ZM-9H0hJc-vvhBnwsGozEv6eRv6lpmYHmkIQ9kMi5hT0L6FeOsg0bkNJwsEk8-X62Gx-Ofy6BdmTEPZv_JkPaCd6EtSFhyL06pB_0iEyE8Yj7vhxE4LUY057xOxH5PMKdoFnyew8QR4WiN_N9KRjLSiyUxfTyQBMo_SGfW8HwsMkQMg1uUBL4pgt8/s320/MHV24_2.webp&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;Finally, &lt;b&gt;SVTA Segments 2024&lt;/b&gt; will take place at the same venue on Feb. 14th. 
Details are here: &lt;a href=&quot;https://segments2024.svta.org/&quot;&gt;https://segments2024.svta.org/&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNwNvp2jgG_6vyhRpVYWd652vcraDIPGXU1DV6s1QVLtF7HV39ICcYZcfCnJRB07p_KvdH6GJgkF1JL-KgQWG1IT7zF_0OEmKX9tvUKxDf5jU0s2iHMIFy-SpIWMack5oR_x0tmG83Y73r4mN-bTcbfKJqgvX8QRVyE8xWXM6_bBOdtKCFL1yIqi_mXLY/s691/SEGMENTS-Logo-2024.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;116&quot; data-original-width=&quot;691&quot; height=&quot;54&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNwNvp2jgG_6vyhRpVYWd652vcraDIPGXU1DV6s1QVLtF7HV39ICcYZcfCnJRB07p_KvdH6GJgkF1JL-KgQWG1IT7zF_0OEmKX9tvUKxDf5jU0s2iHMIFy-SpIWMack5oR_x0tmG83Y73r4mN-bTcbfKJqgvX8QRVyE8xWXM6_bBOdtKCFL1yIqi_mXLY/s320/SEGMENTS-Logo-2024.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px; text-align: center;&quot;&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style=&quot;caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 12px;&quot;&gt;You can benefit from the early (and combo) registration rates for Mile-High Video and Segments.&lt;/div&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;br /&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2024/01/streaming-week-in-denver-moq-interim.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" 
url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj158ZM-9H0hJc-vvhBnwsGozEv6eRv6lpmYHmkIQ9kMi5hT0L6FeOsg0bkNJwsEk8-X62Gx-Ofy6BdmTEPZv_JkPaCd6EtSFhyL06pB_0iEyE8Yj7vhxE4LUY057xOxH5PMKdoFnyew8QR4WiN_N9KRjLSiyUxfTyQBMo_SGfW8HwsMkQMg1uUBL4pgt8/s72-c/MHV24_2.webp" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7463852109463716175</guid><pubDate>Thu, 07 Dec 2023 14:34:00 +0000</pubDate><atom:updated>2023-12-07T15:34:51.533+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">award</category><category domain="http://www.blogger.com/atom/ns#">dash</category><category domain="http://www.blogger.com/atom/ns#">mpeg-dash</category><title>Hat-Trick Victory: MPEG-DASH Papers Shine in ACM SIGMM Test of Time Awards</title><description>The Association for Computing Machinery (ACM) Special Interest Group on Multimedia (SIGMM) provides a Test of Time Award. The details for this award can be found &lt;a href=&quot;http://sigmm.org/Awards/testoftime&quot;&gt;here&lt;/a&gt;, and I took the liberty to copy the main aspects as follows.&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;&lt;blockquote&gt;&quot;This award is presented every year, starting in 2020, to the authors of the paper published either 10, 11 or 12 years previously at an SIGMM sponsored or co-sponsored conference (so the 2020 award would be for papers at a 2008, 2009 or 2010 SIGMM conference). The award recognizes the paper that has had the most impact and influence on the field of Multimedia in terms of research, development, product or ideas, during the intervening years, as selected by a selection committee. 
The contributions the selection committee will focus on may be theoretical advances, techniques and/or software tools that have been widely used, and/or innovative applications that have had impact on multimedia computing.&quot;&lt;/blockquote&gt;&lt;br /&gt;Interestingly, in the past three years, papers related to MPEG-DASH were always among the winners or honorable mentions as follows:&lt;/div&gt;&lt;div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;2021 Awards&lt;/h2&gt;&lt;b&gt;Winner (MM Systems &amp;amp; Networking)&lt;br /&gt;&lt;/b&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;Thomas Stockhammer. 2011. &lt;b&gt;Dynamic adaptive streaming over HTTP --: standards and design principles&lt;/b&gt;. In Proceedings of the second annual ACM conference on Multimedia systems (MMSys &#39;11). Association for Computing Machinery, New York, NY, USA, 133–144. &lt;a href=&quot;https://dl.acm.org/doi/10.1145/1943552.1943572&quot;&gt;https://dl.acm.org/doi/10.1145/1943552.1943572&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;b&gt;Abstract&lt;/b&gt;: In this paper, we provide some insight and background into the Dynamic Adaptive Streaming over HTTP (DASH) specifications as available from 3GPP and in draft version also from MPEG. Specifically, the 3GPP version provides a normative description of a Media Presentation, the formats of a Segment, and the delivery protocol. In addition, it adds an informative description on how a DASH Client may use the provided information to establish a streaming service for the user. The solution supports different service types (e.g., On-Demand, Live, Time-Shift Viewing), different features (e.g., adaptive bitrate switching, multiple language support, ad insertion, trick modes, DRM) and different deployment options. 
Design principles and examples are provided.&lt;br /&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;2022 Awards&lt;/h2&gt;&lt;b&gt;Honorable Mention, in the category of “Multimedia Systems- Networks”&lt;br /&gt;&lt;/b&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;Saamer Akhshabi, Ali C. Begen, and Constantine Dovrolis. 2011. &lt;b&gt;An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP&lt;/b&gt;. In Proceedings of the second annual ACM conference on Multimedia systems (MMSys &#39;11). Association for Computing Machinery, New York, NY, USA, 157–168. &lt;a href=&quot;https://dl.acm.org/doi/10.1145/1943552.1943574&quot;&gt;https://dl.acm.org/doi/10.1145/1943552.1943574&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;b&gt;Abstract&lt;/b&gt;: Adaptive (video) streaming over HTTP is gradually being adopted, as it offers significant advantages in terms of both user-perceived quality and resource utilization for content and network service providers. In this paper, we focus on the rate-adaptation mechanisms of adaptive streaming and experimentally evaluate two major commercial players (Smooth Streaming, Netflix) and one open source player (OSMF). Our experiments cover three important operating conditions. First, how does an adaptive video player react to either persistent or short-term changes in the underlying network available bandwidth. Can the player quickly converge to the maximum sustainable bitrate? Second, what happens when two adaptive video players compete for available bandwidth in the bottleneck link? Can they share the resources in a stable and fair manner? And third, how does adaptive streaming perform with live content? Is the player able to sustain a short playback delay? 
We identify major differences between the three players, and significant inefficiencies in each of them.&lt;br /&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;2023 Awards&lt;/h2&gt;&lt;b&gt;Honorable Mention, in the category of &quot;MM Systems &amp;amp; Networking&quot;&lt;br /&gt;&lt;/b&gt;&lt;br /&gt;&lt;div style=&quot;text-align: center;&quot;&gt;Stefan Lederer, Christopher Müller, and Christian Timmerer. 2012. &lt;b&gt;Dynamic adaptive streaming over HTTP dataset&lt;/b&gt;. In Proceedings of the 3rd Multimedia Systems Conference (MMSys &#39;12). Association for Computing Machinery, New York, NY, USA, 89–94. &lt;a href=&quot;https://dl.acm.org/doi/10.1145/2155555.2155570&quot;&gt;https://dl.acm.org/doi/10.1145/2155555.2155570&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;b&gt;Abstract&lt;/b&gt;: The delivery of audio-visual content over the Hypertext Transfer Protocol (HTTP) got lot of attention in recent years and with dynamic adaptive streaming over HTTP (DASH) a standard is now available. Many papers cover this topic and present their research results, but unfortunately all of them use their own private dataset which -- in most cases -- is not publicly available. Hence, it is difficult to compare, e.g., adaptation algorithms in an objective way due to the lack of a common dataset which shall be used as basis for such experiments. In this paper, we present our DASH dataset including our DASHEncoder, an open source DASH content generation tool. 
We also provide basic evaluations of the different segment lengths, the influence of HTTP server settings, and, in this context, we show some of the advantages as well as problems of shorter segment lengths.&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2023/12/hat-trick-victory-mpeg-dash-papers.html</link><author>noreply@blogger.com (Unknown)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-6674429876431979803</guid><pubDate>Tue, 05 Dec 2023 16:47:00 +0000</pubDate><atom:updated>2023-12-05T17:47:51.991+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">holography</category><category domain="http://www.blogger.com/atom/ns#">immersive</category><category domain="http://www.blogger.com/atom/ns#">immersive experience</category><category domain="http://www.blogger.com/atom/ns#">omnidirectional video streaming</category><category domain="http://www.blogger.com/atom/ns#">survey</category><category domain="http://www.blogger.com/atom/ns#">tutorial</category><title>A Tutorial on Immersive Video Delivery: From Omnidirectional Video to Holography</title><description>&lt;p style=&quot;text-align: center;&quot;&gt;&amp;nbsp;&lt;b&gt;A Tutorial on Immersive Video Delivery: From Omnidirectional Video to Holography&lt;/b&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;i&gt;IEEE Communications Surveys and Tutorials&lt;/i&gt;&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;[&lt;a href=&quot;https://ieeexplore.ieee.org/document/10089176&quot; target=&quot;_blank&quot;&gt;PDF&lt;/a&gt;]&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;Jeroen van der Hooft (Ghent University, Belgium), Hadi Amirpour (AAU, Austria), Maria Torres Vega (KU Leuven, Belgium), Yago Sanchez (Fraunhofer/HHI), Raimund Schatz (AIT, Austria), Thomas Schierl (Fraunhofer/HHI, Germany), and Christian Timmerer (AAU, Austria)&lt;/p&gt;&lt;div 
class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB319YQbrI_DNKiCkfqNywTMN8GeOSPQSFYIWLiw5hmIKBTnmnUlsi0o9eABgTOjT9axXFm4IUHVkcZmDQJxs6499CjcWBv4Mm0_j0UO1eD4e8LtQEf3DK8chVfVOKXcfwL0TkObRqISdUfID5w8PCiJ774TF7YK-LScmw22vjS8LpuxX-GwjQ1mGJoo0/s4202/3_end_to_end.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;1596&quot; data-original-width=&quot;4202&quot; height=&quot;245&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB319YQbrI_DNKiCkfqNywTMN8GeOSPQSFYIWLiw5hmIKBTnmnUlsi0o9eABgTOjT9axXFm4IUHVkcZmDQJxs6499CjcWBv4Mm0_j0UO1eD4e8LtQEf3DK8chVfVOKXcfwL0TkObRqISdUfID5w8PCiJ774TF7YK-LScmw22vjS8LpuxX-GwjQ1mGJoo0/w640-h245/3_end_to_end.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;&lt;b&gt;Abstract&lt;/b&gt;: Video services are evolving from traditional two-dimensional video to virtual reality and holograms, which offer six degrees of freedom to users, enabling them to freely move around in a scene and change focus as desired. However, this increase in freedom translates into stringent requirements in terms of ultra-high bandwidth (in the order of Gigabits per second) and minimal latency (in the order of milliseconds). To realize such immersive services, the network transport, as well as the video representation and encoding, have to be fundamentally enhanced. The purpose of this tutorial article is to provide an elaborate introduction to the creation, streaming, and evaluation of immersive video. 
Moreover, it aims to provide lessons learned and to point at promising research paths to enable truly interactive immersive video applications toward holography.&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;&lt;b&gt;Keywords&lt;/b&gt;—Immersive video delivery, 3DoF, 6DoF, omnidirectional video, volumetric video, point clouds, meshes, light fields, holography, end-to-end systems&lt;/p&gt;&lt;p style=&quot;text-align: justify;&quot;&gt;J. van der Hooft,&amp;nbsp;H. Amirpour, M. Torres Vega, Y. Sanchez, R. Schatz, T. Schierl, C. Timmerer, &quot;A Tutorial on Immersive Video Delivery: From Omnidirectional Video to Holography,&quot; in IEEE Communications Surveys &amp;amp; Tutorials, vol. 25, no. 2, pp. 1336-1375, Secondquarter 2023, doi: 10.1109/COMST.2023.3263252.&lt;/p&gt;
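The "ultra-high bandwidth (in the order of Gigabits per second)" claim in the abstract can be made concrete with a back-of-the-envelope sketch. The numbers below (resolution, tile count, background-quality factor) are illustrative assumptions of mine, not values taken from the tutorial:

```python
# Back-of-the-envelope bitrate estimates for omnidirectional video delivery.
# All parameters (resolution, tiling, quality factors) are illustrative
# assumptions for this sketch, not values from the tutorial.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel=12):
    """Uncompressed bitrate in Gbit/s; 12 bits/pixel approximates 4:2:0 YUV."""
    return width * height * fps * bits_per_pixel / 1e9

def tiled_bitrate_mbps(full_sphere_mbps, tiles=24, viewport_tiles=6,
                       background_quality=0.25):
    """Viewport-adaptive delivery: viewport tiles at full quality, the
    remaining tiles at a reduced-quality (here: quarter-rate) representation."""
    per_tile = full_sphere_mbps / tiles
    viewport = viewport_tiles * per_tile
    background = (tiles - viewport_tiles) * per_tile * background_quality
    return viewport + background

if __name__ == "__main__":
    raw = raw_bitrate_gbps(7680, 3840, 60)   # 8K equirectangular at 60 fps
    print(f"uncompressed: {raw:.1f} Gbit/s")               # about 21.2 Gbit/s
    print(f"tiled: {tiled_bitrate_mbps(100):.2f} Mbit/s")  # 43.75 vs. 100 Mbit/s
```

Even with generous assumptions, uncompressed 8K omnidirectional video sits above 20 Gbit/s, and viewport-adaptive tiling cuts the delivered bitrate of an already-compressed representation by more than half, which is why tiled, viewport-dependent streaming features so prominently in the delivery pipelines surveyed by the tutorial.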

&lt;iframe allowfullscreen=&quot;&quot; frameborder=&quot;0&quot; height=&quot;486&quot; marginheight=&quot;0&quot; marginwidth=&quot;0&quot; scrolling=&quot;no&quot; src=&quot;https://www.slideshare.net/slideshow/embed_code/key/x5rSbaAlQhb7X4?startSlide=1&quot; style=&quot;border-width: 1px; border: 1px solid #CCC; margin-bottom: 5px; max-width: 100%;&quot; width=&quot;597&quot;&gt;&lt;/iframe&gt;&lt;div style=&quot;margin-bottom: 5px;&quot;&gt;&lt;strong&gt;&lt;a href=&quot;https://www.slideshare.net/christian.timmerer/immersive-video-delivery-from-omnidirectional-video-to-holography&quot; target=&quot;_blank&quot; title=&quot;Immersive Video Delivery: From Omnidirectional Video to Holography&quot;&gt;Immersive Video Delivery: From Omnidirectional Video to Holography&lt;/a&gt;&lt;/strong&gt; from &lt;strong&gt;&lt;a href=&quot;https://www.slideshare.net/christian.timmerer&quot; target=&quot;_blank&quot;&gt;Alpen-Adria-Universität&lt;/a&gt;&lt;/strong&gt;&lt;/div&gt;</description><link>http://multimediacommunication.blogspot.com/2023/12/a-tutorial-on-immersive-video-delivery.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhB319YQbrI_DNKiCkfqNywTMN8GeOSPQSFYIWLiw5hmIKBTnmnUlsi0o9eABgTOjT9axXFm4IUHVkcZmDQJxs6499CjcWBv4Mm0_j0UO1eD4e8LtQEf3DK8chVfVOKXcfwL0TkObRqISdUfID5w8PCiJ774TF7YK-LScmw22vjS8LpuxX-GwjQ1mGJoo0/s72-w640-h245-c/3_end_to_end.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-3887232270809471153</guid><pubDate>Tue, 28 Nov 2023 15:50:00 +0000</pubDate><atom:updated>2023-11-28T16:50:03.673+01:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 144th meeting</title><description>&lt;p 
style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;&lt;span style=&quot;text-align: right;&quot;&gt;The original blog post can be found at the&lt;/span&gt;&lt;span style=&quot;text-align: right;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://bitmovin.com/blog/&quot; style=&quot;text-align: right;&quot; target=&quot;_blank&quot;&gt;Bitmovin Techblog&lt;/a&gt;&lt;span style=&quot;text-align: right;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;text-align: right;&quot;&gt;and has been modified/updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at&lt;/span&gt;&lt;span style=&quot;text-align: right;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;http://records.sigmm.org/&quot; style=&quot;text-align: right;&quot; target=&quot;_blank&quot;&gt;ACM SIGMM Records&lt;/a&gt;&lt;span style=&quot;text-align: right;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij0QIAZ_UVFAPnPYytvUiJpcBnV-GBGdKUvJpvGk-V74pMIt95egAN0wNemqx-33iLCDvZWGWTzkfqOQfhzICkNaZFaWnIY6AITUagy8S3yj1ZBDc9x8_fgu-2KNs1a-1PLdil-III06mZeEU6tWLYzZq4vu5q-8_kGbZUPlTk986B86PFKACf7GK__Qk/s1200/MPEG-Logo-1.png&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;416&quot; data-original-width=&quot;1200&quot; height=&quot;111&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij0QIAZ_UVFAPnPYytvUiJpcBnV-GBGdKUvJpvGk-V74pMIt95egAN0wNemqx-33iLCDvZWGWTzkfqOQfhzICkNaZFaWnIY6AITUagy8S3yj1ZBDc9x8_fgu-2KNs1a-1PLdil-III06mZeEU6tWLYzZq4vu5q-8_kGbZUPlTk986B86PFKACf7GK__Qk/s320/MPEG-Logo-1.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;The 144th MPEG meeting was held in Hannover, Germany! For those interested, the press release with all the details is &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-144/&quot; target=&quot;_blank&quot;&gt;available&lt;/a&gt;. It’s great to see progress being made in person (cf. also the group pictures below).&lt;/p&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgf3NKANwd72uHp2YvEeDzMfUH4U7SwrfVr4tatL9whHHD6dWNWQfTbVW-Zvt2jRVPwI33NrsuwA6fpNOZMmb44WYLqKfgQY-YzRY-d9AxIM0k0L2xHHggBT5-Gd9F7XRwu0iwvKUdlO7yBfIHRm8ReEhPIvWmszT1OzVO0Qb66rwT9r4KXPbVTDq1RbbI/s5286/MPEG144.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img alt=&quot;Attendees of the 144th MPEG meeting in Hannover, Germany.&quot; border=&quot;0&quot; data-original-height=&quot;1876&quot; data-original-width=&quot;5286&quot; height=&quot;227&quot; 
src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgf3NKANwd72uHp2YvEeDzMfUH4U7SwrfVr4tatL9whHHD6dWNWQfTbVW-Zvt2jRVPwI33NrsuwA6fpNOZMmb44WYLqKfgQY-YzRY-d9AxIM0k0L2xHHggBT5-Gd9F7XRwu0iwvKUdlO7yBfIHRm8ReEhPIvWmszT1OzVO0Qb66rwT9r4KXPbVTDq1RbbI/w640-h227/MPEG144.jpg&quot; title=&quot;Attendees of the 144th MPEG meeting in Hannover, Germany.&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;Attendees of the 144th MPEG meeting in Hannover, Germany.&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;The main outcome of this meeting is as follows:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;MPEG issues Call for Learning-Based Video Codecs for Study of Quality Assessment&lt;/li&gt;&lt;li&gt;MPEG evaluates Call for Proposals on Feature Compression for Video Coding for Machines&lt;/li&gt;&lt;li&gt;MPEG progresses ISOBMFF-related Standards for the Carriage of Network Abstraction Layer Video Data&lt;/li&gt;&lt;li&gt;MPEG enhances the Support of Energy-Efficient Media Consumption&lt;/li&gt;&lt;li&gt;MPEG ratifies the Support of Temporal Scalability for Geometry-based Point Cloud Compression&lt;/li&gt;&lt;li&gt;MPEG reaches the First Milestone for the Interchange of 3D Graphics Formats&lt;/li&gt;&lt;li&gt;MPEG announces Completion of Coding of Genomic Annotations&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We have modified the press release to cater to the readers of ACM SIGMM Records and highlighted research on video technologies. This edition of the MPEG column focuses on MPEG Systems-related standards and visual quality assessment. As usual, the column will end with an update on MPEG-DASH.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Visual Quality Assessment&lt;/h2&gt;&lt;p&gt;MPEG does not create standards in the visual quality assessment domain. 
However, it conducts visual quality assessments for its standards during various stages of the standardization process. For instance, it evaluates responses to calls for proposals, conducts verification tests of its final standards, and so on. MPEG Visual Quality Assessment (AG 5) issued an open call to study quality assessment for learning-based video codecs. AG 5 has been conducting subjective quality evaluations for coded video content and studying their correlation with objective quality metrics. Most of these studies have focused on the High Efficiency Video Coding (HEVC) and Versatile Video Coding (VVC) standards. To facilitate the study of visual quality, MPEG maintains the Compressed Video for the study of Quality Metrics (CVQM) dataset.&lt;/p&gt;&lt;p&gt;With the recent advancements in learning-based video compression algorithms, MPEG is now studying compression using these codecs. It is expected that reconstructed videos compressed using learning-based codecs will have different types of distortion compared to those induced by traditional block-based motion-compensated video coding designs. To gain a deeper understanding of these distortions and their impact on visual quality, MPEG has issued a public call related to learning-based video codecs. MPEG is open to inputs in response to the call and will invite responses that meet the call’s requirements to submit compressed bitstreams for further study of their subjective quality and potential inclusion into the CVQM dataset.&lt;/p&gt;&lt;p&gt;Considering the rapid advancements in the development of learning-based video compression algorithms, MPEG will keep this call open and anticipates future updates to the call.&lt;/p&gt;&lt;p&gt;Interested parties are kindly requested to contact the MPEG AG 5 Convenor Mathias Wien (&lt;a href=&quot;mailto:wien@lfb.rwth-aachen.de&quot;&gt;wien@lfb.rwth-aachen.de&lt;/a&gt;) and submit responses for review at the 145th MPEG meeting in January 2024. 
Further details are given in the call, issued as AG 5 document &lt;a href=&quot;https://www.mpeg.org/wp-content/uploads/mpeg_meetings/144_Hannover/w23270.zip&quot;&gt;N 104&lt;/a&gt; and available from the &lt;a href=&quot;https://www.mpeg.org/structure/visual-quality-assessment/&quot;&gt;mpeg.org&lt;/a&gt; website.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Learning-based data compression (e.g., for image, audio, video content) is a hot research topic. Research on this topic relies on datasets offering a set of common test sequences, sometimes also common test conditions, that are publicly available and allow for comparison across different schemes. MPEG’s Compressed Video for the study of Quality Metrics (CVQM) dataset is such a dataset, available here, and ready to be used also by researchers and scientists outside of MPEG. The call mentioned above is open for everyone inside/outside of MPEG and allows researchers to participate in international standards efforts (note: to attend meetings, one must become a delegate of a national body).&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG Systems-related Standards&lt;/h2&gt;&lt;p&gt;At the 144th MPEG meeting, MPEG Systems (WG 3) produced three newsworthy items as follows:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Progression of ISOBMFF-related standards for the carriage of Network Abstraction Layer (NAL) video data.&lt;/li&gt;&lt;li&gt;Enhancement of the support of energy-efficient media consumption.&lt;/li&gt;&lt;li&gt;Support of temporal scalability for geometry-based Point Cloud Compression (G-PCC).&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;ISO/IEC 14496-15, a part of the family of ISOBMFF-related standards, defines the carriage of Network Abstraction Layer (NAL) unit structured video data such as Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC), Essential Video Coding (EVC), and Low Complexity 
Enhancement Video Coding (LCEVC). This standard has been further improved with the approval of the Final Draft Amendment (FDAM), which adds support for enhanced features such as Picture-in-Picture (PiP) use cases enabled by VVC.&lt;/p&gt;&lt;p&gt;In addition to the improvements made to ISO/IEC 14496-15, separately developed amendments have been consolidated in the 7th edition of the standard. This edition has been promoted to Final Draft International Standard (FDIS), marking the final milestone of the formal standard development.&lt;/p&gt;&lt;p&gt;Another important standard in development is the 2nd edition of ISO/IEC 14496-32 (file format reference software and conformance). This standard, currently at the Committee Draft (CD) stage of development, is planned to be completed and reach the status of Final Draft International Standard (FDIS) by the beginning of 2025. This standard will be essential for industry professionals who require a reliable and standardized method of verifying the conformance of their implementation.&lt;/p&gt;&lt;p&gt;MPEG Systems (WG 3) also promoted ISO/IEC 23001-11 (energy-efficient media consumption (green metadata)) Amendment 1 to Final Draft Amendment (FDAM). This amendment introduces energy-efficient media consumption (green metadata) for Essential Video Coding (EVC) and defines metadata that enables a reduction in decoder power consumption. At the same time, ISO/IEC 23001-11 Amendment 2 has been promoted to the Committee Draft Amendment (CDAM) stage of development. This amendment introduces a novel way to carry metadata about display power reduction encoded as a video elementary stream interleaved with the video it describes. The amendment is expected to be completed and reach the status of Final Draft Amendment (FDAM) by the beginning of 2025.&lt;/p&gt;&lt;p&gt;Finally, MPEG Systems (WG 3) promoted ISO/IEC 23090-18 (carriage of geometry-based point cloud compression data) Amendment 1 to Final Draft Amendment (FDAM). 
This amendment enables the compression of a single elementary stream of point cloud data using ISO/IEC 23090-9 (geometry-based point cloud compression) and storing it in more than one track of ISO Base Media File Format (ISOBMFF)-based files. This enables support for applications that require multiple frame rates within a single file and introduces a track grouping mechanism to indicate multiple tracks carrying a specific temporal layer of a single elementary stream separately.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;: MPEG Systems usually provides standards on top of existing compression standards, enabling efficient storage and delivery of media data (among others). Researchers may use these standards (including reference software and conformance bitstreams) to conduct research in the general area of multimedia systems (cf. &lt;a href=&quot;https://acmmmsys.org/&quot;&gt;ACM MMSys&lt;/a&gt;) or, specifically on green multimedia systems (cf. &lt;a href=&quot;https://athena.itec.aau.at/gmsys24/&quot;&gt;ACM GMSys&lt;/a&gt;).&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-DASH Updates&lt;/h2&gt;&lt;p&gt;The current status of MPEG-DASH is shown in the figure below with only minor updates compared to the &lt;a href=&quot;https://multimediacommunication.blogspot.com/2023/08/mpeg-news-report-from-143rd-meeting.html&quot;&gt;last meeting&lt;/a&gt;.&lt;/p&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZRojgSFu5xvnifH5PNsNkL_HauLIYnjCCuumL7bpfBmfX9yTKJl1MnW-geh9MER9ueN_t3yY1DlbZG7iRrLoIfJTq_gJMkiAewyJAmHBS16GMlzWH5UTsc6pqu3I-mge7JIxR53z4c8Rctk1Lb12ZvPtnNZLZ2JI9vRkVQoYXMW2TqhvmhcUe-p2qSLU/s1024/MPEG-DASH-standard-status.png&quot; imageanchor=&quot;1&quot; 
style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;576&quot; data-original-width=&quot;1024&quot; height=&quot;360&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZRojgSFu5xvnifH5PNsNkL_HauLIYnjCCuumL7bpfBmfX9yTKJl1MnW-geh9MER9ueN_t3yY1DlbZG7iRrLoIfJTq_gJMkiAewyJAmHBS16GMlzWH5UTsc6pqu3I-mge7JIxR53z4c8Rctk1Lb12ZvPtnNZLZ2JI9vRkVQoYXMW2TqhvmhcUe-p2qSLU/w640-h360/MPEG-DASH-standard-status.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;MPEG-DASH Status, October 2023.&lt;br /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;In particular, the 6th edition of MPEG-DASH is scheduled for 2024 but may not include all amendments under development. An overview of existing amendments can be found in the &lt;a href=&quot;https://multimediacommunication.blogspot.com/2023/08/mpeg-news-report-from-143rd-meeting.html&quot;&gt;blog post from the last meeting&lt;/a&gt;. Current amendments have been (slightly) updated and progressed toward completion in the upcoming meetings. The signaling of haptics in DASH has been discussed and accepted for inclusion in the Technologies under Consideration (TuC) document. The TuC document comprises candidate technologies for possible future amendments to the MPEG-DASH standard and is publicly available &lt;a href=&quot;https://www.mpeg.org/standards/MPEG-DASH/1/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;: MPEG-DASH has been heavily researched in the multimedia systems, quality, and communications research communities. Adding haptics to MPEG-DASH would provide another dimension worth considering within research, including, but not limited to, performance aspects and Quality of Experience (QoE).&lt;/p&gt;&lt;p&gt;The 145th MPEG meeting will be online from January 22-26, 2024. 
Click &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-145/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2023/11/mpeg-news-report-from-144th-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij0QIAZ_UVFAPnPYytvUiJpcBnV-GBGdKUvJpvGk-V74pMIt95egAN0wNemqx-33iLCDvZWGWTzkfqOQfhzICkNaZFaWnIY6AITUagy8S3yj1ZBDc9x8_fgu-2KNs1a-1PLdil-III06mZeEU6tWLYzZq4vu5q-8_kGbZUPlTk986B86PFKACf7GK__Qk/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-7461761178036895729.post-7119336910888076439</guid><pubDate>Fri, 11 Aug 2023 10:43:00 +0000</pubDate><atom:updated>2023-08-11T12:43:35.720+02:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">MPEG</category><category domain="http://www.blogger.com/atom/ns#">press release</category><title>MPEG news: a report from the 143rd meeting</title><description>&lt;p style=&quot;text-align: right;&quot;&gt;&lt;span style=&quot;font-size: x-small;&quot;&gt;The original blog post can be found at the &lt;a href=&quot;https://bitmovin.com/blog/&quot; target=&quot;_blank&quot;&gt;Bitmovin Techblog&lt;/a&gt; and has been modified/updated here to focus on and highlight research aspects. 
Additionally, this version of the blog post will also be posted at &lt;a href=&quot;http://records.sigmm.org/&quot; target=&quot;_blank&quot;&gt;ACM SIGMM Records&lt;/a&gt;.&lt;/span&gt;&lt;/p&gt;&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij0QIAZ_UVFAPnPYytvUiJpcBnV-GBGdKUvJpvGk-V74pMIt95egAN0wNemqx-33iLCDvZWGWTzkfqOQfhzICkNaZFaWnIY6AITUagy8S3yj1ZBDc9x8_fgu-2KNs1a-1PLdil-III06mZeEU6tWLYzZq4vu5q-8_kGbZUPlTk986B86PFKACf7GK__Qk/s1200/MPEG-Logo-1.png&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;416&quot; data-original-width=&quot;1200&quot; height=&quot;111&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij0QIAZ_UVFAPnPYytvUiJpcBnV-GBGdKUvJpvGk-V74pMIt95egAN0wNemqx-33iLCDvZWGWTzkfqOQfhzICkNaZFaWnIY6AITUagy8S3yj1ZBDc9x8_fgu-2KNs1a-1PLdil-III06mZeEU6tWLYzZq4vu5q-8_kGbZUPlTk986B86PFKACf7GK__Qk/s320/MPEG-Logo-1.png&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://multimediacommunication.blogspot.com/2013/04/mpeg-news-archive.html&quot;&gt;MPEG News Archive&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;The 143rd MPEG meeting took place in person in Geneva, Switzerland. 
The official press release can be accessed &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-143/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; and includes the following details:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;MPEG finalizes the Carriage of Uncompressed Video and Images in ISOBMFF&lt;/li&gt;&lt;li&gt;MPEG reaches the First Milestone for two ISOBMFF Enhancements&lt;/li&gt;&lt;li&gt;MPEG ratifies Third Editions of VVC and VSEI&lt;/li&gt;&lt;li&gt;MPEG reaches the First Milestone of AVC (11th Edition) and HEVC Amendment&lt;/li&gt;&lt;li&gt;MPEG Genomic Coding extended to support Joint Structured Storage and Transport of Sequencing Data, Annotation Data, and Metadata&lt;/li&gt;&lt;li&gt;MPEG completes Reference Software and Conformance for Geometry-based Point Cloud Compression&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We have adjusted the press release to suit the audience here and emphasized research on video technologies. This blog post centers around ISOBMFF and video codecs. As always, I will conclude with an update on MPEG-DASH.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;ISOBMFF Enhancements&lt;/h2&gt;&lt;p&gt;The ISO Base Media File Format (ISOBMFF) supports the carriage of a wide range of media data such as video, audio, point clouds, haptics, etc., which has now been further extended to uncompressed videos and images.&lt;/p&gt;&lt;p&gt;ISO/IEC 23001-17 – Carriage of uncompressed video and images in ISOBMFF – specifies how uncompressed 2D image and video data is carried in files that comply with the ISOBMFF family of standards. This encompasses a range of data types, including monochromatic and color data, transparency (alpha) information, and depth information. 
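To make the box-structured container underlying this carriage concrete, here is a minimal Python sketch (mine, not part of the standard) that walks the top-level boxes of an ISOBMFF file, e.g., ftyp, moov, and mdat, handling the 32-bit size, 64-bit largesize, and size-zero (to end of file) cases defined by the base file format:

```python
import struct

def walk_boxes(data, offset=0, end=None):
    """Yield (box_type, payload_offset, payload_size) for top-level ISOBMFF boxes."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)   # 32-bit box size
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        header = 8
        if size == 1:   # 64-bit largesize follows the type field
            size, = struct.unpack_from(">Q", data, offset + 8)
            header = 16
        elif size == 0:  # box extends to the end of the file
            size = end - offset
        yield box_type, offset + header, size - header
        offset += size
```

A demultiplexer for uncompressed video would then descend from moov into the track and sample-table boxes to recover the timing, color space, and aspect-ratio metadata mentioned above.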
The standard enables the industry to effectively exchange uncompressed video and image data while utilizing all additional information provided by the ISOBMFF, such as timing, color space, and sample aspect ratio for interoperable interpretation and/or display of uncompressed video and image data.&lt;/p&gt;&lt;p&gt;ISO/IEC 14496-15 (based on ISOBMFF) provides the basis for &quot;network abstraction layer (NAL) unit structured video coding formats&quot; such as AVC, HEVC, and VVC. The current version is the 6th edition, which has been amended to support neural-network post-filter supplemental enhancement information (SEI) messages. This amendment defines the carriage of the neural-network post-filter characteristics (NNPFC) SEI messages and the neural-network post-filter activation (NNPFA) SEI messages to enable the delivery of (i) a base post-processing filter and (ii) a series of neural network updates synchronized with the input video pictures/frames.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;: While the former, the carriage of uncompressed video and images in ISOBMFF, may seem an obvious capability for a file format, the latter enables the use of neural-network-based post-processing filters to enhance video quality after decoding, which is an active field of research. The current extensions to the file format provide a baseline for such evaluations (cf. also the next section).&amp;nbsp;&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Video Codec Enhancements&lt;/h2&gt;&lt;p&gt;MPEG finalized the specifications of the third editions of the Versatile Video Coding (VVC, ISO/IEC 23090-3) and the Versatile Supplemental Enhancement Information (VSEI, ISO/IEC 23002-7) standards.
Additionally, MPEG issued the Committee Draft (CD) text of the eleventh edition of the Advanced Video Coding (AVC, ISO/IEC 14496-10) standard and the Committee Draft Amendment (CDAM) text on top of the High Efficiency Video Coding standard (HEVC, ISO/IEC 23008-2).&lt;/p&gt;&lt;p&gt;These new editions and amendments introduce additional SEI messages, including two systems-related SEI messages: (a) one for signaling of green metadata as specified in ISO/IEC 23001-11 and (b) one for signaling of an alternative video decoding interface for immersive media as specified in ISO/IEC 23090-13. Furthermore, the neural-network post-filter characteristics (NNPFC) SEI message and the neural-network post-filter activation (NNPFA) SEI message have been added to AVC, HEVC, and VVC.&lt;/p&gt;&lt;p&gt;The two SEI messages for describing and activating post-filters using neural network technology in video bitstreams could, for example, be used for reducing coding noise, spatial and temporal upsampling (i.e., super-resolution and frame interpolation), color improvement, or general denoising of the decoder output. The description of the neural network architecture itself is based on MPEG’s neural network representation standard (ISO/IEC 15938-17). As results from an exploration experiment have shown, neural-network-based post-filters can deliver better results than conventional filtering methods. Processes for invoking these new post-filters have already been tested in a software framework and will be made available in an upcoming version of the VVC reference software (ISO/IEC 23090-16).&lt;/p&gt;&lt;p&gt;&lt;b&gt;Research aspects&lt;/b&gt;: SEI messages for neural network post-filters (NNPF) for AVC, HEVC, and VVC, together with systems support within the ISOBMFF, are a powerful tool(box) for interoperable visual quality enhancements at the client.
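Conceptually, the NNPFC message characterizes a post-filter and the NNPFA message activates it for particular pictures. The following schematic Python sketch illustrates only this gating idea, not the normative process: the "filter" is a trivial 3x3 box blur standing in for a real neural network, and all function names are illustrative.

```python
def box_blur(frame):
    """Toy stand-in for an NNPFC-characterized filter: 3x3 mean over 2D luma samples."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [frame[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

def post_process(decoded_frames, nnpfc_filter, nnpfa_active):
    """Apply the characterized filter only to the pictures activated per NNPFA."""
    return [nnpfc_filter(f) if active else f
            for f, active in zip(decoded_frames, nnpfa_active)]
```

In a real client, the filter would be a neural network described via ISO/IEC 15938-17 and the activation flags would be parsed from NNPFA SEI messages in the bitstream.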
This tool(box) will (i) allow for Quality of Experience (QoE) assessments and (ii) enable the analysis thereof across codecs once integrated within the corresponding reference software.&amp;nbsp;&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;MPEG-DASH Updates&lt;/h2&gt;&lt;p&gt;The current status of MPEG-DASH is depicted in the figure below:&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjbjATH1iR5mQhu26m25k_U4eq0SNSZwD3vfxXn2MYNWZ0P7uLdrEWlGg2jmTo_QvjJWUKwlvJao_YReFlWyxFeDpPUC7MEAgs5M8G7Sn2b7yPedWyyXf9ZrchbWS2oDTk2g3SAftBHPIFBcQnat9hy_nw-R_f8Aw93yeqKh5Ikwzp0zKOoxP0u7H6Ncw/s1024/MPEG-DASH-standard-status-0723v2.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;576&quot; data-original-width=&quot;1024&quot; height=&quot;360&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjbjATH1iR5mQhu26m25k_U4eq0SNSZwD3vfxXn2MYNWZ0P7uLdrEWlGg2jmTo_QvjJWUKwlvJao_YReFlWyxFeDpPUC7MEAgs5M8G7Sn2b7yPedWyyXf9ZrchbWS2oDTk2g3SAftBHPIFBcQnat9hy_nw-R_f8Aw93yeqKh5Ikwzp0zKOoxP0u7H6Ncw/w640-h360/MPEG-DASH-standard-status-0723v2.png&quot; width=&quot;640&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;&lt;p&gt;The latest edition of MPEG-DASH is the 5th edition (ISO/IEC 23009-1:2022) which is publicly/freely available &lt;a href=&quot;https://standards.iso.org/ittf/PubliclyAvailableStandards/c083314_ISO_IEC%2023009-1_2022(en).zip&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. There are currently three amendments under development:&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;ISO/IEC 23009-1:2022 Amendment 1: Preroll, nonlinear playback, and other extensions. 
This amendment has already been ratified and is currently being integrated into the 5th edition of part 1 of the MPEG-DASH specification.&lt;/li&gt;&lt;li&gt;ISO/IEC 23009-1:2022 Amendment 2: EDRAP streaming and other extensions. EDRAP stands for Extended Dependent Random Access Point; the Draft Amendment (DAM) was approved at this meeting. EDRAP increases the coding efficiency for random access and has been adopted within VVC.&lt;/li&gt;&lt;li&gt;ISO/IEC 23009-1:2022 Amendment 3: Segment sequences for random access and switching. This amendment is at the Committee Draft Amendment (CDAM) stage, the first milestone of the formal standardization process, and aims to improve tune-in time for low-latency streaming.&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Additionally, the MPEG Technologies under Consideration (TuC) document comprises a few new work items, such as content selection and adaptation logic based on device orientation and the signaling of haptics data within DASH.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Finally, part 9 of MPEG-DASH -- redundant encoding and packaging for segmented live media (REAP) -- has been promoted to Draft International Standard (DIS). It is expected to be finalized in the upcoming meetings.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;b&gt;Research aspects&lt;/b&gt;: Random access has been extensively evaluated in the context of video coding, but not in the context of (low-latency) streaming. Additionally, the TuC item on content selection and adaptation logic based on device orientation raises QoE issues to be further explored.&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The 144th MPEG meeting will be held in Hannover from October 16-20, 2023.
Click &lt;a href=&quot;https://www.mpeg.org/meetings/mpeg-144/&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt; for more information about MPEG meetings and their developments.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</description><link>http://multimediacommunication.blogspot.com/2023/08/mpeg-news-report-from-143rd-meeting.html</link><author>noreply@blogger.com (Unknown)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEij0QIAZ_UVFAPnPYytvUiJpcBnV-GBGdKUvJpvGk-V74pMIt95egAN0wNemqx-33iLCDvZWGWTzkfqOQfhzICkNaZFaWnIY6AITUagy8S3yj1ZBDc9x8_fgu-2KNs1a-1PLdil-III06mZeEU6tWLYzZq4vu5q-8_kGbZUPlTk986B86PFKACf7GK__Qk/s72-c/MPEG-Logo-1.png" height="72" width="72"/><thr:total>0</thr:total></item></channel></rss>