<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>

<channel>
	<title>Multimedia Grand Challenge 2010</title>
	<atom:link href="http://comminfo.rutgers.edu/conferences/mmchallenge/feed/" rel="self" type="application/rss+xml" />
	<link>http://comminfo.rutgers.edu/conferences/mmchallenge</link>
	<description>All you need to know about the Multimedia Grand Challenge for ACM Multimedia 2010</description>
	<pubDate>Wed, 27 Oct 2010 17:34:45 +0000</pubDate>
	<generator>http://wordpress.org/?v=2.7</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>And the winner is&#8230;</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/10/27/and-the-winner-is/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/10/27/and-the-winner-is/#comments</comments>
		<pubDate>Wed, 27 Oct 2010 17:34:45 +0000</pubDate>
		<dc:creator>cees</dc:creator>
		
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=387</guid>
		<description><![CDATA[Thanks to all participants, jury members, and the audience, the 2010 Multimedia Grand Challenge is now history. It was again great fun, and good to see that all participants put so much effort into crafting an inspiring and interesting pitch. Participation is more important than winning, but nonetheless our industry partners selected 3 winners. The [...]]]></description>
			<content:encoded><![CDATA[<p>Thanks to all participants, jury members, and the audience, the 2010 Multimedia Grand Challenge is now history. It was again great fun, and good to see that all participants put so much effort into crafting an inspiring and interesting pitch. Participation is more important than winning, but nonetheless our industry partners selected 3 winners. The winners are (presenters in bold):</p>
<p><img src="http://staff.science.uva.nl/~cgmsnoek/mmgc2010/GLD.gif" alt="" width="55" align="right" /><br />
1. <strong>Jana Machajdik</strong>, Allan Hanbury, Julian Stöttinger. <em>Understanding Affect in Images</em>.</p>
<p><img src="http://staff.science.uva.nl/~cgmsnoek/mmgc2010/SLV.gif" alt="" width="55" align="right" /><br />
2. Wei Song, <strong>Dian Tjondronegoro</strong>, Ivan Himawan. <em>ROI-based Content Adaptation for Mobile Device Usage of Video Conferencing</em>.</p>
<p><img src="http://staff.science.uva.nl/~cgmsnoek/mmgc2010/BRZ.gif" alt="" width="55" align="right" /><br />
3. <strong> Julien Law-To</strong>, Gregory Grefenstette, Jean-Luc Gauvain, Guillaume Gravier, Lori Lamel, Julien Despres. <em>Introducing topic segmentation and segmented-based browsing tools into a content based video retrieval system</em>.</p>
<p>Congratulations to the winners and hope to see you all again next year!</p>
<p>-Cees &amp; Malcolm</p>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/10/27/and-the-winner-is/feed/</wfw:commentRss>
		</item>
		<item>
		<title>Multimedia Grand Challenge Program</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/09/03/multimedia-grand-challenge-program/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/09/03/multimedia-grand-challenge-program/#comments</comments>
		<pubDate>Fri, 03 Sep 2010 20:15:46 +0000</pubDate>
		<dc:creator>cees</dc:creator>
		
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=382</guid>
		<description><![CDATA[The finalists for the Multimedia Grand Challenge are known. We received many good submissions this year. We have given preference to contributions that have been backed by an accepted paper from the regular program and included only the most suitable ones that

Significantly address one of the industry grand challenges.
Depict working, presentable systems or demos.
Describe why [...]]]></description>
			<content:encoded><![CDATA[<p>The finalists for the Multimedia Grand Challenge are known. We received many good submissions this year. We have given preference to contributions that have been backed by an accepted paper from the regular program and included only the most suitable ones that</p>
<ul>
<li>Significantly address one of the industry grand challenges.</li>
<li>Depict working, presentable systems or demos.</li>
<li>Describe why the system presents a novel and interesting solution.</li>
</ul>
<p>The 2010 Multimedia Grand Challenge will feature the following presentations:</p>
<table id="ossc_papers" border="0">
<tbody>
<tr>
<th scope="col">Title</th>
<th scope="col">Authors</th>
<th scope="col">Challenge</th>
</tr>
<tr>
<td>A low-cost performance analysis and coaching system for tennis</td>
<td>Philip T Kelly; Petros Daras; Noel. E. O’Connor; Juan Diego Pérez-Moneo Agapito</td>
<td>3DLife</td>
</tr>
<tr>
<td class="alt">Human Body Tracking of Tennis Players using Hierarchical Particle Filtering and Variable Windows</td>
<td class="alt">Adolfo López-Méndez; Marcel Alcoverro; Montse Pardas; Josep R. Casas</td>
<td class="alt">3DLife</td>
</tr>
<tr>
<td>Audience Dependent Photo Collection Summarization</td>
<td>Pere Obrador, Rodrigo de Oliveira, Nuria Oliver</td>
<td>CeWe</td>
</tr>
<tr>
<td class="alt">Social Game Epitome versus Automatic Visual Analysis</td>
<td class="alt">Peter Vajda; Ivan Ivanov; Lutz Goldmann; Touradj Ebrahimi</td>
<td class="alt">HP</td>
</tr>
<tr>
<td>Learning-to-Photograph towards HP Challenge</td>
<td>Bin Cheng, Bingbing Ni, Shuicheng Yan, Qi Tian</td>
<td>HP</td>
</tr>
<tr>
<td class="alt">Using Android and Indoor Localization for Diaries</td>
<td class="alt">Eladio Martin; Oriol Vinyals; Gerald Friedland; Ruzena Bajcsy</td>
<td class="alt">Google diaries</td>
</tr>
<tr>
<td>Improving Personal Diaries Using Social Audio Features</td>
<td>Michael Kuhn, Roger Wattenhofer, Samuel Welten</td>
<td>Google diaries</td>
</tr>
<tr>
<td class="alt">Google Challenge 2010: Efficient Genre-specific Semantic Video Indexing</td>
<td class="alt">Jun Wu; Marcel Worring</td>
<td class="alt">Google video genre</td>
</tr>
<tr>
<td>Video Classification based on Contextual Visual Vocabulary</td>
<td>Shiliang Zhang, Qi Tian, Gang Hua, Qingming Huang, Shuqiang Jiang, Wen Gao</td>
<td>Google video genre</td>
</tr>
<tr>
<td class="alt">VIRaL: Visual Image Retrieval and Localization</td>
<td class="alt">Yannis Avrithis, Yannis Kalantidis, Giorgos Tolias</td>
<td class="alt">Nokia</td>
</tr>
<tr>
<td>ROI-based Content Adaptation for Mobile Device Usage of Video Conferencing</td>
<td>Wei Song, Dian Tjondronegoro, Ivan Himawan</td>
<td>Radvision adapt</td>
</tr>
<tr>
<td class="alt">Gaze Awareness and Interaction Support in Presentations: Video Conference Experience Grand Challenge Statement</td>
<td class="alt">Kar-Han Tan; Dan Gelb; Ramin Samadani; Ian N Robinson; Bruce Culbertson; John Apostolopoulos</td>
<td class="alt">Radvision video conf</td>
</tr>
<tr>
<td>Rendering Panorama in Mobile Video Conferencing</td>
<td>Shu Shi; Zhengyou Zhang</td>
<td>Radvision video conf</td>
</tr>
<tr>
<td class="alt">Multi-Scale Entropy analysis of Dominance in Social Creative Activities</td>
<td class="alt">Donald Glowinski; Paolo Coletta; Maurizio Mancini</td>
<td class="alt">Radvision video conf</td>
</tr>
<tr>
<td>Searching and Browsing Social Images through iAVATAR</td>
<td>Aixin Sun; Sourav Bhowmick</td>
<td>Yahoo classification</td>
</tr>
<tr>
<td class="alt">Understanding Affect in Images</td>
<td class="alt">Jana Machajdik, Allan Hanbury, Julian Stöttinger</td>
<td class="alt">Yahoo classification</td>
</tr>
<tr>
<td>Introducing topic segmentation and segmented-based browsing tools into a content based video retrieval system</td>
<td>Julien Law-To; Gregory Grefenstette; Jean-Luc Gauvain; Guillaume Gravier; Lori Lamel; Julien Despres</td>
<td>Yahoo segmentation</td>
</tr>
<tr>
<td class="alt">Towards Yahoo! Challenge: A Generic Event Detection and Segmentation System for Video Navigation and Search</td>
<td class="alt">Tianzhu Zhang, Changsheng Xu, Guangyu Zhu, Si Liu, Hanqing Lu</td>
<td class="alt">Yahoo segmentation</td>
</tr>
</tbody>
</table>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/09/03/multimedia-grand-challenge-program/feed/</wfw:commentRss>
		</item>
		<item>
		<title>Extra prize money announcement</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/07/01/extra-prize-money-announcement/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/07/01/extra-prize-money-announcement/#comments</comments>
		<pubDate>Thu, 01 Jul 2010 20:58:07 +0000</pubDate>
		<dc:creator>cees</dc:creator>
		
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=376</guid>
		<description><![CDATA[We are happy to inform you that this year&#8217;s Grand Challenge will award three prizes:

Gold medal &#8212; 1500 USD
Silver medal &#8212; 1000 USD
Bronze medal &#8212; 500 USD

Researchers are encouraged to submit working systems in response to the challenges to [...]]]></description>
			<content:encoded><![CDATA[<p>We are happy to inform you that this year&#8217;s Grand Challenge will award three prizes:</p>
<ul>
<li>Gold medal &#8212; 1500 USD</li>
<li>Silver medal &#8212; 1000 USD</li>
<li>Bronze medal &#8212; 500 USD</li>
</ul>
<p>Researchers are encouraged to submit working systems in response to the challenges to win the Grand Challenge competition! A number of solutions (perhaps 10-20) will be selected as finalists and invited to describe their work, demonstrate their solution, and argue for the paper&#8217;s success in the Grand Challenge Session at ACM Multimedia 2010 in Florence. Each finalist will have several minutes to present their case. Final winners will be chosen by industry scientists, engineers, and business luminaries.</p>
<p>Prepare your submissions according to <a href="http://comminfo.rutgers.edu/conferences/mmchallenge/submission/">these guidelines</a>, and submit your grand challenge participation before August 1st.</p>
<p>See you in Florence.</p>
<p>Malcolm &amp; Cees.</p>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/07/01/extra-prize-money-announcement/feed/</wfw:commentRss>
		</item>
		<item>
		<title>The challenge is on!</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/15/the-challenge-is-on/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/15/the-challenge-is-on/#comments</comments>
		<pubDate>Mon, 15 Mar 2010 11:32:11 +0000</pubDate>
		<dc:creator>cees</dc:creator>
		
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=371</guid>
		<description><![CDATA[What problems do Google, Yahoo!, HP, Radvision, CeWe, Nokia and other companies see driving the future of multimedia? The Multimedia Grand Challenge is a set of problems and issues from these industry leaders, geared to engage the Multimedia research community towards solving relevant, interesting and challenging questions in the multimedia industry’s 2-5 year horizon. The [...]]]></description>
			<content:encoded><![CDATA[<p>What problems do Google, Yahoo!, HP, Radvision, CeWe, Nokia and other companies see driving the future of multimedia? The Multimedia Grand Challenge is a set of problems and issues from these industry leaders, geared to engage the Multimedia research community towards solving relevant, interesting and challenging questions in the multimedia industry’s 2-5 year horizon. The Grand Challenge was first presented as part of ACM Multimedia 2009, and it will be presented again in slightly modified form at ACM Multimedia 2010. Researchers are encouraged to submit working systems in response to these challenges to win the Grand Challenge competition!</p>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/15/the-challenge-is-on/feed/</wfw:commentRss>
		</item>
		<item>
		<title>CeWe Challenge 2010: Automatic Theme Identification of Photo Sets for Digital Print Products</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/11/cewe-challenge/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/11/cewe-challenge/#comments</comments>
		<pubDate>Thu, 11 Mar 2010 16:20:44 +0000</pubDate>
		<dc:creator>sabine</dc:creator>
		
		<category><![CDATA[CeWe]]></category>

		<category><![CDATA[image]]></category>

		<category><![CDATA[photos]]></category>

		<category><![CDATA[prints]]></category>

		<guid isPermaLink="false">http://www.scils.rutgers.edu/conferences/mmchallenge/?p=89</guid>
		<description><![CDATA[With the advent of digital photography, the number of photos taken has increased tremendously. While only recently, in the analogue days, a small number of films documented a two-week holiday, we nowadays take and store hundreds or even thousands of digital photos. As in the analogue days, a common way of preserving memories or [...]]]></description>
			<content:encoded><![CDATA[<p>With the advent of digital photography, the number of photos taken has increased tremendously. While only recently, in the analogue days, a small number of films documented a two-week holiday, we nowadays take and store hundreds or even thousands of digital photos. As in the analogue days, a common way of preserving memories or making them available to others is individual photo products like posters, calendars, or photo books. Today, the processes of designing these products have also become digital, and thus many companies are offering digitally designed counterparts of the former analogue photo products. Popular examples are photo books like the CEWE PHOTOBOOK. In these books the digital photos are arranged over the pages of the book in different designs.</p>
<h3>Concrete problem</h3>
<p>When a user wants to create a CEWE PHOTOBOOK, it can be designed with the help of a dedicated software application. This software first assists the user in selecting the photos, laying them out on the pages, and adding titles and captions. The user may also design each individual page with regard to the number of images, their position, size, rotation, and so on. A concept not unique to the CEWE PHOTOBOOK software is the use of different styles for different kinds of photo books. Such a style influences how photos are laid out on the pages, which backgrounds are chosen, and which text font is used for headings and captions. The CEWE PHOTOBOOK software offers about 100 styles to suit different user tastes and different types of photo books. These styles are defined and designed by skilled designers who are experts in both general layout principles and photo book users&#8217; needs. However, although this is a huge enhancement over purely manual layout, the user still has to manually select the style for his or her photo set out of a large and potentially overwhelming set of styles.</p>
<h3>The challenge</h3>
<p>In order to further assist the process of photo book design, CEWE is seeking ways to simplify this style selection step. The CEWE PHOTOBOOK software today provides different styles for different events (such as parties, holidays, chronicles, or family events) and different seasons (such as Christmas, summer, Easter), as well as styles for different design tastes (classical, funky, cute, …). These styles are organized in different categories. The challenge CEWE wants to address is supporting users in selecting and assigning the right style to an individual photo set. The goal, however, is not necessarily to automatically determine the one and only perfect style, but rather to provide the user with a reasonable selection of styles he or she can choose from. This selection should fit the user&#8217;s preferences, the images in the set, and the current structure of the photo book, i.e., the distribution of the photos over the individual pages.</p>
<h3>Data Set and Evaluation</h3>
<p>For researchers interested in the described challenge, CEWE COLOR will provide selected photo sets together with a list of suitable styles for each set. This list covers the styles currently offered in the CEWE PHOTOBOOK application and is organized in a taxonomy, which will also be provided upon request. The developed methods for matching photo sets to photo book styles should be evaluated on the provided photo sets. Please contact Sabine Thieme (Sabine.Thieme@cewecolor.de) for details.</p>
<h3>About CeWe</h3>
<p><a href="http://www.cewecolor.de/">CeWe Color</a> is the Number One services partner for first-class trade brands on the European photographic market. CeWe supplies both stores and Internet retailers (e-commerce) with photographic products.</p>
<div id="contact_box">
<p>Feel free to correspond with the challenge authors via the comments form below.</p>
<p>For private correspondence, consult the <a href="http://www.multimediagrandchallenge.com/about/">About page</a> for contact details.</p></div>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/11/cewe-challenge/feed/</wfw:commentRss>
		</item>
		<item>
		<title>3DLife Challenge 2010: Sports Activity Analysis in Camera Networks</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/09/3dlife-challenge-2010-sports-activity-analysis-in-camera-networks/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/09/3dlife-challenge-2010-sports-activity-analysis-in-camera-networks/#comments</comments>
		<pubDate>Tue, 09 Mar 2010 22:49:49 +0000</pubDate>
		<dc:creator>noel</dc:creator>
		
		<category><![CDATA[3DLife]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=350</guid>
		<description><![CDATA[This challenge is designed to facilitate exploration of some of the key research challenges facing the future media internet in a specific application domain, corresponding to sports. Advances in the availability and utility of cameras are rapidly changing the sporting landscape. In professional sports we are familiar with high-end camera technology being used to enhance [...]]]></description>
			<content:encoded><![CDATA[<p>This challenge is designed to facilitate exploration of some of the key research challenges facing the <strong>future media internet</strong> in a specific application domain, corresponding to sports. Advances in the availability and utility of cameras are rapidly changing the sporting landscape. In professional sports we are familiar with high-end camera technology being used to enhance the viewer experience above and beyond a traditional broadcast. High-profile examples include the <a href="http://www.hawkeyeinnovations.co.uk" target="_blank">Hawk-Eye Officiating System</a> as used in tennis and cricket or ESPN&#8217;s recent <a href="http://sports.espn.go.com/espn/news/story?id=4796555" target="_blank">announcement</a> to showcase 3D broadcast in its coverage of the 2010 FIFA World Cup. Whilst extremely valuable to the viewing experience, such technologies are really only feasible for high-profile professional sports. On the other hand, advances in camera technology coupled with falling prices mean that <strong>reasonable quality visual capture is now within reach of most local and amateur sporting and leisure organizations</strong>. Thus it becomes feasible for every field sports club, whether tennis, soccer, cricket or hockey, to install its own camera network at its local ground. In fact, the same goes for other leisure activities like dance, aerobics and performance art that take place in a constrained environment and that would benefit from visual capture. In these cases, the motivation is usually not for broadcast purposes, or for the technology to act as a &#8220;video referee&#8221; or adjudicator, but rather to help coaches and mentors provide better feedback to athletes based on recorded competitive training matches, training drills or any prescribed set of activities.</p>
<p>This challenge focuses on exploring the limits of what is possible in terms of 2D and 3D data extraction from a low-cost camera network for sports. It hopefully provides opportunities for research in areas such as:</p>
<ul>
<li>Content &amp; context fusion for improved multimedia access;</li>
<li>3D content generation leveraging emerging acquisition channels;</li>
<li> Immersive multimedia experiences;</li>
<li> Multimedia, multimodal &amp; deformable objects search</li>
</ul>
<p>More generally, the dataset and challenge will hopefully facilitate researchers wishing to address the broader issues posed by the increasing availability of such capture technologies, which bring many new and exciting challenges (see, for example, the <strong><a href="http://www.future-internet.eu/publications/papers-reports.html">recent white paper by the Future Media Internet task force</a></strong> that outlines these challenges).</p>
<p>Tennis is chosen as a case study as it is a sporting environment that is relatively easy to instrument with cheap cameras and features a small number of actors (players) who exhibit explosive, rapid, and sophisticated motion. Video data from an AV network of 9 cameras with built-in mics, installed around an indoor court and capturing real athletes, is provided for experimentation purposes. The capture infrastructure is deliberately set up to model what is feasible for a local tennis club using commercial off-the-shelf components, i.e., 720 x 680, 25Hz MPEG-4 cameras that are not calibrated or synchronized and that share only limited overlapping fields of view. We are interested in submissions that explore the limits of what is possible from such a real-world capture scenario in terms of:</p>
<ul>
<li>Player localization on court and tracking through multiple camera views;</li>
<li>Event-based analysis and human behaviour modeling using multiple views of the same event / activity: one example is robustly classifying every stroke as a serve, backhand, forehand, etc., considering fusion across multiple camera views; another example is detecting the game structure automatically (point, game, match);</li>
<li>3D reconstruction of the playing arena and/or the players or their actions; an example is using player location and stroke classification to animate an avatar of the player, even coarsely;</li>
<li>Longitudinal analysis of player activity and motion over an entire training session;</li>
<li> Novel visualization and feedback mechanisms of any analysis results.</li>
</ul>
<h3>Dataset</h3>
<p>The data features audio and video from up to 9 CCTV-like cameras placed at different points around a tennis court. Camera calibration data is provided. The dataset features 2 players involved in competitive training matches. Court measurements and relative camera placement details are also provided. In addition to video information, accelerometer data from inertial measurement units was also captured with each sequence. Two accelerometer units were placed on each player: one on the player&#8217;s dominant forearm, and one on the torso (chest). Each provides time-stamped accelerometer, gyroscope, and magnetometer data at its location for the duration of the session.</p>
<p>The data set is now available at the following link:</p>
<p><a href="http://www.cdvp.dcu.ie/tennisireland/TennisVideos/acm_mm_3dlife_grand_challenge/">http://www.cdvp.dcu.ie/tennisireland/TennisVideos/acm_mm_3dlife_grand_challenge/</a></p>
<div>
<p>Feel free to correspond with the challenge authors via the comments form below.</p>
<p>For private correspondence, consult the <a href="http://www.multimediagrandchallenge.com/about/">About page</a> for contact details.</p></div>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/03/09/3dlife-challenge-2010-sports-activity-analysis-in-camera-networks/feed/</wfw:commentRss>
		</item>
		<item>
		<title>Alive and kicking</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/11/alive-and-kicking/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/11/alive-and-kicking/#comments</comments>
		<pubDate>Thu, 11 Feb 2010 16:24:28 +0000</pubDate>
		<dc:creator>cees</dc:creator>
		
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=317</guid>
		<description><![CDATA[The Multimedia Grand Challenge is alive and kicking. Most challenges on the website still reflect the challenges of 2009, but we are in the process of updating them for the 2010 edition. Check the Nokia 2010 challenge.
In the meantime, we can already announce one innovation for 2010. We open the challenge for all papers (long, [...]]]></description>
			<content:encoded><![CDATA[<p>The Multimedia Grand Challenge is alive and kicking. Most challenges on the website still reflect the challenges of 2009, but we are in the process of updating them for the 2010 edition. Check the Nokia 2010 challenge.</p>
<p>In the meantime, we can already announce one innovation for 2010. We open the challenge for <strong>all</strong> papers (long, short, and demo) accepted for the ACM Multimedia conference. So if your (accepted) paper is related to one of the challenges, you are eligible for the prize money. More details on this new procedure will follow.</p>
<p>Stay tuned!</p>
<p>Malcolm &amp; Cees.</p>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/11/alive-and-kicking/feed/</wfw:commentRss>
		</item>
		<item>
		<title>Google Challenge 2010: Robust, As-Accurate-As-Human Genre Classification for Video</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/google-challenge/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/google-challenge/#comments</comments>
		<pubDate>Wed, 10 Feb 2010 14:00:51 +0000</pubDate>
		<dc:creator>Jay Yagnik</dc:creator>
		
		<category><![CDATA[Google]]></category>

		<category><![CDATA[classification]]></category>

		<category><![CDATA[video]]></category>

		<guid isPermaLink="false">http://www.scils.rutgers.edu/conferences/mmchallenge/?p=95</guid>
		<description><![CDATA[A notion of browsing collections is naturally associated with videos. Having videos classified into a pre-existing hierarchy of genres is one way to make the browsing task easier. The goal of this task would be to take user generated videos (along with their sparse and noisy metadata) and automatically classify them into genres. A public [...]]]></description>
			<content:encoded><![CDATA[<p>A notion of browsing collections is naturally associated with videos. Having videos classified into a pre-existing hierarchy of genres is one way to make the browsing task easier. The goal of this task would be to take user-generated videos (along with their sparse and noisy metadata) and automatically classify them into genres. A public genre hierarchy like ODP (Open Directory Project) can be used as a target for this task.</p>
<p>Evaluations can be based purely on video content features as well as on a combination of content and metadata features. Features that bring in information from other public data sources can also be used (e.g., object detectors trained on a separate public dataset). Thinking of new (and surprising) features is recommended!</p>
<p>Any dataset that reflects a breadth of content is acceptable, and of course YouTube and Google Video are recommended sources. In particular, the data should cover most of the common video genres. If the dataset consists of web videos, sharing a list of links and corresponding labels would be ideal so that researchers can compare notes. You may want to consult the <a href="http://code.google.com/apis/youtube/overview.html">YouTube Data API</a> for retrieving video data.</p>
<h3>Evaluation</h3>
<p>We propose two types of evaluations for this challenge:</p>
<ul>
<li>Offline (direct evaluation): Use a labeled test set to measure precision/recall for the ODP categories.</li>
<li>Online (indirect): Give users a browse interface to your classifiers and measure how easily they can find target concepts (e.g., a basketball scoring scene). Note that classifier errors can be compensated for here, since a video can appear in multiple categories, so one could conceive of training with different loss functions.</li>
</ul>
<p>The ideal target in this case would match the optimal score for human agreement on the dataset.  If 5 raters categorize each video and we have agreement in 92% of the cases, we expect the automatic classifier to hit the same agreement rate.</p>
<div id="contact_box">
<p>Feel free to correspond with the challenge authors via the comments form below.</p>
<p>For private correspondence, consult the <a href="http://www.multimediagrandchallenge.com/about/">About page</a> for contact details.</p></div>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/google-challenge/feed/</wfw:commentRss>
		</item>
		<item>
		<title>Google Challenge 2010: Indexing and Fast Interactive Searching in Personal Diaries</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/google-diaries-challenge/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/google-diaries-challenge/#comments</comments>
		<pubDate>Wed, 10 Feb 2010 14:00:50 +0000</pubDate>
		<dc:creator>cees</dc:creator>
		
		<category><![CDATA[Google]]></category>

		<guid isPermaLink="false">http://comminfo.rutgers.edu/conferences/mmchallenge/?p=332</guid>
		<description><![CDATA[The developing interest in recording digital diaries or archives of one&#8217;s life needs a good indexing and search capability to be useful or interesting.  Diaries can be any combination of audio, video, geographic location, photos, phone logs, and whatever other multimedia data the user generates or accesses.  To make the data accessible, it [...]]]></description>
			<content:encoded><![CDATA[<p>The growing interest in recording digital diaries or archives of one&#8217;s life means such archives need good indexing and search capabilities to be useful or interesting.  Diaries can be any combination of audio, video, geographic location, photos, phone logs, and whatever other multimedia data the user generates or accesses.  To make the data accessible, it needs to be parsed into indexable, browsable, and searchable structures such as places, environments, episodes, actions, and events of various sorts, and clustered and tagged with categories, identities, and tags of whatever sort the user proposes.  A UI is needed to browse and manually improve these structures and tags, and to search for things that the user knows about but that the system hasn&#8217;t yet learned a name for.</p>
<p>The challenge is to develop good schemas, algorithms, UIs, and related components that will be useful for diaries ranging from audio-only to full-featured multimedia.  Both specializations to particular contexts and generic systems are of interest.</p>
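<p>As one purely hypothetical starting point for such a schema, a diary could be segmented into entries carrying media, location, episode membership, and user- or system-assigned tags. Every field name below is an assumption for illustration, not part of the challenge specification.</p>

```python
# Illustrative (hypothetical) schema for an indexable diary entry.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class DiaryEntry:
    entry_id: str
    start: datetime
    end: datetime
    media_uri: str                           # path/URI of the audio, video, or photo
    location: Optional[Tuple[float, float]] = None  # (lat, lon), if available
    episode: Optional[str] = None            # cluster id assigned by segmentation
    tags: set = field(default_factory=set)   # user- or system-proposed labels

entry = DiaryEntry("e1", datetime(2010, 2, 10, 9, 0),
                   datetime(2010, 2, 10, 9, 30),
                   "audio/morning.wav", location=(40.5, -74.4))
entry.tags.add("commute")   # manual tag improvement via the UI
```

<p>Structures like this could then be clustered into episodes and exposed to browsing and search, with the UI writing corrections back into the same records.</p>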
<div id="contact_box">
<p>Feel free to correspond with the challenge authors via the comments form below.</p>
<p>For private correspondence, consult the <a href="http://www.multimediagrandchallenge.com/about/">About page</a> for contact details.</p></div>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/google-diaries-challenge/feed/</wfw:commentRss>
		</item>
		<item>
		<title>Radvision Challenge 2010: Video Conferencing To Surpass &#8220;In-Person&#8221; Meeting Experience</title>
		<link>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/radvision-challenge-experience/</link>
		<comments>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/radvision-challenge-experience/#comments</comments>
		<pubDate>Wed, 10 Feb 2010 14:00:31 +0000</pubDate>
		<dc:creator>sagee</dc:creator>
		
		<category><![CDATA[Radvision]]></category>

		<category><![CDATA[video]]></category>

		<category><![CDATA[video conferencing]]></category>

		<guid isPermaLink="false">http://www.scils.rutgers.edu/conferences/mmchallenge/?p=80</guid>
		<description><![CDATA[Video conferencing is part of a $5 billion real-time collaboration market that includes audio, video and web conferencing products and services.
The great challenge for video conferencing vendors is to provide users with a meeting experience that equals or surpasses “in-person” meetings. It is assumed that once the meeting experience is good enough, or even [...]]]></description>
			<content:encoded><![CDATA[<p>Video conferencing is part of a $5 billion real-time collaboration market that includes audio, video and web conferencing products and services.</p>
<p>The great challenge for video conferencing vendors is to provide users with a meeting experience that equals or surpasses “in-person” meetings. It is assumed that once the meeting experience is as good as, or even better than, being there, the technology could substantially reduce the need for “physical” meetings (at least for business purposes). Such a reduction would mean less travel, lower costs (to people, organizations, and the planet), greater efficiency, and better communication.</p>
<p>This challenge focuses on developing new technologies and ideas to surpass the “in-person” meeting experience. In the process, a set of subjective and objective measures for evaluating the “meeting” experience will be developed. With these measures, alternative solutions can be compared to each other and to in-person meetings, and optimized accordingly.</p>
<h3>Dataset</h3>
<p>Not required.</p>
<h3>Metrics/Evaluation</h3>
<p>As noted above, we hope new metrics, both objective and subjective, will be developed that capture the meeting experience. The objective and subjective metrics should correlate highly with each other and be robust and reliable. Such metrics could then be used to compare existing video conferencing solutions, in-person meetings, and newly proposed technologies.</p>
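<p>The desired objective-subjective correlation could be checked with a standard Pearson coefficient. The sketch below is purely illustrative: the objective quality scores and mean-opinion scores are invented numbers, and mapping raw measurements (latency, resolution, etc.) to a single objective score is itself part of the challenge.</p>

```python
# Pearson correlation between a hypothetical objective quality metric
# and hypothetical subjective mean-opinion scores (MOS, 1-5 scale).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

objective = [0.9, 0.7, 0.4, 0.8, 0.3]    # made-up objective quality scores
subjective = [4.5, 3.8, 2.1, 4.0, 1.9]   # made-up mean-opinion scores
r = pearson(objective, subjective)        # near 1 => metric tracks experience
```

<p>A metric that correlates strongly with subjective scores across many sessions would let vendors optimize against the cheap objective measure instead of running panels for every change.</p>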
<h3>About Radvision</h3>
<p><a href="http://www.radvision.com/">Radvision</a> (Nasdaq: RVSN) is the industry’s leading provider of products and technologies for unified visual communications over IP, 3G and emerging IMS/Next Generation networks - enabling high definition video conferencing, converged video telephony services, and scalable desktop-based visual communications.</p>
<div id="contact_box">
<p>Feel free to correspond with the challenge authors via the comments form below.</p>
<p>For private correspondence, consult the <a href="http://www.multimediagrandchallenge.com/about/">About page</a> for contact details.</p></div>
]]></content:encoded>
			<wfw:commentRss>http://comminfo.rutgers.edu/conferences/mmchallenge/2010/02/10/radvision-challenge-experience/feed/</wfw:commentRss>
		</item>
	</channel>
</rss>
