October 2013 Newsletter

Welcome to October’s edition of the IEEE-TCMC (Technical
Committee on Multimedia Computing) monthly mailing.

This month’s topics include:

1. The first Bay Area Multimedia Forum call for participation
2. Industry news (e.g., W3C API, Commercial Video streaming bandwidth)
3. IJMDEM abstracts and CFP

To join TCMC, or to update your information, especially your email
address, visit the following web site and fill in the online form.


Computer Society and IEEE members will use their usual IEEE
web account login to access membership products and renew.
Nonmembers can create an IEEE web account to join any TC.

TCMC home:

The First Bay Area Multimedia Forum (BAMMF) Invitation

We are excited to invite you to our first Bay Area Multimedia
Forum (BAMMF) on November 7th, 2013. The SF Bay Area is home to
many multimedia researchers. Many have joined startups or
product teams at large companies and so have little time to
travel to academic conferences, yet they still want to meet
their peers to exchange ideas, and would welcome short, more
frequent, local meetings. In addition, professors from local
universities, and from universities in other states and
countries, want a forum at which to meet industrial
researchers, expose their ideas to industry, and learn about
the real problems industry wants solved, which can guide their
future research. Short but more frequent meetings also fit
these professors’ tight schedules.

Encouraged and sponsored by the IEEE Technical Committee on
Multimedia Computing, the IEEE Technical Committee on Semantic
Computing, the ACM Special Interest Group on Multimedia, and FX
Palo Alto Laboratory, we are starting a bimonthly half-day Bay
Area Multimedia Forum (BAMMF) series. Experts from both academia
and industry are invited to exchange ideas and information
through talks, tutorials, panel discussions, and networking
sessions. Topics of the forum will include emerging areas in
multimedia, advances in algorithms and development,
demonstrations of new inventions, productization of
technologies, business opportunities, etc. If you are interested
in giving a talk at the forum, please contact the organizers. If
you have a problem on which you would like help from other
experts, please don’t hesitate to let us know. And if you only
want to listen to the talks or meet friends, you are also very
welcome.

Again, we want to invite you to enjoy this event with us.
Because of the conference room’s limited size, seats for this
event must be reserved through a website. The website is


Location: Kumo Conference Room, FX Palo Alto Laboratory
(FXPAL), 3174 Porter Drive, Palo Alto, California 94304 USA
(Refreshments will be provided)

Time: Thursday, Nov. 7, 1:30pm-4:30pm

Organizing Team: Qiong Liu, Tong Zhang, Henry Tang, Jian Fan,
Shanchan Wu, Bee Liew

Advisory Board: Prof. Shih-Fu Chang, ACM SIGMM Chair;
Prof. Shu-Ching Chen, IEEE TCMC Chair;
Prof. Phil Sheu, IEEE TCSEM Chair;
Dr. Lynn Wilcox, FXPAL Vice President;
Prof. Chang-Wen Chen, ICME Steering Committee Chair.

Keynote Speaker 1 – Towards Mobile Augmented Reality

Bernd Girod
Robert L. and Audrey S. Hancock Professor of Electrical Engineering
Senior Associate Dean, Online Learning and Professional Development
Stanford University

Mobile devices are expected to become ubiquitous platforms for
visual search and mobile augmented reality applications. For
object recognition on mobile devices, a visual database is typically
stored in the cloud. Hence, for a visual comparison, information
must be either uploaded from, or downloaded to, the mobile over a
wireless link. The response time of the system critically depends
on how much information must be transferred in both directions,
and efficient compression is the key to a good user experience.
We review recent advances in mobile visual search, using compact
feature descriptors, and show that dramatic speed-ups and power
savings are possible by considering recognition, compression,
and retrieval jointly. For augmented reality applications, where
image matching is performed continually at video frame rates,
interframe coding of SIFT descriptors achieves bit-rate reductions
of 1-2 orders of magnitude relative to advanced video coding
techniques. We will use real-time implementations for different
example applications, such as recognition of landmarks, media
covers or printed documents, to show the benefits of implementing
computer vision algorithms on the mobile device, in the cloud,
or both.

Keynote Speaker 2 – Reshaping User Experiences with Analytics

Haohong Wang
General Manager, TCL Research America

In the past few years, devices with screens have been getting
much smarter; for the large screens, however, progress has been
far from sufficient. Almost all of the industry giants have
tried, and failed (some hurt badly), to bring a pleasant user
experience to home screens, so this trillion-dollar market has
not yet really been conquered. As we march into the era of
Ultra High-Definition (UHD), screen size and resolution will
again increase significantly, yet the pace of user-interaction
development seems to lag behind. In this talk, we discuss using
data analytics to improve user experiences for home
entertainment. By incorporating analytics components such as
user behavior learning and mining, user preference
understanding, extraction of low-level media features and
high-level semantics, object detection and recognition, media
recognition, and real-time recommendation, we showcase
user-experience innovations that make devices with screens much
more user friendly.



W3C API for Media Resources 1.0

On October 15, the W3C Media Annotations Working Group released
a new version of a document entitled “API for Media Resources
1.0”. The document has been published as a Proposed Recommendation,
which in the W3C categorization represents a specification on
track to become an endorsed recommendation.

Eventually, if servers and clients implement the infrastructure
to support the API, developers will have the option to write
applications in JavaScript or Java that retrieve metadata
values for media objects distributed across the Web.

The specification defines methods to retrieve data
asynchronously (required) and synchronously (optional). It
defines an interface for metadata retrieval, another
interface for collecting metadata results, and a third
interface to select metadata sources.
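The callback-driven retrieval pattern these interfaces describe can be pictured with a small sketch. This is not the W3C-specified IDL: the names below (`MediaResourceStub`, `getProperty`) are invented stand-ins, the metadata is held in memory rather than fetched from a remote source, and a real implementation would deliver results asynchronously over the network.

```typescript
// Illustrative stand-in for the spec's metadata-retrieval pattern:
// an application asks a media resource for named properties and
// receives the values through a callback rather than a direct
// return value. All names here are hypothetical, not W3C IDL.

type MetadataCallback =
  (results: Record<string, string | undefined>) => void;

class MediaResourceStub {
  // In-memory metadata standing in for a selected remote source.
  constructor(private metadata: Record<string, string>) {}

  // Callback-style retrieval; a real client would resolve this
  // asynchronously against a server-side metadata source.
  getProperty(names: string[], callback: MetadataCallback): void {
    const results: Record<string, string | undefined> = {};
    for (const name of names) {
      results[name] = this.metadata[name];
    }
    callback(results);
  }
}

// Usage: look up title and language for a hypothetical video.
const resource = new MediaResourceStub({
  title: "Sintel",
  language: "en",
});

let receivedTitle: string | undefined;
resource.getProperty(["title", "language"], (results) => {
  receivedTitle = results["title"];
});
```

The point of the sketch is only the shape of the interaction: the caller never touches the metadata store directly, so the same application code works whether the values are local or resolved across the Web.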

In terms of metadata vocabulary, this document uses the
terminology and data structures defined in a W3C Recommendation
known as the Ontology for Media Resources 1.0.

Link to the API for Media Resources:


IETF 88 – Vancouver, BC, Canada

The 88th IETF Meeting will be held at the Hyatt Regency in
Vancouver, Canada, Nov. 3 – 8. The agenda for the meeting lists
a number of groups actively involved in protocol development
related to multimedia: Multiparty Multimedia Session Control
(mmusic), Audio/Video Transport Extensions (avtext),
Peer-to-Peer Streaming Protocol (ppsp), Real-Time Communications
in Web Browsers (rtcweb), Controlling Multiple Streams for
Telepresence (clue), and several others.

Link to the IETF 88 web site:


Commercial video streaming bandwidth

Web sites like Ookla.com provide interesting statistics about
bandwidth connectivity per country, per city, and per ISP.
Today, for example, Ookla shows Hong Kong with the top
connectivity bandwidth at a swift 62.03 Mbps; Sweden is 8th
with 38.49 Mbps, and the US is 24th with 20 Mbps. However,
this and other measurement sites typically report statistics
on short data transfers. For commercial video applications,
the numbers can look quite different. Netflix is now publishing
a monthly report on connectivity bandwidth for different countries
per ISP. In their recent report (September, 2013), they indicate
for example that streaming bandwidth in Sweden is between
2.35 Mbps (Tele2) and 3.09 Mbps (Ownit). In the US, the numbers are
between 1.20 Mbps (Clearwire) and 3.41 Mbps (Google Fiber).

Link to the Netflix ISP speed index: http://ispspeedindex.netflix.com/

Link to Ookla: http://www.ookla.com/

The contents of the latest issue of:

International Journal of Multimedia Data Engineering & Management (IJMDEM)

Official Publication of the Information Resources Management Association
Volume 4, Issue 2, April – June 2013
Published: Quarterly in Print and Electronically
ISSN: 1947-8534 EISSN: 1947-8542
Published by IGI Publishing, Hershey, Pennsylvania, USA

Editor-in-Chief: Shu-Ching Chen, Florida International University, USA


JIRL: A C++ Toolkit for JPEG Compressed Domain Image Retrieval
David Edmundson (Department of Computer Science, Loughborough
University, Loughborough, UK) and Gerald Schaefer (Department
of Computer Science, Loughborough University, Loughborough, UK)

Since there are few open image retrieval toolkits available,
researchers in the field are often forced to re-implement
existing algorithms in order to perform a comparative evaluation.
None of the existing toolkits support retrieval of JPEG images
directly in the compressed domain. The authors’ aim is therefore
to facilitate the use of compressed domain image retrieval
techniques as well as ease retrieval evaluation by fellow
researchers. For this purpose, the authors present JIRL, an
open source C++ software suite that allows content-based image
retrieval in the JPEG compressed domain and provides tools
for benchmarking retrieval accuracy and retrieval time. In
total, twelve state-of-the-art JPEG retrieval algorithms are
implemented, while for each method techniques for compressed
domain feature extraction as well as feature comparison are
provided in an object-oriented framework. An example image
retrieval application is also provided to demonstrate how the
library can be used. JIRL is made available to fellow researchers
under the LGPL v.2.1 license.


A Web-Based Multimedia Retrieval System with MCA-Based Filtering
and Subspace-Based Learning Algorithms

Chao Chen (Department of Electrical and Computer Engineering,
University of Miami, Coral Gables, FL, USA), Tao Meng
(Department of Electrical and Computer Engineering, University of
Miami, Coral Gables, FL, USA) and Lin Lin (Department of Electrical
and Computer Engineering, University of Miami, Coral Gables, FL, USA)

Research on intelligent multimedia services and applications is
motivated by the high demand for convenient access to, and
distribution of, pervasive multimedia data. Faced with abundant
multimedia resources but inefficient, rather old-fashioned
keyword-based retrieval approaches, Intelligent Multimedia
Systems (IMS) demand (i) effective filtering algorithms for
storage savings, reduced computation, and dynamic media
delivery; and (ii) advanced learning methods to accurately
identify target concepts, effectively search personalized media
content, and enable media-type-driven applications. Web-based
multimedia applications are becoming more and more popular, so
how to apply web technology to multimedia data management and
retrieval has become an important research topic. In this paper,
the authors develop a web-based intelligent video retrieval
system that integrates effective and efficient MCA-based
filtering with subspace-based learning to help end users
retrieve their desired semantic concepts. A web-based demo shows
the effectiveness of the proposed intelligent multimedia system
in returning relevant results for target semantic concepts
retrieved from TRECVID video collections.

Content-Based Multimedia Retrieval Using Feature Correlation
Clustering and Fusion

Hsin-Yu Ha (School of Computing and Information Sciences, Florida
International University, Miami, FL, USA), Fausto C. Fleites
(School of Computing and Information Sciences, Florida International
University, Miami, FL, USA) and Shu-Ching Chen (School of Computing
and Information Sciences, Florida International University, Miami, FL, USA)

Nowadays, only processing visual features is not enough for
multimedia semantic retrieval due to the complexity of multimedia
data, which usually involve a variety of modalities, e.g. graphics,
text, speech, video, etc. It becomes crucial to fully utilize the
correlation between each feature and the target concept, the feature
correlation within modalities, and the feature correlation across
modalities. In this paper, the authors propose a Feature Correlation
Clustering-based Multi-Modality Fusion Framework (FCC-MMF) for multimedia
semantic retrieval. Features from different modalities are combined into
one feature set with the same representation via a normalization and
discretization process. Within and across modalities, multiple correspondence
analysis is utilized to obtain the correlation between feature-value pairs,
which are then projected onto the two principal components. The
K-medoids algorithm, a widely used partitional clustering
algorithm, is selected to minimize the Euclidean distance
within the resulting clusters and produce highly
intra-correlated feature-value pair clusters. Majority vote is applied to
subsequently decide which cluster each feature belongs to. Once the feature
clusters are formed, one classifier is built and trained for each cluster.
The correlation and confidence of each classifier are considered while
fusing the classification scores, and mean average precision is used to
evaluate the final ranked classification scores. Finally, the proposed
framework is applied to the NUS-WIDE-Lite data set to demonstrate the
effectiveness in multimedia semantic retrieval.
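As a rough illustration of the clustering step described above, here is a minimal K-medoids sketch. This is not the authors’ implementation: the 2-D toy data, the deterministic initialization, and the plain Euclidean distance are assumptions made for the example.

```typescript
// Minimal K-medoids sketch (illustrative, not the paper's code).
// Each point is a numeric feature vector; distance is Euclidean.

type Point = number[];

function dist(a: Point, b: Point): number {
  return Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0));
}

function kMedoids(points: Point[], k: number, maxIter = 100): number[] {
  // Deterministic initialization for the sketch: the first k
  // points serve as the initial medoids.
  let medoids = points.slice(0, k);
  let assignments: number[] = [];

  for (let iter = 0; iter < maxIter; iter++) {
    // Assignment step: each point joins its nearest medoid.
    assignments = points.map((p) => {
      let best = 0;
      for (let m = 1; m < k; m++) {
        if (dist(p, medoids[m]) < dist(p, medoids[best])) best = m;
      }
      return best;
    });

    // Update step: within each cluster, pick the member that
    // minimizes the total distance to the other members.
    const next: Point[] = medoids.map((old, m) => {
      const members = points.filter((_, i) => assignments[i] === m);
      if (members.length === 0) return old;
      let bestPoint = members[0];
      let bestCost = Infinity;
      for (const cand of members) {
        const cost = members.reduce((s, q) => s + dist(cand, q), 0);
        if (cost < bestCost) { bestCost = cost; bestPoint = cand; }
      }
      return bestPoint;
    });

    // Stop once the medoids no longer move.
    if (JSON.stringify(next) === JSON.stringify(medoids)) break;
    medoids = next;
  }
  return assignments;
}

// Two well-separated groups should land in two clusters.
const labels = kMedoids([[0, 0], [0, 1], [10, 10], [10, 11]], 2);
```

Unlike K-means, the cluster centers here are always actual data points (medoids), which is what makes the method usable on the discretized feature-value pairs the paper clusters.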

For full copies of the above articles, check for this issue of the
International Journal of Multimedia Data Engineering & Management (IJMDEM)
in your institution’s library.  This journal is also included in the IGI
Global aggregated “InfoSci-Journals” database:


Mission of IJMDEM:

The primary objective of the International Journal of Multimedia
Data Engineering & Management (IJMDEM) is to promote and advance
multimedia research across the different aspects of multimedia
data engineering and management. It provides a forum for
university researchers, scientists, industry professionals,
software engineers, and graduate students who need to become
acquainted with new theories, algorithms, and technologies in
multimedia engineering, and for all those who wish to gain a
detailed technical understanding of what multimedia engineering
involves. Novel and fundamental theories, algorithms,
technologies, and applications will be published to support
this mission.

Submission website:

Coverage of IJMDEM:

Topics to be discussed in this journal include, but are not limited to, the following:

Content-based retrieval (image, video, audio, etc.)
Image/video/audio databases
Learning support for multimedia data
Multimedia data engineering
Multimedia data indexing
Multimedia data mining
Multimedia data modeling
Multimedia data storage
Multimedia databases
Multimedia systems
Multimodal data analysis
Network support for multimedia data
New standards
Relevance feedback
Security support for multimedia data
Technologies and applications

IGI Global is pleased to offer a special Multi-Year Subscription
Loyalty Program: customers who subscribe to one or more journals
for a minimum of two years qualify for secure subscription
pricing, with IGI Global pledging to cap its annual price
increase at 5%, so these customers’ subscription rates will not
rise by more than 5% annually.

Subscribe to the RSS Feed for this issue.

Subscribe to the RSS Feed for the entire journal.

Interested authors should consult the journal’s manuscript submission guidelines at

All inquiries about submissions should be sent to:
Editor-in-Chief: Shu-Ching Chen at chens@cs.fiu.edu