Avni Rambhia's Blog

2016 SLM Study Finds Shifts in Paradigm from Anti-Piracy Towards Comprehensive Monetization

21 Apr 2016

Our most recent analysis of the software licensing and monetization (SLM) market shows that some core value propositions endure even as the technology and its applications undergo significant evolution. Software licensing solutions were originally created to prevent piracy and protect the revenue of software products. While anti-piracy remains a core feature today, the crux of the value proposition has shifted towards monetization. Moving far beyond functions such as floating license management and metering, modern SLM solutions serve to bridge chasms between finance, marketing, sales and engineering teams. This trend closely tracks how software businesses are evolving. Software publishers are moving away from perpetual licensing and fixed product editions in response to growing customer demand for usage-based pricing and growing use of the cloud. Hardware and embedded manufacturers are moving towards software-activated features rather than rigid SKUs as they and their customers are transformed by the IoT revolution into software-based, data-driven businesses.

Companies can no longer afford delays of weeks or even days for customer requirements to filter in via marketing or sales, then be implemented or customized by engineering, and then be deployed out to the client. Orders need to be fulfilled in hours if not minutes, with automation and tight inter-departmental integration becoming increasingly crucial for competitive agility. IoT, analytics and instant-on deployments are driving transformation of how products are built, selected, sold and deployed. The age of custom-built, specialized hardware is ending. We are in the age of business at the speed of cloud, where connectivity, data and analytics are transforming every aspect of business models and processes.
Across the many industries and markets that the community of Frost & Sullivan analysts covers, such as industrial automation, healthcare, telecommunications and more, we are finding that profit margins are growing fastest and revenue growth is most stable for companies that are pivoting to a software mindset. This is not a straightforward transition by any means, and it is fraught with challenges and risks. SLM technologies serve as a key enabler of this transition, and an experienced SLM vendor is an invaluable partner for companies - particularly intelligent device manufacturers - seeking to modernize and fully bring the power of software and software-based business models to bear on their ongoing growth strategy. Even as we see more and more companies across a growing number of verticals embrace commercial SLM solutions for their value proposition and favorable total cost of ownership, there continues to be considerable use of solutions developed in-house, particularly for the server and back-end components.

With the advent of SaaS monetization solutions which automate metering, invoicing, billing and renewals for SaaS offerings, we are seeing growing confusion among customers between solutions for orchestrating business operations, SaaS monetization, and full-fledged SLM solutions. Specifically within the area of licensing enforcement, we continue to see a transition from hardware-based enforcement (using so-called dongles) towards electronic enforcement. While hardware is by no means headed for obsolescence, increased connectivity and lower costs and complexity lend increasing favor to electronic enforcement wherever feasible. This brings us full circle to another key finding: counterfeiting, piracy, grey manufacturing, data theft and malware infection remain serious threats for software and software-powered products.
We find that modern SLM solutions are effective at tackling these threats while preserving product quality, reliability and performance. With piracy becoming a problem solvable by technology, the focus of the industry is now shifting towards more comprehensive monetization and optimization. We find that SLM revenues will approach the half-billion US dollar mark by 2022, rising to protect more than 45 billion US dollars' worth of software and software-powered products in that timeframe. North America is experiencing a surge in adoption, particularly in the embedded segment, while long-term growth prospects are strongest in the APAC region. From a competitive perspective, Gemalto's acquisition of market leader SafeNet changes the vendor landscape. Also noteworthy is the growing number of low-cost vendors catering to smaller publishers, who need basic protection for their products as global expansion inevitably places revenues and IP at risk, but for whom the high start-up costs of deploying full-fledged SLM solutions remain a barrier to entry. In addition to uptake of commercial SLM solutions by SMBs and major publishers alike, rapid adoption by embedded manufacturers and cloud-based services results in encouraging long-term growth prospects for the market.

DRM: Literally and Figuratively the Key to Unlocking OTT Video Profitability

08 Apr 2016

OTT is redefining entertainment, and OTT profitability is on many executives' minds as we lead up to NAB this year. There are many trends and buzzwords in play – cloud, personalization, content monetization, software-defined workflows, UltraHD… and the quest to become the next Netflix or MLBAM. Underlying all these individual technologies and trends, however, is the reality that OTT revenues from subscriptions and advertising have yet to approach the ballpark of their true potential. OTT profitability is an even more elusive goal.

Delivering vivid, compelling, premium content experiences to every flavor of device and platform in play is a complicated and expensive proposition. Delivering consistent, managed-quality experiences across the complete roster of today’s consumer video devices is even harder. Yet considering that profitability relies on both increased revenues and decreased total cost of ownership, content companies need to find a way to cost-effectively and consistently deliver premium content across the entire roster of tablets, Smart TVs, gaming consoles, streaming media devices, smartphones and many other classes of connected devices – both newly sold and already deployed.

This problem is about to get more complex. In many use cases today, services are able to rely on stream encryption rather than full-fledged DRM solutions. This will reverse in the wake of growth in resolutions to full 1080p HD and beyond to UltraHD, with the rise of HDR and virtual reality, and with an expanding roster of valuable early-window content being made available online. To some extent, technology is coming to the aid of video service operators and OVPs seeking to bring economies of scale to the problem of delivering DRM-protected content through consistent experiences across all devices and networks. HTML5, MPEG-DASH, Common Encryption (CENC) and Encrypted Media Extensions (EME) bring a layer of uniformity to how secure video can be safely rendered using native secure playback infrastructure on every device. However, practical considerations make this easier said than done. As I discussed in detail here, most browsers ship with support for a single DRM system, and many devices support no more than two DRM systems natively. Thus, while operators can begin to leverage the same compressed and encrypted streams for delivery to all devices, there is still a need to support many different DRM systems from a back-office perspective. Taken in the context of the need for consistent cross-device experiences, this translates into a need to ensure users receive a consistent set of features, resolutions and experiences across different DRM systems.
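To make the back-office implication concrete, here is a minimal sketch of the routing problem. The platform-to-DRM mapping, license-server URLs and function names below are entirely hypothetical illustrations, not any vendor's actual API: the point is simply that one CENC-encrypted stream can serve every device, yet each license request must still be steered to whichever DRM system that platform supports natively.

```python
# Illustrative sketch (hypothetical names and URLs): with CENC, a single
# encrypted stream serves all platforms, but the license request still has
# to be routed per native DRM system on the client.

NATIVE_DRM = {
    # platform -> DRM system(s) typically supported natively
    "chrome":  ["widevine"],
    "firefox": ["widevine"],
    "edge":    ["playready"],
    "safari":  ["fairplay"],
    "android": ["widevine"],
    "ios":     ["fairplay"],
    "xbox":    ["playready"],
}

LICENSE_SERVERS = {  # hypothetical endpoints
    "widevine":  "https://license.example.com/widevine",
    "playready": "https://license.example.com/playready",
    "fairplay":  "https://license.example.com/fairplay",
}

def route_license_request(platform: str) -> str:
    """Return the license-server URL for the DRM system the platform supports."""
    systems = NATIVE_DRM.get(platform.lower())
    if not systems:
        raise ValueError(f"No native DRM known for platform: {platform}")
    return LICENSE_SERVERS[systems[0]]
```

Even in this toy form, a service reaching Chrome, Safari and Edge users already needs three license-server integrations behind one stream, which is precisely the back-office burden multi-DRM vendors abstract away.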

This is the problem that multi-DRM system vendors solve. By abstracting all DRM cores behind a single service-facing interface, and by amortizing the tremendous effort of building and maintaining secure DRM clients across all relevant devices and browsers, these solutions can go a long way in helping VSOs and OVPs build profitable, engaging OTT video services. There is a growing perception that DRM is free, but the reality of deploying a full service and keeping it secure over time is quite different. In a new white paper sponsored by Verimatrix, we discuss the various aspects of the total cost of ownership of a DRM system. In the context of current and foreseen challenges for video service operators and OVPs, we also discuss best practices uncovered during the course of our research on the DRM and OTT markets. You can download your copy of the paper here.

Impossible Questions the New HEVC Advance License Terms Require You to Answer

23 Jul 2015

HEVC Advance recently released its proposed licensing terms for HEVC compression technology. While the terms are intended to be fair, reasonable and transparent, they are actually going to be extremely difficult for many companies to honor. This is because they make assumptions about content businesses that may have been true in the days of regimented broadcast and structured Pay TV, but that are turned on their head in the age of the Internet and device-centric viewing. The key problem is that there are no caps on licensing, so there's no "easy button" for compliance - every unit shipped and every stream monetized has to be measurable. And that's going to be a problem for many businesses. Here are three questions you need to be able to answer about your OTT business in order to accurately calculate your HEVC royalties under the currently published terms.

1. How many copies of your app were downloaded, installed and actually activated? From online video games, to video-chat clients, to OTT apps for tablets, phones, and TVs, billions of apps are downloaded each year. Many of these are web browser plug-ins. Very few, if any, of these are downloaded directly from a publisher's site. Our years of research show that very few app developers know exactly how many copies of their app are downloaded, or even what order of magnitude are downloaded. Statistics like active users are more measurable, but many services - particularly ad-funded ones - don't even track that. 

2. How much revenue can be attributed to a given video stream? Most content services had been projecting that between 5 and 15 percent of total streams delivered in 2017 would be compressed using HEVC. How is one to determine what percent of total realized revenue is to be allocated to each stream, especially when only a few titles or resolutions are coded in HEVC? Over time, it is extremely likely that the same stream will be encoded in both AVC and HEVC, and delivery networks (not the service provider) will pick the appropriate stream based on the device and the network. Additional complications in answering this question include differential ad revenues for different demographics and streams.
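A hypothetical back-of-the-envelope calculation shows how sensitive the answer is to the attribution rule chosen. The numbers and the naive pro-rata approach below are invented for illustration only and are not drawn from HEVC Advance's published terms:

```python
# Illustration of the attribution problem: if only a fraction of delivered
# streams are HEVC-encoded, what share of total revenue do they "earn"?
# All figures and the pro-rata rule are assumptions for illustration.

def hevc_attributable_revenue(total_revenue, total_streams, hevc_streams,
                              hevc_uplift=1.0):
    """Naive pro-rata attribution. hevc_uplift > 1 models the assumption
    that HEVC streams (e.g. 4K titles) earn more per stream than AVC ones."""
    avc_streams = total_streams - hevc_streams
    weighted_total = avc_streams + hevc_streams * hevc_uplift
    return total_revenue * (hevc_streams * hevc_uplift) / weighted_total

# 10% of 1M streams in HEVC, $2M total revenue, equal value per stream:
even = hevc_attributable_revenue(2_000_000, 1_000_000, 100_000)
# Same mix, but assume each HEVC stream earns 1.5x an AVC stream:
weighted = hevc_attributable_revenue(2_000_000, 1_000_000, 100_000, 1.5)
```

Under equal weighting the HEVC-attributable revenue is $200,000; assuming a 1.5x per-stream premium pushes it to roughly $286,000, a swing of over 40 percent from one plausible assumption. A licensee and a licensor could each defend either figure, which is exactly the compliance ambiguity described above.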

3. When do you become a "content service"? Applications of video are blurring the lines traditionally held between consumer and enterprise, and between M&E and corporate applications. If a university hosts HEVC-compressed classroom videos on its website which in turn encourage students to enroll in classes, is this a licensable use case? If surveillance cameras send HEVC-compressed feeds to a cloud-based service that enables mobile-based monitoring, is that a licensable use case? If a tablet is going to be used only for an enterprise classroom application, is it exempt from licensing because that is not a consumer use case as currently envisioned?

Defining licensing terms is arguably easy. Complying with them is far more complex, and enforcing licensing terms can be nearly impossible when assumptions are not in sync with the way business is being done. We foresee interesting challenges, to say the least, for businesses to accurately estimate their royalties due to HEVC Advance under the current model. It will be even more interesting to see how patent holders move to enforce royalties given the challenges above. 

The Role of Standardized TEE in DRM

19 Jul 2015

The trusted execution environment (TEE) is a secure area that resides in the main processor of a connected device and ensures that sensitive applications are stored, processed and protected in a trusted environment. The TEE's ability to offer safe execution of authorized security software, known as ‘trusted applications’, enables it to provide end-to-end security by enforcing protection, confidentiality, integrity and data access rights. Made up of software and hardware, the TEE offers a level of protection against software attacks generated in the rich operating system (OS). It assists in the control of access rights and houses sensitive applications, which need to be isolated from the rich OS. The TEE bridges the gap between the rich OS (high functionality, low security) and the secure element, or SE (limited functionality, high security).

Digital content such as videos and TV programs not only require a high level of functionality to deliver the quality features expected by end-users, but also a high level of security to protect against unlawful reproduction and redistribution of copyrighted works. The TEE simplifies the critical tasks of secure boot chain, secret key storage, secure time verification and secure updates that are needed to implement a robust media player. 
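As a conceptual sketch only, the chain-of-trust idea behind a secure boot chain can be illustrated as each stage measuring the next before handing off execution. Real secure boot is anchored in ROM and hardware keys, typically using signatures rather than bare hashes; the function names and stage images here are illustrative, not a TEE implementation:

```python
import hashlib

# Conceptual sketch of a boot chain of trust: each stage holds the expected
# measurement of the next stage and refuses to hand off execution unless the
# measurement matches. (Real implementations verify signatures in hardware.)

def measure(image: bytes) -> str:
    """Measurement of a boot-stage image (SHA-256 digest, hex-encoded)."""
    return hashlib.sha256(image).hexdigest()

def verify_chain(stages, expected_hashes):
    """stages: ordered boot-stage images (e.g. bootloader, kernel, TEE OS).
    expected_hashes[i]: the measurement the previous stage (or ROM, for
    stage 0) expects for stage i. Returns False on any mismatch."""
    for image, expected in zip(stages, expected_hashes):
        if measure(image) != expected:
            return False  # halt boot: integrity violated
    return True
```

The same measure-then-trust pattern underlies the other tasks listed above: a securely booted TEE OS can then be trusted to hold secret keys, report secure time and validate updates on behalf of the media player.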

The Need for Standardization in TEEs 

Developing secure media playback applications is not only technologically difficult but extremely expensive. Porting a robust media application which conforms to typical robustness obligations can easily cost upwards of a quarter million dollars per platform. While a few large service providers have the financial means to bite the proverbial bullet and build out an extensive application base, this level of investment is out of reach for most service providers. If application developers had a more standardized way to leverage “trusted hardware” resources within any given device, they could – in theory at least – develop an application once and then port it relatively quickly from a security and robustness perspective.

This was the motivation for GlobalPlatform, which identifies, develops and publishes specifications that promote the secure and interoperable deployment and management of multiple applications on secure chip technology, to take on the work of standardizing an interface to the TEE. Picking up where ARM's TrustZone architecture left off, GlobalPlatform’s TEE specification transcends chipset architectures and specific implementation characteristics to provide an international industry standard for building a trusted end-to-end solution which serves multiple actors and supports several business models.

By providing a reasonably isolated secure execution environment that can be leveraged whenever valuable assets need to be accessed in the end-to-end video rendering path, GlobalPlatform enables a standardized way for DRM applications to store and access application secrets and keys, licenses, usage policies and account information. The same infrastructure is also applicable to functions such as watermarking and fingerprinting.

In addition, trusted applications interface via a number of APIs (such as HTML5) with other device components and modules, whether secure or unsecure (media playback, scheduling, and rendering), which allows the secure processing inside the TEE to be combined with other features present in the device. To complete the solution, attestation of information for remote validation of the video rendering path is part of TEE remote administration.


How to achieve a ‘trusted’ TEE

As with all stakeholders, DRM providers want to be assured that a TEE installed in a connected device is a ‘trusted’ TEE and has been created to a recognized industry standard such as GlobalPlatform. As part of GlobalPlatform’s commitment to ensuring the long-term interoperability of embedded applications on secure chip technology, it has developed an open and thoroughly evaluated compliance program which allows stakeholders to evaluate the functional behavior of a TEE product against the requirements outlined by GlobalPlatform TEE Specifications. Adding to this offering, in February 2015, GlobalPlatform’s TEE Protection Profile was officially certified by Common Criteria. This means that product vendors are now able to undertake formal security evaluation of their TEE products, using laboratories licensed by supporting certification bodies to evaluate and certify that they meet the security requirements in the document.


GlobalPlatform also plans to launch a TEE Security Certification Secretariat later this year, as well as announce GlobalPlatform Security Accredited Laboratories. The certification of products to GlobalPlatform’s TEE Specification Suite and Protection Profile promotes confidence within the connected devices market by establishing an agreed industry framework. This lowers the cost of progress for industry players such as application developers, hardware manufacturers and software developers by removing barriers caused by interoperability issues. Most importantly, a compliance program also provides a common framework for partnership initiatives to develop throughout the value chain. This will increase business opportunities and create an overall benchmark for the market to strive towards.


What does this mean for DRM developers today?

Most DRM vendors Frost & Sullivan interviewed acknowledged that a standardized interface to a TEE has the potential to dramatically improve the scalability of multi-screen applications across a growing combinatorial nightmare of devices, platforms and hardware primitives. The problem is that there is still a considerable gap between the ideal level of maturity and comprehensiveness such a standard would deliver, and the status of implementations currently available. One content protection company we spoke to commented: “Our experience is that it has always been, and still is, important to exploit every aspect of hardware support that exists in chipsets. The TEE is one of those hardware layers that is important to leverage, whenever it is available, as part of the security implementation regime.” Another similarly pointed out that a TEE characterized by a standardized API is a welcome development, but in reality developers continue to grapple with a complex and intricate landscape which they must navigate on a device-by-device basis.

Another aspect that is causing worry across the board is that despite the many security features they offer, TEEs are still not in and of themselves a silver bullet for security against hackers. As one example, any application that is certified under an appropriate program to be “secure” is then authorized to run within the TEE along with other highly sensitive applications. This places tremendous burden on penetration testing and security verification procedures. Even one rogue application slipping past this figurative checkpoint erases the benefits of hosting a DRM application within a quarantined execution environment and places all its keys and secrets at risk of discovery and dissemination. As a result, DRM applications built to leverage a TEE still need to be made internally robust through application hardening measures such as obfuscation, white box cryptography, and more. Furthermore, the task of DRM applications does not end at cryptography, as it does with simpler applications such as financial or banking apps. Media applications must continue to secure compressed, decrypted content streams throughout the decompression and rendering process – which nearly always occurs outside the TEE today.

Despite these challenges, it is clear the community at large is in agreement that TEE standardization initiatives like GlobalPlatform represent a much-needed step in the right direction towards enabling cost-effective cross-platform secure content applications. Over time, as the standard, its implementations and the surrounding testing procedures mature, there is tremendous potential for meaningful reductions in the current complexity of porting robust, durable DRM applications to the fullest possible range of managed set top boxes and unmanaged consumer devices. As the need for customization and re-engineering on a device-by-device basis is minimized, time to market and flexibility are maximized and ROI is dramatically improved.

Factors like the globalization of OTT businesses into piracy-rich markets, increasing content resolutions past HD towards 4K and beyond, and the competitive imperative to open up vast content libraries for anywhere-anytime access create worrisome risks for content creators, who nonetheless need to tap into the revenue potential of new media business models in order to remain competitive, relevant and profitable. As a consequence, many studios and programmers will continue to require top-of-the-line security measures from their content licensees, including applications that build out or leverage secure video paths. That said, at Frost & Sullivan we believe there will be a growing number of programmers and copyright owners who are willing to accept best-effort secure clients that opportunistically leverage hardware security anchors as available, but who will not require content to be withheld from devices whose price points cannot justify this level of hardware sophistication. This global opportunity notwithstanding, we also believe that content services in major markets will only be able to differentiate and thrive if they achieve the ability to reach over 95 percent of devices in the market with a consistent, high-quality, immersive content experience. Unfortunately the economics for achieving this today are daunting, leading many service providers to restrict their reach to the top 5 or 8 devices in the market and consequently severely restraining their ongoing growth potential. Any technology or initiative that can mitigate this business-critical pain point – even if it cannot fully eliminate it – is a step in the right direction.

A detailed discussion of Frost & Sullivan’s findings around technical trends in DRM and the impact of initiatives like GlobalPlatform on the competitive landscape for the DRM industry is available here.

Thoughts on Streaming Media East 2015

05 Jun 2015

My high level takeaway from Streaming Media East was that the time for talking and garage-hacks is past, and everyone's getting down to serious business now. As we saw at CES, it's clear that the world is getting more connected, and there are more businesses and more applications capitalizing on that connectivity. What was once novel and geeky is now mainstream and profitable. However, this creates tremendous infrastructural challenges, ranging from high-capacity networks (so-called ECDNs) within the firewall to high-efficiency CDNs and content delivery strategies over the open Internet. As Dan Rayburn and Mukul Krishna have been saying for the past two years, MSOs and telcos need to stop viewing video as a traffic problem and start seeing it as a business opportunity. We finally saw those lights begin to flicker on during this year's SME program. Increasingly, CSPs across the globe are looking at ways they can leverage existing infrastructure and customer bases to deliver video. One key reason is that they want to see higher margins as their voice and, in some cases, data components are increasingly commoditized.

Video seems to be a major way in which they hope to help their margin situation. We are seeing two ways in which they are typically looking at leveraging their infrastructure: a) launch a multi-screen video service themselves, and/or b) provide an end-to-end managed service to enable content owners to monetize in a multi-screen environment with a focus on QoE and a branded, differentiated, persistent experience - instead of just throwing stuff onto YouTube.

In terms of new technologies, HTML5, DASH and HEVC are top of mind. We continued our series of real-world business-centric talks on HEVC (slides available upon request and also posted to the SME site; video coming shortly). As predicted, several silicon-centric implementations (e.g. from Advantech and Qualcomm) are being introduced, but the vast majority of production encoders continue to be built on Intel processors today. This will shift over time - Imagine and Ericsson are betting on designs using their own custom silicon - but it is still a position of strength. In terms of AVC transcoding, Intel is gaining strength as software-defined workflows and virtualized high-density transcoders are accepted as best-practice architectural choices. We presented a paper commissioned by Vantrix showing that the total cost of ownership of JITT (powered by Intel) was nearly half that of JITP. Vendors like Imagine Communications, Wowza and Ericsson independently corroborated those findings in private conversations. The impact of DASH (now embraced by all four leading DRM vendors), and the fall of the proprietary Widevine and Smooth Streaming formats, is poised to be hugely disruptive. We'll be discussing the implications of this in our upcoming DRM study.

Monetization in general, and advertising in particular, continue to be open issues. While there is growing maturity in ad-insertion and ad-optimization solutions, there is still limited ability to provide ad inventory (except for the live sports application, which is thriving on sold-out ad inventories). Very few companies are able to actually enable content providers to fill their ad carousels in any meaningful way; this remains perhaps the most critical unmet need for the streaming industry today.

Security is also becoming a more important consideration - not on the DRM front (which is well understood already) but throughout the IP-based workflow. More, including a recent thought leadership paper we wrote on the subject, is in my blog post here: http://www.frost.com/reg/blog-index.do

In terms of workflows, we saw a number of talks and instructional sessions aimed at building out very large scale networks, storage facilities and streaming facilities. Most of these talks were aimed at broadcasters, news gathering services and telcos. Education was also a key vertical - digital instruction methods (as analyzed in our Lecture Capture Systems study) are pushing education to become arguably the fastest growing source of professionally created and curated video, with soaring consumption volumes. We are seeing growing need to unify video applications and workflows that have traditionally been siloed: live v/s VOD, inside-firewall v/s OTT, in-browser v/s app-based streaming, IP v/s traditional networks... these are all converging, and solutions are being enhanced to enable such convergence and unification. Branding and potential disintermediation are becoming more and more business-critical conversations, even as a land grab war for big data gets fervently underway. We are actually seeing a resurgence in DRM market fragmentation, with vendors like Ericsson revitalizing their own DRM systems (Azuki in Ericsson's case) in the quest to capture last-inch usage data and parlay that into analytics, personalization and prediction. Across discussions ranging from best practices in newsgathering, through scalable IP workflows, to high-volume, highly scalable broadcast and personalized delivery, we are beginning to see streaming video come of age as a fundamental, business-central mode of operation for digital media companies. It is clearly past the chasm, and is on the upswing towards becoming the mainstay of digital media businesses. We find that in a small yet significant way, the traditional NAB show audience is beginning to look for business critical solutions and forward-looking technical instruction under the Streaming Media umbrella.

The bottom line: a seismic shift driven by online video being streamed to consumer devices is well underway. Everyone - technology vendors, system integrators, content marketing teams, content service providers and consumer device vendors - will need to transform themselves from product-centric, delivery-focused businesses into monetization-enabling solution providers. At this SME we released, for the first time ever, our vendor positioning map for the M&E industry, which categorizes various market participants into quadrants and calls out recommended industry best practices according to Frost & Sullivan (slides available upon request and also posted to the SME site; video coming shortly). Interestingly, our audience straw poll in the nearly full hall showed that half our audience was operators or broadcasters and the other half was vendors. There were very few (if any) attendees from companies related to user-generated content or enterprise-only content, again emphasizing how crucial OTT strategies and offerings are to mainstream content businesses today.

As always, we're happy to discuss these findings or any other questions you may have. Just reach out via phone, email, LinkedIn or Twitter.

Security Concerns in an IP-connected Environment

20 May 2015

Content protection has always been a top-of-mind priority for content companies. To date, the most protection energy has been focused on the end-user side, in terms of digital rights management (DRM), copy-protection and traitor-tracing. As workflows become digitized, however, and as speed and collaborative agility become critical competitive differentiators, security needs are becoming more complex and more pervasive. 

This expands the notions of security in the digital media world past DRM and towards what's known as enterprise rights management (ERM). ERM solutions link to enterprise-internal user management systems to trace, protect and enforce usage policy on sensitive internal data and file assets. ERM solutions often link with Data Leakage Prevention (DLP) systems and user management systems, among others, to automate processes like rights enforcement and usage monitoring. 

In a paper (attached below) we released at NAB this year, commissioned by AVID, we lay out key security considerations for today's workflows. We've seen a flurry of vendor activity in establishing thought leadership around securing of assets throughout the digital workflow. Cisco Systems notably had an entire section at their NAB booth related to security solutions for digital workflows, and Level 3 talked about the role of network security in modern digital media businesses at Streaming Media East. 

We expect to see security emerge as a differentiator of emerging importance in DAM, NLE, post-production workflow, and similar systems. While performance, automation and scalability will remain primary differentiators, vendors will need to at least show thought leadership if not feature leadership on the security front as well. 

Thoughts on CES: Democratization of Content Delivery

06 Feb 2015

The high level takeaways from CES are simplistic - devices are getting better, faster, smaller and smarter. The world is getting more connected, and there are more businesses and more applications capitalizing on that connectivity. What was once novel and geeky is now mainstream and profitable. At the same time, the market is now predictably getting more and more crowded, with Chinese vendors focused on pushing down price, Korean vendors focused on delivering value and volume, and Japanese vendors once again beginning to get sidelined as they search for the next new disruptive technology or product category to enable at a more fundamental R&D level. That said, if a vendor is in the market peddling a dumb device, there is no choice but to compete on price, and for a majority of businesses that strategy will not get them very far.


For Frost & Sullivan, the interesting developments are occurring in the business-enabling layers that take the solved device problem and the solved network connectivity problem and translate them into new markets, new market shares, and new customer value. Network ubiquity is here, device ubiquity is here; now content ubiquity is key. Consumers want devices that give them access to interesting content in personalized fashion with a high quality of experience. Devices may be inherently smart, or may be made smart by the devices connected to them. For example, a TV with a streaming media device, or a set top box with a smart phone or tablet companion app, can provide highly compelling content experiences. We emphasize that high quality content experiences in terms of QoS and QoE are key. Competitive differentiation ultimately comes down to the fidelity of the content, and how users can consume and experience it. Consumers no longer wish to be tethered to a single Pay TV provider, and both Pay TV providers and device vendors understand that. This is driving innovation behind every class of consumer devices - TVs, streaming media devices, game consoles, tablets, smart phones, and even set top boxes. While 4K is a great press theme, all resolutions and bandwidths of content are relevant to the new content economy. Business models are clearly dictating the democratization of content delivery, given that it is no longer just the traditional CSPs who have the pipes. If you have an interesting business model and content library, devices and networks exist to get you to your audience - worldwide. No one is debating any longer whether video anytime, anywhere on any device is a viable model.


That said, online content success is still elusive amidst a crowded and confused market, and growth is highly dependent on execution as well as platform reach. The fallout of the successful maturation of the device ecosystem, as was clear at CES 2015, is that industry limitations are now on the actual content itself and how you can package it. Also crucial are experience-optimizing technologies such as metadata, analytics, cross-platform integration and application portability. Security remains a key consideration (in the DRM sense), but is by and large considered a solved problem except for 4K content.



In pleasant contrast to last year's CES, where 4K and higher resolution TV screens were showing HD or even SD content, this year there was a much more coherent focus on 4K content, engaging interfaces, and high-end content offerings. Sony and Samsung still lead in terms of market share, given their early support of inbuilt DRM and app development SDKs. This year, Panasonic (which has thus far focused more on enterprise/signage applications) and LG are seeking to break into the top three by market share for Smart TVs. Companies ranging from DivX/NeuLion to Gracenote were showcasing TV-specific solutions for TVE. It was particularly interesting (and encouraging) for us to see dramatic leaps forward in the sophistication and ease of the user interface experience. For example, touch-pad smart TV remotes from Samsung were instrumental in bringing the interface to life.


Streaming media devices took a huge leap forward in 2014, driven by the popularity of streaming sticks. While Apple TV has the highest nominal unit sales number, the vast majority of its devices are used in the enterprise. Roku, with 7-8 million devices deployed as of the end of December 2014 per Frost & Sullivan estimates, is what we consider the leading streaming media device vendor today. We expect Samsung and Sony will come out with new-generation models within the next 12-18 months, possibly sooner. This should help drive up the "smart" level of their televisions in a more agile form factor, while preserving the expected lifetime of the TV set itself. At the same time, the distinction between M&E apps and middleware is rapidly blurring. Middleware continues to be deployed on SoCs in a market led largely by STMicro and Broadcom, but nearly all vendors are looking at HTML5 and Java as more portable technologies for consistent experiences across all classes of devices.


4K/UltraHD remains aspirational for now, although early services are beginning to take root. The economics of 4K remain elusive (see link and link) but for now monetization and engagement are the most urgent problems that content services are looking to solve. Analytics plays a crucial role here, with applications across usage reporting, targeted advertising, search, personalized recommendation, value-added second screen services, monitoring of QoS, and more.


On the enterprise side, remote telepresence solutions made a splash on the show floor, driven primarily by BEAM. The differences between tablets (as consumption devices) and PCs/laptops (as productivity devices) are becoming more pronounced, with the prognosis for convertible tablets with keyboards being far brighter than for traditional high-end tablets. Lower-end tablets are similarly losing ground to so-called phablets, which we count among smart phones. There were a number of announcements in the broad sphere of Internet of Everything, signaling continuing interest in finding new product niches and new services within the digital home and digital enterprise.

HEVC resource compilation

11 Aug 2014

Mostly since it's easier to send a URL to a blog post than to email large attachments, I'm compiling all my publicly released material on HEVC market analysis here.

We update this material quarterly, including HEVC-enabled and HEVC-capable device shipment numbers, so if you need current data just send a note to your account manager or to me at avni.rambhia@frost.com.

The status webinar from last year is accessible through Brighttalk here: https://www.brighttalk.com/webcast/5567/70391

Update Jan 2015: The video for the talk on business cases and ROI analyses for adopting HEVC, delivered at Streaming Media West 2014, is here: http://bcove.me/2pjdu6wr. The F&S insight version of that presentation can be downloaded from: http://www.frost.com/sublib/display-market-insight.do?id=293083154

Audio Regains First Class Citizen Status in Digital Media Experiences

02 Mar 2014

In the early days of online video communication and online video delivery, audio was a critical component of the overall quality of experience. When audio is smooth and clear, it can compensate for jitters or glitches in video. On the other hand, loss of synchronization between audio and video can demolish the quality of experience even if the video is otherwise flawless. As audio technology matured and the industry nearly universally relied on offerings from Fraunhofer and Dolby for their audio needs, industry focus shifted to video resolution and compression improvements, with audio taking a back seat in terms of R&D, innovation and differentiation. This is changing.

At CES this year, we saw many television sets demonstrating UltraHD experiences. While the video quality was compelling, simple single-channel or even stereo sound clearly fell short of complementing the visual experience. There is growing emphasis in consumer electronics stores on surround sound systems to serve this need, as well as a growing trend of recording and rendering 360 degree surround sound audio - particularly for movies and console games. We are also starting to see some video encoder vendors, such as ATEME in contribution and Pay TV applications, beginning to differentiate not only on video features but also on audio capabilities.

In terms of user interfaces, 4-arrow remote controls are ill-suited to the needs of interactive and smart TVs, and of subscribers accustomed to intuitive and personalized touch screen devices. Audio is playing a growing role in next generation interfaces, as voice recognition technology matures and also as sound tracks are leveraged in new and interesting ways. For example, Gracenote has an application which automatically recognizes the TV show currently playing based on analysis of ambient audio, and customizes the second screen experience accordingly. This trend is also expected to grow.

As audio begins to regain its first class citizenship status within the digital media ecosystem, Frost & Sullivan is intensifying its research into and coverage of this technology. Stay tuned for more!

MSOs Shouldn't Overlook AVC As They Evaluate Upgrades From MPEG-2 To HEVC

29 Aug 2013

If I had a dime for every time I’ve been asked this question in the last three months, I’d have enough cash to buy a tall latte, and a pastry along with it. And the tax. And the tip. Why is this such an enticing notion, and does the idea actually bear merit?

Some history is in order. Back in the nineties, as North America transitioned to digital cable, MPEG-2 was state-of-the-art compression technology. North America was ahead of the game even with HD, and thus nearly all cable applications rely on MPEG-2 for SD and HD alike. But the industry paid a price for that early innovation – no sooner were they done with HD deployment than AVC broke onto the scene and fundamentally disrupted the video compression equation. Faced with a weak economic outlook (remember the dot-com crash of 2002, anyone?), and having just made major investments in HD rollout, the cable industry was unable to take meaningful advantage of the benefits offered by AVC. In contrast, since Europe began its transition somewhat later in the game, it did use MPEG-2 for SD digital cable but predominantly uses AVC for HD.

Fast forward to 2013, when North American cable subscriber counts continue to drop quarter over quarter, and IPTV is surging in popularity with its vast array of content and the lure of rich applications enabled by bi-directional connectivity. The writing on the wall is clear to MSOs – they can either transition their primary business to broadband services, or dramatically reinvent themselves and the user experience they offer to remain relevant as mainstream Pay TV service providers. Therein lies the rub – how do MSOs invest meaningfully and strategically in infrastructure that will keep them at the state of the art over the next decade?

AVC has matured since its early days, and state-of-the-art AVC encoders can themselves offer twice the compression efficiency of first-generation AVC encoders. Transitioning to AVC is the most obvious route to grow the quality and/or quantity of Pay TV content without expensive expansions of bandwidth. (Arguably, technologies like Switched Digital Video are also options, but let's not complicate the discussion.) The problem is, this is easier said than done. Consider that the USA has approximately 56 million cable subscribers, with approximately two set top boxes per subscriber. Multiply that by a conservative $100 per replacement set top box, and the cost of transitioning end user clients alone exceeds a staggering $11 billion. For context, the total capital expenditure for North American cable in calendar year 2012 was less than $13 billion. Add to that the costs of truck rolls, upgrading head-ends, overhauling quality monitoring infrastructure, and more, and it's easy to see why no MSO wants to do this type of systemic upgrade twice. Which brings us to HEVC.
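That back-of-the-envelope math can be checked in a few lines. All inputs are the round figures quoted above, not precise market data:

```python
# Back-of-the-envelope check of the set top box replacement cost.
# Inputs are the article's approximate round figures.

subscribers = 56_000_000        # approximate US cable subscribers
boxes_per_subscriber = 2        # approximate set top boxes per home
cost_per_box = 100              # conservative replacement cost, USD

total_cost = subscribers * boxes_per_subscriber * cost_per_box
print(f"Client-side upgrade cost: ${total_cost / 1e9:.1f}B")  # → $11.2B

# For context: total 2012 North American cable capex was under $13B,
# so replacing client devices alone would consume most of a year's capex.
```

Note that this counts only the boxes themselves; truck rolls and head-end upgrades come on top.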

HEVC, in theory, promises twice the efficiency of AVC. Why, MSOs might ask, should we allow history to repeat itself and spend so much on one systemic upgrade when another disruptive technology is right around the corner? It’s a fair question, but let’s take a look at three of the key assumptions it is predicated on:

i) HEVC offers twice the compression efficiency of AVC: Well, yes and no. That’s the theoretical advantage, but practical encoders are only offering about 20-30% improvement on HD content and even less on SD content. That, by the way, is the same level of improvement that state of the art AVC encoders can offer over legacy MPEG-2 encoders at this point in time. Moreover, they can do this at a fraction of the cost, a fraction of the power consumption and a fraction of the rack space. Given that a large number of modern encoders are built in software (even if they are appliance form factors) rather than rigid hardware, CAPEX is not in jeopardy if a service provider upgrades to an AVC encoder immediately and eventually soft-upgrades it to HEVC when that ecosystem is mature and ready.
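As a rough illustration of those efficiency figures, here is a sketch of what a single HD channel might require under each codec. The 12 Mbps MPEG-2 baseline is an assumed typical rate, not a number from the text; the percentage savings are the practical (not theoretical) improvements cited above:

```python
# Illustrative per-channel bandwidth under successive codec generations.
# Assumption: a typical MPEG-2 HD channel runs around 12 Mbps.
# Savings figures are the practical improvements cited in the text.

mpeg2_hd_mbps = 12.0     # assumed typical MPEG-2 HD rate (hypothetical)
avc_savings = 0.30       # state-of-the-art AVC vs legacy MPEG-2, ~20-30%
hevc_savings = 0.25      # practical HEVC vs AVC, ~20-30% on HD content

avc_mbps = mpeg2_hd_mbps * (1 - avc_savings)
hevc_mbps = avc_mbps * (1 - hevc_savings)

print(f"MPEG-2: {mpeg2_hd_mbps:.1f} Mbps")
print(f"AVC:    {avc_mbps:.1f} Mbps")   # → 8.4 Mbps
print(f"HEVC:   {hevc_mbps:.1f} Mbps")  # → 6.3 Mbps
```

The point of the sketch is that most of the practical savings today come from the MPEG-2 to AVC step, which is available and affordable now; the additional HEVC step can follow as a software upgrade once that ecosystem matures.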

ii) HEVC products are being released very quickly, and if I do not transition I will fall behind the curve: There’s certainly plenty of buzz around HEVC; it’s arguably the hottest hash tag at IBC this year. However, there is a difference between first generation products that are a must-have for pilot testing, and a mature product ecosystem that enables mainstream creation, monitoring, delivery and storage of a compression format from end to end. The AVC ecosystem is ready and available today, and costs are falling rapidly as commoditization sets in. The opportunity cost of waiting three years for HEVC products to mature needs to be weighed against the ability to cost-efficiently purchase and deploy AVC infrastructure immediately.

iii) UltraHD is coming, and HEVC is the key enabler: Again, yes and no. Certainly twice the compression efficiency is critical if you are quadrupling resolution. HEVC’s flexible transform unit size is ideally suited to compressing UltraHD content. However, there are catches. First, if a service is only deploying one or two channels in the short term, there is usually enough bandwidth already available to achieve this via AVC. Second, there’s not enough UltraHD source content available yet to justify the deployment of content beyond – most likely – nature, sports and movies. If that. With global penetration of HD itself at under 33 percent despite the age of the technology, expecting a more rapid pace of deployment for new UltraHD technology is, well, optimistic. Third, there are gaps in the technology ecosystem – for example HDMI 2.0, which is necessary to enable full UltraHD rendering, has not yet been finalized. So UltraHD may be coming, but it’s not something that will happen as a mainstream movement tomorrow morning.

The metrics behind these assumptions will definitely change over time, and the ROI that HEVC can deliver will definitely improve over time. While it’s clear that HEVC is a solid technology advancement and no mere flash in the pan, it is important to keep in mind that a mature ecosystem takes time to develop. By all means, MSOs must begin evaluating HEVC as a key technology component for future infrastructure. However, in our measured opinion, there’s little reason to consider jumping straight from outdated MPEG-2 to unproven HEVC. AVC offers concrete benefits immediately, and by selecting software-based products during this upgrade, MSOs can ensure long-term, future-proof returns on infrastructural investments.

Questions? Schedule a virtual client briefing with us today, or browse through our extensive research coverage of video technology trends and markets.

"Too Big To Fail" Applies to the Tech World Too

25 Aug 2013

As economies worldwide are staging a fragile if promising recovery, economists continue to chant the mantra of breaking up institutions - specifically, banks - that are "too big to fail".

Technology has given rise to many other corporations that are similarly too big to fail. Not because their failure would bankrupt or disrupt a financial ecosystem, but because their closure or failure would fundamentally disrupt and cripple a huge number of businesses in the United States, and maybe even in the world.

Microsoft is one that easily comes to mind. Should Windows cease to be enhanced and supported tomorrow, the loss of productivity that would ensue across corporations and individuals alike is easy to imagine. Google is another. Certainly there are alternative search engines available, but there is nothing yet to touch Google's engine in terms of speed and accuracy. There is also no credible alternative to Google's advertisement and analytics capabilities, and its subsidiary businesses such as YouTube. More importantly, there is no sign that such alternatives can be created despite the willingness of competitors to sink in billions of dollars of R&D investment.

As industry reliance on cloud-based services grows, so too does the indispensability of cloud-based service providers. SalesForce.com, as an example, is a critical enabler of sales teams and customer relationship management systems. Its companion Force.com cloud service plays host to a significant volume of business-critical applications. Breaking news that an outage of Amazon's AWS service had taken down businesses as diverse as AirBNB and Vine (Twitter) emphasizes the depth and breadth to which businesses today rely on cloud infrastructure. With applications ranging from marketing automation to video transcoding, and from data processing to business-critical workflows, being implemented on AWS, Amazon is a close contender for the top of the list of technology companies that, today, are simply too big to fail.

There are other companies closely woven into our lives whose failure would cause tremendous personal nuisance and indirectly impact productivity. Yahoo! is exemplary in this category - it is easy to visualize the chaos that would ensue if Yahoo! mail were to close down (or even institute a monthly or annual fee for continued access) at some point. Network drive services such as Dropbox are quickly rising to a similar level of significance; Microsoft and Google are also significant players in this space. Facebook, LinkedIn, MySpace... the list of technology vendors that are indispensable parts of our lives goes on. As technology continues to become a pervasive part of our connected lifestyles, this list can only grow.

Challenges for Securing Revenue across Managed and Unmanaged Networks: A Look at NAB 2013 Trends

24 Apr 2013

NAB 2013 brought home three main themes for the digital media team at Frost & Sullivan:

1) There’s a clear drive toward delivering media & entertainment solutions that control bottom line costs while growing top line revenues.

2) Market conversations are shifting toward solutions focused on monetization rather than just products focused on delivery.

3) An HEVC demo is the new must-have booth accessory.

Monetization of video is a particularly interesting challenge, since business models in the OTT and TV Everywhere space remain experimental and online revenues have yet to become significant contributors to MVPD businesses. That said, our recently released study on consumer video devices (executive summary attached below) shows that the devices industry is already in the throes of realizing the lucrative potential of ubiquitous video.

Examining segments including set-top boxes, smart phones, tablets, game consoles, smart TVs, IP streaming devices and more, we found that total unit shipments in 2012 were well past the 1 billion mark, with total revenues exceeding $350 billion. With device shipments on track to triple by 2017, operators across the globe are grappling with how to bring their ubiquitous video offerings to this critical new ecosystem of unmanaged devices in a scalable, secure fashion. Unmanaged is the key word here – managed set-top boxes accounted for under one-fifth of all video-enabled devices shipped in 2012. At the same time, network traffic studies consistently show continued growth in long-form content consumption on unmanaged devices.
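A quick sketch of what those figures imply - the 2012 totals are the study's round numbers, and the derived values are simple arithmetic on them rather than study findings:

```python
# Rough implications of the 2012 device shipment figures quoted above.
# Totals are the study's round numbers; derived values are arithmetic.

units_2012 = 1.0e9         # total video-enabled device shipments
revenue_2012 = 350.0e9     # total device revenue, USD
managed_share = 1 / 5      # managed STBs: under one-fifth of units

avg_price = revenue_2012 / units_2012               # blended price per device
unmanaged_units = units_2012 * (1 - managed_share)  # devices outside operator control
cagr = 3 ** (1 / 5) - 1                             # growth rate if shipments triple 2012-2017

print(f"Blended ASP:       ${avg_price:.0f}")        # → $350
print(f"Unmanaged devices: {unmanaged_units / 1e6:.0f}M+")
print(f"Implied CAGR:      {cagr:.1%}")
```

Even on these rough numbers, roughly 800 million-plus devices per year ship outside operators' direct control, which is the scale of the security and support problem discussed below.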

Piracy, of course, is always a top-of-mind consideration for content owners and operators when deploying OTT/TVE services. The issue becomes more critical as live linear content and premium VOD content are delivered equivalently to managed and unmanaged devices in HD resolution.

It's not news to operators that, in contrast to the tightly controlled execution environment of set-top boxes, consumer owned and managed (COAM) devices are far more challenging platforms on which to secure content. Operators are cognizant of the need to support these myriad devices with compelling content offerings despite these challenges in order to minimize churn and remain competitive. The problem is, with revenues still small and business models yet unproven, operators are incurring this complexity and cost with limited upside ROI, particularly when they attempt to extend their traditional conditional access (CA) infrastructure to meet far more dynamic multi-screen needs.

In a white paper we just released, “Cardless Content Security: The Smarter choice for Hybrid Networks,” we examine how challenges like fragmentation of devices and networks and the need to deliver consistent user experiences across all screens can be more effectively overcome. We discuss industry-proven best practices in architecting security solutions for the next-generation ecosystem of multiple transmission networks and devices in a way that minimizes head-end complexity and ensures a future-proof investment.

We also look at how cardless CA and multi-rights DRM platforms are leveraging advances in software anti-tamper technology and silicon-based security measures to deliver cost-efficient, durable content security on the client side. The paper takes a close look at the VCAS solution from Verimatrix as an example of a best-in-class solution that delivers head-end simplification and scalability with robust client-side protection.

The futures of the devices market and the security market both promise to be interesting. Those at NAB couldn’t have missed the HEVC and 4K demonstrations that were running at nearly every booth. Widespread initiatives to deliver HD+ and 4K content to unmanaged devices raise a whole new set of content protection questions.

For example, screen captures of 4K content can easily yield very high quality SD content (perhaps even HD content) for recompression and subsequent piracy, and the incentive for professional hackers to pirate 4K content is thus much higher. Studios and content owners will almost certainly require stronger security standards in terms of encryption and usage enforcement for 4K content. At the same time, as we discuss in this same paper, it will also be important to rely on traitor tracing and piracy tracking technologies, such as watermarking and fingerprinting, to holistically manage this inevitable problem.

We will continue to track these developments in our research coverage of the encoding, transcoding and content protection markets. In the meanwhile, if you missed our recent webinar on our forecasted roadmap for products and services based on HEVC, you can catch the recording here.

