Use the blog to discuss and comment on the latest industry insights provided by our analyst experts.
[Video: ICT Cybersecurity, from Frost & Sullivan on Vimeo]

In more than 11 years of working with the Global 1000, the Fortune 500, and SMBs to develop custom market intelligence for strategic business planning, I have seen only a handful of issues as critical for global business as cybersecurity. Some organizations perceive that the intense focus on cybersecurity happened quickly, but that isn't the case. It developed out of a long and bruising security technology "arms race" that has pitted the business and government sectors against foreign governments, organized crime, hacktivists, and terrorist organizations.

Although the word cybersecurity is common in today's global business vernacular, far too many businesses and government organizations remain woefully behind the curve in securing their networks and data. This fact was highlighted recently in the article "Traveling Light in a Time of Digital Thievery," in which Mike McConnell, vice chairman of Booz Allen Hamilton, former Director of National Intelligence, and former director of the National Security Agency, said: "In looking at computer systems of consequence — in government, Congress, at the Department of Defense, aerospace, companies with valuable trade secrets — we've not examined one yet that has not been infected by an advanced persistent threat." It was also highlighted in another recent article, "Cameras May Open Up the Board Room to Hackers." I'll admit that I was originally just intellectually intrigued by that threat potential, until I found myself in a conference room in Silicon Valley last Friday, presenting strategic business recommendations to a client, with two video conferencing cameras at the end of the table. The point is that regardless of the environment, whether government, business, or personal computing (see the article "F.B.I. Admits Hacker Group's Eavesdropping"), the need to heighten cybersecurity defenses in all walks of life is vital.

The question many are asking is: "Can cybersecurity be adequately addressed by the private sector alone?" The answer, from my perspective, is an unequivocal "no." Some areas of cybersecurity will never deliver a hard, quantifiable, positive return on investment for private enterprise, which is why government "encouragement" and, in some cases, regulation will likely become a necessity. As a Principal Consultant, I am acutely aware of the inefficiencies of government. However, I am also intimately familiar with private enterprise's obsession with quarterly earnings and how it can prevent a company from aggressively implementing a comprehensive cybersecurity program. Unfortunately, organizations no longer have the luxury of debating and fighting over how to partner with government to counter the cybersecurity problem. Leaders from government and enterprise must find a way to work amicably toward a common goal, because the threat is getting worse by the day.

In my recent market insight, "Cybersecurity: A Global Economic Crisis," frank discussions I had with high-level executives revealed that discovery of new malware around the world has skyrocketed, growing at a compound annual rate of around 100 percent (Figure 1). This malware is responsible for what many experts believe to be the largest transfer of wealth in the history of the world: it is doing more than stealing credit card numbers, it is stealing intellectual property from nations whose economies rely on innovation to grow (Figure 2).
When innovation is stolen, service-oriented economies will go bankrupt, and where severe economic chaos develops, the potential for armed conflict is never far behind. This is why cybersecurity will be the most discussed topic at the 2012 RSA Conference in San Francisco this week. It is also why everyone, regardless of profession, should pay attention to the cybersecurity debate now taking place in capitals and business centers around the world.
by Peter Finalle 24 Feb 2012
The BlackBerry PlayBook OS is finally receiving an update that will help RIM level the playing field against Apple and Android tablets. The update largely addresses the shortcomings of the initial device, which was criticized for the absence of native email, the limited number of available applications, and inadequate standalone functionality. PlayBook OS 2.0, however, puts RIM back on offense and is an important incremental step toward the full BBX migration that will unify the ecosystem for all BlackBerry devices. With RIM's strong footprint in the enterprise sector and the PlayBook's competitive price point, BlackBerry has the potential to garner interest from users who are looking for a device that offers features and functionality similar to Android and Apple tablets, but with added enterprise support.

One of the most pressing issues with the original PlayBook was the limited number of apps available for the QNX operating system. Although the device is robust and the operating system worked well, it could not leverage any existing BlackBerry phone apps and fell extremely short of rivaling Android or Apple. The latest update, however, offers native emulated support for Android applications, making one of RIM's biggest competitors also one of its biggest allies. Although support is not 100 percent, this feature is expected to continue to expand and be refined as an integral capability of the operating system. RIM estimates that 65 percent of the 250,000+ Android Market apps will be fully compatible with PlayBook OS 2.0 and BlackBerry 10.

Another important issue with the original PlayBook OS was its reliance on other BlackBerry devices, resulting in a tablet that was deficient in the hands of users who had no BlackBerry smartphone to control the PlayBook's email functionality. Fortunately, the latest revision of the operating system includes a far more robust native email platform with new features, such as a unified inbox to simplify the management of personal and work accounts. Moreover, users can multitask within email, referencing one message while composing another. Through this new email platform, RIM has done more than merely add features: the device can now more adequately address the needs of users who own devices outside the BlackBerry ecosystem. The PlayBook is thus able to compete for the same end users that Apple and Google compete for, while retaining RIM's inherent competitive edge in the enterprise market.

Just as important as the upgraded OS capabilities is the way RIM is deploying the new operating system. Android OS updates are often used to pressure customers to buy new hardware. RIM, however, will be releasing its new OS for the original PlayBook device. This commitment to existing customers is on par with Apple's, and serves as a significant differentiator against the many Android devices that are often considered old less than six months after their release. If RIM continues to improve its devices while supporting all relevant hardware, the company may begin to attract frustrated Android users who will appreciate an ecosystem that does not cycle through devices every six months.
by Todd Day 16 Feb 2012
Both Apple and Google are working to merge the computer world (Macs, PCs, desktops, and laptops) with the mobile world (tablets and smartphones) through iOS and Android/Chrome, respectively. Over the past couple of years, the smartphone and tablet market has grown rapidly. At the heart of that growth are three simple factors: mobility, simplicity, and convenience.

In terms of mobility, smartphones and tablets can be taken everywhere, and traditionally the smaller smartphone more so than the larger tablet. Both mobile device types, however, are obviously more portable than a laptop, a factor that has played a large role in their success. The simplicity of clicking on an icon to check Facebook or look at email has also played a huge role; in fact, one of the major factors in the success of Apple's iOS products is their ease of use. Finally, most consumers like the convenience of being able to quickly check these things and respond without having to stop, take out their laptop, and log onto a website.

Laptops and desktops differ from tablets and smartphones in that their focus is content creation rather than content consumption. Although some apps let you edit pictures, work on spreadsheets, create presentations, and so on from mobile devices, most people still need advanced software (Office, Photoshop, etc.), a large screen, a full keyboard, and mouse functionality.

Finally, we have seen the growth and instant success of "cloud services" through iCloud, SugarSync, Amazon, Dropbox, and others. The general concept behind the success of cloud services is content availability: consumers want access to all of their documents, photos, email, music, videos, and so on across any and all devices they own. The idea of "seamless syncability" of content has made it into mobile devices, and, through apps from the aforementioned cloud services companies, into computers as well, to a certain extent.

The next logical question is: "Why can't computers have the simplistic interface that smartphones have, yet still be used for more advanced software?" The answer is, "they can." I believe Apple and Google agree, which is why both are interested in bridging that gap. Ultimately, we will still have desktop computers and laptops; however, the software running on them will likely change so that consumers can download pictures, work on a spreadsheet, and check email on their home computer, then open up their mobile device and have everything look the same, with the same content. What comes after that? Likely dummy devices, all with web-based operating systems, that anyone can log into and immediately pull up their profile, content, and settings. This is similar to the "roaming profiles" many enterprises have used for years, where multiple employees share office computers: when they log in, their files, desktop settings, and so on all show up. Web-based operating systems could do that on a global scale.
by Brian Cotton 15 Feb 2012
In last week's blog, I wrote about the vision of a technology-enabled collaborative government model, in which shared information drives intelligence, policy, and action across bureaucratic boundaries. In an urban context, this means an integrated governance strategy that can provide better service to citizens and businesses, and support a more streamlined and fiscally efficient government. The concept is appealing, but how is it actually realized?

The technology cornerstone of collaborative government is an intelligent operations center that is both an information warehouse and an analysis and decision-support system. As a warehouse, the platform gathers data from a large number of sources, including archived data, transactions and events, and real-time streaming (big) data such as that from sensors deployed across city infrastructure. It processes this data, removing duplicates and creating master records (single views), and facilitates securely sharing that data across applications and between city departments. Embedded within the system, other tools can produce reports and dashboard views for city managers and executives.

The platform is truly intelligent, and truly able to power collaborative government, when analytics are applied to correlate events and detect subtle patterns in the ocean of data. Predictive modeling amplifies the utility of the system in simulation activities, enabling complex "what if" planning for operational and budgetary purposes. When these capabilities are coupled with detailed monitoring of critical factors, such as weather, traffic, healthcare, or economic conditions, city managers are much better equipped to understand and direct services and operations across an entire city. The system becomes even more robust when it facilitates cross-agency communication and collaboration for integrated infrastructure maintenance and incident response to natural disasters and other events. Housed in the heart of a city's management domain, it provides managers and officials with city-wide visibility across a web of interdependent city services and stakeholders, as illustrated in the IBM figure below.

[Figure: An Intelligent Operations Center for Collaborative Urban Government. Source: IBM]

City operations centers are not new concepts, but they have tended to focus on discrete domains, such as emergency services or traffic operations. Recently, however, the technology has been applied by companies such as IBM to encompass and enable complex interdependencies across a wide set of governance domains. Cities such as Rio de Janeiro and Incheon are leading the charge by piloting platforms to realize integrated, collaborative governance models.

Although an intelligent operations center supporting collaborative government may seem suited only to large cities with massive IT budgets, it can be deployed in a number of ways to address the needs of a wide range of municipalities. For larger cities, an on-premise deployment gives city managers extensive flexibility over the center's configuration and operation. For medium-sized cities or clusters of suburbs in a metropolitan area, it can be deployed in a shared-service arrangement so that many jurisdictions can collaborate on a regional basis. For smaller cities, it can be hosted in the cloud, freeing the city from the time and expense of buying and managing it itself.
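To make the "master record" processing described above concrete, here is a minimal, illustrative sketch of folding departmental feeds into a single view per citizen. The field names, data, and merge rule are hypothetical assumptions for illustration; a production platform would rely on dedicated match-and-merge tooling rather than anything this simple.

    # Illustrative only: fold records from multiple departmental feeds
    # into one "single view" per citizen, keyed on a shared identifier.
    from collections import defaultdict

    def build_master_records(feeds):
        master = defaultdict(lambda: {"sources": []})
        for department, records in feeds.items():
            for record in records:
                view = master[record["citizen_id"]]
                view["sources"].append(department)  # track provenance
                for field, value in record.items():
                    # Naive rule: the first value seen for a field wins.
                    view.setdefault(field, value)
        return dict(master)

    feeds = {
        "transportation": [{"citizen_id": "C-1001", "address": "12 Elm St"}],
        "social_services": [{"citizen_id": "C-1001", "benefits_case": "open"}],
    }
    print(build_master_records(feeds))
    # {'C-1001': {'sources': ['transportation', 'social_services'],
    #             'citizen_id': 'C-1001', 'address': '12 Elm St',
    #             'benefits_case': 'open'}}

The point of the sketch is the "single view": two departments hold fragments about the same citizen, and the hub joins them on a shared key so downstream analytics and dashboards see one consolidated record.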
However it is deployed, an intelligent operations center can transform and modernize government structures and enhance governance capabilities. It can add a great deal of value to a city by supporting collaboration across government departments with a digital infrastructure linking city services and stakeholders. Managers and leaders are able to see in detail what is happening in their city, through cause-and-effect relationships made visible by the technology. Importantly, by streamlining processes, managers can reduce unnecessary delays in service delivery, improve the quality of services, and reduce waste. Special thanks to Greg Milwid of IBM Software Group Industry Marketing – Government, for his insightful feedback on a draft of this blog entry. Stay tuned for future blogs exploring empirical evidence about why collaboration makes government better. A new paper looking at collaborative government, intelligent operations centers, and intelligent transportation systems will be released soon; follow me on Twitter @BrianCotton1 for the announcement of its release.
by Jake Wengroff 12 Feb 2012
A video of me discussing social media and demand generation during a meeting at the offices of Bulldog Solutions. Some survey results regarding social media objectives are also presented.
by Ben Ramirez 09 Feb 2012
It was recently announced that the DMARC (Domain-based Message Authentication, Reporting & Conformance) organization, consisting of AOL, Facebook, Cloudmark, LinkedIn, Bank of America, PayPal, and other leading organizations, has proposed a new operational specification for the current email authentication infrastructure. In short, the proposal would ensure that email senders can prove they are indeed the true originators, and that receivers can take appropriate action when a message fails authentication, marking it as spam or junk or rejecting it altogether. The grand aim of the new specification is simply to reduce, or even permanently eliminate, spam email. So will this new email authentication protocol actually work?

Interestingly, the proposed changes borrow two fairly old email authentication technologies that have not been widely adopted: the Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). Taking a deep-dive approach, I will give you an overview of what each is about, along with its weaknesses and flaws as a stand-alone solution.

SPF allows a domain owner to publish, in the Domain Name System (DNS), the addresses of the hosts authorized to send email for that domain (for example, only the host computers I designate as authorized senders for ben.com will be accepted by receivers as legitimate sources of ben.com mail). However, SPF has a problem handling forwarded email messages. This occurs when the sender changes to another ISP and simply begins forwarding emails from the original address through another, non-specified server. With no end-to-end authentication, there is no true way to determine the origin of incoming email. Spammers can also take advantage of SPF's weakness by forging email addresses and registering those records in DNS servers in order to pass SPF checks. Another issue is the belief that SPF acts as a spam filter, when it does not: SPF only detects forged sender domains, so an endpoint security solution is still needed on receiving hosts. More importantly, SPF is still considered experimental and is still in its development phase, according to RFC 4408.

DKIM is a sender email authentication framework that relies on public-key cryptography, covering a digital signature, a domain name, and the email contents (header and body). DKIM (usually installed in the email server) signs the message using a private key. The public key is stored in DNS, where the receiver can retrieve it and validate the signature to confirm the genuineness of the sender. The whole point is to ensure that the email has not been modified in any manner and that trust is established between sender and receiver. The problem: forwarding once again poses a challenge for DKIM. If a DKIM-signed email is forwarded through another mail server (such as a BlackBerry server), the message may be modified with added tags (e.g., "Sent from my BlackBerry device"), and the result is a false positive: the receiver flags the email as tampered with and will most likely reject it. Also, in terms of availability and performance, CPU and RAM resources must be considered when cryptographic functions are processed across very large volumes of email, and DNS servers themselves are prone to DDoS attacks, causing delays and, even worse, data loss.

DMARC will attempt to close these two gaps.
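Before turning to DMARC: for readers who have not seen these mechanisms in the wild, both SPF and DKIM policies are published as DNS TXT records. The records below are illustrative sketches using placeholder domains and a truncated key, not anyone's production configuration:

    ; SPF: authorize a network range and a third party to send for example.com
    example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"

    ; DKIM: publish the public key receivers use to verify message signatures
    selector1._domainkey.example.com.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."

A receiving server checks the connecting IP address against the SPF record, and verifies the DKIM-Signature header on the message against the published public key.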
DMARC layers policy and reporting on top of these frameworks: it improves the assurance and guidance between sender and receiver so that failed email messages can be managed consistently, with the goal of reducing, or even possibly eliminating, spam and phishing attacks relative to today's standards. What I see as a future problem is lack of adoption, along with the persistence of spammers in circumventing the new proposal. Although many leading companies have joined in, it will take a long time for all mail and DNS servers to standardize on the new protocol, mostly because smaller organizations with less capital will adopt at a much slower rate. Email attackers could also still craft a genuinely DMARC-compliant email for phishing attacks; how DMARC will address this remains to be seen. Nevertheless, this is one positive step in addressing an old problem that has been a thorn in the side of almost everyone in the world.
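A DMARC policy is likewise published as a DNS TXT record at a well-known name under the sending domain. Again, this is an illustrative sketch with placeholder values:

    ; DMARC: ask receivers to reject mail that fails SPF/DKIM checks,
    ; and say where to send aggregate feedback reports
    _dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

The "p" tag carries the requested policy (none, quarantine, or reject), and the "rua" tag names the address to which receivers send aggregate reports, giving senders visibility into authentication failures across the mail they originate.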
by Brent Iadarola 06 Feb 2012
BlackBerry smartphones running the 7.0 and 7.1 operating systems were recently awarded FIPS 140-2 certification by the National Institute of Standards and Technology (NIST) and the Communications Security Establishment Canada (CSEC). While the FIPS 140-2 and Common Criteria certifications were expected deliverables from RIM, there are some important takeaways from the announcement:

Security Is Still RIM's Bread and Butter. FIPS 140-2 is a security standard used to accredit devices or modules that include both hardware and software components. In both the US and Canada, FIPS certification is required before a device can be used by a government agency. These are expensive processes to complete, in terms of both financial expenditures and resources, and RIM's recent accreditation demonstrates a continued commitment to industries such as government, financial services, and healthcare, which inherently disseminate highly sensitive information. With BlackBerry 7.0 and 7.1 devices and the PlayBook now certified under the FIPS program, government agencies and other highly regulated verticals are more apt to deploy an expanded RIM product portfolio.

The Certification Reinforces RIM's Commitment to the Government Sector. There has been speculation that some government agencies have become increasingly concerned over the long-term viability of RIM, fueled by highly publicized network outages, questions about BlackBerry's future OS roadmap, and the company's overall financial stability. Some analysts have even suggested that RIM may be phased out of certain government agencies as quickly as practical. The reality, however, is that RIM maintains an extremely strong government foothold, with over one million active North American government users. Scott Totzke, senior VP of BlackBerry security at RIM, has indicated that RIM continues to see "steady and incremental growth" in the federal sector in terms of both new subscriber acquisition and refresh business. Churn rates are substantially lower in government than in other verticals. Moreover, the recent security certifications only reinforce RIM's commitment to the sector, so RIM's foothold is unlikely to deteriorate any time soon.

The Competitive Environment Intensifies. Nevertheless, the mobile device landscape is evolving, and it is inevitable that federal agencies will increasingly evaluate alternatives to BlackBerry, such as Android and iOS devices. The Department of Defense, for example, has developed a secure kernel for the Android 2.2 OS with FIPS 140-2 capability and is currently testing a variety of customized applications. Military contractors Harris and Intelligent Software Solutions (ISS) are actively developing applications for the iPhone, iPad, and the Android platform. The diversity of mobile devices and the overall competitive environment will only continue to intensify, and although adoption may move more slowly than in other vertical markets, a more heterogeneous mobile environment in government is inevitable.

How Can RIM Maintain Its Strong Government Foothold? RIM's announcement of Mobile Fusion was an acknowledgement that the company has (finally) come to terms with the growing diversity of mobile devices in the enterprise. As in the early days of enterprise mobility, RIM was once essentially the only game in town for government employees. Times have changed, however. It is therefore critical that RIM stay ahead of the curve in the government sector and be proactive rather than reactive with respect to evolving trends toward device diversity.
RIM currently has a number of enterprise beta customers for BlackBerry Mobile Fusion, but none yet in the federal sector. So, some advice for RIM: leverage your existing foothold in government by continuing to emphasize and enhance core competencies, such as advanced security capabilities and commitment to the most stringent security standards, but stay ahead of the curve by aggressively moving forward with Mobile Fusion for government.
by Brian Cotton 06 Feb 2012
Government is under a lot of pressure: pressure to deliver services to an aging population, pressure to control spending, and pressure to be more relevant and accountable to citizens. Collaborative government is a strong thread in the discussion of how government can transform to achieve these things (see, for instance, the Mowat Centre's report on the Canadian context). What is collaborative government, and what can information technology (IT) contribute to making government more collaborative?

One view of collaborative government is that the silos and barriers between administrative departments, agencies, and bureaucracies need to be modified to permit cross-jurisdictional cooperation. Traditional silo-based models of governance would shift to newer collaborative models that enable government to rapidly and efficiently develop, implement, and manage services. These models place government in an ecosystem that facilitates interaction between internal agencies and the external private sector, including communities, academia, non-governmental organizations (NGOs), and, at the national level, foreign governments. In this view, society is changing, with and because of technological advances, and governance models need to change with it.

In response, some leaders in government are recognizing an imperative to transform their governance models to take advantage of the forces of change. Jennifer Granholm, the Governor of Michigan from 2003 to 2011, called on her peers to recognize and embrace this opportunity: "The 21st century economy is all about speed, access, intelligence, and efficiency. A 21st century government needs to be about the same things." At the core of this is sharing intelligence and analysis across the ecosystem, with speed based on real-time data access and analysis, and capabilities optimized to specific domains or functions of government, all running at a high level of efficiency. Government itself becomes a smoothly functioning system that promotes economic growth by streamlining and simplifying processes and reporting requirements. It also delivers citizen-centered services, both in offices that handle multiple types of services and by providing high-demand transactions over the Internet. These imperatives play out at all levels of government, but will be most acute at the urban level, where the interplay between stakeholders is particularly close and takes place on a daily basis.

Legislative changes are needed to lower some of the barriers and encourage collaboration, but IT can be a critical enabler of collaborative government. The central premise is that data, and the analysis of it, produce intelligence that can be shared to control assets and services, issue alerts in emergencies, and develop action plans and strategies. In a technology-enabled collaborative model, a shared intelligence hub continually takes data from various government domains (e.g., transportation, first responders and public safety, healthcare, education, social services, executive governance). That data is processed to create a single master "view" of a program, a geographic area, or each citizen, for instance. Other relevant data, such as education programming or traffic information, is used with these records in monitoring and planning, to generate predictive models or simulations, or to forecast expenditures for budgeting. This rich source of information is shared, as permitted, among the government departments that need it, in integrated and open governance models.
Such collaboration can create several levels of benefit. For citizens, dealing with any one department (e.g., renewing a driver's license) can be linked to other processes that use similar information (e.g., benefits assistance), and services can be delivered faster and on a more personalized basis. Businesses could receive permits or pay taxes faster, with less paperwork and duplicated effort. Government workers would spend less time on redundant tasks and more time serving citizen clients. Efficiencies can free up budgets, and leaks in expenditures can be identified and eliminated. Governments that are transforming to collaborative models need an IT infrastructure that supports them: a smarter way to use the rich data available today, one that also ensures the security and privacy of citizen and business records. IT can help deliver on these promises, and help government be more relevant and accountable in a changing world. For more information and examples of technology-enabled collaborative government, please download a free copy of my recent paper, Smarter Computing to Support 21st Century Governance.
by Chris Rodriguez 03 Feb 2012
Bounty hunter. The term conjures up images of anti-heroes and rogues with awesome weapons and tactics, in pursuit of the most dangerous criminals (or maybe just mullets). So it may be a surprise that the network security industry has its own version of these vigilantes for hire. Only, instead of hunting dangerous criminals, independent researchers are tasked with discovering and reporting software vulnerabilities.

Unlike modern bounty hunters, an independent researcher only needs to decide to hunt for software vulnerabilities, then go out and do it. When researchers discover a software bug, they can choose to report it to the appropriate authority (a software vendor, a security company, or US-CERT). For years, independent researchers did this for peer recognition or out of the goodness of their hearts. However, it soon became clear that less scrupulous researchers did this to find and exploit vulnerable programs or to sell vulnerabilities to criminal organizations. The value of these black-market transactions can only be estimated, but the most high-impact vulnerabilities can sell for as much as a million dollars. To combat this practice, security vendors developed bounty programs to encourage responsible disclosure and increase research efforts.

While many security companies have offered bug bounty programs, there is now a growing trend of software vendors offering rewards of their own. For example, Facebook announced its own program in 2011 and has already awarded $190,000 for original, responsibly disclosed vulnerabilities. Google has awarded $700,000 since its bounty program debuted in 2010. Microsoft refuses to pay for individual vulnerabilities, instead offering a single lump sum of $250,000 for a new strategy that can block an entire class of vulnerabilities. Whether it works or not, Microsoft's approach focuses on a long-term solution, which is admirable. Unfortunately, it neglects the immediate threat. Vulnerabilities are, in effect, the keys to a successful cyber attack; the search for these bugs is therefore a constant arms race. By refusing to pay for vulnerabilities, Microsoft reduces the available outlets for responsible disclosure. Fortunately, companies such as HP TippingPoint and VeriSign iDefense are able to pick up this slack with their highly successful vulnerability bounty programs.

In the end, there is a market for everything. The reality is that software vendors face deadlines, budget constraints, and increasing pressure for new features and capabilities, all of which ensure that they will never produce flawless programs. Software vendors must improve secure development processes, increase quality testing, and hire penetration testers (and, overall, they have already shown tremendous improvement in this area). This will reduce the number of vulnerabilities, especially the "low-hanging fruit." But there will always be software bugs. Vendors must be ready and willing to reward researchers for their efforts, or prepare to lose business after the next data breach.

****

Industry Analyst Chris Rodriguez can be found knee-deep in spreadsheets or messaged here. For additional information about vulnerability research, check out Frost & Sullivan's quarterly study, Analysis of the Global Vulnerability Research Market in Q3 2011, or learn more about Network Security.
by Vidya Subramanian Nath 02 Feb 2012
Wall Street and many worldwide have waited long enough for Facebook (FB) to go public. Now that the company has filed for its IPO, there will be incessant chatter on its success, its 800-million-strong user network, its profitability, and its (hidden) policies and agenda. Though an analyst, I don't want to examine or comment on its valuation; many IT/Web companies have seemingly inflated values, and it is hard to debate them. However, I do not agree with many of my peers in the analyst community and beyond that FB is a bubble just waiting to burst.

The success of a company in this industry depends on creating a product, solution, or platform that is universally accessible and used worldwide. Facebook goes beyond that. It created a network of people that spans boundaries of region, culture, attitude, and age: a vibrant virtual club where it is fashionable to hang out. Its clutter-free, clean Web look and its multimedia UI, for users as well as developers, have endeared it to all and eventually broken the backs of its top competitors (MySpace and Orkut). It has virtually no competitor and is the homepage of millions of screens on earth.

However, FB has the daunting task of keeping up its popularity as well as its innovation. I am among the very few users who have become complacent with it. After a phase of logging in every two minutes and finding friends (comfortable in the knowledge that I have more than 100 friends in the world), I am lazy with my stop-ins and don't find it virally addictive anymore. I realize that I am perhaps one of just the 10-15% of users who visit Facebook rarely (and some have completely deleted their accounts). In perhaps the world's largest game of public Chinese whispers, there are many who find FB useful to "pass time": shopping, gaming, and reading, among other things. FB will have to continue to attract applications that can be unavoidably useful (such as Google search), yet fun enough to impose continual recall. Fun is the key word. I was amused to read Zuckerberg's comment that FB was built to be a "social mission." My preacher has a social mission, not FB.

About 45% of FB's user demographic comprises users below 25, and another 20% are below 35 years of age. Considering that FB started as a network for friends to "hang out" and communicate, therein lies its ultimate value proposition. Zuckerberg is right when he implies that it is a platform driven by people. FB didn't connect millions of people; people connected to each other through it. FB didn't start the Arab Spring; its users did. FB needs to continue to focus on making tools that people can constantly use to drive their thoughts and their actions.

Another thing FB will have to be careful about is avoiding the realm of "networking," which, unlike social networking, is work and can become intrusive. Companies that "leech" onto the platform to market to their audiences find this hard to resist. FB's marketing and messaging are gradually showing signs of "corporate" maturity; if that continues, its sheen will wear off. FB's USP, unlike Google's, lies in its youthful communality, irrespective of the age of its users. It could well become a service provider, or a solutions provider, but it will have to do so without branding itself as one. Else it will lose its magic.

My suggestion: grow, yes, but...please don't grow up.
by Jake Wengroff 01 Feb 2012
This blog post first appeared on Social Media Today.

Janrain and Gigya Ease the Pain of Password Amnesia with Social Login, While Providing Rich Profile Data to Publishers and Brands

No doubt you've visited some of your favorite websites and forgotten your password. And we all know that the "Forgot your password?" link is yet another nuisance, because it requires creating a new password – which hopefully you'll remember. Enter social login: the option to "Sign in with Facebook" or "Sign in with Twitter." I recently had a chance to catch up with the two largest providers of this social plugin functionality, Janrain and Gigya, to discuss what this means for marketing, publishing, and beyond.

For some insight into consumer adoption of social login, Janrain conducted a study with Blue Research and learned that a whopping 86% of consumers are bothered by registering at a website, and four in five people are frustrated by the need to create new accounts when registering. Further, 88% admit to having given incorrect information or left forms incomplete when creating a new account at a website (I admit: I've done this in the past), and 9 in 10 people (versus 45% in the 2010 study) admit they have left a website when they forgot their password or login info, rather than try to recover it. The ability to log in using an already-familiar username and password – one's social network credentials, such as those for Facebook or Twitter – could ease the pain. Indeed, according to the Janrain study, almost eight in ten people want social login to be offered as an alternative.

The Bigger Picture, and Bigger Data

But for marketers, social login is only one small part of the story, as I learned from these companies. Social login, along with its associated plugins, feeds, and analytics, provides access to a rich trove of data that can be used to fuel marketing strategies, advertising creative and ad serving, and content and product recommendations. As we all know, feedback, opinions, and endorsements drive sales, and what better to draw from than a user's social graph?

Social login contributes to a growing set of solutions known as social CRM. While in traditional CRM sales and customer service professionals are responsible for updating the database and populating it with information that leads to more sales or higher customer service levels, social CRM uses social media, and the information collected from public-facing social networks, to capture data about customers and prospects. Analytics are applied to this data to predict behavior, and the company can then decide to engage, interact, and ultimately drive those customers to sales channels. Counting tweets or Facebook wall posts certainly helps companies understand how their brands and products are received in the market (I recently completed a study on the social media monitoring solutions market, which should be published shortly), but having access to the complete social profiles of the people doing the tweeting or Facebook updating is far, far richer.

As such, Janrain and Gigya are on the forefront of the social CRM revolution. With advanced analytics products beyond social login, they are well positioned to integrate their solutions and add a layer of data that can drive overall marketing, content, and product strategy. "Social profile data is an emerging category of data, and delivers more insights into registrants and clients," notes Lisa Hannah, director of marketing for Janrain.
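For readers curious about the mechanics, social login offerings of this kind are generally built on the OAuth family of protocols. The sketch below shows, in rough outline, an OAuth 2.0 authorization-code flow; the endpoint URLs, credentials, and scope names are placeholder assumptions, not any vendor's actual API.

    # Illustrative OAuth 2.0 authorization-code flow behind "social login".
    # All endpoints, IDs, and scopes below are hypothetical placeholders.
    import json
    import urllib.parse
    import urllib.request

    CLIENT_ID = "your-app-id"            # issued by the identity provider
    CLIENT_SECRET = "your-app-secret"
    REDIRECT_URI = "https://example.com/auth/callback"
    AUTHORIZE_URL = "https://provider.example/oauth/authorize"
    TOKEN_URL = "https://provider.example/oauth/access_token"

    def login_redirect_url():
        """Step 1: send the visitor to the provider's consent screen."""
        params = urllib.parse.urlencode({
            "client_id": CLIENT_ID,
            "redirect_uri": REDIRECT_URI,
            "scope": "email,public_profile",  # profile data the site requests
            "response_type": "code",
        })
        return AUTHORIZE_URL + "?" + params

    def exchange_code_for_token(code):
        """Step 2: after the visitor approves, the provider redirects back
        with a short-lived code, which the site trades for an access token."""
        data = urllib.parse.urlencode({
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
            "code": code,
            "grant_type": "authorization_code",
        }).encode()
        with urllib.request.urlopen(TOKEN_URL, data=data) as resp:
            return json.load(resp)["access_token"]

The access token is then used to fetch the user's profile from the provider's API, which is precisely the rich trove of profile data described above; providers like Janrain and Gigya abstract this flow across many networks so the publisher does not maintain it per network.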
Clearly, marketers have a significant opportunity to increase conversion rates and online engagement by replacing traditional registration with social login. Both Janrain and Gigya have data on the increased engagement, interactivity, and conversion of users who have brought their entire social graph into their web experience; Gigya has an infographic here about social login and site engagement. However, if the social networks' APIs are (for the most part) free, why would a company need to engage a provider like Janrain or Gigya? "This is not set-it-and-forget-it technology," explains Victor White, senior marketing manager for Gigya. "Clients do not have full-time developers or engineers to ensure that this technology can be implemented and maintained, and it saves them a lot of time in development resources." Both companies price on a sliding-scale SaaS model.

Not for Everyone

The downside, of course, is that not everyone uses social login – perhaps because they are OK with remembering yet another username and password (23% of the respondents in the Janrain survey think that websites should not offer social login instead of a traditional registration process), or because they are aware that their social data would be shared and are concerned about privacy. Another downside is that in certain industries or sectors, social sign-in just doesn't work. Would you sign in to your online bank account with Facebook? Hardly. The B2B space will see a slower uptake of social login, as personal information scraped from a personal Facebook profile most likely holds little value in B2B or professional services markets. However, a "Sign in with LinkedIn" functionality is available, and while we haven't really seen much of this – yet – I expect we will. Salesforce.com also has an open API, and, interestingly, a "Sign in with Salesforce" option might also become pervasive in the B2B space.

Social login is global, too. Gigya has relationships with Mixi (Japan), Orkut (Brazil), and VKontakte (now rebranded VK.com, in Russia) for access to social profiles and data. Fascinatingly, the Chinese social network RenRen has an open API and is also on board. (So much for the secrecy of social networking in China.) Finally, the ultimate purveyor of social CRM is perhaps Google. With the launch of Google+, and with data combed from activities on YouTube, Google Docs, Picasa, Maps, and other applications and sites – all through the "social login" of a Gmail account – Google is building perhaps one of the largest social CRM databases. As such, Google had social login figured out quite some time ago.