With the release of four new Kindle devices today, much of the media buzz surrounding Amazon is focused on the introduction of its flagship IPS-screened tablet, the Kindle Fire. While the price point seemed to surprise everyone, even those who had handled the device before today’s announcement, the real story in my mind is the deep integration of Amazon’s core content into the Kindle Fire, with Google relegated to a mere stepping stone running the tablet’s backend.
Nowhere is this more apparent than in Amazon’s own Appstore, which straddles the line between the Android Market and the more holistic App Store approach taken by Apple. Chris Ziegler elaborates on why this matters:
Initially, I’d figured the Appstore wouldn’t be much more than a way for Amazon to earn a little coin off the booming, highly profitable mobile app business. Instead, they’ve picked the very lock that Google uses to control Android. By creating a legitimate Market alternative with over 10,000 apps (at last count) and the full backing of the Amazon juggernaut, Jeff Bezos no longer needs a thing from Google.
Google’s own ecosystem is what many of the large tablet manufacturers (your Motorolas, Samsungs, LGs, etc.) rely on to sell tablets; without it, there is literally nothing to run on them, OS or otherwise. The problem with this approach is that even with the Android Market, very little content is seamlessly available on this breed of tablet out of the box, and as a result, tablet manufacturers have been left to differentiate themselves from the competition (namely the iPad) through claims of hardware superiority and performance.
But Amazon’s approach is different. They understand that they are specialists in content, whether it be Kindle books, streaming video, music, or the backup of all of these services through Whispersync. And it is from this position as a purveyor of content that they have chosen to move forward, as John Gruber points out:
Attack from a position of strength. Build on your previous successes. That’s what Apple does. That’s what Amazon is doing here. The other guys — the Samsungs, HTCs, Motorolas, RIMs — can’t match Apple’s hardware design, don’t even try to match Apple in terms of original and differentiated software, and struggle to match Apple’s prices because they don’t have the economy of scale advantages Apple does. Those guys can’t match Amazon either, because they have no content to sell. Amazon can give away the razor because they’re already in the business of selling blades. The other guys don’t even have blades to sell.
In my mind, the Kindle Fire represents not a direct competitor to the iPad, but a direct competitor to the very platform it’s based on. By undercutting the market and providing a rich, content-infused product right out of the box, Amazon is in a position to bring the first truly successful Android tablet to market, and potentially sound the death knell for every other over-spec’d Android tablet currently available.
When Apple issues a press release, the trigger has already been pulled. Really looking forward to the announcement.
Information is the new currency and medium of competition in the modern marketplace; businesses that supply it quickly and effectively to consumers are more successful and better able to adapt to the frequently changing tastes of the public. But as new tools make producing and distributing content easier than ever, it becomes increasingly difficult to differentiate yourself from your competitors. The obvious solution to this problem (and the ideal one, by design) is to create content and experiences that are unique to your field, to specialize and cater to your users, or to provide a service that can be used universally across a wide audience.
The unfortunate reality is that many businesses choose a different path, manipulating search results by creating content farms or bogus aggregators to raise their rankings above those of more legitimate content providers. Search Engine Optimization (SEO) at its core was designed to improve natural (or organic) search results: a process by which original, genuine content would receive recognition from users, and thus appear more frequently or higher up in search results. But as the new business of online advertising grew and developed (to the point where it generated more than $28 billion in revenue for Google last year), businesses discovered ways to “game the system” to generate artificial relevancy, and thus better search rankings. This realization has spawned what I believe to be one of the greatest threats, alongside the loss of net neutrality, to quality content and equality of exposure on the Internet.
Marco Arment (of Instapaper fame) recently wrote an insightful article about Business Insider’s ongoing over-aggregation of his content; the site has a long history of farming content for ad revenue and high click counts. Content scraping is not a new concept, but what I found particularly egregious about Mr. Arment’s experience was the various ways in which Business Insider attempted to co-opt his content as their own:
But what offends me even more than rewriting my titles and burying my links is how their layout so strongly implies that I’m a Business Insider writer and I endorse my name and writing being splattered all over their site…
Why wouldn’t I want to be associated with Business Insider? It has nearly everything that offends me as a web reader and writer: linkbait headlines, more ads than content, more sharing buttons than original words, top-list “slideshows” that make readers click for every item and defraud advertisers into thinking that their pageviews are legitimate, Tynt messing with copy and paste, Vibrant Media’s double-green-underline ads, generic images slapped next to each post (often poorly Photoshopped®), and tabloid coverage of every rumor and inflammatory non-event so they can fight all of the other tabloids for Google’s pennies.
Business Insider’s mass replication of my writing is the only downside that has ever made me reconsider my Creative Commons license. If they’ve had any beneficial effect whatsoever, I haven’t noticed.
That last paragraph is particularly upsetting; Mr. Arment’s willingness to offer his content under a Creative Commons license has been met in this case with abuse of his original work for the sole purpose of commercial gain. Although commercial use is within the provisions of the Creative Commons Attribution license, it is permitted only on the condition that the author is properly attributed, which has not been the case here.
I’m asked sometimes for advice on building an internet presence, and I usually have to fumble for an answer – because I haven’t pursued any particular strategy beyond the glaringly obvious: create original, relevant content repeatedly.
The key thing to understand is that the rules of SEO aren’t magic or arbitrary. They’re based on the goals of a search engine, which is to find relevant results. Relevance implies genuineness, and genuineness implies trust. So, shockingly, you should try to make your site’s content trustworthy, genuine and relevant. All of the rules have come about due to their utility in detecting those three positive metrics. Good SEO is a by-product of not being a dick on the internet.
While perhaps not a tagline you would put on a training brochure, “don’t be a dick on the internet” sums up the need for individuals and businesses alike to reconsider their strategies for content promotion, starting with a return to the web’s roots in collaborative sharing. That sharing only occurs when people find your content truly worth sharing with others, which Mr. Gemmell rightly correlates with the degree of trust your users place in you and your content. By viewing your users not as a means to an end or a product, but as partners in your enterprise, you can shape SEO around the relationships between quality content and the individuals who consume and distribute it.
In hindsight, I slid into arrogance based upon past success. We have done very well for a long time by steadily improving our service, without doing much CEO communication. Inside Netflix I say, “Actions speak louder than words,” and we should just keep improving our service.
But now I see that given the huge changes we have been recently making, I should have personally given a full justification to our members of why we are separating DVD and streaming, and charging for both. It wouldn’t have changed the price increase, but it would have been the right thing to do.
Another advantage of separate websites is simplicity for our members. Each website will be focused on just one thing (DVDs or streaming) and will be even easier to use. A negative of the renaming and separation is that the Qwikster.com and Netflix.com websites will not be integrated. So if you subscribe to both services, and if you need to change your credit card or email address, you would need to do it in two places. Similarly, if you rate or review a movie on Qwikster, it doesn’t show up on Netflix, and vice-versa.
Some members will likely feel that we shouldn’t split the businesses, and that we shouldn’t rename our DVD by mail service. Our view is with this split of the businesses, we will be better at streaming, and we will be better at DVD by mail. It is possible we are moving too fast – it is hard to say.
While I think it is for the best in the long run to leave the DVD-by-mail business behind and do anything possible to push streaming forward, I can’t help but feel that the content providers are winning.
I heard from someone working with Google that Google is working on a Flipboard competitor for both Android and iPad. My source says that the versions he’s seen so far are mind-blowing good. – Robert Scoble, via his Google+ page
Google Reader is one of my most frequently used applications, on any device or platform. An application similar to Reader with an additional social layer could have the potential to be huge, or it could go the way of Google Wave. But that’s perhaps why I like their “pet projects” division so much.
A crucial element in building any tool that enables creativity is defining the right set of constraints. You have to start with a box before you can think outside of it. Though sometimes, it’s the absence of limitations in just the right places that drives creative thought.
Very slick service for shuttling all of your content around using native APIs, or what the author calls “Digital Duct Tape”. Really neat to see what combinations people are coming up with; now if we could just get Google+ to open up.
One thing I always seem to check when I first launch an app on a tablet or smartphone is whether it supports both landscape and portrait orientation. You can find no shortage of articles openly musing whether the tablet is really an essential device, or whether tablets are destined to be merely accessories to “real” computers. I’ve even gone so far as to suggest that tablets are more optimized for content consumption than content creation, at least in their current state.
But I think Ben Brooks is onto something with his survey on display orientation preferences among tablet users:
Initially I suspected that most users (more than say 60%) would prefer the vertical orientation, but as you can see this is not the case. My guess is that it really depends on what and how each user is using the device.
What’s telling about this data, the take away, after looking it over is this:
Users, by and large, use the iPad in whatever way they see fit for the task at hand — not in line with their screen orientation preference. That is, if it is best to use the iPad in portrait, then so be it — even if the user hates portrait devices.
Think about this for a moment, because it represents a very important industry shift.
For the very first time in computing, the user has been put in control of how best to utilize the display portal they have been given — not the manufacturer.
In fact it doesn’t matter that a slight majority uses the iPad more in portrait view than in landscape. What matters is the split — it’s close to even — because that shows that both views are important and crucial to the device.
This realization becomes all the more important when you combine the flexibility of orientation control with the concept of frames; with tablets, users gain the ability to alter the orientation of the device itself to fit the data in a way that works best for them, rather than the other way around. By reducing UX friction in this way, tablets carve a niche in the computing market that laptops and desktops can’t easily fill.
Of course, this does place the burden on developers to make these experiences possible; but with the imminent releases of iOS 5 and Android’s ICS (each of which will provide greater continuity between the smartphone and tablet platforms), these devices will only become more attractive and accessible.
Who remembers the Android Update Alliance? It was just three short months ago at Google I/O when Android users were promised at least 18 months of dedicated OS updates, but we have yet to hear of an overarching plan from Google or any of its myriad partners. And as new devices roll out on a weekly (and in the case of Samsung at IFA recently, hourly) basis, consumers are rightly questioning the ability of all parties involved to keep this promise.
Justin Shapcott provides a very thorough writeup (complete with some of the best Android-related infographics I’ve seen to date) about the current state of affairs with regards to the Alliance, and poses some hard questions for Google (which I’ll comment on below):
Things we still don’t know about the Android Update Alliance:
1. Is Google working with the manufacturers and carriers to get these updates out the door? Or is Google merely setting forth a guideline and expecting adherence?
Based on the radio silence up to this point, I feel it has drifted towards the latter; there may be minor involvement when it’s in Google’s best interest (see the Nexus line of phones, which represent the “pure” Android experience), but otherwise, I think they would rather see the carriers toe the line.
2. Are devices released before this announcement that are still within this eighteen month update time frame intended to be a part of this agreement?
I would be surprised if they were grandfathered in. With an 18-month turnover rate considered extremely slow (à la the iPhone development timeline), the desire among hardware manufacturers and carriers to push new product leaves them little incentive to support their devices for long after they’ve vanished from the display shelf.
3. Are there any guidelines relating to how long it should take for devices to receive an update after a new version of Android is released?
Not that I’ve heard, but it certainly would be one of those nice-to-haves. Not to answer a question with a question, but how would you address updates for the same device (or even class of devices) across multiple carriers?
4. Are minor version updates (which often include important security fixes) intended to be released as part of this agreement?
We’ve seen carriers push OTA security updates before, but the advent of increasingly complex configurations for Android devices has resulted in a shuffling of responsibility that has only increased fragmentation thus far; a vicious cycle. As Shapcott’s infographics clearly demonstrate, just over half of all devices currently on the market are running Gingerbread; if only half have the current major build, how can we expect minor version updates to be implemented any more quickly?
5. Who determines if a device is capable of receiving an upgrade?
Carriers, carriers, carriers. Verizon, AT&T, Sprint, and T-Mobile are more than happy to sign on to an Alliance with their benefactor if it results in increased sales, but the minute that Alliance interferes with their ability to push their own proprietary services (I’m looking at you, Verizon), all those guarantees go out the window. This in my mind represents the biggest hurdle Google has to clear to make this work, and clearing it is essential to reducing the fragmentation of the platform as a whole.
While Bartz has streamlined certain areas and made some strong management hires, her performance has been decidedly bumpy and mostly downhill.
The share price has settled in at about $12.50 (just about where it was when Bartz took over), Yahoo’s recent financial results have been weak, its key advertising business is struggling, its attrition rate among engineers and others is startlingly high and its product innovation cycle seems stopped up.
…and Yahoo’s stock rose nearly 7% in after-hours trading following the announcement. Ouch.
Great article over at HP’s Next Bench blog about the history of one of their most iconic devices that’s still in use today, 30 years later. Whether it’s a corporate- or consumer-facing business, there are certain characteristics of a device that are indicative of a successful product:
“I remember when I first demoed the machine to Bill [Hewlett],” Dennis told me fondly, “he was holding it and asked me how it would perform a bond calculation. I tried to take the calculator back from him to show Bill how it worked. But he stopped me and said, ‘I want to do it, just tell me how.’ He wouldn’t let it go.” It’s funny because that’s the same reaction you’d find from most people that have used a 12C over the years. In fact, it is the longest-selling product that HP has ever released.
The HP-12C sits at almost the exact opposite end of the spectrum from today’s modern consumer electronics: filled with buttons, with minimal visual output (albeit output delivered on a first-of-its-kind, far more power-conscious LCD screen).
But despite these differences, there was one other key distinction in the HP-12C’s design (one that no doubt also contributed to its success): its R2D2 onboard chip:
“R2D2 was ROM, RAM, and display driver all combined into one chip. It was the major contributor in cutting down on the number of components, contributing to quality and decreased power consumption.”
That sounds awfully familiar…