Monday, November 15, 2010

The Anti-Piracy Fiscal Maelstrom

There are recent reports of Microsoft spending upwards of $200M (yes, million!) a year on anti-piracy technology. See the New York Times feature article:

http://www.nytimes.com/2010/11/07/technology/07piracy.html?scp=4&sq=microsoft&st=cse

This is an astounding figure, particularly given that Microsoft software is generally available from the pirates at vastly reduced cost.

While it may be tempting to conclude from this that software piracy is unstoppable, I thought I would share my perspective based on my company Arxan's experience. Frankly, we've seen time and again that our technology, properly applied on top of a thoughtful security design, can and does stop piracy. We've had major successes in a wide variety of market segments, from low-end, extremely high-volume gaming software to very low-volume but extremely high-value geophysical software, and all kinds of interesting applications between those two extremes.

We are also familiar with failure. That's right, I'm not here to claim our solution is a panacea; it doesn't work that way. In general it's a continuous arms race, and on a title-by-title basis it sometimes feels like hand-to-hand combat.

What we have learned is that a solid design in the security dimension is critical. A weak security design can't easily be "protected" later! What's required is a design that seriously considers the threats to the software, how the design directly mitigates those threats, and, on top of that, how the design and implementation themselves are protected from being undermined through reverse engineering and code tampering.

Secondly, we've learned that you have to stay right on top of the latest techniques used by the cracking community. As an example, we are now up to "anti-anti-anti-debug" techniques. That's right, we deploy anti-debug techniques...the crackers have deployed anti-anti-debug techniques...and we are now deploying techniques to detect those, hence "anti-anti-anti-debug".
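For readers who like to see what the opening move in this arms race looks like, here is a minimal, illustrative sketch in plain Win32 C. To be clear, this is nothing like Arxan's actual guards; the function names and the 10ms threshold are my own assumptions, purely for illustration. The point is simply that combining relatively independent checks means a cracker's anti-anti-debug countermeasure against any single check is not enough on its own.

    /* Illustrative layered debugger detection (not Arxan's guards). */
    #include <windows.h>
    #include <stdbool.h>

    static bool api_says_debugged(void)
    {
        BOOL remote = FALSE;
        CheckRemoteDebuggerPresent(GetCurrentProcess(), &remote);
        return IsDebuggerPresent() || remote;
    }

    static bool timing_says_debugged(void)
    {
        /* Single-stepping or breakpoint handling stretches the wall-clock
           time of a trivial piece of work; 10ms is an arbitrary threshold. */
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        volatile int sink = 0;
        for (int i = 0; i < 1000; ++i) sink += i;
        QueryPerformanceCounter(&t1);
        double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart;
        return ms > 10.0;
    }

    bool debugger_suspected(void)
    {
        /* A hook that silences the API checks does not hide the timing
           skew, and vice versa. */
        return api_says_debugged() || timing_says_debugged();
    }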

It's a brave new world indeed!

Microsoft's piracy problems are complicated by the fact that they have such a broad array of products, from multiple disparate design and development teams, with different licensing schemes, different distribution models and a wide diversity of distribution channels. As anyone who attempts to run their business on Microsoft software knows, Microsoft does NOT look like "one company" when viewed through the lens of purchasing and licensing their software!

Few companies have the financial wherewithal for this level of security investment, both in absolute terms and even in "relative to revenues" terms. For these companies, it's critical that application security be integrated into their product lifecycle as a "must" design attribute. Letting a team rip on a major product development program, then starting to think about "how do we address this piracy problem?" after the product has been shipping for a few days, weeks or months, is to take a step in the direction of Microsoft-level relative spend. Don't do that! Just as reliability, usability, and supportability are these days critical requirements considered throughout the software product lifecycle, so must software security be considered and addressed.

The end result can be a secure, un-pirated product. We know this for a fact: we've achieved this result with many customers. So don't end up staring down the tunnel of extravagant anti-piracy costs: think application security early, and often.

Tuesday, September 28, 2010

Digital Media Security

The HDCP copy protection technology has been successfully hacked, through the generation and publication of the overall master key:

http://www.eweek.com/c/a/Security/Intel-Investigating-HDCP-Master-Key-Exposure-384053/

What does this really mean? It is in fact a bit complicated. The content on Blu-ray discs is protected with something called AACS, and optionally with additional technology called BD+. The Blu-ray player itself decrypts the content, decompresses it, and rescales it as needed for the target display device. This content is then re-encrypted using HDCP and sent over HDMI to the target display. The display device decrypts the HDCP-encrypted content for presentation on the monitor.

With this master key, it is possible to build external devices that appear as legitimate recipients of HDCP-encrypted content, with the ability to decode that content and then do whatever is desired with it (such as re-compress it and make it available through download sites). Will someone do this? It's a good bet; where there's money to be made via piracy, people will take advantage.

How did this happen? After all, isn't encryption-based security supposed to rest on an "ultimate level of obscurity", namely the problem of "can you figure out which one of the 100 billion possibilities I'm using?".

Yes, but...in this case the overall system had a flaw that allows someone to use some heavy math to "back compute" the master key from a sufficiently sized (but still small, somewhere between 30 and 50) set of "device keys", which are themselves generated through use of the master key.
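For the mathematically inclined, here is a rough sketch of why that back computation works, simplified from the published cryptanalyses of HDCP 1.x; the notation is mine, not the specification's.

    % Each device i has a public 40-bit key selection vector v_i and a
    % secret vector k_i of 40 device keys, with all arithmetic mod 2^56.
    % The device keys are derived from a secret symmetric master matrix M:
    \[
      k_i = M v_i, \qquad M \in \mathbb{Z}_{2^{56}}^{40 \times 40}, \quad M = M^{\top}.
    \]
    % Two devices agree on the same session key because
    \[
      K_{ij} = v_j^{\top} k_i = v_j^{\top} M v_i = v_i^{\top} M v_j = v_i^{\top} k_j .
    \]
    % An attacker who collects the (v_i, k_i) pairs from roughly 40 devices
    % whose selection vectors are linearly independent obtains a linear
    % system in the unknown entries of M, and can simply solve it to
    % recover the master key.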

Overall, what does this say about our digital media security systems?

The answer is a hard pill to swallow: our digital media security systems can't really be trusted. Nothing about their basis in "hard cryptography" makes them immune from cracking, and nothing about their implementation directly in custom hardware makes them immune either.

So what's needed? What is needed is multiple layers of defense, ideally implemented with both hardware and software mechanisms. Arxan's approach is predicated on the exponentially increasing difficulty of fully cracking a protected system when that system is protected by multiple layers of relatively independent security mechanisms. Additionally, the overall architecture should be designed not just to stop cracking, but also to anticipate and detect a cracked environment...and then to compromise that environment in a new, subtle but pernicious way.

Always seek to detect and create trouble for the cracker and/or for the user of the crack. I recommend an approach of multiple layers of defense, with both crack blocking strategies and crack detection strategies, all coupled to overt and subtle response strategies.
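As a toy illustration of pairing detection with a subtle response (and only that: the region symbols, the checksum value, the FNV-1a hash and the "quietly wrong tax rate" below are my own placeholder inventions, not how a production guard is built), consider something along these lines:

    /* Sketch: checksum a protected code region at runtime and, on a
       mismatch, quietly degrade behavior instead of exiting; an overt
       crash tells the cracker exactly where the check lives. The region
       bounds and expected checksum are placeholders assumed to be filled
       in by a build step. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    extern const uint8_t protected_code_start[];   /* placeholder symbols */
    extern const uint8_t protected_code_end[];
    static const uint32_t EXPECTED_CHECKSUM = 0xDEADBEEFu; /* placeholder */

    static uint32_t fnv1a(const uint8_t *p, size_t n)
    {
        uint32_t h = 2166136261u;
        while (n--) { h ^= *p++; h *= 16777619u; }
        return h;
    }

    static bool tampered = false;

    void integrity_check(void)   /* called from scattered points in the app */
    {
        size_t len = (size_t)(protected_code_end - protected_code_start);
        tampered = (fnv1a(protected_code_start, len) != EXPECTED_CHECKSUM);
    }

    double price_with_tax(double price)
    {
        /* The subtle response: results drift slightly after tampering is
           detected, rather than showing a dialog the cracker can patch out. */
        double rate = 0.08;
        if (tampered)
            rate += 0.005;
        return price * (1.0 + rate);
    }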

Intel, in response to this crack, has said they will sue anyone using the master key. Legal solutions to piracy have historically had very limited success. Our technology can and should do better, by presenting very difficult barriers to those willing to act outside of the law.

Friday, September 10, 2010

Apple vs. Open: Doomed to Repeat History?

Continuing this recent theme of Android specific blog posts, I'd like to point out the remarkable repeat of history we have going on here.

Consider Apple and the "market creation" and "market leader" position they've achieved with the iPhone. Consider its key attributes: it is a closed system environment in every respect, with a closed operating system, a tightly controlled third-party application solution set, strict limitations on what software is allowed on the devices, and a supporting Apple-proprietary media solution (iTunes).

On the scene arrives Android, an open operating platform available to all. And quickly a new business ecology is born, consisting of a myriad of companies building Android/ARM based devices to rival Apple's, all similar but all unique as well.

Does any of this sound familiar? I hope at least a few readers are old enough to remember Apple's position in "personal computing" in the early 1980s with the Apple II (and later Macintosh) computers. They were "dominant", with their closed, proprietary technology. Along came IBM with an open component approach, with all the critical components (DOS, Intel x86 microprocessors, and the boot loader, backplane and I/O specifications) generally available to all comers. I remember the full-page "welcome" ad put out by Apple, welcoming IBM to the party, and of course the "once only" Super Bowl ad announcing the Mac a few years later.

So what happened back then? We all know the story: the IBM PC "clone" business got rocking, and soon Apple's share of the market dropped to less than 20%. Open and clone-able, with lots of choice and variety from a multitude of vendors, won out handily over single-vendor, closed, more expensive and arguably "better".

The story is repeating with the iPhone and Android, and in my opinion, it will continue to repeat. In three years, Apple's smart phone share is likely to be down to a fraction of their current leadership share, and you will see massive innovation, variety and choice in the Android based product field. Apple's closed "complete ecosystem" solution will be better...and still won't win.

A side note to all of this is the question of where Microsoft is. Here we have what I believe is a fundamental shift in the computing paradigm for the masses, from personal computers to "intimate computers", computers that stay closer at all times to your body than those big and bulky "personal" computers. Where is Microsoft in this transition? Answer: nowhere in sight, at least thus far. The Windows environment has failed in multiple attempts to adapt it to the smart phone form factor. The Kin product was a complete disaster, and potentially reflective of a real inability to innovate successfully inside Microsoft. Apparently they will be making a fresh try soon with Windows Phone 7; it will be fascinating to see if they can recover and establish a serious market position.

In the meantime, the Apple vs. Android wars heat up. Apple yesterday announced a loosening of restrictions on iPhone developers; everyone thinks this change is a function of competitive pressure from Android, and I'd have to agree. Competition is fundamental to successful capitalism and generally promotes market openness and freedom, and while I am a happy iPhone user, I like to see competition, choice and a lessening of market-controlling restrictions.

To sum it up, if I were a betting man, I'd put my personal bet on Android to be the winner here. History tells us it's the likely outcome --- unless Apple challenges that outcome by significantly opening up their walled garden.

Tuesday, September 7, 2010

Android Application Security

Android based devices are exploding onto our consumer products scene. By my recent count at the Wikipedia list of Android devices (http://en.wikipedia.org/wiki/List_of_Android_devices), there are 97 devices shipping today, and another 57 in the delivery pipeline.

On the volume side, Android devices are also showing rampaging growth. Gartner numbers show Android smart phones at 1.8% market share in Q2 2009, rising to an astounding 17.2% share in Q2 2010. While I don't have figures for smart devices that aren't phones, I'd expect even larger percentages for Android there. (Who are the big losers of smart phone share, you wonder? No surprises there: Windows Mobile and Symbian.)

Yet the Android model is fundamentally suspect at the level of 3rd party applications. Why? Simple: the bulk of these applications are from "boutique" developers or development shops, and there is absolutely no vetting of what exactly these applications do. The potential for malware in these applications is enormous.

Android does have a mechanism that requires applications to "request" capabilities at installation time. However, it appears few pay much attention to it. A few million downloads of wallpaper applications that requested sufficient capabilities to send phone-specific information (SIM ID, phone numbers, etc.) to a server in China certainly proved that point (why would you grant your wallpaper application internet access? Because it asks for it and you want the wallpaper, so..."yes" you click!). A security researcher, Jon Oberheide, demonstrated the potentially malicious application "Rootstrap", which bootstrapped a rootkit onto an Android device. The app (a preview of the popular movie Twilight Eclipse) periodically polled a server to see if new Android exploit code was available, and if so would download it into the application and execute it. About 200 people installed this app, and while in this case the compromised app didn't inject malware, it's a sobering reminder of how you really have no clue what you are getting when dealing with Android applications.

Is the iPhone model better? From a security perspective, absolutely. Apple does something to vet apps. Exactly what they do in this vetting process they don't share (and I, like many others, would like them to be much more transparent about it), but personally I'm reasonably comfortable loading an iPhone app onto my phone, while I would hesitate long and hard before loading any application from an unknown publisher onto my Android device.

Don't get me wrong, I do like the broader openness that Android devices offer. After all, it's my device, is it not? I should have the right to load any software I want, in my opinion. At the same time, the marketplace needs to provide me with options for ensuring the security of my choices, or at least identifying problems with it. Those options don't exist today.

This leads the different parties involved with the Android device phenomenon to different (sometimes overlapping) sets of requirements.

First, for the consumer, there's an enormous need for Android application vetting, some high quality "seal of goodness" that is arrived at through a reasonably thorough review of the actual code in the app and what it is doing.

For the enterprise IT professional, there's a need for the same vetting service, and of course for device management services. Corporate phones should not be allowed to be loaded with arbitrary applications. All apps should be required to come from a secure enterprise location that holds only vetted (and dare I say business-appropriate?) applications; alternatively, a vetting service could offer a means for particular enterprise phones to download only applications marked as appropriate by that enterprise's IT organization.

For the application developer, whether a small shop or an enterprise, there is another critical need. While Android applications are signed, they are self-signed. It is not difficult to take (as an example) a well-known bank's application, insert high-value malware into it, re-sign it, and publish it in a way that gives the illusion that it is still from, or works with, the original bank. Applications need protection from this kind of malware insertion. Additionally, there is the usual piracy problem. Recently Google attempted a solution to this with a licensing service. However, that led to immediately demonstrated, trivial cracks allowing applications to run without a license. In response, Google has said "oh, you need to obfuscate your application code". Why, thank you, Google! What have I been prattling on about in this blog for the last year? Guard your application software, folks, because if you don't, others will open it up and have a field day stealing and modifying it to serve their own economic agendas.

Did I mention yet that Arxan is announcing support for guarding native code in Android applications? Yes indeed: watch for our announcement this week.

Friday, August 27, 2010

DLL Hijacking Redux

Someone once suggested there is nothing new under the sun, and that's certainly true with this week's spate of reports about DLL hijacking attacks in Windows.

This is a well-known vulnerability dating back many years. New reports that specific Microsoft applications fall prey to this vulnerability are not at all surprising (http://tinyurl.com/33btjkh).

Microsoft is quoted by Computerworld (http://tinyurl.com/23ag8kb), saying:

"We're not talking about a vulnerability in a Microsoft product," said Christopher Budd, a senior communications manager with the company's MSRC, or Microsoft Security Response Center. "This is an attack vector that tricks an application into loading an untrusted library."

Assessing this statement requires a brief review of the facts. First, this is a vulnerability driven by the fact that Windows will search in all kinds of places to find a DLL that your application requests to be loaded, if your application is so "insecure" as to identify that DLL only by file name instead of by a fully specified pathname.

Why would applications fail to use a fully specified pathname? One good reason is compatibility: Microsoft DLLs are not consistently in the same location across different versions of Windows! Therefore software striving for compatibility needs to let Windows search for the DLL, or search for it itself. A second reason is simply that Windows allows it, and thereby "it's easier".

Windows first looks for the DLL by name in the current process's current ("working") directory. That's where an attacker can easily plant their own replacement DLL under the same DLL name (through a wide variety of means, none legitimate but all relatively easy to perpetrate), if (as is usually the case) the current directory is not where the named DLL resides. The next time the application runs, voilà, the attacker has their own software running on the computer. What can it do? Literally just about anything, including quickly loading other, more subtle and pernicious bot-ware, key loggers, system scanners, etcetera.

Can applications operate in a manner that avoids the vulnerability? Yes, they can, but doing so is more complicated for the application developer. The key is to always load a specific DLL from a specific directory using a fully specified pathname. This in turn can create its own application compatibility issues, as any given pathname to a system DLL is not guaranteed to be the same from Windows version to version! This is the true heart of the design issue, because any attempt to deal with this multiplicity of DLL locations across Windows versions in a single version of an application requires the application to perform a "search" for the DLL across different directories...which is exactly what Windows does automatically for you, and which is what opens the application up to a replacement-DLL attack!
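To make that concrete, here is a small illustrative sketch of loading by fully specified pathname on Windows. It assumes the DLL you want genuinely lives in the system directory, and "version.dll" is just an example name; it also removes the current working directory from the search order as a backstop for any bare-name loads the application still performs.

    /* Sketch: build a fully qualified path into the system directory and
       load from exactly there, instead of letting Windows search for us. */
    #include <windows.h>
    #include <wchar.h>
    #include <stdio.h>

    HMODULE load_system_dll(const wchar_t *name)
    {
        /* Take the current working directory out of the DLL search order
           entirely, as a backstop for any remaining bare-name loads. */
        SetDllDirectoryW(L"");

        wchar_t path[MAX_PATH];
        UINT n = GetSystemDirectoryW(path, MAX_PATH);
        if (n == 0 || n >= MAX_PATH || n + wcslen(name) + 2 > MAX_PATH)
            return NULL;

        wcscat(path, L"\\");
        wcscat(path, name);

        /* With a fully specified pathname, Windows no longer searches on
           our behalf, so a DLL planted in the working directory is never
           considered. */
        return LoadLibraryW(path);
    }

    int main(void)
    {
        HMODULE h = load_system_dll(L"version.dll"); /* example DLL name */
        wprintf(L"%ls\n", h ? L"loaded from the system directory" : L"load failed");
        if (h) FreeLibrary(h);
        return 0;
    }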

We here at Arxan are looking at this problem in an orthogonal manner, by identifying opportunities to validate that the proper DLL was loaded, regardless of its originating location. Those are the kinds of application internal security features we are quite good at. Note the elegance of this kind of solution: it doesn't require any application source code changes (because our technology inserts such checks directly into the binary application code), it creates no new dependencies on Windows specifics such as specific DLL locations in this or that version of Windows, and it is a security solution that migrates with the application itself.
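As a purely conceptual sketch of that validate-after-load idea (this is not Arxan's implementation; the expected hash value is a placeholder, and a real check would use a cryptographic hash rather than the toy FNV-1a shown), the code below asks Windows which file actually backs a loaded module and compares its contents to a known-good value:

    /* Sketch: validate a loaded DLL by hashing the file that backs it,
       wherever on disk it happens to live. */
    #include <windows.h>
    #include <stdint.h>
    #include <stdbool.h>

    static const uint32_t EXPECTED_DLL_HASH = 0x12345678u; /* placeholder */

    static bool hash_file(const wchar_t *path, uint32_t *out)
    {
        HANDLE f = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (f == INVALID_HANDLE_VALUE)
            return false;

        uint32_t h = 2166136261u;   /* FNV-1a offset basis; illustration only */
        BYTE buf[4096];
        DWORD n = 0;
        while (ReadFile(f, buf, sizeof buf, &n, NULL) && n > 0)
            for (DWORD i = 0; i < n; ++i) { h ^= buf[i]; h *= 16777619u; }

        CloseHandle(f);
        *out = h;
        return true;
    }

    /* Returns true only if the module that actually got loaded is backed
       by a file whose contents match the expected value. */
    bool module_is_expected(HMODULE mod)
    {
        wchar_t path[MAX_PATH];
        if (GetModuleFileNameW(mod, path, MAX_PATH) == 0)
            return false;

        uint32_t actual = 0;
        return hash_file(path, &actual) && actual == EXPECTED_DLL_HASH;
    }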

To end with another ancient aphorism, if you want a job done, best to do it yourself. If you want your applications secure, don’t trust in the operating system to provide that security: secure your applications yourself!