Monday, November 9, 2009

Security in the Cloud

Cloud computing is one of the "big new things" in commercial computing today. The promises of cloud computing are broad and deep: lowered capital costs, lowered operational costs, ease of scale, broad accessibility, high availability, and more.

And then there's security. It's the usual follow-on question after hearing about all the benefits, "yes, great, and...what about security?".

The simple truth is that cloud computing carries with it each and every security risk that already exists in your commercial computing environment, and then unfortunately adds significantly increased risks on top of them.

Why is this so? Simply because at the highest levels, there is little structural change in shifting elements of your computing infrastructure from "here" (inside your corporate data center) to "there" (inside an external vendor's corporate data center). The same security controls you needed (and in many cases didn't have) are needed in your cloud provider's environment (and in many cases they don't have them), and the same fundamental attack vectors and risks are present.

As we drill down into the details, however, it will become clear that the situation is worse than this, for two fundamental reasons: one is shared infrastructure, the second is a general loss of control. Let's look at each of these.

The foundation of the cost benefit premise of cloud computing rests on the leverage achieved through a shared computing infrastructure, with the cost benefits of scale and higher average utilization. But shared with who? That's risk #1; you don't know who, and you can't control who. "Other companies, other users." Shared at what level? At all levels: shared storage, shared networking, shared routers, shared firewalls, right on down to operating your applications on the same physical hardware being used by other cloud clients (though always in a separate virtual machine instance).

So what's the risk of that? The risk is the ease of access to your data and application software. By definition, an environment where "others" are running their software and maintaining their data in the same physical environment that you are running your software and maintaining your data creates very substantial incremental security risk, because environmental access is the first step in any and every IP and data theft attack. If I'm "in" the general computing environment, and I can run arbitrary application software, I've got a launching pad for attacks on local data and applications.

Another element of shared infrastructure in cloud computing is the extension of the insider risk. Many of your own insiders will still have cloud environment access similar to the access they had when you were running inside your own data center. However, you've now added a whole new class of insiders: the cloud provider's employees! And unlike your own insider threats, where you can take active steps to reduce risk, with the cloud provider you have no controls and no influence. Relative to these unknown people, your applications and data might as well be considered "fully available", with all that that implies.

The second general area of risk is the loss of controls. This loss of control is across the board, starting at the level of physical access: when you operated in your own data center, you controlled physical access; with a cloud provider you don't. Logical access is no different; which people (administrators or otherwise) can access your databases and your applications? You have vague assurances from the cloud provider, but you have no direct control whatsoever.

This control issue extends out to more subtle yet extremely significant areas. Take the example of web application security risks. These are the most pernicious security risks in computing today, with SQL injection attacks alone (just one of many types of web application security risks) resulting in the theft of millions of credit card numbers. The most recent attempt to harden web applications is the deployment of so-called web application firewalls. These are networking appliances that monitor network traffic looking for evidence of a web application attack. These devices require a great deal of customization of their specific monitoring practices, effectively "tuning" the firewall to the specifics of the applications being protected and how they operate. Can such a solution be applied in your shift to a cloud computing environment? Generally no, for two reasons: it is difficult to ensure the application firewall is "in the right place" relative to what is now a highly mobile set of applications within the large cloud infrastructure, and your application firewall rules need to apply to your applications' data flows and only your applications' data flows.
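
To make the "tuning" point concrete, here's a minimal sketch of the kind of application-specific rule a web application firewall ends up carrying. This is my own toy illustration, not any vendor's rule engine; the parameter names and patterns are hypothetical:

```python
import re

# Hypothetical, application-specific rules: the WAF only "knows" that
# account_id must be numeric because someone tuned it that way.
RULES = {
    "account_id": re.compile(r"^\d{1,12}$"),          # strictly numeric IDs
    "comment":    re.compile(r"^[^<>;'\"]{0,500}$"),  # no markup or quote characters
}

def inspect(params: dict) -> bool:
    """Return True if the request parameters look safe under the tuned rules."""
    for name, value in params.items():
        rule = RULES.get(name)
        if rule is None:
            return False            # unknown parameter: reject by default
        if not rule.match(value):
            return False            # value violates the tuned pattern
    return True

# A classic SQL injection probe fails the numeric rule for account_id.
print(inspect({"account_id": "42"}))          # True
print(inspect({"account_id": "42 OR 1=1"}))   # False
```

The whole point, of course, is that these rules only work because they were written for this application's parameters, which is exactly what becomes hard to guarantee once the application is floating around a shared cloud.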

Control issues cut right through all traditional required practices in commercial computing. Backup? Of course the cloud vendor provides backup! Can you test that it's actually occurring and the data is recoverable? There have already been major examples of commercial cloud providers losing customer data. It's a risk, and it's driven by your loss of control when shifting your computing practices to an external provider, and those risks are exacerbated by the shared infrastructure nature of that environment.

All of this said, cloud computing is here and it's expanding its footprint dramatically across the commercial computing landscape. Cost savings attract commercial usage like light attracts moths. The issues cited here are going to get incrementally addressed over time, as part of high value cloud solutions.

The better news is that some fundamental solution technology exists today. The essence of security protection in a cloud environment is to take advantage of what you do control to implement security mechanisms to the level required by your business. The two critical control points are, simply put, your applications and your data.

Data security solutions have been increasingly developed and deployed over the last ten years, and these solutions generally can be deployed coupled directly into the cloud hosting environment. Any computing solution migration to the cloud must seriously consider the addition of such security technologies.

Application internal security solutions are a relatively new technology area. This kind of technology derives from military grade technology utilized to protect critical military technology assets from reverse engineering and tampering. This technology is now available for and being applied to commercial software.

Application internal security technology puts security functions directly into the application software. These security functions start with obscuring the code flow, the instruction sequencing, and even the unencrypted presence of critical blocks of code, to protect against reverse engineering and, through reversing, the identification of critical value components and/or critical points for effective tampering. They extend to dynamic monitoring of code correctness, both of the actual instructions and of the code's dynamic behavior. And such security units can, internally within the application, monitor data flows to detect and respond to evidence of web application security attacks.
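
As a toy illustration of the self-monitoring piece only (my own sketch, not Arxan's actual mechanism; the digest value is a placeholder, and a single visible check like this would of course be trivial for an attacker to patch out):

```python
import hashlib
import sys

# Digest of this file as recorded at build/release time (hypothetical value; in a
# real deployment it would be embedded and protected, not left as a plain constant).
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def code_is_intact() -> bool:
    """Recompute the digest of our own source file and compare to the build-time value."""
    with open(__file__, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    return actual == EXPECTED_SHA256

if not code_is_intact():
    # A real guard responds more subtly (degrade output, alert, corrupt results)
    # rather than exiting, which is easy for an attacker to spot and remove.
    sys.exit("integrity check failed")
```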

The tremendous benefit of application internal security technology is the complete independence such technology has from location considerations. An internally secured application carries its security properties with it, wherever it goes: in your data center, on your employees' laptops and cellphones, or in an external provider's cloud computing environment. Such technology is immune to network topology changes, and protects the application in private and shared infrastructures alike.

Cloud computing is still in its infancy, and it's reasonable to say that cloud computing is one of several fundamental change agents transforming our information world at a faster rate than ever before. While cloud computing has dramatic benefits and is highly attractive as a computing environment solution, it must be approached extremely cautiously from a security perspective. The shared nature of the cloud and the loss of controls that occur when utilizing the cloud dramatically increase your security risk footprint. The best and most immediately available technologies for dealing with these two factors are the deployment of application internal security technologies and data security technologies.

Tuesday, October 20, 2009

The Democratization of Software

It's a strange new software world!

For those of us old enough to remember things like mainframes (my first ever computer programs ran on an IBM 360 model E22 at a local community college!), minicomputers (DEC PDPs, HP 1000s, Data General Novas, etc.), and then the world-changing arrival of the "PC" in 1983, the world of software was generally a "dark art". Very, very few people knew what software was, and the population of those who actually wrote software was even smaller.

I personally learned my programming chops first in that same community college's computer center, writing Cobol code to schedule the lazy counselors into appointments with students (a brilliant idea of a new school VP administrator promoted out of the computer center, who knew that since the kid doing the programming was the son of a member of the board of trustees, they couldn't effectively fight it!). Then it was on to writing assembly code for a PDP 11/35 running a customized version of RT-11, to drive and test a custom data acquisition board built by a small shop (Acroamatics) for the Navy. Then on to kernel level operating system development at HP, working again in assembly language on kernel level code for the HP 1000 and the RTE (IV, VI, and A) operating system.

In those days, the late 70's and the 80's, software was generally incomprehensible to the masses. Literally. People just had no clue. By the early 90's that was changing pretty fast; people "knew about" software, but for the most part, in the same way they "knew about" automobile engines. That is, they knew software was there, was important, and "made the computer go", but not much more.

This started changing in a major way with the development of the web and web site programming, starting with HTML (arguably not a programming language, but let's not quibble). Suddenly a lot of "non-technical" people (non-computer scientists) were "programming". And as the ability to link actual run-time software into web pages (PHP, Perl, Javascript, etc.) has become prevalent, this same group has advanced into what is definitely the world of writing procedural software.

Now we have the iPhone and an open development environment for it. We are witnessing another huge shift in the breadth of activity in the creation of software, driven by this new ubiquitous platform. The opportunity to sell a few hundred thousand copies of a cool little application for a buck apiece suddenly brings the opportunity of "software for profit" right into the mainstream...and the mainstream is responding. We are seeing an explosion of a new cottage industry right before our eyes. I don't know the actual numbers of downloads of the Objective-C development environment for the iPhone, but I'm certain the numbers are staggering. The volume of applications available for the iPhone from this cottage industry is certainly staggering, and considering what a small percentage of overall development activity that represents, we have to acknowledge that a seismic-level expansion of software development is underway.

Again, here's the point: for the FIRST time ever, we are experiencing a "grand conjunction": a widely popular platform with broad computing and I/O capabilities, a freely available development environment, an effective channel with strong demand pull, and a worldwide population that, through web programming, already has some awareness, skill and inclination. And voila...instant massive cottage software industry.

What are the longer term impacts of these "force vectors" going to be? I have several projections.

First, in the world of personal computing devices (which is how I think of the iPhone, by the way; the "phone" part of it I consider to be merely one of its many I/O features), a free and open development platform is going to be a must. A single company can't compete against the forces of "solution" innovation and availability that Apple has shown can be unleashed.

Second, this "democratization" of software development isn't going to stop. SW skills are expanding across the population at an unprecedented rate, and that growth is going to continue and even accelerate. What exactly the impact of that will be is hard to predict, but I do believe as the world increasingly is driven by and supported by software, this is an enabler for the world's economy.

Third, the world of software cracking (finding technological ways to run this commercial software for free or for a black market low price) is going to continue to be a huge technology area and force in the industry. You can't discuss iPhone apps too long with friends and colleagues before hearing about the ability to "unlock" all the apps available "for free". There is a dark side of this democratization, a black market side. The technology race to fight those black market forces is just getting going in this particular market. Of course my company, Arxan Technologies, has been working for years with more serious users of such technologies, namely the US Department of Defense. These technologies are becoming more prevalent in the mass market consumer software space, helping to protect the product software that your son, your sister, and maybe even YOU wrote and published yourself!

Monday, August 3, 2009

Code protection is critical in a web 2.0 world!

Neil McDonald of Gartner blogged on the differences between byte code and binary code analysis:

http://blogs.gartner.com/neil_macdonald/2009/07/24/byte-code-analysis-is-not-the-same-as-binary-analysis/

His points are important at a deeper level as they relate to the risk of reverse engineering and tampering. Specifically, byte code (.NET and Java) is almost trivially reverse engineered, and fairly easily tampered with using available tools...unless active steps are taken to address the risk.

Byte code representations of programs contain sufficient information to allow a complete inverse compilation back to source code. To address this problem, use of a .NET or Java obfuscator is necessary. The best-in-class obfuscators can perform a host of transformations, with minimal to no impact on performance, that raise very large hurdles for the would-be thief. The transformations include general code encryption, code restructuring to create complexity that is not understood by inverse compilers (and is difficult for human analysis as well), string encryption so that variable and static data names become unintelligible, deletion of the meta-data that describes program attributes, and even insertion of code for dynamic detection of evidence of tampering.
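
As a toy illustration of just one of these transformations, string encryption, here is roughly the shape of the build-time encode / run-time decode pair. This is a deliberately weak XOR scheme of my own, purely to show the idea; real obfuscators use far stronger, per-string schemes:

```python
KEY = 0x5A  # toy key; real tools derive varied keys and hide the decode stub itself

def hide(s: str) -> bytes:
    """What the obfuscator does at build time: encode the string literal."""
    return bytes(b ^ KEY for b in s.encode("utf-8"))

def reveal(blob: bytes) -> str:
    """What the injected runtime stub does: decode just before the string is used."""
    return bytes(b ^ KEY for b in blob).decode("utf-8")

# After obfuscation, the shipped byte code contains only the encoded bytes...
SECRET_QUERY = hide("SELECT price FROM license_keys")
# ...and the readable text exists only transiently at run time.
print(reveal(SECRET_QUERY))
```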

This kind of code protection becomes paramount in a Web 2.0 world where significant application components are being deployed to and executed by customers. Additionally, this kind of code protection is critical in a highly mobile world where applications and data are frequently on the move with employees.

Friday, July 10, 2009

Source Code Stolen by Insider at GS...Where Are Your Assets Tonight?

So the news is full of a source code theft by an insider (a "programmer") at Goldman Sachs, specifically some proprietary trading system code. Security industry analysts are talking about it (http://blogs.gartner.com/neil_macdonald/2009/07/07/security-no-brainer-7-if-you-have-intellectual-property-embedded-in-software-protect-it/) and it's a very current example of a couple of significant trends:

  • Enterprise security is now defending against organized crime, not merely casual hackers or disgruntled employees.
  • Insider threats are a tremendous problem.
A recent study executed by CERIAS at Purdue found average IP theft losses from enterprises operating globally to be $4M/year, across over 900 companies surveyed. This is serious crime for serious money, and the opportunity for serious theft attracts the professionals.

How best to execute such thievery? Find new and innovative ways to penetrate network firewalls, avoid application firewalls, dodge data leak detection circuits, avoid application tamper detectors, and the like? That's one approach, it is actively used, and every enterprise must utilize all of these security methods (and more) to fight against such attacks.

But there's an easier way, is there not? A bag of cash up front, with a promise of another bag of cash on delivery, to the right employee with access. Bingo bango bongo! Got the goods, everyone is happy. Well, except the company losing their assets.

A fascinating aspect of the Goldman Sachs story is the fact that their data leak prevention software was just enough security to help them know they'd been robbed...but not enough to catch the thief in the act and stop the theft. Why? Because he copied the source code to another computer inside the company, then took that computer (or disk drive) out with him. The DLP system noticed the unusual traffic of the source code, but since the code wasn't leaving the perimeter, didn't block its transfer. In the past, such a theft would rarely have been noticed. So I will acknowledge that what looks like a major trend might in fact be growing visibility of a long-standing problem. I suspect both are the case.

What can be done? The only real answer is "more", in the way of security mechanisms. The core assets must be encrypted, and decrypted only under managed, legitimate usage situations. The applications operating on internal systems must be self-protecting against tampering. The application firewalling must be complete. Data flows in general must be monitored for unusual activities. Security practices must be rigorous in s/w development.
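
To make the first point concrete, here's a minimal sketch of "encrypted at rest, decrypted only in a managed code path" using the Python cryptography package. The key handling is deliberately oversimplified; in practice the key lives in an HSM or key management service, not a local variable, and the "secret" here is just a stand-in:

```python
from cryptography.fernet import Fernet

# In practice the key comes from a key management service, never a hard-coded variable.
key = Fernet.generate_key()
vault = Fernet(key)

secret_source = b"def price_model(...): ..."   # stand-in for the crown-jewel code or data

ciphertext = vault.encrypt(secret_source)      # this is what sits on disk / in backups
# ...later, inside an access-controlled, audited code path only:
plaintext = vault.decrypt(ciphertext)
assert plaintext == secret_source
```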

On the human side, the most pragmatic solution is a combination of training and awareness of the risks. Awareness takes two forms: awareness inside the company of the potential for insider-executed theft, and awareness across all employees of the stringent security practices and the severe cost of getting caught executing any such theft. Faced with a high likelihood of detection and serious jail time, people are much less likely to have the discussion with the high tech mobster who just wants to chat. It's when they think it'll be easy and low risk that people start bowing to temptation.

Friday, May 29, 2009

Yes a Cyber Czar IS Necessary!

So we've all watched and/or read or read about Obama's cyber security speech today, and his call for a new high level federal "coordinator" to lead the solutions charge.

Some are saying "it's about time", and some are saying "is this really necessary?".

I'm here to tell you YES, it is about time, and YES, it really is necessary. And here's why.

First, speaking at a broad philosophical level, systems tend to optimize locally, for and around local optima. What does that mean, you ask? It means, for example, that Microsoft at one level doesn't really care much about the security of their products...unless and until the lack of security in their products affects their bottom line. Local optimization, for local profits. If the whole country (and world) has insecure s/w as a result, but Microsoft has maximized revenue while minimizing costs (let's face it, it costs more to produce high quality secure software than it does to ship garbage), then it's a win for Microsoft!

This applies across the spectrum of computer activities: s/w development, personal computer usage, enterprise systems, you name it. Security is an "as and when needed" component, and the learning of "as and when needed" is usually driven by the sharp end of a sequence of costly or even crippling attacks. Think about it: when did YOU finally start using firewalls and anti-virus software? I'm guessing that it wasn't until you experienced the sharp end of the malware spear!

Now the second point: government, among other things, must serve to guide social action (broadly speaking; I'm including business action here) toward global optima, versus pure local optima. Security comes from making security a high priority, across the many fields of endeavor that result in computer based "solutions". Government has a wide variety of tools at its disposal to guide social action and thereby drive priorities, including taxation practices, government led investment, and government procurement practices both in the civilian (federal and state government) and in the defense domains. All these need to be utilized in a coordinated manner, to drive computer security in general as a priority, and to drive the specifics of that priority consistently. That level of coordination is NOT going to happen through random, chaotic governmental processes. A high level federal "coordinator" is needed to lead, guide, and drive through multiple areas of government, in a consistent manner and in an aggressive manner.

Of course there are an infinitude of risks here. Largest perhaps are dictates of what "must" happen at inappropriate levels of specificity, which invite solutions that salute the requirement at a superficial level. Result? More cost for all, and no improvement. Another is requirements that drive lots of bureaucracy, slowing down innovation, adoption, and deployment of improved security solutions.

These kinds of risks are why, in my opinion, the specific leader chosen is so enormously critical. They must be a solid systems thinker, someone who understands how to enable and support virtuous cycles, rather than merely create more "requirements". How do you enable, drive, encourage and support a security focus through the computer enabled and driven business world...without creating a crippling mess?

Tuesday, May 5, 2009

Devices All Around Us Are NOT SAFE!!

Conficker has now invaded medical devices: http://tinyurl.com/ck3z3n

Why and how is pretty easy to understand:

- medical devices with "intelligence" embedded in them (microprocessors and a lot of software to control the device) are sometimes designed using Windows. Yes, I think this is a horrible, horrible choice, but it is a choice that is often made.
- once developed and certified, these devices rarely get updated. So "old" security flaws in Windows stay there, "forever".
- sometimes the devices are not supposed to get connected to the internet, but do anyway.
- voila, detection and infection...

So what are the device types, in general, that we have to worry about potentially being targeted by viruses or other takeovers by "bad guys"?

Well, let's see, not too many, it only includes:

- internal systems on automobiles
- internal systems on airplanes
- home networking equipment
- home TV's (my 42" high def LCD TV is running Windows inside, I'm almost certain!)
- digital video recorders
- DVD players, particularly Blu-Ray devices
- medical equipment, both hospital based and advanced home care devices
- automated tellers
- traffic control systems
- railway control systems
- power control systems

Folks, I could go on. The point is, increasingly, the world around us is "controlled" by "intelligent" devices. And these devices are hugely susceptible to having their operations compromised through software/network based attacks.

I don't want the owners of conficker effectively "owning" my TV, much less the system that controls the local mass transit system, much less systems on the Boeing or Airbus plane I'll be on later today.

The world needs secure software and systems, and we need it NOW. Getting there includes:

- better security training for s/w development engineers
- better security requirements managed through the software lifecycle
- use of best of breed tools for security assessment of code, both through static and dynamic analysis
- use of defensive mechanisms in code to detect, defend and react to internal security breaches (yes, this is where my company, Arxan Technologies, has solutions).
- use of updating capabilities and processes to ensure that security faults in ALL devices are addressed quickly and responsibly, rather than left to be taken advantage of in later months or years (a minimal sketch of update verification follows this list).
- choice of appropriate operating systems and other tools for the task, rather than use of known low security quality software such as Microsoft Windows
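
As promised above, here's a minimal sketch of the update-verification idea: refuse to apply an update image whose digest doesn't match a value the vendor published out of band. The function and file names are hypothetical; real device updates would also use cryptographic signatures, not just a digest:

```python
import hashlib

def verify_update(image_path: str, expected_sha256: str) -> bool:
    """Return True only if the update image matches the vendor-published digest."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream, don't load whole image
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Hypothetical usage:
# if verify_update("pump_fw_2.3.bin", "ab12..."):
#     apply_update("pump_fw_2.3.bin")
```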

So are the conficker owners going to issue an update that is specific to a medical device to cause it to misbehave? Not likely...but they could. It's really quite unbelievable. We are giving control of the world around us away, to those whose only interest is leveraging their control for profit and/or mayhem.

Funny, that hunting and gathering life is sounding more and more appealing. No you may not take over my spear!!

Friday, May 1, 2009

Cyber attack on an American City

Bruce Perens, a well known technologist and open source evangelist, wrote a fascinating review and analysis of the recent attack on the city of Morgan Hill in California, carried out via the simple but highly effective means of merely popping manhole covers, entering, and cutting fibre optic lines. Read the story here: http://perens.com/works/articles/MorganHill/

I believe this story points out what I've been suggesting in my recent blogs regarding conficker: we are a society highly dependent on a live, running internet. Hugely dependent. This story is direct evidence.

So I ask again, how effective could several million computers be, working in concert, at shutting down sections of the internet, or at keeping targeted commercial properties from operating on the internet? Because that is the power the owners of conficker have. The latest usage appears to be the more traditional usage of heisted computers: spambots and capturing keystrokes to harvest credit card numbers or other high $ value information from the user.

If that's all they can come up with, I have to say I'm unimpressed with the meta-level creativity of the owners of this worm. Yes, they've shown some real technical creativity and implementation skill in what they've done, but to what effective end? Sure, they should be able to make some $'s from stealing CC#'s and from selling spam services. But that's pennies compared to leveraging what might be within their capability set at this point.

Think about it: shut down Citibank for a day. Wait a few days. Then send a private message to their president saying they will be randomly shut down again, over and over...until they pay a $50M ransom into such and such bank account. That's serious, serious criminality on a scale that's Bond film worthy, if you ask me.

I just can't figure out why they aren't executing on it. And I can't figure out why some serious brainpower isn't being applied to figure out how to stop them.

Maybe it is and we just don't know it. I can only hope so. Because the nonsense about "check and make sure your computer isn't infected and you have latest Windows patches applied" is both important...and completely irrelevant at this point. The owners of conficker already have a fascinating and potentially extraordinarily potent weapon under their control.

Does anyone really know how powerful? I don't know! I guess it's good so far that we haven't found out. But as the attack on Morgan Hill demonstrates, the western world at least is far, far more vulnerable to this weapon than we believe or understand.

Wednesday, April 15, 2009

"Secure software" is enough anymore!

Lots of folks are talking about "securing software" in the rather traditional context of "writing secure software", and this is being broadened out to a complete security focus through the entire lifecycle. You can hear me discuss this on this recorded webinar:

http://www.arxan.com/software-protection-resources/webinar-series/application-security-360-view-webinar.php

and colleagues at Fortify and Cigital have developed a "Building Security In Maturity Model", which is here:

http://www.bsi-mm.com/

However, I'm here to tell you folks, IT ISN'T ENOUGH.

What's that you say? What more is there? What more can we do than ensure our applications don't have security flaws?

The answer is that applications have to go on the offensive. Applications must not just be "defensively secure" by not having code vulnerabilities, they must take active measures to detect and respond to attacks directed against themselves.

Of course my company is in this business and of course this is a blatant advertisement...but darn it folks, it is absolutely true and knowing what I know, I'd be saying this even if I worked as a used car salesman. Applications in the enterprise, in the cloud, distributed applications (ISV s/w) and applications in end point devices (phones, set top boxes, automobiles, home gaming systems, the list is endless) are the new focused target of attack by organized crime. And these applications CAN be engineered to have multiple layers of active defense ("offensive defense").

Applications can and should check themselves for code integrity. Applications can and should authenticate components that are dynamically attached (DLL's). Applications can and should detect and notify of debugger attachments. Applications can and should protect critically sensitive code through encryption and dynamic decrypt/execute/re-encrypt operations. Applications should utilize multiple levels of networks of these self-guarding techniques, with a variety of overt and subtle response actions, to ensure that persistent attacks are foiled at some level. Enterprise applications should have these response actions wired into the security monitoring systems deployed by the enterprise.
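
As one concrete (and deliberately simplified) example of the debugger-detection piece, here's a Linux-only sketch of my own; real products layer many such checks, vary them, and hide them throughout the code rather than exposing a single obvious function:

```python
import sys

def debugger_attached() -> bool:
    """Two cheap checks: a Python-level trace hook, and (on Linux) the
    TracerPid field the kernel fills in when a ptrace-based debugger attaches."""
    if sys.gettrace() is not None:
        return True
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("TracerPid:"):
                    return int(line.split()[1]) != 0
    except OSError:
        pass  # not Linux, or /proc unavailable: fall through
    return False

if debugger_attached():
    # A real response would feed the enterprise security monitoring systems, per the text,
    # or react subtly, rather than announcing itself like this.
    print("tamper/debug event detected")
```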

These practices need to become commonplace and part of our general software lifecycles. The world is too dangerous a place for them not to. We need to keep up with the organized criminals, and right now our software is falling woefully behind.

Wednesday, April 8, 2009

Cyberwar is Real: US Electrical Grid Attacked and Compromised

The Wall Street Journal has reported what many of us on the inside of the cyber security world already knew, namely that there is very serious warfare going on today between Russia and China on one side, and the US on the other. Read the report here:

http://online.wsj.com/article/SB123914805204099085.html?mod=googlenews_wsj

We could call this a "cold war" on the network/computer ("cyber") battlefield in the sense that damaging actions are not yet being taken. Instead, "footholds" are being created from which highly effective attacks can be mounted. In this case, it's footholds in the heart of a critical area of infrastructure, our power systems.

The report speaks to the North American Electric Reliability Corp. being responsible for oversight of the security of our electrical systems, and setting standards for firewalls between administrative systems and actual control systems.

Sorry to be overly colloquial but, "well duh!".

In general, control systems shouldn't have any connections to the internet, period. Interconnects between "administrative" systems that are internet connected and the control systems should not exist, or should utilize proprietary and highly secured lines and technologies. Obviously this isn't the case. It's a safe assumption that a casual attitude in the evolution of the internal systems in the power industry, combined with a real lack of understanding of the ability of hackers to thread malware through a wide variety of industry standard communications interfaces, has led to a high degree of interconnection and thereby to an easy-to-penetrate set of control systems.

Unfortunately the problem certainly isn't limited to power systems. Is the situation likely to be any different in our telecommunications infrastructure? Our water management infrastructure? Our police and civil defense infrastructure? Our hospital and emergency response infrastructures? If our power control systems can be subverted, is there much of anything in the civil arena that isn't in all likelihood subject to successful intrusion and subversion?

One area of real concern I have is the lack of computing security expertise that your typical power systems organization, and the operators of all other civil infrastructure computing systems, are going to have. Simply put, they don't have the right soldiers in the field to fight the type of war being waged.

It's no wonder that Obama's administration is issuing a call to action in the general area of "cyber security". While we are busy designing and building jet fighters that can take out anything China might produce by the year 2100, China and Russia are thinking and operating strategically.

We in the US (and other western nations) must think and act strategically too. The plane of combat has expanded in new dimensions, with the network being the enabler, and the computer control system being the field of battle. Of course we shouldn't forget that there may very well be offensive actions well under way by the US Department of Defense. However, that doesn't address our own weaknesses. If we were thinking and acting strategically and comprehensively, wouldn't there already be clear efforts underway to secure our infrastructure from cyber attack? Unfortunately this line of thinking, combined with the evidence at hand, is not comforting.

Let's go back to Conficker for a moment (see previous blogs); if I were the "owner" of that worm, my perspective would be that I have a pretty darn powerful "bomb" available: potentially an ability to bring down certain selected targets that operate on or via the internet, and potentially even wide swathes of internet based economic activity, by leveraging the power of the +/- 5 million computers under my control. Personally, I know what I would do with this capability; I'd auction it off to the highest bidder, and I'd go to Russia and China first and foremost to start the bidding process. (Then I'd go retire to a life of surfing, pool and internet poker in the Maldives.)

It's a strange new world in all respects, and this strange new world includes a new Cyber Cold War. We'll acronymize it and call it CCW (you heard it here first!). It's real, it's serious, and it is a threat to our economy and even our daily creature comforts of power, phone and internet. Obviously Arxan Technologies, Inc. is in the business of helping, both "confidentially" through our Defense Systems organization, and more openly and publicly on the commercial side through commercial products and technologies. What's needed is an active and investing government, stepping up to the plate to enable the investments by our infrastructure organizations to devise and deploy the necessary re-architecting and defense of our infrastructure computing systems.

Tuesday, April 7, 2009

Digital Piracy and How to Slow It

New reports (http://tinyurl.com/d2jfae) are putting digital piracy of media at $20B worth of content every year and rising. Much of this content is from US media companies, and as you see from the article, these kinds of figures start generating a lot of political churn.

However, realistically, can lawmakers make the slightest dent in this activity? Simply put, I think the answer is no. The methods and channels are just not subject to any serious action that is feasible from a legal perspective.

Can technology, a la DRM throughout the production and distribution channel, solve this problem? To some degree, yes. However, as the recent theft of the new Wolverine movie demonstrates, the problem is not strictly one of technology, amenable to technology solutions; in this instance, it's virtually certain an "insider" in the studio lifted an early (unencrypted) "digital print" of the movie for illicit distribution. More extreme internal controls on access may help here, but obviously are difficult given the breadth of people involved in film production in general, particularly productions with heavy special effects content. There's also the simple low budget and low quality, but still effective, pirating approach of simply "filming" (videoing? interesting how all our terms are out of date with current technology!) the film in the theatre, a pirating approach that could only be countered by full body searches at the doors of theatres. While posturing lawmakers might suggest it, that's obviously never going to happen.

So where does that leave those companies that are getting robbed blind?

I don't think it's beyond rationality to think that they might just take matters into their own hands. After all, people and organizations that are losing serious money eventually will resort to "serious" actions to solve the problem. What am I implying here? I'm implying the use of questionable at best if not outright illegal actions to attempt to impede the business of the distribution organizations involved in the piracy, particularly those using the internet as a distribution channel.

What kinds of actions? Web site attacks, in general, via all the "usual" means that hackers use to access company intranets today for illicit ends: penetration attempts followed by operation of s/w that would compromise the piracy delivery operations, and denial of service attacks as a start. A kind of "fight back with the tools available" approach...even if those tools are on the wrong side of the law.

Let's be clear: I'm not promoting illegal activities by "the good guys", and taking this kind of action would move "the good guys" into a difficult moral area, at best (vigilante action is always questionable, but it is sometimes popular as a means of getting justice). I'm merely raising the question: at what point does Big Money Lost move to Serious And Illegal Action in order to get on the offensive against the thieves robbing them blind? Does it start to happen at $20B? I suspect it just might...

Monday, April 6, 2009

Revolution in Smart Phone Design?

The new Motorola "Evoke" phone uses a single ARM processor, without a second processor (which is frequently a DSP, or digital signal processor). Typical smart phone designs use a two-processor configuration. One processor (the ARM, frequently called the "application processor") runs a full-up operating system and general applications including the graphical user interface. This OS is typically WinCE, Symbian, Linux, Apple's OS X, PalmOS, etc. The second processor runs s/w that is responsible for servicing the radio, including accepting/processing inbound calls, initiating outbound calls, etc. This s/w is called the "modem stack". The modem stack requires real-time processing, meaning responses and transactions must occur within a deterministic period of time, frequently measured in the range of tens of microseconds. Longer delays can cause phone operation glitches and call failures.

By separating out the application OS and applications themselves from the modem s/w via separate processors, phone designs assure that the modem processing is not affected by the applications, and the phone (as a phone, vs. as a computer) operates correctly and reliably.

The Evoke phone merges these separate functions onto a single processor. It does this by utilizing a "micro-kernel". A micro-kernel virtualizes the hardware, giving each higher level OS the perception that it is running directly on the hardware, controlling and manipulating hardware resources, when in fact the micro-kernel is really doing that work. The micro-kernel can make decisions about which OS environment gets priority. By being extremely lightweight, the micro-kernel adds very little overhead to overall operations.
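
As a toy illustration of that prioritization idea (nothing like OKL4's actual scheduler, just my own sketch of "modem work always goes first"):

```python
from collections import deque

modem_events = deque()   # radio interrupts, call setup, etc. (hard deadlines)
app_tasks    = deque()   # UI redraws, downloads, etc. (best effort)

def dispatch_once() -> None:
    """Always drain pending modem work before giving the application side a slice."""
    while modem_events:
        handler = modem_events.popleft()
        handler()
    if app_tasks:
        task = app_tasks.popleft()
        task()

# Hypothetical usage:
modem_events.append(lambda: print("service inbound call"))
app_tasks.append(lambda: print("redraw home screen"))
dispatch_once()   # the call is serviced before the UI gets any time
```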

The OKL4 microkernel is based on the L4 microkernel design that originated with researchers in Germany. Researchers in Australia implemented their own version of the design, then created Open Kernel Labs to commercialize the technology, around 2002. While at MontaVista Software (an embedded Linux company which is a leader in providing Linux for cell phone designs), and while I can't give specifics, I'll say I was "aware of" OKL4 and its slowly growing traction in the phone industry. The key word there is "slow".

Well, this "Evoke" phone shifts the gears up from slow to fast, in my opinion. The cost benefit of being able to use a much simpler, lower cost, and lower power core "system on a chip" is huge. Simply put, within 18 months, I would expect the majority of new smart phone product releases to have moved to this general architecture, using a variety of specific micro-kernels.

Who are those micro-kernel players? Open Kernel Labs, VMWare (who purchased Trango Virtual Processors a while back to broaden their portfolio and enter this market), Chorus produced by Jaluna in France, RTLinux now owned by Wind River, and probably others.

One interesting question to be answered is whether this integration of application and modem functions on a single processor overly compromises the user experience on the application side. My guess is "no", for the simple reason that when being used as a phone, application execution is not important!

The L4 design is considered extremely high performance relative to most micro-kernel designs, due to careful cache management to ensure high performance, low level IPC (inter-process communication) operations. Open Kernel Labs could end up being a big winner with this technology, and thereby a significant new player overall in the OS market. Before you discount this as a niche: yes, it's a niche, but consider the unit volumes, and remember that VMWare started with very similar technology in a market area with far smaller unit volumes (though obviously far larger budgets spent on the equipment overall).

VMWare is sure to be a player in this new market as well, though its technology will have to deliver, as marketing hype is not sufficient to win in this market!

It will be interesting to watch how this all develops.

Friday, April 3, 2009

Application Security 1A

There's a fascinating demo and supporting tool to be shown and released at the upcoming Black Hat in Amsterdam (http://tinyurl.com/djad82). The researcher is showing techniques that use SQL injection (typically used to get at inappropriate/inaccessible database contents) to "take over" the SQL server, and from there, to upload arbitrary privileged code onto the server, effectively allowing complete server takeover.

Gad zooks. The researcher says this is enabled by taking advantage of default settings in the SQL server, combined with SQL and OS code that have flaws enabling buffer overflow attacks (don't understand those yet? Try here: http://en.wikipedia.org/wiki/Buffer_overflow).
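
The application-side half of the fix is old news but worth restating. Here's a minimal sqlite3 sketch of my own showing the difference between splicing user input into the SQL text and passing it as a bound parameter (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-....')")

user_input = "alice' OR '1'='1"   # a classic injection probe

# Vulnerable: the input becomes part of the SQL statement itself.
rows_bad = conn.execute(
    "SELECT card FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: the input travels as data, never as SQL.
rows_good = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)).fetchall()

print(len(rows_bad), len(rows_good))   # 1 vs 0: the probe only "works" when concatenated
```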

A week ago I presented a webinar on "Application Security: A 360 Degree View" (which you should be able to find/watch here: http://www.arxan.com), and the focus was on the need for comprehensive security practices throughout the software development lifecycle.

So what's the final word from Mr. BlackHat researcher (Bernardo Guimaraes)? "I think that the attacks described are realistic threats when the Web application does not follow a proper security development life cycle and the database server is used with default configurations in place or is badly configured."

Ding dong! As Pogo said oh so long ago, "we have met the enemy, and they are us...".

Thursday, April 2, 2009

We Need a New OS!

It's time for a new operating system.

Windows (and Linux and BSD) as the foundation operating systems for the Computer Economy Age just don't cut it folks.

BSD, with its security minded focus, is best, but still far from rigorous; Linux is worse; and Windows is downright obscene when it comes to security. And I'm not just talking about security flaws, like the defect that allowed the buffer overflow attack used by Conficker. I'm talking about fundamental design.

I "grew up" (professionally that is) in an industrial OS R&D lab, at Hewlett-Packard. While we were dealing with OS kernel basics, the notions of security (and robustness, the idea of system ever ever ever going down was absolutely unacceptable, a system crash in the field was an all hands on deck and send out the best engineers on site exercise and rarely happened) were deep and strong in our designs. Windows for example casually allows external objects to create and launch a new thread in a running process...say what? Hijack system entry points...hello??
Memory access permissions are loose and can be over-ridden. From the perspective of an old school old guys, it's completely nuts what's allowed in a Windows environment.

I suppose the thought process of the designers was "enable flexibility", but the result is an environment where anything goes, and unfortunately just about anything can and does, including all kinds of subversive activities by the criminal technologists.

On top of this sinful licentiousness of the OS is the complexity, and when you add the two together, you enable the bad s/w to pull all kinds of shenanigans and hide itself extremely well in the process. Conficker is a great example: it uses multiple techniques to simply not show up, or to otherwise hide itself in a sea of other junk, in running process, DLL and/or registry scans.

It's important to think about this pretty deeply because let's face it, the world is already deeply dependent on the operation of our computers and their continuous communications on the internet. I'm not talking about just the "convenience" of email and chat (though just shut down those and imagine the chaos to the economy!), I'm talking about the world of finance and general B2B transactions that are computer and internet based.

Can we really afford to have the fundamental computing and communications infrastructure of our world economy dependent on crappy s/w designs?

Unfortunately today we have no choice. But it sure would be nice if we could have a new operating system, one that is well organized, properly modular, with appropriate levels of security and complexity.

The problem of course is the extraordinary amount of s/w that already exists in the world that depends on a Windows or Linux environment. However, this shouldn't completely block the attempt, as reasonable emulation environments for applications can be crafted and run on top of a true modern OS, one of sufficient quality to actually base business operations on.

Note that a "root of trust" design around which Windows could be wrapped doesn't really cut it, for the the reason that you still have the Windows environment with all of it's fundamental lack of secure processing models. Root of trust designs can enable secure functions with secure access to particular hardware (a good model for a cell phone design where you want a secure core for come things but a broad application OS for "the general public"), but don't address the broader OS environment as a whole.

I don't know how a new modern, secure and highly adopted OS is going to come about. Linux and BSD are pretty amazing developments, and each took 10+ years to get to significant mainstream adoption. But they DID happen, and it can happen again. So I encourage all you smart and motivated s/w engineers out there, don't be shy, MAKE IT HAPPEN! Not for me, but "for our children". Because running our businesses and increasingly our lives on fundamentally non-secure computing platforms is just a bit insane, if you ask me.

Tuesday, March 31, 2009

More thoughts on Conficker worm

How much damage can someone with "remote control" of somewhere between 2 million and 15 million computers (the estimated number of conficker infected computers worldwide) actually do?

Think about that. Whoever is "running" this worm has the ability to update the worm, in general, within a few days time, effectively issuing new operating instructions to this vast arsenal of internet connected systems.

So what kind of attack can be launched? How much and/or what specific critical areas of world economy or infrastructure can be attacked?

Is it conceivable that a vast amount of the world economy can be brought to its knees?

I honestly don't know. And so far, I can't find anyone who's saying, aside from vague comments such as "in the worst case, Conficker could be turned into a powerful offensive weapon for performing concerted information warfare attacks that could disrupt not just countries, but the Internet itself."

What's not addressed in this kind of comment is the degree of impact on the world economy if attacks could in fact "disrupt the internet itself". To the degree such a general statement is true...I'd say the world economy could be pretty much brought to a standstill, don't you agree? The world economy is extraordinarily dependent upon the internet, in a way that we haven't really grokked...but we need to, and quickly.

Are we potentially facing one of those comic book moments where Dr. Doom truly causes mass disruption of the world economy, then announces to the world that he'll only re-enable operations if he's sent $100B? Or named world dictator? Or both.

I know it's fantastical...but if there aren't people RIGHT NOW sitting in Pentagon think tanks analyzing the potential level of disruption, I'd be awfully shocked. Unless of course the level of potential disruption is well understood already, in which case they are probably analyzing just what can be done about it.

I've spent some time reading through the detailed descriptions of this nasty little worm, and it's a son of a gun. The problem is that unless the people owning/managing the infected computers wipe it out themselves, by retaking control of their own computers and running appropriate "disinfectant" software, there's just no darn way to recover control of these 2-15 million computer systems! However, most such people have no clue their machine is infected.

A few key notes if you aren't aware of them:

- this all only started in October of last year, and in that time, this worm has gone through three updates (A=>B=>B++=>C). Of course, not all older versions of the worm have successfully evolved to the new versions, so what's out there now is a range of types. These updates have included active measures to counter the "counter-measures" that security researchers have been deploying to block or disable the worm.
- the developers are using absolute state of the art technology, literally within days of its development (such as MD6; they included the first version only a few weeks after its initial development and release to the public, including its defects, and then in an update early in '09 included the very new corrections to those defects).
- the worm uses a variety of methods to access updates to itself, the most powerful being "find a domain on the internet where my master has new code for me to download". A new method for doing this is what "turns on" on April 1, 2009. Whether or not April 1 will be a date for a new version of the worm to be downloaded to all the infected machines is rather independent of this "mode switch" date.
- most perniciously, the worm performs sophisticated public/private key based validation of the veracity of any new worm version to be downloaded. The private key is known only by the creators of the worm, and at a key length of 4096 bits, it is quite immune to a brute force attack to derive the private key from the worm code and the public key.

The "easy" way to turn this thing off is to build a "good worm" if you will (some benign code that will terminate itself and stop operating once it has replaced the old version on an infected system), sign this "good worm" with the private key, and put it where all the infected systems will find it and download it. Then all the "bad conficker worms" replace themselves with a new benign version over time, and viola, threat is over. (Remind you of Data sending the "go to sleep" command into the Borg collective in Star Trek TNG? It should...)

Easy, right? The core issue is: WHAT IS THE PRIVATE KEY? How do we make sure the Borg accept the sleep message as a valid message?

The way to get the private key is "easy", and it's called the rubber hose method. You simply find the criminals responsible for conficker and put them under the rubber hose (if you will) until they share the private key. So the problem becomes one of tracking down the SOB's responsible, which is unfortunately not easy at all. There are some indications that the criminal group behind Baka Software (who distributed "anti-virus software for Windows" as a product that was itself a virus, sneaky crooks) MIGHT be responsible. Baka is apparently in Kiev, Ukraine. However, the vague signs pointing their way could also be intentional mis-direction by the real developers.

We can only hope to hear the news reports soon of the attack by a multi-national SWAT team that executes the rubber hose method of private key extraction. I for one will be cheering on the sidelines. I don't want to have to wake up every morning and recite a Pledge of Allegiance to Dr. Doom...

Monday, March 30, 2009

Conficker (note: sometimes you'll find this on the web as "conflicker", but apparently the roots of the name are dirtier than that...) is "due" to update itself on April 1 (2009). What's conficker all about and what are the implications?

First, it's on millions of PC's. Second, to date, it hasn't explicitly done anything deeply "wrong", beyond propagating and protecting itself by doing things like blocking virus scans, etc.

The question is: what IS it going to do? What's the purpose? Well, it's a tool for the "owners"; they have access to millions of computers, for doing just about whatever they darn well please, when they please. If they wake up on the wrong side of the bed and decide to toast those machines via disk wipes, they can have the worm download new instructions to do just that. However, that's not likely, because to understand this we have to understand the mindset of the creators.

The creators are, in all likelihood, part of a crime syndicate. Access to millions of computers is a tool not to be wasted on non-monetary purposes. How exactly those machines will be used over time will be revealed, one conficker update cycle at a time, starting in two days.

How does such extensive infestation happen in this day and age? Unfortunately, it happens because organizations and individuals don't keep their system software current, end of story. Anyone at Windows XP SP2 or anything more recent has access both to patches that prevent conficker, and to s/w that will find it and remove it.

The problem is all the computers that run older s/w. MS doesn't support XP SP1 anymore, for example, and has not and will not release a patch to correct the security flaw that enables infestation by conficker. While we can berate MS for this, I don't think that's appropriate, because if an organization (or individual) doesn't bother to upgrade to SP2...then are they really likely to bother with a security patch to the SP1 system? I'd guess probably not.

So while we can moan and groan about "insecure software", unfortunately this is as much or even more a human and organizational behavior (and economic) issue than it is a technology issue. Or to put it a different way, folks, there are always going to be security issues in s/w. (Well, maybe someday that won't be true, but for the next good while it is and will be!) At the same time, there will always be ways to REACT to the security issues that come to light...which requires not "dumb users" but involved users, caring users, thoughtful users, and perhaps most importantly...users who understand that keeping their computer systems secure through updates is a COST OF OWNERSHIP REQUIREMENT.

Conficker is a fascinating testament to the problem: it's insidious, and it causes, to date, no overt and obvious (to the casual user) harm to the computer. So "all is well", and infected machines work hard to infect other unprotected systems, and so it spreads. It's taking great advantage of our "feel no evil, there is no evil" attitude toward technology. I'd guess that 95% of the people sitting in front of computers infected with conficker have never heard of the worm (it is a "worm", a self-replicating computer program which, unlike a virus, does not need to attach itself to an already existing computer program). Kind of like getting a disease with no clear symptoms for some time until...whoops, something bad happens.

In the case of conficker, the "something bad" may still not be anything overtly obvious on the infected systems. For example, they could be harnessed to launch mass denial of service attacks at specific targets, or to perform spam mailings, etc. The impact on any particular infected system may be very minor.

I still hear many of you reading this asking "but isn't there a technical solution?". Sure there is! Don't run older MS operating system software that is vulnerable to an RPC buffer overflow attack; keep your system software current and updated with all the latest security patches! And don't expose your computer's ADMIN$ share over NetBIOS with weak passwords (this is probably the method through which large groups of commercial computers have been infected).
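To make the "strong passwords" point concrete: Conficker spreads across ADMIN$ shares by trying a built-in dictionary of common passwords. Here's a minimal, hypothetical Python sketch of auditing account passwords against a small weak-password list; the password list, the length threshold, and the sample accounts are all illustrative assumptions, not a real audit tool.

# Hypothetical weak-password audit, in the spirit of the dictionary
# attack Conficker runs against ADMIN$ shares. The password list,
# length threshold, and sample accounts are illustrative only.

WEAK_PASSWORDS = {
    "password", "123456", "admin", "letmein", "qwerty",
    "abc123", "password1", "changeme",
}

def audit_accounts(accounts):
    """Return the account names whose passwords look weak."""
    flagged = []
    for name, pw in accounts.items():
        if pw.lower() in WEAK_PASSWORDS or len(pw) < 8:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    # Hypothetical accounts, for illustration only.
    sample = {"backup_svc": "admin", "jsmith": "Tr0ub4dor&3", "build": "changeme"}
    print("Accounts with weak passwords:", audit_accounts(sample))

Running something like this against your own account inventory (however you choose to gather it) is the kind of cheap, proactive check that heads off exactly the infection path Conficker exploits.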

Other steps: go here and download the latest copy of the Microsoft malware scanner (the Malicious Software Removal Tool), then run it by navigating to c:\WINDOWS\system32 and running the program mrt.exe:

http://www.microsoft.com/downloads/thankyou.aspx?familyId=ad724ae0-e72d-4f54-9ab3-75b8eb148356&displayLang=en

(Note: if you are unable to access this page with your browser...you are probably infected! Conficker blocks access to most security/virus related sites...)
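If you'd rather turn that observation into a quick self-test, here is a hypothetical Python sketch: it simply checks whether a handful of well-known security sites are reachable while ordinary sites still are. The domain lists are examples I chose for illustration, and a failure pattern here is a warning sign, not a diagnosis.

# Hypothetical reachability self-test: Conficker variants block access to
# many security vendor sites, so "security sites fail while ordinary sites
# work" is a rough warning sign, not a diagnosis. Domain lists are examples.
import socket

SECURITY_SITES = ["www.microsoft.com", "www.symantec.com",
                  "www.mcafee.com", "www.f-secure.com"]
CONTROL_SITES = ["www.google.com", "www.wikipedia.org"]

def reachable(host, port=80, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    blocked = [h for h in SECURITY_SITES if not reachable(h)]
    controls_ok = any(reachable(h) for h in CONTROL_SITES)
    if controls_ok and blocked:
        print("Warning sign: security sites unreachable while other sites work:", blocked)
    else:
        print("No obvious blocking pattern detected.")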

Can system software, or some kind of add-on security software, be designed to "automatically detect" and report an infestation? Well...yes. The newest versions of anti-virus software do exactly this; they find and neutralize Conficker, which is why one of Conficker's actions is to disable the updating and execution of virus detection s/w! Okay, you might ask, how about "built in" safeguards, built-in sentries? Well...yes. The anti-virus s/w on your computer (assuming you are current with your updates and not infected) does this. "How about something generic that is always there from the get-go, that can find and notify about or even destroy this and any and all new and different viruses and worms?" Ah yes, the holy grail.

No.

Why not? Because step 1b for every serious virus/worm designer is "counter all the existing defenses". So it becomes a game of "can we develop a non-counter-able defense, one that can find and deal with any arbitrary infestation?".

My friends, that is seriously difficult, perhaps bordering on impossible, and perhaps even formally "uncomputable" (for you computer scientist types). Perhaps it could be done in an extremely well-structured operating system environment with extremely formal interfaces and controls...which, of course, Windows definitely is not.

That said, you CAN enable programs to monitor themselves for changes, and you CAN enable programs to validate the "correctness" and "appropriateness" of any attaching modules; Arxan is in this business. But it takes proactive effort by the owners of all those programs to take such actions. Additionally, worms such as Conficker don't operate this way: they come in as separate bodies of code, hiding themselves. Can all such "inappropriate" s/w be seen/found and rooted out as it lands in a computing system? That's tough, because again, a Windows (or Linux, etc.) environment is one very complicated environment, with a wide range of dynamic content, including many different programs being loaded and run.
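To make the self-monitoring idea concrete, here is a deliberately minimal Python sketch of a program checking its own on-disk image against a known-good digest recorded at build time. This is only an illustration of the concept, not how Arxan's products work; real anti-tamper protection also has to guard the code in memory and protect the check itself from being patched out, and the expected digest below is a placeholder.

# Minimal illustration of a program checking its own on-disk image against
# a digest recorded at build time. Real anti-tamper protection goes much
# further (in-memory checks, protected checkers, overlapping guards, etc.).
# EXPECTED_SHA256 is a placeholder, not a real value.
import hashlib
import sys

EXPECTED_SHA256 = "<known-good digest recorded at build time>"

def current_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    if current_digest(sys.argv[0]) != EXPECTED_SHA256:
        print("Integrity check failed: this file differs from the one that shipped.")
        sys.exit(1)
    print("Integrity check passed.")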

The analogies with biological viruses and human behaviors here are just too strong to ignore. "Can we protect ourselves from viruses?" Well, yes...to a degree. Wash your hands often, particularly after being in public places or having human interactions, take your vitamin C (500 mg 2x/day folks, it's working for me!), get regular aerobic exercise (20+ minutes 3-4x/week), practice safe sex, etc. So...does everyone do this? Hah, not even close. So we are far, far more sick than we need to be.

Our computers are too. Between 9 and 14 million of them, by the latest estimates, are infected by Conficker alone.

That's sad. So check your computer and keep it current.

Wednesday, March 25, 2009

CERIAS Security Conference - Purdue

I attended sessions at the CERIAS security conference on the Purdue campus today, and participated as a panelist in a discussion of the recent report "Unsecured Economies", performed by Dr. Karthik Kannan, Dr. Jackie Rees and Dr. Eugene Spafford, and funded by McAfee. (The report can be accessed through this request page: http://resources.mcafee.com/content/NAUnsecuredEconomiesReport).

The report is based on a study of 1000 senior IT decision makers across 800 companies, the distribution of their IP and data assets around the world, and the IP and data theft they have experienced in the last year. The numbers are rather staggering: $4.6M in AVERAGE losses per company in a single recent year.

Some interesting questions were asked during the panel, including "how were the values of the losses assessed?" Indeed, "how we count" here is a tricky question. At Arxan, while we could look, for example, at the direct cost of any piracy of our software ("whoops, that would/could have been a customer, so that's a loss of $x of income/revenue"), the larger costs are in how such pirated s/w could be misused to compromise the company's value proposition, and in the resulting longer-term damage to the company's valuation.

My opening statement, boiled down, amounted to the following: enterprises today utilize vastly distributed computing elements, with no well-defined perimeter, each of which maintains and/or processes company data and/or IP. Perimeter defenses are ineffective, and even when in place around concentrations of computing elements, they are too easily compromised through direct and indirect attack. Therefore, our security model must directly address the security of the fundamental data, the enterprise applications that process that data, and the keys that enable the legitimate usage of the data and applications.

This is where Arxan plays, and it represents Arxan's core vision. And the reality is that it's a journey and a quest, for both us and our customers, because it's an ongoing battle with criminals who are always seeking to overcome our latest and greatest solutions and defenses.

A few other notes on the conference. Dr. Ron Ritchey of Booz Allen (and also an adjunct professor at George Mason teaching a course in secure software development) gave the keynote this morning, and focused on the question of how security flaws do or do not scale with the size and/or complexity of the code base. He had some fascinating data from the operating system world showing the find rate of security issues in operating systems, particularly various Microsoft OSes, over time.

At one level (to me anyway) it's "obvious" that security issues scale with size and complexity. The questions are a bit more subtle than that: can security issues be taken out of a given code base over time, and can complexity management be applied to the continued development of or addition to that code base to keep aggregate security issues "constant" (or even on a downward-sloping trend line)? The most obvious driver, it seems to me, is the set of s/w lifecycle practices utilized in the enhancement/maintenance process itself. Additionally, usage levels are a critical factor in the resulting metrics. For example, Dr. Ritchey shared data showing nicely down-trending security find rates for NT starting around year 5 or 6 of deployment...but mightn't this be primarily a function of steadily decreasing usage relative to newer Windows versions, rather than any indication that NT was better or is being maintained "better"?

The other interesting data was on Vista as it compared with XP and other older MS OSes. The find rate curve for Vista over its first two years is dramatically sharper than it was for XP (which in turn was higher than for the previous version); in fact the increase in slope was, to me, rather alarming. There were only two data points, but the trend is clearly in the wrong direction by a long shot, and this for an OS where increased security was a primary business objective (or so I understood). Of course the code size and complexity level of Vista vs. XP is much larger/higher, so..."to be expected", but that's not a good answer for us the users, nor for the software industry in general, is it?

Ciao for now,

-Kevin

Thursday, March 19, 2009

Software Security from the Arxan CTO

Hello world!

Yes, very punny for you computer scientists (for you others, a program that prints "Hello world!" is the first program in the original book by Kernighan and Ritchie on the C programming language...).

I'm Kevin Morgan, and for over two years I've been managing product R&D (and support and training development, and now professional services) here at Arxan Technologies. We specialize in application software protection; protection from what, you ask? Protection from reverse engineering to steal your intellectual property; protection from tampering to unlock features that customers haven't paid for or are not allowed to access; protection from tampering to break license management or activation so attackers can run the software without paying for it; protection from tampering so they can steal unencrypted digital content; protection from tampering so they can access your company's internal business data or intellectual property, or worse yet, perform illicit financial transactions. The list goes on and on.

Now they pinned a CTO title on my chest, and among other things, asked me to blog about what's up in the world of software security in general. So here's my first post just to get started, with I'm sure many more (with real content!) to come...

Be blogging at you soon.

-Kevin Morgan
kmorgan@arxan.com
VP of Engineering
Chief Technology Officer (Commercial)