Wednesday, April 15, 2009

"Secure software" isn't enough anymore!

Lots of folks are talking about "securing software" in the rather traditional context of "writing secure software", and this is being broadened out to a complete security focus through the entire lifecycle. You can hear me discuss this on this recorded webinar:

http://www.arxan.com/software-protection-resources/webinar-series/application-security-360-view-webinar.php

and colleagues at Fortify and Cigital have developed a "Building Security In Maturity Model", which is here:

http://www.bsi-mm.com/

However, I'm here to tell you folks, IT ISN'T ENOUGH.

What's that you say? What more is there? What more can we do than ensure our applications don't have security flaws?

The answer is that applications have to go on the offensive. Applications must not just be "defensively secure" by not having code vulnerabilities, they must take active measures to detect and respond to attacks directed against themselves.

Of course my company is in this business, and of course this is a blatant advertisement...but darn it folks, it is absolutely true, and knowing what I know, I'd be saying this even if I worked as a used car salesman. Applications in the enterprise, in the cloud, distributed applications (ISV s/w) and applications in endpoint devices (phones, set top boxes, automobiles, home gaming systems, the list is endless) are the new focused target of attack by organized crime. And these applications CAN be engineered to have multiple layers of active defense ("offensive defense").

Applications can and should check themselves for code integrity. Applications can and should authenticate components that are dynamically attached (DLLs). Applications can and should detect and notify of debugger attachments. Applications can and should protect critically sensitive code through encryption and dynamic decrypt/execute/re-encrypt operations. Applications should utilize multiple overlapping networks of these self-guarding techniques, with a variety of overt and subtle response actions, to ensure that persistent attacks are foiled at some level. Enterprise applications should have these response actions wired into the security monitoring systems deployed by the enterprise.
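To make the first of these concrete, here's a minimal sketch in Python of an application checking its own code integrity at runtime. The names (GuardedApp, code_fingerprint) are invented purely for illustration; real protection products use far stronger, harder-to-strip mechanisms than a single hash check.

```python
import hashlib

def code_fingerprint(path):
    """Hash the program image on disk; a mismatch at runtime suggests
    the binary has been patched or otherwise tampered with."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class GuardedApp:
    """Records a trusted fingerprint at startup and re-checks it later."""
    def __init__(self, path):
        self.path = path
        self.expected = code_fingerprint(path)

    def verify(self):
        # On a mismatch, a real application would respond: alert a
        # monitoring system, degrade functionality, or exit.
        return code_fingerprint(self.path) == self.expected
```

In practice the expected fingerprint would be embedded at build time and checked from many mutually guarding points, so an attacker can't simply patch out the one check.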

These practices need to become commonplace and part of our general software lifecycles. The world is too dangerous a place for it not to happen. We need to keep up with the organized criminals, and right now our software is falling woefully behind.

Wednesday, April 8, 2009

Cyberwar is Real: US Electrical Grid Attacked and Compromised

The Wall Street Journal has reported what many of us on the inside of the cyber security world already knew, namely that there is very serious cyber warfare going on today between Russia and China, and the US. Read the report here:

http://online.wsj.com/article/SB123914805204099085.html?mod=googlenews_wsj

We could call this a "cold war" on the network/computer ("cyber") battlefield in the sense that damaging actions are not yet being taken. Instead, "footholds" are being created from which highly effective attacks can be mounted. In this case, it's footholds in the heart of a critical area of infrastructure, our power systems.

The report speaks to the North American Electric Reliability Corp. being responsible for oversight of the security of our electrical systems, and for setting standards for firewalls between administrative systems and actual control systems.

Sorry to be overly colloquial but, "well duh!".

In general, control systems shouldn't have any connections to the internet, period. Interconnects between "administrative" systems that are internet connected and the control systems should not exist, or should utilize proprietary and highly secured lines and technologies. Obviously this isn't the case. It's a safe assumption that a casual attitude in the evolution of the internal systems in the power industry, combined with a real lack of understanding of the ability of hackers to thread malware through a wide variety of industry standard communications interfaces, has led to a high degree of interconnection and thereby to an easy-to-penetrate set of control systems.

Unfortunately the problem certainly isn't limited to power systems. Is the situation likely to be any different in our telecommunications infrastructure? Our water management infrastructure? Our police and civil defense infrastructure? Our hospital and emergency response infrastructures? If our power control systems can be subverted, is there much of anything in the civil arena that isn't in all likelihood subject to successful intrusion and subversion?

One area of real concern for me is the lack of computing security expertise that your typical power systems organization, and the operators of all other civil infrastructure computing systems, are going to have. Simply put, they don't have the right soldiers in the field to fight the type of war being waged.

It's no wonder that Obama's administration is issuing a call to action in the general area of "cyber security". While we are busy designing and building jet fighters that can take out anything China might produce by the year 2100, China and Russia are thinking and operating strategically.

We in the US (and other western nations) must think and act strategically too. The plane of combat has expanded in new dimensions, with the network being the enabler, and the computer control system being the field of battle. Of course we shouldn't forget that there may very well be offensive actions well under way by the US Department of Defense. However, that doesn't address our own weaknesses. If we were thinking and acting strategically and comprehensively, wouldn't there already be clear efforts underway to secure our infrastructure from cyber attack? Unfortunately this line of thinking, combined with the evidence at hand, is not comforting.

Let's go back to Conficker for a moment (see previous blogs): if I were the "owner" of that worm, my perspective would be that I have a pretty darn powerful "bomb" available, with the potential ability to bring down selected targets that operate on or via the internet, and potentially even wide swathes of internet-based economic activity, by leveraging the power of the +/- 5 million computers under my control. Personally, I know what I would do with this capability: I'd auction it off to the highest bidder, and I'd go to Russia and China first and foremost to start the bidding process. (Then I'd retire to a life of surfing, pool and internet poker in the Maldives.)

It's a strange new world in all respects, and this strange new world includes a new Cyber Cold War. We'll acronymize it and call it CCW (you heard it here first!). It's real, it's serious, and it is a threat to our economy and even our daily creature comforts of power, phone and internet. Obviously Arxan Technologies, Inc. is in the business of helping, both "confidentially" through our Defense Systems organization, and more openly and publicly on the commercial side through commercial products and technologies. What's needed is an active and investing government, stepping up to the plate to enable the investments by our infrastructure organizations to devise and deploy the necessary re-architecting and defending of our infrastructure computing systems.

Tuesday, April 7, 2009

Digital Piracy and How to Slow It

New reports (http://tinyurl.com/d2jfae) are putting digital piracy of media at $20B worth of content every year and rising. Much of this content is from US media companies, and as you see from the article, these kinds of figures start generating a lot of political churn.

However, realistically, can lawmakers make the slightest dent in this activity? Simply put, I think the answer is no. The methods and channels are just not subject to any serious action that is feasible from a legal perspective.

Can technology such as DRM throughout the production and distribution channel solve this problem? To some degree, yes. However, as the recent theft of the new Wolverine movie demonstrates, the problem is not strictly one of technology, amenable to technology solutions; in this instance, it's virtually certain an "insider" at the studio lifted an early (unencrypted) "digital print" of the movie for illicit distribution. More extreme internal controls on access may help here, but they are obviously difficult given the breadth of people involved in film production in general, particularly films heavy with special effects work. There's also the simple low-budget, low-quality, but still effective pirating approach of simply "filming" (videoing? interesting how all our terms are out of date with current technology!) the film in the theatre, a pirating approach that is only amenable to full body searches at the doors of theatres. While posturing lawmakers might suggest it, that's obviously never going to happen.

So where does that leave those companies that are getting robbed blind?

I don't think it's beyond rationality to think that they might just take matters into their own hands. After all, people and organizations that are losing serious money eventually will resort to "serious" actions to solve the problem. What am I implying here? I'm implying the use of actions, questionable at best if not outright illegal, to attempt to impede the business of the distribution organizations involved in the piracy, particularly those using the internet as a distribution channel.

What kinds of actions? Web site attacks, in general, via all the "usual" means that hackers use to access company intranets today for illicit ends: penetration attempts followed by operation of s/w that would compromise the piracy delivery operations, and denial of service attacks as a start. A kind of "fight back with the tools available" approach...even if those tools are on the wrong side of the law.

Let's be clear: I'm not promoting illegal activities by "the good guys", and taking this kind of action would move "the good guys" into a difficult moral area, at best (vigilante action is always questionable, but it is sometimes popular as a means of getting justice). I'm merely raising the question: at what point does Big Money Lost move to Serious And Illegal Action in order to get on the offensive against the thieves robbing them blind? Does it start to happen at $20B? I suspect it just might...

Monday, April 6, 2009

Revolution in Smart Phone Design?

The new Motorola "Evoke" phone uses a single ARM processor, without a second processor (which is frequently a DSP, or digital signal processor). Typical smart phone designs use a two processor configuration. One processor (the ARM, frequently called the "application processor") runs a full-up operating system and general applications including the graphical user interface. This OS is typically WinCE, Symbian, Linux, Apple's OS-X, PalmOS, etc. The second processor runs s/w that is responsible for servicing the radio, including accepting/processing inbound calls, initiating outbound calls, etc. This s/w is called the "modem stack". The modem stack requires real-time processing, meaning responses and transactions must occur within a deterministic period of time, frequently measured in the range of tens of microseconds. Longer delays can cause phone operation glitches and call failures.

By separating out the application OS and applications themselves from the modem s/w via separate processors, phone designs assure that the modem processing is not affected by the applications, and the phone (as a phone, vs. as a computer) operates correctly and reliably.

The Evoke phone merges these separate functions onto a single processor. It does this by utilizing a "micro-kernel". A micro-kernel virtualizes the hardware, giving each higher-level OS the perception that it is running directly on the hardware, controlling and manipulating hardware resources, when in fact the micro-kernel is really doing that work. The micro-kernel can make decisions about which OS environment gets priority. By being extremely lightweight, the micro-kernel adds very little overhead to overall operations.
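As a toy illustration of that priority decision (my own Python sketch, not how OKL4 or any real micro-kernel is implemented), imagine the kernel draining a single run queue in which the modem personality always outranks the application personality:

```python
import heapq

class MicroKernel:
    """Toy scheduler: each guest OS submits tasks tagged with a priority.
    The 'modem' guest outranks the 'application' guest, so radio work
    is serviced first even when the application side is busy."""
    MODEM, APPS = 0, 1  # lower number = higher priority

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves submission order within a guest

    def submit(self, guest, task):
        heapq.heappush(self._queue, (guest, self._seq, task))
        self._seq += 1

    def run(self):
        done = []
        while self._queue:
            _, _, task = heapq.heappop(self._queue)
            done.append(task())
        return done

k = MicroKernel()
k.submit(MicroKernel.APPS, lambda: "redraw UI")
k.submit(MicroKernel.MODEM, lambda: "service radio interrupt")
# The modem task runs first despite being submitted second.
```

A real micro-kernel does this with hardware interrupts, preemption and protected address spaces rather than a cooperative queue, but the scheduling idea is the same.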

The OKL4 microkernel is a design based on the L4 microkernel design that originated with researchers in Germany. Researchers in Australia implemented their own version of the design, then created Open Kernel Labs to commercialize the technology, around 2002. From my time at MontaVista Software (an embedded Linux company which is a leader in providing Linux for cell phone designs), I can't give specifics, but I'll say I was "aware of" OKL4 and its slowly growing traction in the phone industry. The key word there is "slow".

Well, this "Evoke" phone shifts the gears up from slow to fast, in my opinion. The cost benefit of being able to use a much simpler, lower cost, and lower power core "system on a chip" is huge. Simply put, within 18 months, I would expect the majority of new smart phone product releases to have moved to this general architecture, using a variety of specific micro-kernels.

Who are those micro-kernel players? Open Kernel Labs, VMWare (who purchased Trango Virtual Processors a while back to broaden their portfolio and enter this market), Chorus produced by Jaluna in France, RTLinux now owned by Wind River, and probably others.

One interesting question to be answered is whether this integration of application and modem functions on a single processor overly compromises the user experience on the application side. My guess is "no", for the simple reason that when being used as a phone, application execution is not important!

The L4 design is considered to be extremely high performance relative to most micro-kernel designs, due to careful cache management to ensure fast low-level IPC (inter-process communication) operations. Open Kernel Labs could end up being a big winner with this technology, and thereby a significant new player overall in the OS market. Before you discount this as a niche: yes, it's a niche, but consider the unit volumes, and remember that VMWare started with very similar technology in a market area with far smaller unit volumes (though obviously far larger budgets spent on the equipment overall).

VMWare is sure to be a player in this new market as well, though its technology will tell the tale, as marketing hype is not sufficient to win in this market!

It will be interesting to watch how this all develops.

Friday, April 3, 2009

Application Security 1A

There's a fascinating demo and supporting tool to be shown and released at the upcoming Black Hat in Amsterdam (http://tinyurl.com/djad82). The researcher is showing techniques that use SQL injection (typically used to get at inappropriate/inaccessible database contents) to "take over" the SQL server, and from there, to upload arbitrary privileged code onto the server, effectively allowing complete server takeover.

Gadzooks. The researcher says this is enabled by taking advantage of default settings in the SQL server, combined with SQL and OS code that have flaws enabling buffer overflow attacks (don't understand those yet? Try here: http://en.wikipedia.org/wiki/Buffer_overflow).
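For anyone new to the injection half of this, here's a hedged sketch using Python's bundled sqlite3 module (the table and helper functions are invented for illustration) showing both the hole and the standard fix, parameterized queries:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")
db.execute("INSERT INTO users VALUES ('root', 1)")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled text is spliced straight into the SQL.
    return db.execute(
        "SELECT name FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return db.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
# lookup_unsafe(payload) matches every row; lookup_safe(payload) matches none.
```

The Black Hat work goes further, chaining this kind of access into flaws in the server itself, but the entry point is the same unvalidated input.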

A week ago I presented a webinar on "Application Security: A 360 Degree View" (which you should be able to find/watch here: http://www.arxan.com), and the focus was on the need for comprehensive security practices throughout the software development lifecycle.

So what's the final word from Mr. BlackHat researcher (Bernardo Guimaraes)? "I think that the attacks described are realistic threats when the Web application does not follow a proper security development life cycle and the database server is used with default configurations in place or is badly configured."

Ding dong! As Pogo said oh so long ago, "we have met the enemy and he is us".

Thursday, April 2, 2009

We Need a New OS!

It's time for a new operating system.

Windows (and Linux and BSD) as the foundation operating systems for the Computer Economy Age just don't cut it folks.

BSD, with its security-minded focus, is best but still far from rigorous; Linux is worse; and Windows is downright obscene when it comes to security. And I'm not just talking about security flaws, like the defect that allowed the buffer overflow attack used by Conficker. I'm talking about fundamental design.

I "grew up" (professionally, that is) in an industrial OS R&D lab at Hewlett-Packard. While we were dealing with OS kernel basics, the notions of security and robustness were deep and strong in our designs (the idea of a system ever, ever, ever going down was absolutely unacceptable; a crash in the field was an all-hands-on-deck, send-the-best-engineers-on-site exercise, and it rarely happened). Windows, for example, casually allows external processes to create and launch a new thread in a running process...say what? Hijack system entry points...hello?? Memory access permissions are loose and can be overridden. From the perspective of an old-school old guy, what's allowed in a Windows environment is completely nuts.

I suppose the thought process of the designers was "enable flexibility", but the result is an environment where anything goes, and unfortunately just about anything can and does, including all kinds of subversive activities by the criminal technologists.

On top of this sinful licentiousness of the OS is the complexity, and when you add the two together, you enable the bad s/w to pull all kinds of shenanigans and hide itself extremely well in the process. Conficker is a great example: it uses multiple techniques to make itself just not show up or otherwise hide itself in a sea of other crap in running process, DLL and/or registry scans.

It's important to think about this pretty deeply because let's face it, the world is already deeply dependent on the operation of our computers and their continuous communications on the internet. I'm not talking about just the "convenience" of email and chat (though just shut down those and imagine the chaos to the economy!), I'm talking about the world of finance and general B2B transactions that are computer and internet based.

Can we really afford to have the fundamental computing and communications infrastructure of our world economy dependent on crappy s/w designs?

Unfortunately today we have no choice. But it sure would be nice if we could have a new operating system, one that is well organized, properly modular, with appropriate levels of security and complexity.

The problem of course is the extraordinary amount of s/w that already exists in the world that depends on a Windows or Linux environment. However, this shouldn't completely block the attempt, as reasonable emulation environments for applications can be crafted and run on top of a true modern OS, one of sufficient quality to actually base business operations on.

Note that a "root of trust" design around which Windows could be wrapped doesn't really cut it, for the reason that you still have the Windows environment with all of its fundamental lack of secure processing models. Root of trust designs can enable secure functions with secure access to particular hardware (a good model for a cell phone design where you want a secure core for some things but a broad application OS for "the general public"), but they don't address the broader OS environment as a whole.

I don't know how a new modern, secure and highly adopted OS is going to come about. Linux and BSD are pretty amazing developments, and each took 10+ years to get to significant mainstream adoption. But they DID happen, and it can happen again. So I encourage all you smart and motivated s/w engineers out there, don't be shy, MAKE IT HAPPEN! Not for me, but "for our children". Because running our businesses and increasingly our lives on fundamentally non-secure computing platforms is just a bit insane, if you ask me.