Monthly Archives: May 2015

Study: Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities

Really interesting post from BanyanOps that screams for supply chain management solutions:

“Docker Hub is a central repository for Docker developers to pull and push container images. We performed a detailed study on Docker Hub images to understand how vulnerable they are to security threats. Surprisingly, we found that more than 30% of official repositories contain images that are highly susceptible to a variety of security attacks (e.g., Shellshock, Heartbleed, Poodle, etc.). For general images – images pushed by docker users, but not explicitly verified by any authority – this number jumps up to ~40% with a sampling error bound of 3%.”
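The kind of audit the study describes can be sketched simply: compare the package versions found inside an image against a list of known-vulnerable versions. The package names and version data below are illustrative only (a real audit would pull from a CVE feed), though the OpenSSL 1.0.1–1.0.1f range for Heartbleed matches the public advisory.

```python
# Known-vulnerable versions (illustrative; a real check would consume a CVE feed).
VULNERABLE = {
    "openssl": {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c",
                "1.0.1d", "1.0.1e", "1.0.1f"},  # Heartbleed-affected range
    "bash": {"4.3"},  # Shellshock-era release (pre-patch)
}

def audit_image(installed: dict) -> list:
    """Return (package, version) pairs matching a known-vulnerable version."""
    findings = []
    for pkg, version in sorted(installed.items()):
        if version in VULNERABLE.get(pkg, set()):
            findings.append((pkg, version))
    return findings

if __name__ == "__main__":
    # Simulated package list extracted from an image (e.g. via `dpkg -l`).
    image_packages = {"openssl": "1.0.1e", "bash": "4.3", "curl": "7.35.0"}
    for pkg, ver in audit_image(image_packages):
        print(f"VULNERABLE: {pkg} {ver}")
```

The study's point is that this trivial check fails for a third of official images: the fix exists upstream, but the supply chain never delivers it.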

For Want of a Patch (& a Supply Chain)

(originally published at CTOVision.com, December 8, 2014 )

For Want of a Patch

For want of a patch the component was lost.

For want of a component the stack was lost.

For want of a stack the system was lost.

For want of a system the message was lost.

For want of a message the cyberbattle was lost.

For want of a battle the enterprise was lost.

And all for the want of a software patch.

As the old proverb reminds us, logistics is important: most battles are decided before they have begun by the presence or absence of a solid logistics tail. During WWII the Allies found this out the hard way in their early amphibious invasions: ships loaded incorrectly delayed getting materiel onto the beaches and into the towns, and things like ammunition, fuel and medical supplies are needed before typewriters and tents. As subsequent invasions progressed (North Africa, Sicily, Italy) the military learned to better coordinate the planning, loading and unloading of materiel and manpower for the largest effect in the fight. These processes culminated in the successful massive invasion of Normandy that ended the Third Reich's hold on Europe.

The key lesson was to view logistics in war as a continuous process that feeds a fast and continuously maneuvering Army. 

Cyberwar is no different, and it follows the proverb even more closely: one unpatched line of code can leave an entire enterprise open to assault. Why? Accelerating use of software, more software dependencies on other pieces of software, AND all of that software constantly in need of updating. Current organizational processes for keeping software updated can't keep up with the change being generated by the outside world.

Example: Amazon's software deployments to production hosts and environments in May 2014: a mean time of 11.6 seconds between deployments, with a maximum of 1,079 deployments in a single hour. How many military systems can claim that many deployed changes in a month? (Ref: Gene Kim, slide 23, http://www.slideshare.net/realgenekim/why-everyone-needs-devops-now) I doubt any, but this is what the military (and modern enterprises like Sony) must prepare for: never-ending change and updates on near-random cycles.

More to the point: continual and unscheduled software patches are the landscape in this new maneuver environment. And since they can’t be planned for, organizations need to learn to evolve for change and deploy software and new capabilities continually. 

Software supply chain planning is no longer something that can be starved of funds. Malware scanners, continuous monitoring, and network scanners can tell you which barn doors are open and that the horses are leaving, but they leave enterprises with a massive punch list of fix-it items. Funding, time and effort need to be spent on the supply chain. It is the first true line of cyber-defense.

Parting shot, a question for CIOs/CTOs: can you patch all of your systems in the next hour, using existing processes and without bypassing anything? For most organizations the answer is no. OpenSSL patches (seriously!) getting emailed around from dubious sources is akin to Mom mailing ammo to her son in a care package in Afghanistan.
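The question above presumes you can even enumerate what needs patching. A minimal sketch of that first step, under the assumption of a simple host inventory (hostnames, packages, and versions here are all hypothetical):

```python
def hosts_needing_patch(inventory: dict, package: str, min_version: str) -> list:
    """Return hosts whose installed `package` is older than `min_version`.

    Naive numeric dotted-version comparison; real tooling would use a
    proper version parser (letters, epochs, etc.).
    """
    def key(v):
        return tuple(int(x) for x in v.split("."))
    return [
        host for host, packages in sorted(inventory.items())
        if package in packages and key(packages[package]) < key(min_version)
    ]

if __name__ == "__main__":
    # Hypothetical fleet inventory, e.g. gathered by a config-management agent.
    inventory = {
        "web-01": {"openssl": "1.0.1"},
        "web-02": {"openssl": "1.0.2"},
        "db-01":  {"openssl": "1.0.1"},
    }
    print(hosts_needing_patch(inventory, "openssl", "1.0.2"))
```

If an organization cannot produce the inventory this sketch takes as input, the one-hour patch question is already answered.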

For want of a message the cyberbattle was lost.

For want of a battle the enterprise was lost.

And all for the want of a software patch.

Code Supply Chain: It's Not Going to Fix Itself

An article in the WSJ today points out that the supply chain is becoming an issue of concern for banks, especially as the wider corporate IT infrastructure becomes more diverse and outsourced. Snip:

“Now they may want screenshots of the last time servers were patched, periodic testing of the patching status of those servers and information about the work that Fair Isaac outsources to others. Some financial institutions have even asked for credit scores and drug testing of employees with access to those servers. The company tries to be as transparent as it can while still preserving the privacy of its employees, said Ms. Miller.”

“Some regulators are also considering applying similar standards beyond providers to banking business partners. One Fortune 500 bank, for example, knows that several of its servers have not been patched for a serious bug called Heartbleed. If it patches those servers, though, it will break continuity with several European banks that have not upgraded their systems, said the chief information security officer of the bank, who declined to be named for security reasons. The bank must be able to share data with its overseas partners so disconnecting is not an option.”

More here:

Financial Firms Grapple With Cyber Risk in the Supply Chain, WSJ May 25, 2015


Scale, Software and Throughput

“Software is eating the world” (à la Marc Andreessen), and it is becoming increasingly pervasive: phones, apps, TVs, routers, fridges (why? no idea), drones, SCADA systems, industrial controls, cars, etc. This ever-expanding boundary of things connected to the internet fabric opens up new types of services but also massive new types of vulnerabilities. The Internet of Things (IoT) is creating a world of always-connected, always-changing, always-updating devices.

Scale also refers to the flood of changes that occur in software on a daily basis, whether new features or reactions to needed and required fixes. Keep in mind that software is a collection of building blocks: dependencies within dependencies.

Software has become a deep, wide and fast moving river of change.

Henry Ford and the manufacturing industry figured out how to deal with scale: simplify, automate and control to optimize throughput. The first thing Ford and company did was simplify how a car could be built. This required reengineering of parts, but more often than not it required simplifying how the worker interacted with the car-build process: breaking tasks down into small units, simplifying movements, getting atomic. This created the modern moving assembly line, manned by unskilled labor. The skilled labor moved up the design stack and became the engineers who designed the car AND the mechanics of the tooling, build systems and delivery processes. In effect, the skilled labor was able to scale its efforts through simplified processes and optimized, simplified build chains.

DevOps culture and techniques are pushing this mentality into software creation and delivery: the developer is the designer, while administrators and operations rely on as much automation as possible to deploy change, scale, and manage issues as quickly as possible.

Oftentimes we come across great point solutions (or products) to help with software development and delivery, but they don’t address the needs of an end-to-end enterprise supply chain.

JIT

An older book that does a great job of describing the accelerating change occurring in the software development and operations industry is The Goal (http://www.amazon.com/The-Goal-Process-Ongoing-Improvement/dp/0884270610). Its focus was the revolution of merging just-in-time (JIT) delivery/logistics of materials with updated manufacturing and information technology systems to simplify and lower the costs of creating, building and delivering physical goods. It is a must-read for anyone currently in the software delivery game who is seeking to optimize, manage and understand software supply chains, i.e., the move from install DVDs to an always-on software stream.

Software is being forced into this model to handle the scale required by validation and verification, security and vulnerability analysis, and target platform support (mobile, desktop, cloud and all of the OS variants).

Speed Matching Evolution

There are many gaps to be filled in the cybersecurity world. AirGap is focused on providing solutions to problems in the combined realms of cybersecurity and software supply chain.

When it comes to security and software we see three problems:

  • Speed: of software development and change
  • Scale: volume of software (it's everywhere and eating more of the world)
  • Supply chain and up-stream dependencies

Each of these three interrelate and must be solved in concert to enable an enterprise to be secure and up-to-date.

We view ‘speed’ as impacting the software supply chain along two axes, development and operations: the speed at which software is developed, and the speed at which it is expected to be delivered and deployed.

Software now evolves much more rapidly due to a number of factors, but chief among them are the development and delivery of changes such as bug-fixes and enhancements driven by changes in up-stream dependencies. The “don't reinvent the wheel” mentality of software projects (adopted simply because they can't afford to do otherwise) scales up the volume of software. As software is released both in realtime (as changes happen) and on rapid schedules, down-stream projects want to consume the fixes and new features as quickly as possible.
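The down-stream consumption problem above can be sketched concretely: given an application's pinned dependency versions and the latest upstream releases, report what is lagging. Package names and versions here are hypothetical, and the dotted-version comparison is deliberately naive.

```python
def find_outdated(pinned: dict, latest: dict) -> dict:
    """Map each lagging dependency to (pinned_version, latest_version).

    Naive numeric dotted-version comparison; real tooling would use a
    proper version parser.
    """
    def key(v):
        return tuple(int(x) for x in v.split("."))
    return {
        pkg: (ver, latest[pkg])
        for pkg, ver in pinned.items()
        if pkg in latest and key(latest[pkg]) > key(ver)
    }

if __name__ == "__main__":
    # Hypothetical pinned manifest vs. latest upstream releases.
    pinned = {"libfoo": "1.2.0", "libbar": "0.9.1", "libbaz": "2.0.0"}
    latest = {"libfoo": "1.4.1", "libbar": "0.9.1", "libbaz": "2.1.3"}
    for pkg, (old, new) in sorted(find_outdated(pinned, latest).items()):
        print(f"{pkg}: {old} -> {new}")
```

Every lagging entry this report surfaces is a fix or feature the up-stream has already shipped but the down-stream has not yet absorbed: the river of change, measured.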

Speed of Development: As mentioned above, the pace of software development is increasing. Agile practices, open source software frameworks and libraries, dynamic languages and improvements in tooling and delivery are combining to improve the effectiveness and efficiency of developers. New software and updates to existing software are being released readily to various open channels.

Speed of Operations: Movements such as DevOps, Continuous Deployment and Continuous Delivery are simplifying the full-scale industrialization of software coding, streamlining pipelines from source to various endpoints. Manufacturing-style automation has finally made its way to software, enabling a few people to run large chunks of engineering infrastructure at a fraction of the cost of five years ago. This automation also means fewer mistakes and errors via accountability and repeatability. But it does mean that speed must be built into the equation of software operation and maintenance.

Many enterprises haven't found it possible to keep up with either the speed of software development or the speed of operations, so their software cannot evolve to meet current needs or threats. The path simply does not exist, creating a performance and security situation that can be unmanageable. Worse, as vulnerabilities are identified in operational software, organizations struggle to “roll forward” or patch their environments.

Why? Simply put, the frog has been boiled slowly: the enterprise's organizational IT bureaucracy was built for the old cadence of software updates, maybe once a year. In many organizations the dependency on vendors to provide updates creates a lag and an environment ripe for attack. For open source software, most IT infrastructures just aren't responsive enough to process and validate the flow of changes.