
Finding (Risky) Signals in the Open Software Noise

Recently the Linux Foundation teamed up with DHS to create the Census Project, an effort to analyze which open source software projects should be considered risky and to define what that risk might be. Risky might mean: a small community, slow updates, no website, IRC-only communication, no listed maintainers, etc. People need more Code Intelligence (CodeINT) about the source code they use and better ways of classifying it.

Check it out:  The Census represents CII’s current view of the open source ecosystem and which projects are at risk. The Heartbleed vulnerability in OpenSSL highlighted that while some open source software (OSS) is widely used and depended on, vulnerabilities can have serious ramifications, and yet some projects have not received the level of security analysis appropriate to their importance. Some OSS projects have many participants, perform in-depth security analyses, and produce software that is widely considered to have high quality and strong security. However, other OSS projects have small teams that have limited time to do the tasks necessary for strong security. The trick is to identify quickly which critical projects fall into the second bucket.

The Census Project focuses on automatically gathering metrics, especially those that suggest less active projects (such as a low contributor count). We also provided a human estimate of the program’s exposure to attack, and developed a scoring system to heuristically combine these metrics. These heuristics identified especially plausible candidates for further consideration. For the initial set of projects to examine, we took the set of packages installed by Debian base and added a set of packages that were identified as potentially concerning. A natural outcome of the census will be a list of projects to consider funding. The decision to fund a project in need is not automated by any means.
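As a rough illustration of the kind of heuristic scoring the Census describes, here is a minimal sketch in Python. The metric names, thresholds and weights are my own assumptions for illustration, not the CII Census Project's actual formula.

```python
# Hypothetical sketch of a Census-style risk score.
# Metric names, thresholds and weights are illustrative assumptions,
# not the CII Census Project's actual scoring code.

def risk_score(project):
    """Combine simple project metrics into a heuristic risk score (higher = riskier)."""
    score = 0
    if project.get("contributors_12mo", 0) < 3:
        score += 2  # tiny contributor base suggests a less active project
    if project.get("days_since_last_release", 0) > 365:
        score += 1  # slow or stalled updates
    if not project.get("has_website", False):
        score += 1  # no project website
    score += project.get("network_exposure", 0)  # human estimate of attack exposure, e.g. 0-2
    score += project.get("popularity", 0)        # widely installed packages matter more
    return score

projects = [
    {"name": "libfoo", "contributors_12mo": 1, "days_since_last_release": 700,
     "has_website": False, "network_exposure": 2, "popularity": 2},
    {"name": "libbar", "contributors_12mo": 40, "days_since_last_release": 20,
     "has_website": True, "network_exposure": 1, "popularity": 1},
]

for p in sorted(projects, key=risk_score, reverse=True):
    print(p["name"], risk_score(p))
```

The point of a heuristic like this is triage: it flags plausible candidates for human review, not automatic decisions.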


Cost Overruns in Large Systems

From my master's thesis on cost overruns, I wonder how much planning went into Healthcare.gov… if NASA, which has the brains to know how to build systems, has a tough time getting it right, HHS has no hope. There were time overruns as well.

… design decisions made early in the life cycle can have a large impact later, when the technology is in operation. An example of this is shown in the graph by Werner M. Gruhl (Cost and Analysis Branch, NASA) presented at an INCOSE (International Council on Systems Engineering) systems engineering seminar in 1998.

[Figure 1-1: Phase A & B cost share versus program cost overrun, from Werner M. Gruhl, NASA]

In Figure 1-1, Phase A & B costs are the costs associated with early-stage conceptual planning and design of a technology. One early decision in the life cycle of a technology is how much money to allocate to these early stages of development and design. The chart shows that (for systems at NASA) if less than ten percent of the total cost is allocated to these early stages, cost overruns can be expected. Allocating more money to these earlier stages provides the resources to make sure the technology being developed closely matches what was envisioned.

TechFAR, IT & Development

I wrote this about a year ago… sadly, not much has changed in the US federal government:

Government get your Sh** Together

Commenting on the ‘new’ government digital service and TechFAR.

Great article here:  New US Digital Service Looks to Avoid IT Catastrophes 

Discussion by Gunner: US Digital Service is Born

Steve Kelman, FCW:  The REAL regulatory challenge of agile development

This simply isn’t bold enough. The effort tries to fix what’s broken instead of looking more deeply at why it’s broken and how to ‘reformat’ the way IT services are built and, most importantly, used by the government and citizens.

The Service and TechFAR are akin to what the military seems to plan for: building systems to fight yesterday’s battles instead of thinking deeply about the future.

UK: the UK Government Digital Service worked because, one, the UK government is much, much smaller than ours (think California), and two, it is set up very differently from our bureaucracy. The UK digital service reports directly to the PM and, as I understand it, really could walk into any UK agency (save the MoD) and demand changes or take over projects. Also, in a parliamentary system, what the PM says goes: no negotiating with Parliament, no arguing over budgets, etc. A number of projects get killed pretty fast when there is a government changeover.

The US has neither.

GSA is a pretty widely ignored agency (for a number of historical reasons); maybe this time will be different, and I’m sure they can help web pages load faster. But the Digital Service is going to report a few levels down from POTUS and from the head of GSA, both of whom have many other things on their plates.

Also, US agencies have two masters, Congress and the President, and they play them off each other pretty well. Scale: the government is friggin’ huge. And some government systems serve unique, multiple functions for a single customer (the government plus its citizens). Fixing the FAA and SSA isn’t a few agile sprints over pizza and Dew.

Instead of fixing past problems, we need deeper thinking about HOW and WHY the government should provide services.

Examples:

– The government no longer runs motor pools; it outsources the entire job to companies under specific service level agreements (SLAs).

– Failed example: SABRE (the airline reservation company) came to DoD and pitched the idea that it would handle all military travel for something like $60 a ticket. Some govvies claimed they could do it cheaper; they tried, built a disaster of a service, and Defense Travel continues to eat funds way above and beyond, while continuing to frustrate and strand military travelers overseas.

There are many more, but the basic takeaway is this: the government must not recreate services the private sector delivers cheaper, better and faster unless it’s part of its core mission.

  • Bombs, check – part of core military mission.
  • HR? Outside of the military and the CIA, the government should license or buy HR capabilities as a service.
  • Websites? I’m having a tough time with this one. If the government could come up with a list of requirements and define a serious set of SLAs, there are any number of companies who would gladly offer websites as a service, with the government managing the input.
  • IRS systems: parts could definitely be outsourced, especially all of the fraud monitoring.

One last point:

YAR – yet another review. I don’t see how another YAR by the Digital Service is going to add to government agility and flexibility. Make no mistake, the government is already set up with a number of very expensive and time-consuming YARs, each of which must be planned for and dumbed down for senior management.

TechFAR adds a YAR plus yet another reference document to read, one that won’t apply to any agency that doesn’t adopt it.

Many of these suggestions sound good for a few projects, but they crumple and slow down system creation when scaled to thousands of projects in an $80 billion portfolio.

Some questions to ask as the government builds systems:

1. Is the thing your Agency needs to develop a core competency?

2. If not, define how to buy it as a fixed-price service offering with a tight SLA.

3. If yes, start small, draft off of any existing efforts (open source software, or other state, local and international governments), and be open to the outside.

Errata:

  • stop fixing current problems with past thinking
  • stop telling industry how to suck the egg (i.e., stop dictating to industry what methods to use to develop technologies; CMMI was great for adding bodies – thanks for the revenue – and is Agile really the endpoint of development methods? Let’s not hard-code something (again) into how the government does procurement)
  • automate existing jobs and rethink how the work gets done (the IRS is a body shop)
  • automate existing processes and rethink whether you need them at all
  • Start to collapse Agencies and processes. Does every Dept and Agency really need a CIO and associated staff?
  • What could agencies outsource to each other (like the paychecks the Dept of Agriculture processes for smaller agencies)?

Source Code is Maneuver Warfare

(posted on medium.com, March 12, 2015)

The US military is a software-based fighting force. If software doesn’t work, is out of date or is hacked, planes don’t fly or get refueled, paychecks don’t get cut, weapons don’t get delivered, travel orders get delayed, networks don’t work, maps don’t get shipped and email goes down — leading to less than desirable battlefield outcomes.

Software source code is central to how the U.S. military fights wars and projects power. Yet software and source code are not treated as things of value in the military; further, the management, governance, maintenance and operational reuse of software are an afterthought.

The context has changed, radically. Not only have the nature and tactics of our adversaries changed (cyber hacks, suicide attacks, IEDs, loosely coupled non-state actors, etc.), but the technological state of play in the private sector (where our adversaries source their technologies) has completely transformed in ways that leave military program managers at a loss. The global technology bazaar is driven by highly competitive, accelerated innovation, cheap off-the-shelf hardware and instantaneous communication. While the U.S. government wades through protracted acquisition cycles with large defense contractors, our enemies are shoplifting at Radio Shack.

In this context, where missions depend on perishable tactical intelligence and the disruption of networks (human and technological), speed and adaptability become far more important than in the past, not as goods in and of themselves, but as necessary conditions for success. Access to real-time data (and software code), regardless of the application or device used to generate that data, becomes a requirement. The need for information to flow across services and agencies makes (for instance) non-interoperable systems and proprietary formats show-stoppers. “If only that remote, under-resourced unit had a copy of our company’s software, they’d be able to display the location of the target” is NOT an acceptable concept of operations.

Without a sense of the strategic context, discussions about technology acquisitions and development tend to devolve, either into religious wars between rival schools of engineering methodology or turf battles about which processes, rules and regulations should or could be followed. Most of these conflicts about how and what to build are enmeshed in an industrial-age acquisition system matured during the Cold War and NASA’s race to the moon.

This system was set up to build tanks, aircraft carriers and missiles — massive amounts of hardware that take a long time to develop and manufacture — to counter a slow, bureaucratically hidebound adversary trying to do the same thing in the same way. And it made sense: developing military hardware is all about optimizing a design to be cheap to manufacture at a high rate, just like GM, Ford and Toyota do. But software is a different beast: software is never complete, it is always being updated, and its costs are spread more evenly across its lifecycle than a hardware system’s.

I’m most worried about software, since it evolves more rapidly than hardware. The government still uses a hardware-based model to buy software-based systems. That is the wrong method for software: in hardware systems the design and costs are front-loaded and the design is optimized for large production runs, whereas software costs are exposed over the entire lifecycle.

The rapid adaptation and evolution of enemy tactics means that when a new capability becomes available to the military, it must be possible to plug in that capability without a massive and expensive and slow integration effort. Being able to shrink and accelerate innovation cycles and leverage technical expertise across the enterprise becomes a strategic advantage on this kind of battlefield. These big contextual shifts, rather than philosophical leanings or new technologies per se, tilt the game in favor of open systems.[1]

In the software domain, the ability to rapidly modify existing systems in response to unanticipated threats and opportunities depends on access to that system’s development supply chain. Do developers of new capabilities have to use non-proprietary standards, formats and interfaces so that data can be exported and used by other applications? Are technical architectures required to be modular enough to improve or replace components without the exit costs of vendor lock-in? Can code developed on the government’s dime be leveraged across programs?

These are not technology issues, per se. These are business issues, and they drive competitive military advantage. The key to making any of this possible is access: access to the intellectual property (IP) investments made by the military on behalf of the American taxpayer.

Large companies have long known that software is a competitive advantage and have taken steps to actively manage its creation, use and dissemination. Companies like General Electric, Amazon, Microsoft, Facebook and Google have released software as open source to commoditize technologies and markets faster, to ensure they always have vendor options and are never locked in or out of opportunities.

This is why intellectual property governance becomes so important. By allowing one military contractor, in effect, to own a monopoly on a piece of taxpayer-funded software, the military is making a big bet that that one contractor is the best one to manage that software line. This limits competition, slows technical progress and drives up both total cost and the cost of technical debt. (Technical debt is how much time and effort it takes to change a system’s design.)

The government (and taxpayer) funds a massive amount of software IP development that doesn’t effectively get reused. The military needs executive strategic direction describing why intellectual property is important to the Nation’s defense and, more importantly, defining how it should be managed to maximize its return on investment for the military. There are a number of tactical references (e.g., various field manuals and acronyms: Intellectual Property: Navigating Through Commercial Waters, MOSA, FAR, DFARS, etc.), but there is nothing that lays out the strategic imperative for why software IP is a strategic asset to be actively managed.

Organizing Principles

We must rebuild the government and military acquisition process around how modern software is built. This covers the two typical development problems, hardware platforms and software, since hardware follows the same process, just more slowly.

Initial design principles:

1. Code is maneuver. Software needs to be treated as something that has as much value as the Soldier, Sailor, Marine and Airman. Their lives depend on how software is built, developed, deployed and ultimately updated.

2. Continuous & Speed. Software is never done; it is always evolving, its use is accelerating, and its update cycle is accelerating at the same time. Automation must be pushed as an imperative to ensure the maximum speed advantage in technology deployment and supply chain replenishment.

This is important because all successful companies have very clear lines, limits and directions around how company-to-contractor funded IP should be treated.

Too often in the military, taxpayer-funded software IP is treated as something without value (if it were valued, it would be better controlled).

Note: Jim Stogdill coined the phrase “Code is Maneuver”: http://limnthis.typepad.com/limn_this/2007/09/in-cyberwar-cod.html

[1] Ref: The DoD SoftwareTech News, June 2007, Vol. 10, No. 2, “COTR Warriors: Open Technologies and the Business of War”

Ounce of Prevention Costs too Much

Evidently an ounce of prevention costs too much for a majority of enterprises if you believe this study: Organizations taking months to remediate vulnerabilities

“On average, nearly half a year passes by the time organizations in the financial services industry and the education sector remediate security vulnerabilities, according to new research from NopSec.

For the study, the security firm analyzed all the vulnerabilities in the National Vulnerability Database and then looked at a subset of more than 21,000 vulnerabilities identified in all industries across NopSec’s client network, Michelangelo Sidagni, NopSec Chief Technology Officer and Head of NopSec Labs, told SCMagazine.com in a Tuesday email correspondence.

According to the findings, organizations in the financial services industry and the education sector remediate security vulnerabilities in 176 days, on average. Meanwhile, the healthcare industry takes roughly 97 days to address bugs, and cloud providers fix flaws in about 50 days.”

Study: Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities

Really interesting post from BanyanOps that screams for supply chain management solutions:

“Docker Hub is a central repository for Docker developers to pull and push container images. We performed a detailed study on Docker Hub images to understand how vulnerable they are to security threats. Surprisingly, we found that more than 30% of official repositories contain images that are highly susceptible to a variety of security attacks (e.g., Shellshock, Heartbleed, Poodle, etc.). For general images – images pushed by docker users, but not explicitly verified by any authority – this number jumps up to ~40% with a sampling error bound of 3%.”
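As a sketch of the kind of check such a study implies (this is not BanyanOps’ methodology; the image package inventory and the vulnerable-version list below are made-up stand-ins), a supply chain scan boils down to comparing what is inside an image against what is known to be bad:

```python
# Minimal sketch: flag container-image packages that appear in a
# known-vulnerable list. The image inventory and vulnerability feed
# below are hypothetical stand-ins, not real data sources.

# Packages reported inside an image, e.g. distilled from `dpkg -l` output.
image_packages = {
    "openssl": "1.0.1e",
    "bash": "4.2-2",
    "curl": "7.38.0",
}

# Known-vulnerable package versions, e.g. distilled from the NVD or a vendor feed.
vulnerable = {
    ("openssl", "1.0.1e"): ["CVE-2014-0160 (Heartbleed)"],
    ("bash", "4.2-2"): ["CVE-2014-6271 (Shellshock)"],
}

findings = {
    pkg: cves
    for (pkg, ver), cves in vulnerable.items()
    if image_packages.get(pkg) == ver
}

for pkg, cves in findings.items():
    print(f"{pkg} {image_packages[pkg]} is vulnerable: {', '.join(cves)}")
```

The hard part is not the comparison; it is keeping both the inventory and the vulnerability feed continuously current, which is exactly the supply chain management problem.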

For Want of a Patch (& a Supply Chain)

(originally published at CTOVision.com, December 8, 2014)

For Want of a Patch

For want of a patch the component was lost.

For want of a component the stack was lost.

For want of a stack the system was lost.

For want of a system the message was lost.

For want of a message the cyberbattle was lost.

For want of a battle the enterprise was lost.

And all for the want of a software patch.

As the old proverb reminds us, logistics is important: most battles are over before they’ve begun, due to having or not having a solid logistics tail. During WWII the Allies found this out the hard way with the invasion of North Africa: ships loaded incorrectly led to delays getting materiel onto the beaches and into towns; things like ammunition, fuel and medical supplies are needed before typewriters and tents. As subsequent amphibious invasions progressed (North Africa, Sicily, Italy), the military learned how to better coordinate the planning, loading and unloading of materiel and manpower to have the largest effect in the fight. These processes culminated in the successful, massive invasion of Normandy that ended the Third Reich’s hold on Europe.

The key lesson was to view logistics in war as a continuous process that feeds a fast and continuously maneuvering Army. 

Cyberwar is no different, and it follows the proverb even more closely: one unpatched line of code can leave an entire enterprise open to assault. Why? Accelerated use of software, more dependencies of software on other software, AND all of that software constantly in need of updating. Current organizational processes for keeping software updated can’t keep up with the change being generated by the outside world.

Example: Amazon’s software deployments for May 2014, across production hosts and environments: a mean time between deployments of 11.6 seconds and a maximum of 1,079 deployments in a single hour. How many military systems can claim that many deployed changes in a month? (Ref: Gene Kim, slide 23, http://www.slideshare.net/realgenekim/why-everyone-needs-devops-now) I doubt any, but this is what the military (and modern enterprises like Sony) must prepare for: never-ending change and updating on near-random cycles.

More to the point: continual and unscheduled software patches are the landscape in this new maneuver environment. And since they can’t be planned for, organizations need to learn to evolve for change and deploy software and new capabilities continually. 

Software supply chain planning is no longer something that can be starved of funds. Malware scanners, continuous monitoring and network scanners can tell you which barn doors are open and that the horses are leaving, but they leave enterprises with a massive punch list of fix-it items. Funding, time and effort need to be spent on the supply chain. It is the first true line of cyber defense.

Parting shot, a question for CIOs/CTOs: can you patch all of your systems in the next hour, using existing processes and not bypassing anything? For most organizations the answer is no. OpenSSL patches (seriously!) getting emailed around from dubious sources is akin to Mom mailing ammo to her son in a care package in Afghanistan.
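A minimal sketch of the kind of fleet-wide answer that question demands, assuming SSH access and a host list (the host names and minimum version below are hypothetical):

```python
# Sketch: ask each host for its OpenSSL version over SSH and flag stragglers.
# Host names and the minimum acceptable version are illustrative assumptions.
import subprocess

HOSTS = ["app01.example.mil", "app02.example.mil", "db01.example.mil"]
MINIMUM = "1.0.1g"  # e.g. the first OpenSSL release with the Heartbleed fix

def openssl_version(host):
    """Return the OpenSSL version string reported by a host, or None on failure."""
    try:
        out = subprocess.run(
            ["ssh", host, "openssl", "version"],
            capture_output=True, text=True, timeout=15, check=True,
        ).stdout
        return out.split()[1]  # "OpenSSL 1.0.1e 11 Feb 2013" -> "1.0.1e"
    except (subprocess.SubprocessError, IndexError):
        return None

for host in HOSTS:
    version = openssl_version(host)
    if version is None:
        print(f"{host}: could not determine version")
    elif version < MINIMUM:  # naive string compare; real tooling needs version-aware ordering
        print(f"{host}: OpenSSL {version} needs patching")
    else:
        print(f"{host}: OpenSSL {version} looks current")
```

If producing even this inventory takes days, patching everything within an hour is out of reach.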

For want of a message the cyberbattle was lost.

For want of a battle the enterprise was lost.

And all for the want of a software patch.

Code Supply Chain: It’s Not Going to Fix Itself

Article from the WSJ today points out that the supply chain is becoming an issue of concern for banks, especially as the wider corporate IT infrastructure becomes more diverse and outsourced. Snip:

“Now they may want screenshots of the last time servers were patched, periodic testing of the patching status of those servers and information about the work that Fair Isaac outsources to others. Some financial institutions have even asked for credit scores and drug testing of employees with access to those servers. The company tries to be as transparent as it can while still preserving the privacy of its employees, said Ms. Miller.”

“Some regulators are also considering applying similar standards beyond providers to banking business partners. One Fortune 500 bank, for example, knows that several of its servers have not been patched for a serious bug called Heartbleed. If it patches those servers, though, it will break continuity with several European banks that have not upgraded their systems, said the chief information security officer of the bank, who declined to be named for security reasons. The bank must be able to share data with its overseas partners so disconnecting is not an option.”

More here:

Financial Firms Grapple With Cyber Risk in the Supply Chain, WSJ May 25, 2015


Scale, Software and Throughput

“Software is eating the world” (à la Marc Andreessen) and becoming increasingly pervasive: phones, apps, TVs, routers, fridges (why? no idea), drones, SCADA systems, industrial controls, cars, etc. The ever-expanding boundary of things connected to the internet fabric opens up new types of services but also massive new types of vulnerabilities. The Internet of Things (IoT) is creating a world of always connected, always changing, always updating devices.

Scale also refers to the flood of changes that occur in software on a daily basis, whether due to new features or in reaction to needed and required fixes. Keep in mind that software is a collection of building blocks: dependencies within dependencies.

Software has become a deep, wide and fast moving river of change.

Henry Ford and the manufacturing industry figured out how to deal with scale: simplify, automate and control to optimize throughput. The first thing Ford and company did was simplify how a car could be built. This required reengineering parts, but more often than not it required simplifying how the person interacted with the build process: breaking tasks down into small units, simplifying movements, getting atomic. This created the modern moving assembly line, manned by unskilled labor. Skilled labor moved up the design stack and became the engineers who designed the car AND the mechanics of the tooling, build systems and delivery processes. In effect, skilled labor scaled its efforts through simplified processes and optimized build chains.

DevOps culture and techniques are pushing this mentality into software creation and delivery: the developer is the designer; after that, administrators and operations require as much automation as possible to deploy changes, scale and manage issues as quickly as possible.

Oftentimes we come across great point solutions (or products) to help with software development and delivery, but they don’t address the needs of an end-to-end enterprise supply chain.

JIT

An older book that does a great job of describing the accelerating change occurring in the software development and operations industry is The Goal (http://www.amazon.com/The-Goal-Process-Ongoing-Improvement/dp/0884270610). Its focus was the revolution of merging just-in-time (JIT) delivery and logistics of material with updated manufacturing and information technology systems to simplify and lower the costs of creating, building and delivering physical goods. It is a must-read for anyone in the software delivery game who is seeking to optimize, manage and understand software supply chains, i.e., moving from install DVDs to an always-on software stream.

Software is being forced into this model to handle the scale required by validation and verification, security and vulnerability analysis, and target platform support (mobile, desktop, cloud and all of the OS variants).

Speed Matching Evolution

There are many gaps to be filled in the cybersecurity world. AirGap is focused on providing solutions to problems in the combined realms of cybersecurity and software supply chain.

When it comes to security and software we see three problems:

  • Speed: the pace of software development and change
  • Scale: the volume of software (it’s everywhere and eating more of the world)
  • Supply chain: upstream dependencies

These three interrelate and must be solved in concert to keep an enterprise secure and up to date.

We view ‘speed’ as impacting the software supply chain along two axes, development and operations: the speed at which software is developed and the speed at which it is expected to be delivered and deployed.

Software now evolves much more rapidly due to a number of factors, but chief among them is the development and delivery of changes, like bug fixes and enhancements, driven by changes in upstream dependencies. The “don’t reinvent the wheel” mentality of software projects (they simply can’t afford to do so) scales up the volume of software in use. As software is released both in real time (as changes happen) and on rapid schedules, downstream projects want to consume the fixes and new features as quickly as possible.
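As a sketch of what consuming upstream fixes quickly looks like for a downstream project (the pinned versions are made up, and it assumes a Python project using PyPI’s public JSON endpoint; other ecosystems have equivalent feeds):

```python
# Sketch: compare a project's pinned dependency versions against the
# latest upstream releases on PyPI. The pins below are made-up examples.
import json
import urllib.request

pinned = {
    "requests": "2.25.0",
    "cryptography": "3.2",
}

def latest_version(package):
    """Fetch the latest released version of a package from PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]

for package, current in pinned.items():
    latest = latest_version(package)
    if latest != current:
        print(f"{package}: pinned at {current}, upstream is at {latest}")
    else:
        print(f"{package}: up to date ({current})")
```

Running a check like this continuously, and acting on what it finds, is what “consuming fixes as quickly as possible” requires in practice.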

Speed of Development: As mentioned above, the pace of software development is increasing. Agile practices, open source software frameworks and libraries, dynamic languages and improvements in tooling and delivery are combining to improve the effectiveness and efficiency of developers. New software and updates to existing software are being released readily to various open channels.

Speed of Operations: Movements such as DevOps, Continuous Deployment and Continuous Delivery are simplifying the full-scale industrialization of software coding, streamlining pipelines from source to various endpoints. Manufacturing-style automation has finally made its way to software, enabling a few people to run large chunks of engineering infrastructure at a fraction of the cost of five years ago. This automation also means fewer mistakes and errors, via accountability and repeatability. But it does mean that speed must be built into the equation of software operation and maintenance.

Many enterprises haven’t found it possible to keep up with either the speed of software development or the speed of operations, so their software can’t evolve to meet current needs or threats. The path just does not exist, creating a performance and security situation that can be unmanageable. Worse, as vulnerabilities are identified in operational software, organizations struggle to “roll forward” or patch their environments.

Why? Simply put, the frog has been cooked slowly: the enterprise built its organizational IT bureaucracy to match the speed at which software used to be updated, maybe once a year. In many organizations the dependency on vendors to provide updates creates a lag and an environment ripe for attack. For open source software, most IT infrastructures just aren’t responsive enough to process and validate the flow of changes.