Category Archives: opensource

Finding (Risky) Signals in the Open Software Noise

Recently the Linux Foundation teamed up with DHS to create the Census Project, which analyzes open source software projects to determine which should be considered risky and to define what risk might mean. Risky might mean: a small community, slow updates, no website, IRC-only communication, no listed maintainers, etc. People need more Code Intelligence (CodeINT) on the source code they use, and ways of classifying it.

Check it out:  The Census represents CII’s current view of the open source ecosystem and which projects are at risk. The Heartbleed vulnerability in OpenSSL highlighted that while some open source software (OSS) is widely used and depended on, vulnerabilities can have serious ramifications, and yet some projects have not received the level of security analysis appropriate to their importance. Some OSS projects have many participants, perform in-depth security analyses, and produce software that is widely considered to have high quality and strong security. However, other OSS projects have small teams that have limited time to do the tasks necessary for strong security. The trick is to identify quickly which critical projects fall into the second bucket.

The Census Project focuses on automatically gathering metrics, especially those that suggest less active projects (such as a low contributor count). We also provided a human estimate of the program’s exposure to attack, and developed a scoring system to heuristically combine these metrics. These heuristics identified especially plausible candidates for further consideration. For the initial set of projects to examine, we took the set of packages installed by Debian base and added a set of packages that were identified as potentially concerning. A natural outcome of the census will be a list of projects to consider funding. The decision to fund a project in need is not automated by any means.
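To make the idea concrete, here is a minimal sketch of the kind of heuristic the Census describes: combine a few automatically gathered metrics with a human exposure estimate to produce a single score. All field names, weights and thresholds below are invented for illustration; this is not the CII’s actual formula.

```python
# Hypothetical risk-scoring heuristic in the spirit of the Census Project:
# lower activity and higher exposure push the score (and the project) up
# the list of candidates for further human review.

def risk_score(project: dict) -> int:
    """Higher score = more plausible candidate for further consideration."""
    score = 0
    if project.get("contributors", 0) < 3:          # tiny team
        score += 2
    if not project.get("website"):                  # no project website
        score += 1
    if project.get("days_since_release", 0) > 365:  # slow updates
        score += 2
    if project.get("network_exposed"):              # human-judged attack exposure
        score += 3
    return score

projects = [
    {"name": "tinylib", "contributors": 1, "website": None,
     "days_since_release": 800, "network_exposed": True},
    {"name": "biglib", "contributors": 40, "website": "https://example.org",
     "days_since_release": 12, "network_exposed": True},
]
ranked = sorted(projects, key=risk_score, reverse=True)  # riskiest first
```

As in the Census itself, the output is only a ranked list of candidates; deciding whether a project actually needs funding or help stays a human call.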

 

TechFAR, IT & Development

I wrote this about a year ago… sad not much has changed in the US Fed government:

Government get your Sh** Together

Commenting on the ‘new’ government digital service and TechFAR.

Great article here:  New US Digital Service Looks to Avoid IT Catastrophes 

Discussion by Gunner: US Digital Service is Born

Steve Kelman, FCW:  The REAL regulatory challenge of agile development

This simply isn’t bold enough. This effort merely tries to fix what’s broken instead of looking more deeply at why it’s broken and how to ‘reformat’ the way IT services are built and, most importantly, used by the government and citizens.

The Service and TechFAR are akin to what the military seems to plan for: planning and building systems to fight yesterday’s battles instead of really and deeply thinking about the future.

UK: the UK gov digital service worked because, one, the UK is much, much smaller than our government (think California), and two, the UK government is set up very differently from our bureaucracy. The UK digital service reports directly to the PM and, as I understand it, really could walk into any UK agency (save the MoD) and demand changes or take over projects. Also, in a parliamentary system, what the PM says goes: no haggling with Parliament, no arguing over budgets, and a number of projects get killed pretty fast when there is a government changeover.

The US has neither.

GSA is a pretty widely ignored agency (for a number of historical reasons); maybe this time it will be different, and I’m sure they can help web pages load faster. But the Digital Service is going to report a few levels down from POTUS and the head of GSA, both of whom have many other things on their plate.

Also, US agencies have two masters they play pretty well off each other: Congress and the President. Scale: the government is friggin’ huge. And some government systems serve very unique, multiple functions for only one customer (the government plus its citizens). Fixing the FAA and SSA isn’t a few agile sprints over pizza and Dew.

Instead of fixing past problems, we need deeper thinking about HOW and WHY the government should provide services.

Examples:

– The government no longer runs motor pools; it outsources the entire job to companies under specific service-level agreements (SLAs).

– Failed example: SABRE (the airline reservation company) came to DoD and pitched the idea that it would take care of all military travel for something like $60 a ticket. Some govvies claimed they could do it cheaper; they tried and built a disaster of a service, and Defense Travel continues to eat funds way above and beyond, and continues to frustrate and strand military travelers overseas.

There are many more, but the basic takeaway is this: the government must not recreate services the private sector provides cheaper, better and faster unless it’s part of its core mission.

  • Bombs, check – part of core military mission.
  • HR? (outside of the military and CIA), the government should license or buy HR capabilities as a service.
  • Websites? I’m having a tough time with this one. If the government could come up with a list of requirements and define a serious set of SLAs, there are any number of companies who would gladly offer websites as a service, with the government managing the input.
  • IRS systems: parts could definitely be outsourced, especially all of the fraud monitoring.

One last point:

YAR – yet another review. I don’t see how another YAR by the Digital Service is going to add to government agility and flexibility. Make no mistake, the government is already set up with a number of very expensive and time-consuming YARs, each of which must be planned for and dumbed down for senior management.

TechFAR inculcates YAR, plus yet another reference document to read that won’t apply to any agency that doesn’t adopt it.

Many of these suggestions sound good for a few projects, but crumple and slow down system creation when scaled to thousands of projects in an $80 billion portfolio.

Some questions to ask as the government builds systems:

1. Is the thing your Agency needs to develop a core competency?

2. If not, define how to buy it as a fixed-price service offering with a tight SLA.

3. If yes, start developing small, draft off of any existing efforts (open source software, or other state, local and international governments), and be open to the outside.

Errata:

  • stop fixing current problems with past thinking
  • stop telling industry how to suck the egg (i.e., stop dictating what methods industry must use to develop technologies; CMMI was great for adding bodies – thanks for the revenue – and is Agile really the endpoint of development methods? Let’s not hard-code something (again) into how the government does procurement)
  • automate existing jobs and rethink how the work gets done (the IRS is a body shop)
  • automate existing process and rethink if you need them at all
  • Start to collapse Agencies and processes. Does every Dept and Agency really need a CIO and associated staff?
  • What things could Agencies outsource to each other (like paychecks that Dept of Ag does for smaller Agencies)

Ounce of Prevention Costs too Much

Evidently an ounce of prevention costs too much for a majority of enterprises, if you believe this study: Organizations taking months to remediate vulnerabilities

“On average, nearly half a year passes by the time organizations in the financial services industry and the education sector remediate security vulnerabilities, according to new research from NopSec.

For the study, the security firm analyzed all the vulnerabilities in the National Vulnerability Database and then looked at a subset of more than 21,000 vulnerabilities identified in all industries across NopSec’s client network, Michelangelo Sidagni, NopSec Chief Technology Officer and Head of NopSec Labs, told SCMagazine.com in a Tuesday email correspondence.

According to the findings, organizations in the financial services industry and the education sector remediate security vulnerabilities in 176 days, on average. Meanwhile, the healthcare industry takes roughly 97 days to address bugs, and cloud providers fix flaws in about 50 days.”

Study: Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities

Really interesting post from BanyanOps that screams for supply chain management solutions:

“Docker Hub is a central repository for Docker developers to pull and push container images. We performed a detailed study on Docker Hub images to understand how vulnerable they are to security threats. Surprisingly, we found that more than 30% of official repositories contain images that are highly susceptible to a variety of security attacks (e.g., Shellshock, Heartbleed, Poodle, etc.). For general images – images pushed by docker users, but not explicitly verified by any authority – this number jumps up to ~40% with a sampling error bound of 3%.”
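A toy sketch of the kind of audit the study implies: check the packages inside an image against a list of known-vulnerable versions. The package names, versions and “vulnerable” entries below are invented stand-ins, not real CVE data; a real audit would consult an actual vulnerability database.

```python
# Illustrative known-bad versions (stand-ins, not real CVE data):
# e.g. a Shellshock-era bash and a Heartbleed-era OpenSSL.
VULNERABLE = {
    "bash": {"4.3"},
    "openssl": {"1.0.1f"},
}

def audit_image(packages: dict) -> list:
    """Return (package, version) pairs that match a known-vulnerable version."""
    findings = []
    for name, version in packages.items():
        if version in VULNERABLE.get(name, set()):
            findings.append((name, version))
    return findings

# Hypothetical package inventory pulled from one container image:
image = {"bash": "4.3", "openssl": "1.0.1f", "curl": "7.40.0"}
```

The point of the study is that even this crude membership check, run across Docker Hub, flags nearly a third of official images.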

For Want of a Patch (& a Supply Chain)

(originally published at CTOVision.com, December 8, 2014 )

For Want of a Patch

For want of a patch the component was lost.

For want of a component the stack was lost.

For want of a stack the system was lost.

For want of a system the message was lost.

For want of a message the cyberbattle was lost.

For want of a battle the enterprise was lost.

And all for the want of a software patch.

As the old proverb reminds us, logistics is important: most battles are over before they’ve begun, won or lost on the strength of the logistics tail. During WW2 the Allies found this out the hard way with the invasion of Africa: ships loaded incorrectly delayed materiel getting onto the beaches and into towns, and things like ammunition, fuel and medical supplies are needed before typewriters and tents. As subsequent amphibious invasions progressed (North Africa, Sicily, Italy), the military learned to better coordinate the planning and the loading and unloading of materiel and manpower for the largest effect in the fight. These processes ultimately culminated in the successful massive invasion of Normandy that ended the Third Reich’s hold on Europe.

The key lesson was to view logistics in war as a continuous process that feeds a fast and continuously maneuvering Army. 

Cyberwar is no different, and more closely follows the proverb: one unpatched line of code can leave an entire enterprise open to assault. Why? Accelerated use of software, more dependencies of software on other software, and the fact that all of that software constantly needs updating. Current organizational processes for keeping software updated can’t keep up with the change generated by the outside world.

Example: Amazon’s software deployments to production hosts and environments for May 2014: a mean of 11.6 seconds between deployments, and a maximum of 1,079 deployments in a single hour. How many military systems can claim that much deployed change in a month? (Ref: Gene Kim, slide 23, http://www.slideshare.net/realgenekim/why-everyone-needs-devops-now) I doubt any, but this is what the military (and modern enterprises like Sony) must prepare for: never-ending change and updates on near-random cycles.
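For scale, the quoted figures are worth a quick back-of-the-envelope calculation (the monthly number below is my own extrapolation, assuming the mean rate held around the clock):

```python
# Arithmetic on the Amazon figures quoted above: a mean of 11.6 seconds
# between production deployments implies roughly 310 deployments per hour,
# or on the order of 220,000 per 30-day month if that rate held continuously.
SECONDS_BETWEEN_DEPLOYS = 11.6
mean_per_hour = 3600 / SECONDS_BETWEEN_DEPLOYS   # ~310 deployments/hour
mean_per_month = mean_per_hour * 24 * 30         # ~223,000 deployments/month
```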

More to the point: continual and unscheduled software patches are the landscape in this new maneuver environment. And since they can’t be planned for, organizations need to learn to evolve for change and deploy software and new capabilities continually. 

Software supply chain planning is no longer something that can be starved of funds. Malware detection, continuous monitoring, and network scanners can tell you which barn doors are open and that the horses are leaving, but they leave enterprises with a massive punch list of fix-it items. Funding, time and effort need to be spent on the supply chain. It is the first true line of cyber-defense.

Parting shot, a question for CIOs/CTOs: can you patch all of your systems in the next hour, using existing processes and not bypassing anything? For most organizations the answer is no; OpenSSL patches (seriously!) getting emailed around from dubious sources is akin to Mom mailing ammo to her son in a care package in Afghanistan.
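One way to pressure-test that question is a back-of-the-envelope sketch of how long a rolling patch sweep takes, given a fleet size and how many hosts the pipeline can safely patch per batch. The fleet size, batch size and per-batch time below are illustrative assumptions, not benchmarks.

```python
import math

def sweep_minutes(hosts: int, batch_size: int, minutes_per_batch: float) -> float:
    """Time to patch a whole fleet when patches roll out in fixed-size batches."""
    batches = math.ceil(hosts / batch_size)
    return batches * minutes_per_batch

# Illustrative fleet: 5,000 hosts, 200 hosts per rolling batch,
# 6 minutes to patch and verify each batch.
minutes = sweep_minutes(5000, 200, 6)
can_patch_in_an_hour = minutes <= 60   # 25 batches * 6 min = 150 min, so no
```

Even with a fully automated rolling deploy, the assumed fleet misses the one-hour bar by more than double; most organizations aren’t even at the fully automated starting line.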

For want of a message the cyberbattle was lost.

For want of a battle the enterprise was lost.

And all for the want of a software patch.

Scale, Software and Throughput

“Software is eating the world” (à la Marc Andreessen) and becoming increasingly pervasive: phones, apps, TVs, routers, fridges (why? no idea), drones, SCADA systems, industrial controls, cars, etc. The expanding boundary of things connected to the internet fabric opens up new types of services but also massive new types of vulnerabilities. The Internet of Things (IoT) is creating a world of always-connected, always-changing, always-updating devices.

Scale also refers to the flood of changes that occur in software on a daily basis, due either to new features or to needed and required fixes. Keep in mind that software is a collection of building blocks: dependencies within dependencies.

Software has become a deep, wide and fast moving river of change.

Henry Ford and the manufacturing industry figured out how to deal with scale: simplify, automate and control to optimize throughput. The first thing Ford and company did was simplify how a car could be built. This required reengineering parts, but more often than not it meant simplifying how the person interacted with the build process: breaking tasks down into small units, simplifying movements, getting atomic. This created the modern moving assembly line staffed by unskilled labor. The skilled labor moved up the design stack and became the engineers who designed the car AND the mechanics of the tooling, build systems and delivery processes. In effect, skilled labor scaled its efforts through simplified processes and optimized supply and build chains.

DevOps culture and techniques are pushing this mentality into software creation and delivery: the developer is the designer; after that, administrators and operations require as much automation as possible to deploy change, scale and manage issues as quickly as possible.

Oftentimes we come across great point solutions (or products) to help with software development and delivery, but they don’t address the needs of an end-to-end enterprise supply chain.

JIT

An older book that does a great job of describing the accelerating change in the software development and operations industry is The Goal (http://www.amazon.com/The-Goal-Process-Ongoing-Improvement/dp/0884270610). Its focus was the revolution of merging just-in-time (JIT) delivery and logistics of material with updated manufacturing and information technology systems to simplify and lower the costs of creating, building and delivering physical goods. It is a must-read for anyone currently in the software delivery game who is seeking to optimize, manage and understand software supply chains, i.e., the move from install DVDs to an always-on software stream.

Software is being forced into this model to handle the scale required by validation and verification, security and vulnerability analysis, and target platform support (mobile, desktop, cloud and all of the OS variants).

Speed Matching Evolution

There are many gaps to be filled in the cybersecurity world. AirGap is focused on providing solutions to problems in the combined realms of cybersecurity and software supply chain.

When it comes to security and software we see three problems:

  • Speed: of software development and change
  • Scale: volume of software (it’s everywhere and eating more of the world)
  • Supply chain and up-stream dependencies

Each of these three interrelate and must be solved in concert to enable an enterprise to be secure and up-to-date.

We view ‘speed’ as impacting the software supply chain along two axes, development and operations: the speed at which software is developed, and the speed at which it is expected to be delivered and deployed.

Software now evolves much more rapidly due to a number of factors, chief among them the development and delivery of changes, like bug fixes and enhancements, driven by changes in up-stream dependencies. The “don’t reinvent the wheel” mentality of software projects, which simply can’t afford to reinvent it, scales up the volume of software. As software is released both in realtime (as changes happen) and on rapid schedules, down-stream projects want to consume the fixes and new features as quickly as possible.
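A minimal sketch of measuring that up-stream lag: compare an application’s pinned dependency versions against the latest upstream releases. The package names, versions and the hard-coded “registry” below are invented for illustration; a real check would query an actual package index.

```python
# Hypothetical latest-release data, a stand-in for a real package index.
# Versions are (major, minor, patch) tuples so they compare element-wise.
LATEST = {"libfoo": (2, 4, 0), "libbar": (1, 9, 2), "libbaz": (0, 3, 1)}

def lagging(pins: dict) -> list:
    """Return the dependencies pinned behind the latest upstream release."""
    return [name for name, ver in pins.items() if ver < LATEST.get(name, ver)]

# Hypothetical application pins: two of three are behind upstream.
pins = {"libfoo": (2, 4, 0), "libbar": (1, 2, 0), "libbaz": (0, 1, 0)}
behind = lagging(pins)
```

Every entry in that “behind” list is a queue of unconsumed fixes, some of which are security fixes, which is exactly the lag the next two sections are about.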

Speed of Development: As mentioned above, the pace of software development is increasing. Agile practices, open source software frameworks and libraries, dynamic languages and improvements in tooling and delivery are combining to improve the effectiveness and efficiency of developers. New software and updates to existing software are being released readily to various open channels.

Speed of Operations: Movements such as DevOps, Continuous Deployment and Continuous Delivery are simplifying the full-scale industrialization of software coding, streamlining pipelines from source to various endpoints. Manufacturing-style automation has finally made its way to software, enabling a few people to run large chunks of engineering infrastructure at a fraction of the cost of five years ago. This automation also means fewer mistakes and errors, via accountability and repeatability. But it does mean that speed must be built into the equation of software operation and maintenance.

Many enterprises haven’t found it possible to keep up with either the speed of software development or the speed of operations, so their software can’t evolve to meet current needs or threats. The path just does not exist, creating a performance and security situation which can be unmanageable. Worse, as vulnerabilities are identified in operational software, organizations struggle to “roll forward” or patch their environments.

Why? Simply put, the frog has been cooked slowly: the enterprise’s organizational IT bureaucracy was built for the old cadence of software updates, maybe once a year. In many organizations the dependency on vendors to provide updates creates a lag and an environment ripe for attack. For open source software, most IT infrastructures just aren’t responsive enough to process and validate the flow of changes.