
Finding (Risky) Signals in the Open Software Noise

Recently the Linux Foundation teamed up with DHS to create the Census Project, which analyzes open source software projects to identify those that should be considered risky and to define what risk might mean. Risk signals might include: a small community, slow updates, no website, IRC-only communication, no listed maintainers, etc. People need more Code Intelligence (CodeINT) on the source code they use, and better ways of classifying it.

Check it out: “The Census represents CII’s current view of the open source ecosystem and which projects are at risk. The Heartbleed vulnerability in OpenSSL highlighted that while some open source software (OSS) is widely used and depended on, vulnerabilities can have serious ramifications, and yet some projects have not received the level of security analysis appropriate to their importance. Some OSS projects have many participants, perform in-depth security analyses, and produce software that is widely considered to have high quality and strong security. However, other OSS projects have small teams that have limited time to do the tasks necessary for strong security. The trick is to identify quickly which critical projects fall into the second bucket.

“The Census Project focuses on automatically gathering metrics, especially those that suggest less active projects (such as a low contributor count). We also provided a human estimate of the program’s exposure to attack, and developed a scoring system to heuristically combine these metrics. These heuristics identified especially plausible candidates for further consideration. For the initial set of projects to examine, we took the set of packages installed by Debian base and added a set of packages that were identified as potentially concerning. A natural outcome of the census will be a list of projects to consider funding. The decision to fund a project in need is not automated by any means.”
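As a rough illustration of the kind of heuristic scoring the Census describes, here is a minimal sketch in Python. The metric names, weights and example data below are hypothetical stand-ins, not the project’s actual formula:

```python
# Hypothetical Census-style risk score: combine simple project
# metrics into one heuristic number. Names and weights are
# illustrative only, not the actual CII formula.

def risk_score(project):
    score = 0
    if project.get("contributor_count", 0) < 3:
        score += 2  # small community
    if project.get("days_since_last_release", 0) > 365:
        score += 2  # slow updates
    if not project.get("website"):
        score += 1  # no project website
    # Human-estimated exposure to attack (e.g., parses untrusted
    # network data) weighs heavily in the combined score.
    score += 2 * project.get("exposure_estimate", 0)
    return score

example = {"contributor_count": 2, "days_since_last_release": 400,
           "website": None, "exposure_estimate": 2}
print(risk_score(example))  # 9 -- a plausible candidate for a closer look
```

A higher score doesn’t prove a project is risky; as the Census notes, it flags candidates for further (human) consideration.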


Cost Overruns in Large Systems

From my Master’s thesis on cost overruns, I wonder how much planning went into Healthcare.gov… if NASA, which has the brains for building large systems, has a tough time getting it right, HHS had no hope. There were time overruns as well.

… design decisions made early in the life cycle can have a large impact later, when the technology is in operation. An example of this is the graph by Werner M. Gruhl (Cost and Analysis Branch, NASA) presented at an INCOSE (International Council on Systems Engineering) System Engineering Seminar in 1998.

[Figure 1-1: Phase A & B cost allocation vs. program cost overrun, from Werner M. Gruhl, NASA]

In Figure 1-1, Phase A & B costs are the costs associated with early-stage conceptual planning and design of a technology. One early decision in the life cycle of a technology is how much money to allocate to these early stages of development and design. The chart shows that (for systems at NASA) programs that allocated less than ten percent of total cost to these early stages could expect cost overruns. Allocating more money to these earlier stages allows more resources to be spent making sure the technology being developed closely matches what was envisioned.

Code Supply Chain: It’s Not Going to Fix Itself

An article in the WSJ today points out that the software supply chain is becoming an issue of concern for banks, especially as the wider corporate IT infrastructure becomes more diverse and outsourced. Snip:

“Now they may want screenshots of the last time servers were patched, periodic testing of the patching status of those servers and information about the work that Fair Isaac outsources to others. Some financial institutions have even asked for credit scores and drug testing of employees with access to those servers. The company tries to be as transparent as it can while still preserving the privacy of its employees, said Ms. Miller.”

“Some regulators are also considering applying similar standards beyond providers to banking business partners. One Fortune 500 bank, for example, knows that several of its servers have not been patched for a serious bug called Heartbleed. If it patches those servers, though, it will break continuity with several European banks that have not upgraded their systems, said the chief information security officer of the bank, who declined to be named for security reasons. The bank must be able to share data with its overseas partners so disconnecting is not an option.”

More here:

Financial Firms Grapple With Cyber Risk in the Supply Chain, WSJ May 25, 2015


Scale, Software and Throughput

“Software is eating the world” (à la Marc Andreessen) and becoming increasingly pervasive: phones, apps, TVs, routers, fridges (why? no idea), drones, SCADA systems, industrial controls, cars, etc. This ever-expanding boundary of things connected to the internet fabric opens up new types of services, but also massive new types of vulnerabilities. The Internet of Things (IoT) is creating a world of always connected, always changing and updating devices.

Scale also refers to the flood of changes that occur in software on a daily basis, whether from new features or from required fixes. Keep in mind that software is a collection of building blocks: dependencies within dependencies.
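To make “dependencies within dependencies” concrete, here is a small Python sketch (the package names and graph are made up) that flattens a nested dependency tree, showing how one top-level install quietly pulls in a whole graph of code you now depend on:

```python
# Toy dependency graph: each package maps to the packages it pulls in.
# All names are made up for illustration.
DEPS = {
    "webapp": ["http-lib", "tls-lib"],
    "http-lib": ["tls-lib", "zlib"],
    "tls-lib": ["crypto-core"],
    "zlib": [],
    "crypto-core": [],
}

def transitive_deps(pkg, seen=None):
    """Recursively collect every package pkg depends on."""
    seen = set() if seen is None else seen
    for dep in DEPS.get(pkg, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, seen)
    return seen

print(sorted(transitive_deps("webapp")))
# ['crypto-core', 'http-lib', 'tls-lib', 'zlib'] -- one install, four dependencies
```

Every edge in that graph is a channel for the daily flood of change described above.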

Software has become a deep, wide and fast moving river of change.

Henry Ford and the manufacturing industry figured out how to deal with scale: simplify, automate and control to optimize throughput. The first thing Ford and company did was simplify how a car could be built. This sometimes required reengineering of parts, but more often than not it required simplifying how the person interacted with the car build process: breaking down tasks into small units, simplifying movements, getting atomic. This created the modern moving assembly line, staffed by unskilled labor. The skilled labor moved up the design stack and became the engineers who designed the car AND the mechanics of the tooling, build systems and delivery processes. In effect, the skilled labor was able to scale their efforts through simplified processes and optimized supply and build chains.

DevOps culture and techniques are pushing this mentality into software creation and delivery: the developer is the designer, and downstream of that, administration and operations require as much automation as possible to deploy changes, scale, and manage issues as quickly as possible.
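A toy sketch of that automation mindset, deploy, verify, roll back, might look like the following. The function names and the in-memory “environment” are hypothetical stand-ins for real CI/CD tooling; the point is the loop, not the plumbing:

```python
# Toy deploy-and-verify loop. The "environment" dict and health
# check are hypothetical stand-ins for real deployment tooling.

environment = {"version": "1.0.0", "healthy": True}

def deploy(version):
    """Pretend to roll the new version out."""
    environment["version"] = version
    # A real pipeline would run smoke tests here; this sketch just
    # treats any version ending in '-bad' as a failed deploy.
    environment["healthy"] = not version.endswith("-bad")

def release(version, previous):
    deploy(version)
    if environment["healthy"]:
        print(f"deployed {version}")
    else:
        deploy(previous)  # automated rollback, no human in the loop
        print(f"{version} failed health check, rolled back to {previous}")

release("1.1.0", previous="1.0.0")
release("1.2.0-bad", previous="1.1.0")
```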

Oftentimes we come across great point solutions (or products) to help with software development and delivery, but they don’t address the needs of an end-to-end enterprise supply chain.

JIT

An older book that does a great job of describing the accelerating change occurring in the software development and operations industry is The Goal (http://www.amazon.com/The-Goal-Process-Ongoing-Improvement/dp/0884270610). Its focus was the revolution of merging just-in-time (JIT) delivery/logistics of materials with updated manufacturing and information technology systems to simplify and lower the costs of creating, building and delivering physical goods. It is a must-read for anyone currently in the software delivery game who is seeking to optimize, manage and understand software supply chains, i.e., the move from install DVDs to always-on software streams.

Software is being forced into this model to handle the scale required by validation and verification, security and vulnerability analysis, and target platform support (mobile, desktop, cloud and all of the OS variants).