Scale, Software and Throughput

“Software is eating the world” (as Marc Andreessen put it), and it is becoming increasingly pervasive: phones, apps, TVs, routers, fridges (why? no idea), drones, SCADA systems, industrial controls, cars, and so on. This ever-expanding boundary of things connected to the internet fabric opens up new types of services but also massive new types of vulnerabilities. The Internet of Things (IoT) is creating a world of always-connected, constantly changing and updating devices.

Scale also refers to the flood of changes that occur in software on a daily basis, whether from new features or from required fixes. Keep in mind that software is a collection of building blocks: dependencies within dependencies.

Software has become a deep, wide and fast moving river of change.

Henry Ford and the manufacturing industry figured out how to deal with scale: simplify, automate and control to optimize throughput. The first thing Ford and company did was simplify how a car could be built. This required reengineering of parts, but more often than not it required simplifying how the person interacted with the car build process: breaking tasks down into small units, simplifying movements, getting atomic. This created the modern moving assembly line staffed by unskilled labor. The skilled labor moved up the design stack and became the engineers who designed the car AND the mechanics of the tooling, build systems and delivery processes. In effect, the skilled labor was able to scale its efforts through simplified processes and optimized supply and build chains.

DevOps culture and techniques are pushing this mentality into software creation and delivery: the developer is the designer, and downstream, administrators and operations require as much automation as possible to deploy changes, scale, and manage issues as quickly as possible.

Oftentimes we come across great point solutions (or products) to help with software development and delivery, but they don’t address the needs of an end-to-end enterprise supply chain.

JIT

An older book that does a great job of describing the accelerating change occurring in the software development and operations industry is The Goal (http://www.amazon.com/The-Goal-Process-Ongoing-Improvement/dp/0884270610). Its focus was the revolution of merging just-in-time (JIT) delivery/logistics of material with updated manufacturing and information technology systems to simplify and lower the costs associated with the creation/build/delivery of physical goods. It is a must read for anyone currently in the software delivery game who is seeking to optimize, manage and understand software supply chains, i.e., the move from install DVDs to an always-on software stream.

Software is being forced into this model to handle the scale required by validation and verification, security and vulnerability analysis, and target platform support (mobile, desktop, cloud and all of the OS variants).

Speed Matching Evolution

There are many gaps to be filled in the cybersecurity world. AirGap is focused on providing solutions to problems in the combined realms of cybersecurity and software supply chain.

When it comes to security and software we see three problems:

  • Speed: of software development and change
  • Scale: volume of software (it’s everywhere and eating more of the world)
  • Supply chain and up-stream dependencies

These three problems interrelate and must be solved in concert to enable an enterprise to be secure and up-to-date.

We view ‘speed’ as impacting the software supply chain along two axes, development and operations: the speed at which software is developed and the speed at which it is expected to be delivered and deployed.

Software now evolves much more rapidly due to a number of factors, but chief among them are the development and delivery of changes such as bug fixes and enhancements driven by changes in up-stream dependencies. The “don’t reinvent the wheel” mentality of software projects, adopted simply because they can’t afford to do otherwise, scales up the volume of shared software. As software is released both in realtime (as changes happen) and on rapid schedules, down-stream projects want to consume the fixes and new features as quickly as possible.
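To make that dependency churn concrete, here is a minimal sketch, assuming a Python project with pinned requirements and using the public PyPI JSON API, that reports how far the pins lag behind the latest upstream releases. The package names, versions and the latest_version helper are illustrative, not taken from any particular project.

```python
# Hypothetical sketch: report how far pinned dependencies lag behind the
# latest releases published on PyPI. The pins below stand in for a parsed
# requirements-style manifest; names and versions are illustrative.
import json
import urllib.request

PINNED = {
    "requests": "2.25.0",
    "flask": "1.1.2",
}

def latest_version(package: str) -> str:
    """Ask PyPI's JSON API for the most recently published version."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]

if __name__ == "__main__":
    for pkg, pinned in PINNED.items():
        latest = latest_version(pkg)
        status = "up to date" if latest == pinned else f"behind (latest is {latest})"
        print(f"{pkg} {pinned}: {status}")
```

Even a toy check like this makes the point: the answer changes week to week, and every “behind” line is a decision a downstream project has to make.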

Speed of Development: As mentioned above, the pace of software development is increasing. Agile practices, open source software frameworks and libraries, dynamic languages and improvements in tooling and delivery are combining to improve the effectiveness and efficiency of developers. New software and updates to existing software are being released readily to various open channels.

Speed of Operations: Movements such as DevOps, Continuous Deployment and Continuous Delivery are driving the full-scale industrialization of software delivery, streamlining pipelines from source to various endpoints. Manufacturing-style automation has finally made its way to software, enabling a few people to run large chunks of engineering infrastructure at a fraction of the cost of five years ago. This automation also means fewer mistakes and errors thanks to accountability and repeatability. But it does mean that speed must be built into the equation of software operation and maintenance.
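As an illustration of that pipeline mentality, below is a minimal sketch of a source-to-endpoint pipeline runner: ordered stages, fail-fast behaviour, and a log of every step so runs are repeatable and auditable. The stage names and make commands are placeholders, not a prescription for any particular toolchain.

```python
# Hypothetical sketch of a source-to-endpoint pipeline runner: stages run in
# order, the run stops at the first failure, and every step is logged so the
# run is repeatable and auditable. Stage names and commands are placeholders.
import subprocess
import sys
import time

STAGES = [
    ("build",  "make build"),
    ("test",   "make test"),
    ("deploy", "make deploy"),
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        start = time.time()
        print(f"[pipeline] starting {name}: {cmd}")
        result = subprocess.run(cmd, shell=True)
        print(f"[pipeline] {name} finished in {time.time() - start:.1f}s "
              f"(exit {result.returncode})")
        if result.returncode != 0:
            return result.returncode  # fail fast: later stages never run
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

Real CI/CD systems add far more (artifact storage, approvals, rollbacks), but the core value is the same: the same steps, in the same order, every time.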

Many enterprises haven’t found it possible to keep up with either the speed of software development or the speed of operations, so their software cannot evolve to meet current needs or threats. The path simply does not exist, creating a performance and security situation that can become unmanageable. Worse, as vulnerabilities are identified in operational software, organizations struggle to “roll forward” or patch their environments.

Why? Simply put, the frog has been cooked slowly: the enterprise built its organizational IT bureaucracy around the pace at which software used to be updated, maybe once a year. In many organizations the dependency on vendors to provide updates creates a lag and an environment ripe for attack. For open source software, most IT infrastructures just aren’t responsive enough to process and validate the flow of changes.
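One small step toward processing that flow of changes is automating the vulnerability check itself. Below is a minimal sketch that queries the public OSV (osv.dev) advisory database for known issues against a single pinned package version; the package, version, ecosystem and the known_vulns helper are illustrative assumptions, not part of any specific product.

```python
# Hypothetical sketch: query the public OSV (osv.dev) advisory database for
# known vulnerabilities affecting one pinned package version. The package,
# version and ecosystem below are illustrative placeholders.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return OSV advisories matching this exact package version."""
    query = {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    for vuln in known_vulns("flask", "0.12"):
        print(vuln["id"], vuln.get("summary", ""))
```

Running a check like this on every dependency, on every change, is exactly the kind of always-on validation that most IT organizations are not yet staffed or tooled to do by hand.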