2016: The Year That Software Ate the Network Hardware Stack

1. OpenStack Will Become a Top Choice for Managing Multi-Vendor Clouds

In 2011, Amazon introduced an “AWS connector,” a small piece of software (the Amazon EC2 import connector) that migrated VMware workloads to Amazon’s cloud infrastructure. This migration tool is great for enterprises that want more cloud choices, and it illustrates how quickly the cloud can dismantle vendor lock-in. Today, OpenStack makes it easy to automate network functions and program your network infrastructure like a server. Going a step further, network orchestration platforms like OpenStack Astara make it easy for enterprises to pick and choose the network functions they want. How would this work? Network functions are network applications. Traditionally, this software lives inside dedicated network appliances and routers. Today, these software functions can be unbundled from network hardware and run inside OpenStack clouds. That’s great news for enterprises, but not for hardware vendors. In 2016, I expect OpenStack to be used as a connector technology to manage multi-vendor clouds.

2. Cloud Infrastructure Makes Routing-as-a-Service Viable

Speaking about the rise of mobile banking and FinTech start-ups, Microsoft co-founder Bill Gates once said, “the world needs banking, it doesn’t need banks.” The same could be said about networking: the world needs networks, it doesn’t need big network vendors. Today, most network functions can be abstracted into software, which means the $20B edge routing market is about to be turned upside down. Cloud infrastructures are interesting alternatives because they give enterprises effectively unlimited compute, storage, and network resources. For certain edge routing functions, there is ample compute for network processing, ample storage for router applications, and ample flash memory for things like routing tables and route forwarding. For these reasons, cloud infrastructures make routing-as-a-service viable, or at least good enough, for most network services. In the next few years, I expect a large substitution movement from hardware to software. This will put pressure on network hardware vendors to rethink their product form factors (hardware vs. software) and their business models (buy vs. rent). In the short term, network functions like application performance management, traffic load balancing, and security will migrate from network hardware to network software.

3. Open Software Movement Has Made Many Standards Bodies Obsolete

Within the open software movement, traditional standards bodies are losing relevance. They are viewed as slow and plodding, as well as government- or vendor-controlled, and they are hardly pillars of innovation or interoperability. In contrast, the power of open networks is best defined by the contributions of its users: developers with hands on keyboards. They contribute code to groups like the OpenStack Foundation, Open NFV, and a wide assortment of open compute and open networking projects. It’s a meritocracy that is all about speed, efficiency, and results. Most DevOps people operate outside the orbit of the IETF, ETSI, or the ITU, but their contributions to the cloud’s future are real and substantial. Interoperability isn’t just about kicking the IP packet down the network pipe; it is about unbundling or bundling network services depending on the needs of customers. Regrettably, many standards bodies have forgotten how to stay relevant.


About the Author

Henrik Rosendahl is the CEO of Akanda, the main contributor to the recently launched OpenStack network orchestration platform, Project Astara. Rosendahl has led Akanda since the company’s founding in 2014. A veteran of enterprise software, Rosendahl was previously the co-founder of CloudVolumes (a virtualization company acquired by VMware in 2014). In all, he has four successful exits, including Pancetera Software (to Quantum), Thinstall (also to VMware), and Interse A/S (to ScanJour A/S). Rosendahl also invests in and advises startups, including Be My Eyes and Lua. He lives in the Bay Area.




Making Multi-Vendor Cloud Management Easier


This is the goal of Astara, a new OpenStack Project from the OpenStack Foundation.


For five years, OpenStack contributors have delivered a steady stream of open source solutions to help automate and simplify cloud operations. It began with cloud computing, where KVM has established itself as the hypervisor of choice for OpenStack virtual clouds. Similarly, Ceph has made itself the de facto standard for OpenStack storage. But until recently, advances in OpenStack networking lagged behind compute and storage, because managing multi-vendor cloud networks remained too hard and time-consuming.

Major Network Vendors Have ‘Right-Hand, Left-Hand’ Problem

While most network vendors collaborate on things like interoperability, IP protocols, and network standards, there’s little harmony on Layer 3 network services. Simply put, a virtualized Cisco firewall is not designed to work on Juniper routers. You can’t run Alcatel deep packet inspection on an Ericsson-managed network. And Layer 4-7 application performance management from F5 cannot be abstracted to run on any edge routing platform you choose without significant backend integration work.

This can make things difficult for multi-tenant cloud operators who want to use OpenStack networking to abstract and stitch together multiple network services from multiple network vendors. It’s no small task: we’re talking about multiple software defined network (SDN) controllers, multiple network plug-ins, and multiple billing and operations systems. Simply put, major network vendors have a “right-hand, left-hand” problem. Most network applications from different vendors don’t talk to each other. This makes service-chaining a challenge at best — and impossible at worst.

Introducing Astara For Multi-Vendor Network Harmony

Astara is a newly official OpenStack project under the full control of the OpenStack Foundation. It’s tasked with the challenge of managing multi-vendor network clouds and unifying Layer 3 network applications. The project provides a vendor-agnostic network orchestration platform for OpenStack operators. This is no small feat: it includes sophisticated lifecycle management and new abilities to monitor, configure, and manage Layer 3 through 7 network services. It can abstract and spin up VMs or containers that deliver routing, firewall, and load balancing as a service.

The goal here is to create a vendor-neutral, open networking stack that’s ready to simplify the operation of any multi-tenant OpenStack environment. Because it’s Layer 2 agnostic and structured to work with existing networks rather than require their replacement, the project is designed to be compatible, scalable, and developer-friendly to implement and operate. The platform takes event streams from Neutron to significantly simplify monitoring, allowing Astara to make intelligent decisions and update configurations as needed. The production-ready project has been deployed in data centers across North America, and it saved DreamHost 40% in operating expenses and 70% in capital expenses over VMware NSX.

Power of Open Networks

The power of open networks is best defined by the contributions of its users. By making Astara an official project, the OpenStack Foundation wants to crowdsource network innovation and take advantage of its strong community of network users. Astara intends to make open source clouds faster to set up, simpler to operate, and more robust by adding new services.

Astara’s Liberty release

Astara’s first release as an official OpenStack project came with significant technical enhancements, coinciding with the release of OpenStack Liberty. Along with full compatibility with Liberty, Astara also works with clouds running the previous Kilo and Juno releases. New for Liberty is a more highly configurable load balancer driver, allowing OpenStack operators to have Astara load and manage only the resources they select. Operators also gain faster provisioning, as a new service more quickly provisions Neutron resources onto appliance VMs and manages pools of hot-standby appliance VMs. Astara’s latest release also brings higher availability and scaling improvements, making the platform even more ready for primetime.

The Prize for OpenStack Cloud Operators

The effects of empowering OpenStack cloud operators with a capable platform for network orchestration and centralized management are significant and long sought after. Astara can ease the networking complexity most operators currently live with as part of their daily duties running OpenStack clouds. Simplified cloud network management leads directly to increased network stability and improvements in consistency, performance, and the flexibility with which these systems can be operated. Easier orchestration is also a friend to operating budgets, and can free organizations from the relatively high costs that come with single-vendor lock-in. And because Astara is open and extensible, cloud operators can both rely on the platform across their OpenStack networks and count on it as a fixture in their long-term planning.



Blog: Why OpenStack Neutron needs a Hardware Agnostic Network Stack


By Henrik Rosendahl, CEO Akanda Inc. @hrosendahl


Welcome to the Astara Project!

OpenStack is clearly the foundational element of the majority of public and private cloud projects today, enabling the DevOps movement with a platform for agile development and delivery of new applications and services. And while agility translates well into service creation and delivery in most IaaS and PaaS scenarios, there is still no easy answer for the agile creation and management of network services in clouds. There are great core components in virtual switches and Neutron APIs and plugins, but there is still no simple layer for networking in the cloud world.

We as an industry like to talk generically about SDN and NFV as if they are simplifying networks, and we continue to create controllers, drivers, plugins, and network VMs to drive more network functionality into the cloud. However, none of these have really addressed the fundamental need: providing access to network services in a way that is meaningful to the DevOps team that is actually building and using the cloud environment.

These early efforts have not been wasted. The result is a very powerful set of network services (both open source and commercial) that perform well in a virtualized software environment (big progress). What we still need is an integrated way to orchestrate this network stack, especially in a multi-vendor OpenStack environment.

That’s where the Astara project comes in…

The Astara project is meant to provide integrated network service orchestration for connecting and securing multi-tenant OpenStack environments. Astara provides deployer-configurable, multi-vendor orchestration for Layer 3 through 7 network services (e.g., routing and load balancing).

Astara (formerly called Akanda) was designed from the beginning to integrate with OpenStack and further OpenStack’s mission. Astara features a driver-based orchestrator to manage network functions from different providers on bare metal, in VMs, and in containers.

So what about Akanda?

Akanda will continue to provide development resources to the project, along with commercial subscriptions and enterprise support to customers and partners.




Blog: Astara Liberty Release

By Adam Gandelman, PTL Astara

This week marks the end of the first full development cycle for Astara as an OpenStack project.  It’s been an interesting six months.  I’m happy to cut the final tags and push the Astara Liberty release out the door.  Here’s a recap of some of the major technical changes to the project this cycle:


Project Akanda becomes Project Astara

Toward the end of the Kilo cycle, we made the effort to condense the entire Akanda codebase from a handful of random repositories scattered around GitHub to a small set of core repositories that lived on stackforge. This allowed the project to adopt the same familiar developer workflows that the rest of the OpenStack community knows so well, and to take advantage of all of the resources that the wonderful OpenStack Infra team provides to projects (code hosting, CI, etc.). As the OpenStack ecosystem continued to adopt the Big Tent, it was decided that stackforge would be retired. Rather than go back to a GitHub-centric world and drift away from upstream alignment, we decided it would be best to move the project into the OpenStack namespace. In doing so, we also decided it would be best to decouple the open source project from the company itself. To that end, we’ve begun the process of renaming the project from Akanda to Astara. All core Akanda repositories are in the process of being renamed accordingly: akanda-rug -> astara-rug, akanda-neutron -> astara-neutron, akanda-horizon -> astara-horizon, akanda-appliance -> astara-appliance. Depending on when you look, you may see some confusing references to old names where new names are expected. We plan to finish the rename and migration when we are all back from the Tokyo Summit.


Closer alignment with the rest of OpenStack

Astara began its life as a downstream project developed at DreamHost. There it lived for several upstream cycles while the rest of the OpenStack world began solidifying best practices and common patterns across projects. The original Astara codebase needed quite a bit of work if it was going to play well with the current state of OpenStack. To that end, we spent time early in the cycle getting all the code aligned with OpenStack global-requirements, ensuring a consistent set of dependencies between Astara repos and other OpenStack projects that may live on the same system. We also began leveraging oslo libraries wherever possible. The most noticeable change was the migration to oslo.messaging, which removed quite a bit of implementation-specific code from the messaging layer of the system. We’ve also migrated the tooling used to build our appliance VM images to diskimage-builder and ensured support for the Keystone v3 API.


Beyond Routers

One of Astara’s original goals was to ease the management of Neutron routers by treating them as cloud-native applications that could live in Nova instances.  It turns out this model works well for the other Neutron advanced services, so we set out to break up the core of the astara-rug and began refactoring things into a pluggable driver layer.  The existing router-specific code moved into the initial router driver, and we added a new driver that manages Neutron load balancers, exposed to users via the neutron-lbaas project.  We’ve extended the code that runs in our appliance VMs to understand load balancers, with the first implementation built on Nginx.  This work sets the stage for extending the entire system to provide support for managing a richer set of advanced services.  We are planning to add support for firewalls and VPNs in the coming Mitaka cycle.
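The pluggable driver layer can be pictured with a short sketch. This is an illustrative Python outline of the idea, not the actual astara-rug interfaces; all class and method names here are assumptions:

```python
from abc import ABC, abstractmethod


class ResourceDriver(ABC):
    """Illustrative base class: one driver per Neutron advanced service."""

    RESOURCE_TYPE = None  # e.g. "router" or "loadbalancer"

    @abstractmethod
    def build_config(self, neutron_resource):
        """Translate a Neutron resource into an appliance configuration."""

    @abstractmethod
    def is_alive(self, appliance_address):
        """Health-check the appliance VM backing this resource."""


class RouterDriver(ResourceDriver):
    RESOURCE_TYPE = "router"

    def build_config(self, neutron_resource):
        return {"type": self.RESOURCE_TYPE,
                "interfaces": neutron_resource.get("interfaces", [])}

    def is_alive(self, appliance_address):
        return True  # in practice: probe the appliance's REST API


class LoadBalancerDriver(ResourceDriver):
    RESOURCE_TYPE = "loadbalancer"

    def build_config(self, neutron_resource):
        return {"type": self.RESOURCE_TYPE,
                "listeners": neutron_resource.get("listeners", [])}

    def is_alive(self, appliance_address):
        return True


# The orchestrator looks up the right driver by resource type.
DRIVERS = {d.RESOURCE_TYPE: d for d in (RouterDriver(), LoadBalancerDriver())}


def driver_for(resource_type):
    return DRIVERS[resource_type]
```

The point of the refactor is visible in the last two lines: adding firewall or VPN support in Mitaka becomes a matter of registering another driver rather than touching the orchestrator core.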


Scaling the Rug beyond a single node and providing better availability

We have been working on the best approach to scale the astara-rug orchestration service beyond a single node to provide high availability. The service itself is multi-threaded and multi-process, but it is a busy little daemon; given a large system with many tenants, resources, and heavy messaging loads, it could easily have trouble keeping up. Additionally, our HA story needed to evolve: previously, the only option for providing HA was to put the service under the management of Pacemaker or another cluster manager and run in active/passive mode. To fix both of these issues, we took a page out of the Ironic playbook and implemented an active/active clustering approach leveraging a distributed hash table built around the openstack/tooz library. We lean on an external coordination service (via tooz) to track cluster membership, and construct a distributed hash table that maps individual rug processes to the Neutron resources they should be responsible for managing. This allows operators to easily scale out to many rug processes in an active/active cluster for load distribution and control plane availability.
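The idea behind the distributed hash table can be sketched in a few lines. This is a minimal consistent-hash ring mapping Neutron resource IDs to rug processes; it illustrates the concept only, and is not the openstack/tooz implementation:

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring: resources -> cluster members."""

    def __init__(self, members, replicas=100):
        # Place each member at many points on the ring so load spreads evenly.
        self._ring = sorted(
            (self._hash(f"{member}:{i}"), member)
            for member in members
            for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, resource_id):
        """Return the rug process responsible for this Neutron resource."""
        idx = bisect.bisect(self._keys, self._hash(resource_id)) % len(self._ring)
        return self._ring[idx][1]
```

Because every process builds the same ring from the shared membership list, each one can decide independently which resources it owns, and a membership change only remaps the resources that belonged to the node that joined or left.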


Pre-provisioned VM pools for quicker resource provisioning

Each logical Neutron resource managed by astara-rug lives in a Nova instance. Historically, when a user creates a router, astara-rug requests that Nova boot an instance, ensures the correct ports are attached, and pushes configuration into the appliance VM via a REST API exposed by the appliance on the management network. The most time-consuming part of the process is waiting on Nova to boot the instance. This delay is felt when a user creates a new router, when an operator wants to rebuild an existing router, and, worse, when astara-rug detects a dead router appliance and decides to bring a new one up. To address this, we’ve added a new service called astara-pez that is responsible for managing pools of hot-standby appliances that it dispenses to the Rug on request (dispensing nodes; Pez, get it?). This shrinks the time to deliver an active appliance from minutes to seconds, increasing system responsiveness and data plane availability.
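The hot-standby pool idea can be sketched as follows. Here `boot_fn` stands in for the slow Nova boot call, and the names are illustrative rather than astara-pez's actual API:

```python
import collections


class AppliancePool:
    """Sketch of a hot-standby pool: keep N appliance VMs pre-booted
    so that dispensing one is instant."""

    def __init__(self, boot_fn, target_size=3):
        self._boot = boot_fn          # slow path (e.g. a Nova boot)
        self._target = target_size
        self._standby = collections.deque()
        self.refill()                 # pay the boot cost ahead of time

    def refill(self):
        while len(self._standby) < self._target:
            self._standby.append(self._boot())

    def dispense(self):
        vm = self._standby.popleft()  # fast path: seconds, not minutes
        self.refill()                 # boot a replacement right away
        return vm
```

The trade-off is straightforward: the pool spends a little extra capacity keeping idle appliances warm in exchange for near-instant delivery when a router is created or a dead appliance must be replaced.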


That about sums up the major new developments we’ve been busy working on this last cycle. Check out the official release notes for more details, and take a look at the LP release page for a full rundown of bugs that were fixed. The developers of the project plan to convene during the OpenStack Design Summit to hash out our goals for Mitaka; we will announce when and where we will meet on the openstack-dev mailing list. There’s a lot to get done and we’d love a hand, so if this stuff interests you, feel free to check out our docs and the code, come to our talks (see below), hit us up on IRC or say hi in Tokyo — we’re booth T44 on the Marketplace floor. Kampai!


OpenStack Neutron: A Stadium in the Big Tent

Tuesday, October 27 • 11:15am – 11:55am


Tying the Room Together with Akanda  

Wednesday, October 28 • 12:40pm – 1:00pm


Hierarchical Port Binding  in Practice: Experiences With At-Scale Production

Wednesday, October 28 • 5:30pm – 6:10pm


Neutron Advanced Services Demonstration

Thursday, October 29 • 2:15pm – 2:30pm




By Mark McClain, CTO Akanda @gtwmm

Super? Mythical? Fantasy?

One of my nephews’ favorite things to talk about is what they’ll be when they grow up, and lately the answers range from Spiderman to Santa Claus to Ninja Turtle. I’m trying to be a good uncle and not crush those dreams, but sometimes even the greatest aspirations aren’t achievable. Hopefully you’ll see from this blog post and our upcoming product release that our aspirations with Astara are rooted in achievable goals, and even better, that they align with some of your goals as an OpenStack user as well.

Product Design Principles

Before we dig into the initial features and functions, I wanted to share some of the core design ideas driving what we are building.

The first is simplicity. We have first-hand experience deploying a commercial SDN/NFV platform in a production OpenStack environment, and we know the amount of work it takes to deal with plugins, overlays, and third-party integration. It is our goal to deliver a solution that ensures no one else ever has to deal with those complexities again.

The second is compatibility. Astara is Layer 2 agnostic and designed to work with your existing network, not replace it.

The third is that the Astara solution be open source. This is not a business driver or a marketing plan for us… Astara has been built as open source for the past few years and was born out of DreamHost’s search for an open source solution that would address its own OpenStack networking needs. We are leveraging open source projects to accelerate our own development and are committed to delivering an open solution to our customers.

Astara v1

So… what is Akanda, and how is it related to Astara? The simple answer: Akanda is the company providing commercial support for the OpenStack Astara project, and Astara is an open source network orchestration platform for OpenStack clouds. But we won’t leave you with just the marketing answer. Let’s start with a simple view of the product functions and capabilities and break it down from there.



The best way to think of Astara is as a network orchestration platform that delivers network services (L3-L7) via VMs providing routing, load balancing, firewall, and more. Astara also interacts with any L2 overlay, including open source solutions based on OVS and Linux bridge (VLAN, VXLAN, GRE) as well as most proprietary solutions, to deliver a centralized management layer for all OpenStack networking decisions. Astara is designed for scale and HA, with a control plane capable of scaling both up and out.

Astara & Neutron

Astara takes the place of many of the agents that OpenStack Neutron communicates with (L3, DHCP, LBaaS, FWaaS) and acts as a single control point for all networking services. By removing the complexity of extra agents, Astara can centrally manage DHCP and L3, orchestrate load balancing and VPN services, and overall reduce the number of components required to build, manage, and monitor complete virtual networks within your cloud.


What does Astara Manage?

By combining network services, Astara can centrally manage the critical functions of health monitoring, event processing, and all interactions with the OpenStack Neutron API. Astara takes event streams from Neutron, processes them, and distributes them to individual workers (more on those in a minute) that manage the lifecycle of each VM. This greatly simplifies the task of monitoring network service VMs and enables Astara to make intelligent decisions and update configurations as needed.
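The fan-out from the Neutron event stream to workers might look roughly like this sketch; the event shape and all names here are assumptions for illustration, not Astara's actual code:

```python
import queue


class EventDispatcher:
    """Illustrative fan-out: route each Neutron event to a worker queue."""

    def __init__(self, num_workers=4):
        self._queues = [queue.Queue() for _ in range(num_workers)]

    def dispatch(self, event):
        # Hash on the resource ID so all events for one router or load
        # balancer land on the same worker, preserving per-resource ordering.
        idx = hash(event["resource_id"]) % len(self._queues)
        self._queues[idx].put(event)
        return idx
```

Keying the dispatch on the resource ID is what lets each worker reason about a VM's lifecycle in order, without coordinating with its siblings.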

Astara Workers

Astara workers are deployed to manage communications with individual network service VMs.  Each Astara worker is made up of three components:

1. State Machine – State machines keep track of the lifecycle of the VM

2. VM Manager – Ensures VMs are up and running and manages interface configurations

3. Driver – Drivers enable support for multiple network VMs within Astara, including Astara’s open source network services (routing and load balancing in v1) as well as third-party services (open source or proprietary), all of which benefit from the monitoring, notification, and lifecycle management that Astara provides.
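A rough sketch of how the three components could fit together follows; the states, transitions, and names are illustrative assumptions, not Astara's actual implementation:

```python
class StateMachine:
    """Tracks the lifecycle of one appliance VM (states are illustrative)."""

    TRANSITIONS = {
        "created":     {"boot": "booting"},
        "booting":     {"up": "configuring"},
        "configuring": {"configured": "up"},
        "up":          {"dead": "created"},  # dead appliance -> rebuild
    }

    def __init__(self):
        self.state = "created"

    def on(self, event):
        # Unknown events leave the state unchanged.
        self.state = self.TRANSITIONS[self.state].get(event, self.state)


class Worker:
    """Ties the three parts together: state machine, VM manager, driver."""

    def __init__(self, vm_manager, driver):
        self.sm = StateMachine()
        self.vm_manager = vm_manager  # boots VMs, wires interfaces
        self.driver = driver          # builds per-service configuration

    def handle(self, event):
        self.sm.on(event)
        if self.sm.state == "configuring":
            # Once the VM is up, push the driver-built config into it.
            self.vm_manager.push_config(self.driver.build_config())
            self.sm.on("configured")
```

The state machine decides *when* something should happen, the VM manager decides *how* to act on the instance, and the driver decides *what* configuration a given service type needs.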

Hopefully this gives you an idea of the inner workings of Astara and the simplification it can provide for your OpenStack deployment.