From ForCES to Ethane (English)
by Sasha Shkrebets
Welcome back to the course on software defined networking.
In this lesson, we'll be wrapping up our discussion of the history of software
defined networking, and in particular we will
explore the history of control of packet-switched networks.
We've already explored a bit of the history of central network control, but most of that was in the context of the phone network and circuit-switched networks. Today, we'll look at how the control of packet-switched networks has evolved.
I'll begin by reminding us why separating network control from the data plane is a good idea, and then we'll look at different ways that have been developed to control packet-switched networks.
We'll start by looking at the ForCES protocol, which was developed in the Internet Engineering Task Force and effectively defined a custom, separate control channel to control switches, or forwarding elements.
Then we'll look at the routing control platform, which used an existing protocol, in particular the Border Gateway Protocol (BGP), to dictate or control the routing decisions, or forwarding decisions, that BGP-speaking routers in a backbone network made.
Then we'll look at how the emergence of open hardware enabled much more widespread adoption of separate control of packet-switched networks.
So, first let's remind ourselves why separate control is a good idea.
First, it enables more rapid innovation, since
control logic isn't directly tied to the hardware.
Second, it enables the controller to potentially see a network-wide view,
thereby making it easier to infer and reason about network behavior.
Finally, having a separate control channel makes it possible to have a separate software controller, which makes it much easier to introduce new services to the network.
Let's now look at three different ways
that were developed to control packet-switched networks.
The first instance of a separate control channel for packet-switched networks came out of the Internet Engineering Task Force, in the form of the ForCES protocol. The protocol was first standardized in 2003, and there were three implementations of this particular standard.
The standard essentially defined protocols that would allow multiple control elements to control forwarding elements, which would be responsible for forwarding packets, metering, shaping, performing traffic classification, and so forth.
So the idea was that the switches, or forwarding elements, could be controlled over a standard control channel called the ForCES interface, and there might be multiple such controllers controlling the forwarding behavior of these forwarding elements.
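To make that architecture concrete, here is a minimal sketch of the control element/forwarding element split in Python. The class, table, and field names are hypothetical illustrations of the model, not the actual ForCES wire protocol the IETF defined.

```python
# A minimal sketch of the ForCES model: control elements (CEs) push
# configuration to forwarding elements (FEs) over a control channel.
# All names here are hypothetical; this is not the ForCES encoding.
from dataclasses import dataclass, field

@dataclass
class ForwardingElement:
    fe_id: int
    # Tables governing forwarding, metering, classification, and so forth.
    tables: dict = field(default_factory=dict)

    def apply_config(self, table: str, entry: dict) -> None:
        """Install an entry pushed down by a control element."""
        self.tables.setdefault(table, []).append(entry)

@dataclass
class ControlElement:
    ce_id: int

    def configure(self, fe: ForwardingElement, table: str, entry: dict) -> None:
        # In real ForCES this would be a message over the CE-FE interface;
        # a direct call stands in for that channel here.
        fe.apply_config(table, entry)

# Multiple control elements may configure the same forwarding element.
fe = ForwardingElement(fe_id=1)
ControlElement(ce_id=1).configure(fe, "classifier", {"dscp": 46, "queue": "voice"})
ControlElement(ce_id=2).configure(fe, "meter", {"rate_kbps": 10000})
```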
So, this is all well and good, and in some ways it looks a lot like the OpenFlow standard that we know today.
But the problem with this particular approach was that it
required standardization, adoption by vendors, and deployment of new hardware.
And these hurdles were the same ones that stood in the way of some of the earlier work, such as the active networks projects that we looked at.
Another approach was to use existing protocols as control channels, essentially hijacking existing routing protocols to send control messages to the forwarding elements.
This approach was taken by the routing control platform.
The idea was that every autonomous system, or independently operated network, might have a routing control platform, or RCP, which would compute routes on behalf of the routers and then use the existing BGP protocol to communicate these routes to the routers. So the idea was that the route computation would happen in the RCP, but once the routes were computed, the results of that computation would be pushed into the routers' forwarding tables by standard routing protocols.
So these routers would be under the impression that they were just speaking to any old router, but in fact they were speaking to a smart box that was actually computing the routes on their behalf.
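As a rough illustration of that division of labor, here is a Python sketch. The route-selection function is a deliberately simplified stand-in for the BGP decision process, and the session class is a hypothetical placeholder for a real iBGP connection.

```python
# A sketch of the RCP idea: select best routes centrally, then announce the
# winners to each router over an ordinary iBGP session, so routers believe
# they are just talking to another router.

class IBGPSession:
    """Hypothetical stand-in for an iBGP session to one router."""
    def __init__(self, router_id):
        self.router_id = router_id

    def announce(self, prefix, route):
        # A real session would send a BGP UPDATE; we just log the decision.
        print(f"to {self.router_id}: {prefix} via {route['next_hop']}")

def select_route(candidates):
    """Simplified BGP decision: highest local-pref, then shortest AS path."""
    return max(candidates, key=lambda r: (r["local_pref"], -len(r["as_path"])))

def rcp_push_routes(rib, sessions):
    # rib maps each prefix to the candidate routes learned from neighbors.
    for prefix, candidates in rib.items():
        best = select_route(candidates)
        for session in sessions:
            session.announce(prefix, best)

rib = {"203.0.113.0/24": [
    {"next_hop": "192.0.2.1", "local_pref": 100, "as_path": [65001, 65002]},
    {"next_hop": "192.0.2.2", "local_pref": 200, "as_path": [65003]},
]}
rcp_push_routes(rib, [IBGPSession("r1"), IBGPSession("r2")])
```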
Using these existing in-band protocols to control a packet-switched network effectively makes it easier to transition from the status quo to a design where all of the control is centralized in a particular network.
The RCP effectively used BGP as a control channel
so that the forwarding elements thought that they were talking
to just another router, but in fact, all of the
smarts for the network were centralized at a single point.
This approach makes deployment somewhat easier since it doesn't
require standardization on a new set of control protocols.
However, the problem with the approach is that the control that one has
over the network is constrained by what existing protocols like BGP can support.
So effectively, the RCP was limited, because all it could do was control BGP routing decisions, when in fact a network operator might want to control a much wider range of behaviors.
Ultimately, the architecture still proved useful, and a version of the RCP is running in at least one large backbone network today to do things like automated traffic redirection for security incidents, or traffic scrubbing. Nevertheless, the range of applications that something like the RCP can support is still relatively limited in comparison to what might be possible with a general, separate control plane.
Customizing the hardware in the data plane potentially makes it easier to support a much wider range of applications in the control plane. The first project to realize this was the Ethane project, which presented a network architecture for the enterprise that allowed direct enforcement of a single fine-grained network policy at something called a domain controller. The domain controller would compute the flow tables that should be installed in each of the enterprise's switches, based on the access control policies defined at the domain controller.
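As a sketch of that compilation step, the Python below turns a toy access-control policy into per-switch flow entries. The rule format, the locate() helper, and the host names are all hypothetical; Ethane's actual policy language was considerably richer.

```python
# A sketch of the Ethane idea: a domain controller holds one fine-grained
# policy and compiles it into flow-table entries for the switch where each
# source host attaches. The rule format and locate() helper are hypothetical.

POLICY = [
    # (source host, destination host, allowed?)
    ("alice-laptop", "web-server", True),
    ("guest-phone", "web-server", False),
]

def compile_policy(policy, locate):
    """Return {switch: [flow entries]} derived from access-control rules."""
    flow_tables = {}
    for src, dst, allowed in policy:
        switch, _port = locate(src)  # where does the source host attach?
        action = "forward" if allowed else "drop"
        flow_tables.setdefault(switch, []).append(
            {"match": {"src": src, "dst": dst}, "action": action})
    return flow_tables

# Hypothetical host locations in a two-switch enterprise network.
locations = {"alice-laptop": ("sw1", 3), "guest-phone": ("sw2", 7)}
print(compile_policy(POLICY, locations.get))
```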
Ethane required the deployment of custom switches, and the project implemented several of these based on OpenWrt, NetFPGA, and Linux.
The problem with this approach, of course, is that
it requires custom switches that support the Ethane protocol.
So what we're looking for is something that gets us the best of both worlds: something that could operate on existing protocols, yet wouldn't require customizing the hardware. The answer was OpenFlow.
And here the idea was to basically take the capabilities of existing hardware and open those up, such that a standard control protocol could control the behavior of that hardware.
In OpenFlow, a separate controller communicates with the switch's flow table to install forwarding table entries into the switch that control the forwarding behavior of the network.
Because most switches already implemented flow tables, the only thing that
was necessary to make OpenFlow a reality was to convince the switch vendors
to open the interface to those flow tables, so that a separate software
controller could dictate what would be populated in those flow tables.
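To show what that interface looks like in practice, here is a minimal flow-installation sketch written against the Ryu controller framework and OpenFlow 1.3. Ryu is just one controller among several, and the match fields and output port below are hypothetical examples, not anything specific to this lesson.

```python
# A minimal sketch of installing a flow entry from a separate software
# controller, using the Ryu framework and OpenFlow 1.3. The destination
# address and output port are hypothetical examples.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FlowInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4 traffic to 10.0.0.1 and forward it out port 1.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="10.0.0.1")
        actions = [parser.OFPActionOutput(1)]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        # The FlowMod message populates the switch's flow table directly.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```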
So what have we learned about network control in this lesson?
One of the lessons is that the control and data planes should definitely be decoupled, because vertically integrated switches make it very difficult to introduce new control planes.
In some sense, ForCES looks a lot like OpenFlow, but because it required new standardization, adoption, and changes to the hardware, deployment ultimately became very difficult.
The second lesson is that using existing protocols makes deployment easier, but it also constrains what can be done, as we saw with the RCP.
Finally, we saw that open hardware can allow decoupling of the control plane and can actually spur adoption, and this is potentially one of the reasons that OpenFlow was so incredibly successful relative to these other, similar proposals.