
When Closed Is Not a Bad Thing

I remember the days when the network was open. Your PC, workstation or whatever you had on your desk could access whatever it needed (or did not need). Networking was an enabler of communication: it allowed you to put stuff onto the network and take other stuff off the network. Rather quickly, most network operators figured out that the network needed some basic protection from silliness. Most of the early silliness was rooted in bad network implementations: devices not responding correctly, spitting out broadcasts when they should not, or just going haywire. We now call them DoS filters on switches and routers, but some basic filtering of malformed packets arrived rather quickly as a reaction to network outages. Back then the triggers were rarely deliberate; these days, pretty much any hit against these rules should be considered deliberate.

Since then a lot has changed. Universal connectivity, the Internet, Denial of Service attacks, hacking, and all things associated with them have drastically changed our view of free connectivity. Everyone has a firewall between the Internet and the corporate network. Many networks have firewalls between internal portions of the network, attempting to control which devices and applications can talk to each other and which cannot.

When talking to datacenter and cloud customers, there is an interesting shift away from the open connectivity approach we have always used to build networks. If you take a step back and think about how servers, applications and appliances communicate, there are very specific and very limited patterns of communication. A VM or bare metal server hosting a SQL database should only expect communication from a specific set of database clients on TCP port 1433 or UDP port 1434. In a datacenter-based service, the clients of this database are well known. They are specifically instantiated applications themselves, either as bare metal servers or as VMs. They don't just pop up without the data center orchestration system knowing it. There is really no need for the network to allow any communication to or from this database application if it's not from the defined database clients, on the defined sets of ports.
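
To make the point concrete, the entire legitimate traffic profile of such a database fits in a handful of entries. A minimal sketch in Python, with made-up addresses (the subnet and names are purely illustrative):

    # Hypothetical allow-list for a SQL database server; addresses invented.
    # Anything not matching these entries has no business reaching the database.
    ALLOWED_FLOWS = [
        # (client subnet,  protocol, destination port)
        ("10.1.20.0/24", "tcp", 1433),  # app-tier clients -> SQL queries
        ("10.1.20.0/24", "udp", 1434),  # SQL Server browser/resolution service
    ]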

For such basic "weeding out of traffic that is not allowed", you do not need a firewall. The network devices are perfectly capable of examining traffic against these basic rules and making an allow-or-discard choice. Typical networks have fairly complex ACL generation machines (read: a ton of Perl/Python/Ruby/... scripts) to ensure that some of these basic rules are enforced. The challenge is that managing this ever-increasing set of lists gets more complex every time you add a new device, a new application or a new service. And this does not just apply to physical switches; the challenge is no less for virtual switches.
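
What those script machines do is conceptually simple. A minimal sketch, assuming a simplified inventory format and ACL output syntax (both invented here for illustration, loosely modeled on extended ACLs):

    # Sketch of an ACL generator: turn an application inventory into device
    # ACL lines. Inventory format and ACL syntax are simplified illustrations.
    INVENTORY = {
        "sql-db": {
            "address": "10.1.30.5",
            "allowed_clients": ["10.1.20.0/24"],
            "ports": [("tcp", 1433), ("udp", 1434)],
        },
    }

    def generate_acl(inventory):
        lines = []
        for app, spec in inventory.items():
            for client in spec["allowed_clients"]:
                for proto, port in spec["ports"]:
                    lines.append(f"permit {proto} {client} "
                                 f"host {spec['address']} eq {port}")
        lines.append("deny ip any any")  # everything else is dropped
        return lines

    for line in generate_acl(INVENTORY):
        print(line)

The pain point is not writing this script once; it is regenerating and redistributing these lists every time the inventory changes.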

There are several evolutionary or even revolutionary things being done to make this easier. You have hopefully heard us talk about devops many times, and this is a fine example of it. The script machines mentioned before are most certainly devops, but as network solution providers, we can do so much more to make the life of the network/devops engineer easier. In a controlled data center environment, every application instantiation is a deliberate one. It has a role, a service, and a very well defined set of other applications it talks to. Kind of like what we created Affinities for.
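
That relationship can be declared once, as data, instead of being scattered across per-device ACLs. The schema below is a hypothetical illustration of the idea, not Plexxi's actual Affinity syntax:

    # Hypothetical declaration of an application relationship ("affinity").
    # Illustrative only; not Plexxi's actual Affinity model or syntax.
    affinity = {
        "name": "app-tier-to-sql",
        "group_a": "app-tier",   # the database clients
        "group_b": "sql-db",     # the database service
        "allowed": [("tcp", 1433), ("udp", 1434)],
        "default": "deny",       # anything outside the affinity is dropped
    }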

This week specifically, there is a huge amount of application-centric news. The desire or need to allow only very clearly defined communication cannot be met without taking an application-centric view. The provisioning of the network has to become an integral part of the deployment of applications. Today, this takes the form of deploying specific ACL-like policy rules to narrow down what these applications can talk to. Or the reverse: the creation of ACL-like policy rules that explicitly enable communication on a network that is otherwise closed down. And not closed down because some overarching rules throw away everything not explicitly allowed, but closed down because the network itself was built closed as a fundamental choice.
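
In practice, that means the deployment workflow installs policy before it starts the application. A sketch, with a hypothetical controller API (allow_flow and start_instance are invented names):

    # Sketch of provisioning as part of the deployment workflow. The
    # controller calls (allow_flow, start_instance) are hypothetical; the
    # point is the ordering: policy first, then the application instance.
    def deploy_application(controller, app_spec):
        # 1. Create the explicit permissions this application needs.
        for peer, proto, port in app_spec["talks_to"]:
            controller.allow_flow(src=app_spec["name"], dst=peer,
                                  protocol=proto, port=port)
        # 2. Only then bring the instance up on the otherwise closed network.
        controller.start_instance(app_spec["name"], image=app_spec["image"])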

When you think about it, it makes perfect sense to turn the old model upside down in a very controlled application environment. Start with no connectivity at all on the network. Every packet is dropped, except those the network needs to find out where and what the application is (perhaps ARP, or the first MAC-learning packet). Clear, concise and very explicit integration of provisioning and orchestration systems creates the application deployment workflow that actually enables packets to start to flow. And again, I very deliberately make no distinction between virtual and physical networks. Our view there should be clear: the two must be treated as an integrated system.
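
Expressed as forwarding logic, the inversion looks roughly like this toy model (not an actual switch implementation):

    # Toy model of a closed-by-default forwarding decision. Real switches
    # and controllers are far more involved; this only shows the inversion.
    PERMITTED = set()  # (src, dst, proto, port) entries installed by orchestration

    def forward(packet):
        if packet.proto == "arp":   # let the network learn who and where things are
            return "flood"
        key = (packet.src, packet.dst, packet.proto, packet.port)
        if key in PERMITTED:        # explicitly enabled by the deployment workflow
            return "forward"
        return "drop"               # everything else: the network is closed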

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.