Mainstream Technologies CEO John Burgess On Why Your Server Provider Should Tolerate Faults

John Burgess began his career in software development with Arkansas Systems Inc. in 1987. A year later he began a similar job at Dillard’s, where he stayed until he co-founded Mainstream Technologies Inc. in 1996. He also serves on the board of advisers for the International Association of Cloud & Managed Service Providers.

Burgess holds a Bachelor of Science degree in computer science from the University of Arkansas at Little Rock.

What makes managed services safer or better than owning your own servers?

Two factors: scalability and focus. A typical business trying to manage its own infrastructure has to hire talent and expertise one person at a time and, especially for small or medium-sized businesses, must make trade-offs between operational tasks, training and tools when budgeting scarce time and money. By retaining a managed services provider, a business automatically gains an economy of scale because the staffing, training and tool costs are shared with other businesses, resulting in better service at a more appropriate cost.

Also, the MSP is focused on IT expertise and best practices, allowing the client business to focus on its core mission and to reap the benefits of IT expertise without the distraction of having to manage IT.

What are some of the logistics of maintaining a data center? How do you keep it safe from power outages, etc.?

The key concept here is “fault tolerance,” meaning that the support systems of the data center need to tolerate a fault without disrupting service. One way to do this is expressed by “n+1.” If “n” is the number of units of a particular component needed for the data center to function (air-conditioning units, fiber connections to the Internet, etc.), then you design the data center with at least n+1 of each component type. So if the data center requires four 10-ton air conditioners to maintain the desired climate, n+1 means there should be at least five 10-ton units.
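The n+1 rule above can be sketched in a few lines of code. This is an illustrative example only, not anything from Mainstream Technologies; the component names and counts are hypothetical, with the air-conditioner figures taken from the interview.

```python
# Sketch of the "n+1" rule: for each component type, provision at least
# one unit beyond the number needed for normal operation, so a single
# fault does not disrupt service.

def n_plus_one(required_units: int) -> int:
    """Return the minimum units to provision under the n+1 rule."""
    return required_units + 1

# Hypothetical component inventory; the interview's example is four
# required 10-ton air conditioners, so at least five are installed.
required = {"10-ton air conditioner": 4, "fiber uplink": 2}
provisioned = {name: n_plus_one(n) for name, n in required.items()}
```

Larger facilities sometimes extend the same idea to “n+2” or “2n” designs, which simply swap in a different provisioning formula.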

How would you describe the “cloud” in layman’s terms?

“Clouds” first appeared on IT network diagrams in the 1980s and were used to indicate parts of a network that were owned and managed by the telecom vendor instead of the business — with the details of how the service was provided not being the responsibility of the business. The opaqueness of the cloud still means just that: “I don’t know how it works, but it’s not my problem.” The cloud concept of “infrastructure-as-a-service” has expanded over the last 10 years to encompass computing power, applications and IT support services in addition to the original role. It has also evolved to imply some measure of fault tolerance or redundancy instead of just a single instance of whatever component resides in the cloud.

What are some upcoming trends in cloud computing? In the future will we even use the term “cloud”?

Cloud computing is here to stay as it represents an economically superior alternative to owning and managing IT infrastructure. More and more applications and services are being pushed onto cloud platforms. Cloud providers are paying special attention to information security concerns in light of recent high-visibility breaches and the attendant pause as the market weighs the benefits of cloud computing against perceived risk. I believe the term “cloud” will remain in some form as it is an easily digestible term for the public. Within the industry, we struggle with how to differentiate between “cloud” and other forms of infrastructure-as-a-service when talking to customers, especially when there are very nuanced differences.