
Cloud is designed to fail

Written by Roy Mikes on Friday, 19 April 2013. Posted in Cloud

The difference between a datacenter design and a cloud design is that datacenters are built to prevent failure, rather than assuming that something will fail. Datacenters are built with a great deal of redundancy to prevent outages. One of the main concerns is business continuity: companies rely on their information systems to run their operations, and if a system becomes unavailable, operations may be impaired or stopped completely. It is therefore necessary to provide a reliable infrastructure for IT operations. That generally includes redundant or backup power supplies, redundant data communications connections, environmental controls, air conditioning, fire suppression and security devices. But be aware: there is always a single point of failure, and it is often found in the application layer.

In a cloud environment, which often comes down to a SaaS service model, it is a huge challenge to design failure-free applications. So let's assume anything can fail. Servers fail. Storage fails. Networks fail. Cloud applications are designed to be resilient to failure, and they are designed to be robust at the application level rather than at the infrastructure level. So here is your challenge.

In computing and systems design, a loosely coupled system is one in which each component has, or makes use of, little or no knowledge of the definitions of other, separate components. In other words: design services to be loosely coupled. Each service should operate independently and be easy to call upon, or combine with, other services. This improves design efficiency and developer productivity. Application developers will be able to build new kinds of applications, with new kinds of capabilities, that take cloud solutions into account. To take advantage of the benefits of cloud computing, application developers need to think differently about application development. Some of the design features that should be considered are:

  • Design for failure
  • New data models like NoSQL
  • Distributed solutions
  • Scaling out, instead of up
  • Loosely coupled applications

There will undoubtedly be more...
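"Design for failure" can be made concrete even at the smallest scale: instead of assuming a dependency will answer, the caller plans for it not to. Below is a minimal sketch in Python using only the standard library; the names (`call_with_retries`, `flaky_service`) are illustrative, not from any particular framework.

```python
import random
import time

def call_with_retries(operation, attempts=3, base_delay=0.1):
    """Retry a flaky operation with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # Back off before retrying; jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

# Simulate a service that fails twice and then recovers.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_retries(flaky_service))  # prints "ok" after two retries
```

The point is that the failure handling lives in the application, not in the infrastructure: the code assumes the network call can fail and recovers on its own.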

In cloud environments, developers can adopt a loosely coupled design approach to minimize extra configuration steps. For example, a web server is configured to use a messaging service to talk to the application server, rather than talking to the application server directly. In this way, the web tier can be distributed without having to worry about specific application-tier configurations or dependencies.
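The messaging pattern above can be sketched in a few lines. This is a toy in-process version using Python's standard library `queue` and `threading` modules; in a real cloud deployment the queue would be an external messaging service, and the tier names here are hypothetical.

```python
import queue
import threading

# The web tier only knows about the queue, not about any app server.
work_queue = queue.Queue()

def web_tier(order_id):
    """Enqueue work instead of calling the application tier directly."""
    work_queue.put({"order_id": order_id})

processed = []

def app_tier_worker():
    """Consume messages; either tier can be scaled or replaced independently."""
    while True:
        msg = work_queue.get()
        if msg is None:          # shutdown signal
            break
        processed.append(msg["order_id"])
        work_queue.task_done()

worker = threading.Thread(target=app_tier_worker)
worker.start()
for oid in (1, 2, 3):
    web_tier(oid)
work_queue.put(None)             # tell the worker to stop
worker.join()
print(processed)  # [1, 2, 3]
```

Because the two tiers share only the message format, you can add more workers, restart the application tier, or swap its implementation without touching the web tier.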

As described previously, designing loosely coupled services is advantageous in a cloud environment because it reduces the complexity of service mobility and improves availability. These two are strong requirements for a cloud.

So you know that things can go wrong, right? And it would be naive to think that the worst that can happen won't happen. Basically, the goal of a loosely coupled architecture is to reduce the risk that a change made within one component will create unanticipated changes within other components. Limiting interconnections can help isolate problems when things go wrong and simplify testing, maintenance and troubleshooting procedures.

There is always a chance that software does not work. Limit the risks - and then you have people... An interesting question is how we can design software to remove human error as much as possible. Significant improvements in availability have been achieved with software that no longer requires human intervention, or that is resilient to human failure.
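One simple way to make a system resilient to human failure is to validate a proposed change automatically before it takes effect, and to keep the old state if validation fails. The sketch below is a hypothetical illustration; the config keys (`max_connections`, `upstream_hosts`) are made up for the example.

```python
def validate_config(config):
    """Reject obviously broken changes before they reach production."""
    errors = []
    if config.get("max_connections", 0) <= 0:
        errors.append("max_connections must be positive")
    if not config.get("upstream_hosts"):
        errors.append("at least one upstream host is required")
    return errors

def apply_change(current, proposed):
    """Apply a change only if it validates; otherwise keep the old config."""
    errors = validate_config(proposed)
    if errors:
        return current, errors   # nothing changes: an automatic safety net
    return proposed, []

good = {"max_connections": 100, "upstream_hosts": ["app1", "app2"]}
bad = {"max_connections": 0, "upstream_hosts": []}

active, errs = apply_change(good, bad)
print(active is good, errs)      # the broken change was rejected
```

A guard like this would not have prevented every outage mentioned below, but it captures the idea: the software itself checks the operator's work instead of trusting it blindly.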

I guess we all remember the four-day Amazon AWS outage. The reason: a network modification! Or the Microsoft Sidekick outage: a week-long outage and full data loss. The reason: a network modification! Last but not least, Google Mail: 150,000 users lost their email for up to four days. The reason: a software modification.

Anyway, it comes down to the apps.



About the Author

Roy Mikes


Roy Mikes has developed deep knowledge of virtualization, storage and cloud in a broad perspective over the past 18 years, and has recently been sharpening his focus more and more on AI and blockchain. Because of that knowledge and focus, Roy works as an Advisory Partner Solution Development Lead & Evangelist at Dell EMC.
