Is the Public Cloud Worthy of our Trust?

The Cloud, as we all know it, has become such a massive part of our daily lives that it can seem a bit overwhelming at times. For many people, the Cloud holds a strange, almost magical mystique. When discussions turn to the Cloud, a hushed reverence sometimes permeates the conversation, something akin to prayer and worship. For certain individuals, the Cloud evokes a nearly religious devotion. But is the Cloud worthy of such avid devotion, or is it more of a flawed deity, no less vulnerable than the humans who created it and continue to nurture it today?

Let’s take a quick look at the Cloud’s simple origins. In its most basic form, the Cloud is merely a server, or several servers, sitting in a data center somewhere, connected by an intranet for private use or made available to the public via the internet. The Cloud Almighty has existed since January 1, 1983, when ARPANET adopted TCP/IP; it took on a more familiar form in 1990, when ARPANET was decommissioned and computer scientist Tim Berners-Lee was credited with inventing the World Wide Web.

A private cloud typically provides connectivity between two dedicated sites and is locked down for use by a single organization. Also known as an internal cloud, it keeps all data protected behind firewalls on the company’s intranet. A private cloud is a common option for companies that have more than one data center along with all the hardware and components needed to build a cloud; all maintenance and updating of the infrastructure is the sole responsibility of the company. Private clouds may offer an increased level of security, since there is little or no sharing of resources with other organizations.

In a typical public cloud scenario, data is stored in a service provider’s data center, and the provider is responsible for managing and maintaining the data center and all related functions. More and more companies are moving toward the public cloud or a mix of private and public options. Some companies feel that security may be lacking in the public cloud; however, breaches are rare, and your data typically remains separate from that of other tenants.

Smaller companies tend to choose a public cloud in an effort to reduce maintenance costs, infrastructure expenses, OPEX and CAPEX. Larger companies may be inclined to choose a private cloud to maintain greater control and an enhanced sense of security… whether real or perceived.

When it comes to private or public clouds, there is still a proverbial elephant in the room, and it looms large in the psyche of companies of any size, whether large or small. Cloud network outages are huge lumbering mammoths, the kind of catastrophic event no company wants to experience. Amazon Web Services (AWS) is another behemoth, the dominant market player in the space. The AWS idea was conceived as early as 2000, the concept began to take shape and was publicly discussed in 2003, and the first customer-facing launch took place in 2005. Those religiously devoted to the Public Cloud often place AWS on a very tall pedestal, and AWS enjoys an exalted position of respect and dominance in the public cloud arena, but not all is roses and tulips in the Kingdom of Cloud. AWS continues to prick its fingers on the thorns of network outages.

The most recent AWS network outage occurred in the Northern Virginia region on the morning of February 28, 2017, while the S3 team was debugging an issue that was causing the S3 billing system to progress more slowly than expected. An employee error took down a large swath of Amazon services for nearly four hours. Another AWS outage took place in Sydney, Australia, in June 2016, when massive thunderstorms caused AWS EC2 and EBS services to fail; a significant number of prominent websites and other online presences were down for 10 hours over a weekend. Since AWS’s inception, there have been seven notable network outages.

What conclusions can be drawn about the Public Cloud from events like these? Some might say that, regardless of the problems that exist, there are few inventions that positively influence our lives so profoundly on a daily basis. Others might say that events like these point to dangerous flaws in the systems that impact our lives and that there is much to be concerned about.

Regardless of your perspective on all things Cloud and Internet, one thing is certain: both are here to stay, and what the future holds may be significantly different from how it is imagined today.


Bad Turkey, Bad Recovery?


If you’ve ever suffered from a bad case of food poisoning, you know that it’s something you hope never to experience again. And when it comes to Thanksgiving and the traditional turkey feast, it pays to take a few precautions to avoid Bad Turkey!

According to the CDC, FDA and the USDA, there are at least 12 precautions you should take to ensure that food poisoning or salmonella doesn’t show up as an uninvited guest at your Thanksgiving festivities.

If your credit union has ever experienced a significant IT related systems failure, you also hope never to experience that sick feeling again. Interestingly, some of the steps necessary to prevent Bad Turkey are similar to the steps necessary to prevent Bad Recovery!

Here is a brief comparison:

  • Turkey: Thaw your turkey slowly in a refrigerator, don’t rush it by putting it out on the counter to thaw at room temperature.
  • Disaster Recovery: Implementing a viable Disaster Recovery solution takes time, a thorough approach and regular testing.
  • Turkey: Raw poultry can contaminate anything it touches, so extra precautions are warranted. Follow these simple rules: cook, clean, chill and separate.
  • Disaster Recovery: Contamination in the backup and recovery process is a constant threat and the source of contamination is often what you least expect.
  • Turkey: Properly stuffing a turkey is more science than art. To reduce the risk of food poisoning, follow the cook, clean, chill and separate rules. If you stuff the turkey’s cavity, do so just before the turkey goes in the oven, and monitor the internal temperature closely, ensuring that all stuffing reaches a safe internal temperature of at least 165 degrees Fahrenheit. Putting the stuffing in a casserole dish and cooking it separately from the turkey to a minimum of 165 degrees is widely considered the safest way to prepare stuffing.
  • Disaster Recovery: When it comes to recovering a failed server, the recovery results are only as good as the backup you are relying on to perform the recovery. Many people believe that backups work on a “set it and forget it” model. However, even the most scientifically advanced backups require proper monitoring, and none are foolproof. Backups fail for numerous reasons, and even those that show as complete may have skipped a critical file or component in the backup process. To ensure that a recovery will complete successfully, monitor backups and systems to confirm that all backups run to completion, are health checked and have proper retention policies.
  • Turkey: The USDA or the governing state requires that all poultry be inspected for “wholesomeness,” but grading for quality is not mandatory, although many companies pay to have their poultry graded. Grade A is the highest quality and the only grade you are likely to see at the retail level. Grade A indicates that the poultry products are virtually free from defects such as bruises, discolorations and feathers. Bone-in products will have no broken bones. For whole birds and parts with skin on, there will be no tears in the skin or exposed flesh that could dry out during cooking, and there will be a good covering of fat under the skin. Also, whole birds and parts will be fully fleshed and meaty. No hormones have been approved for use in turkeys. Antibiotics may be given to prevent disease and increase feed efficiency. In approving drugs for use in livestock and poultry, the Food and Drug Administration (FDA) and the Food Safety and Inspection Service (FSIS) work together: the FDA sets legal limits for drug residues in meat and poultry, and FSIS enforces those limits.
  • Disaster Recovery: For the most reliable method of disaster recovery, it pays to use hardware and software components from the upper echelon of Gartner’s Magic Quadrant. However, enterprise-level backup and recovery components often come with a substantial price tag. Choosing a Managed Service Provider (MSP) that uses components from Gartner’s Magic Quadrant and eliminates the pricing barrier to top-tier solutions through a shared distribution model is the easiest way to get state-of-the-art recovery assurance at a fraction of the cost of a DIY solution.

Whether you’re looking to serve friends and family a delicious, wholesome and safe Thanksgiving bounty or you’re looking to ensure that your IT systems are easily recoverable, it pays to follow some basic rules, with a focus on proper preparation and safety.

Happy Thanksgiving From Your Friends at Information Management Solutions!


All Howl-ows…Tide?


Every October, a large segment of our population is simply enthralled with having the living spook scared out of them, and with Halloween rapidly approaching, we thought it would be ghoulishly appropriate to share some frightening fun facts about our fascination with All Hallows’ Eve.

Halloween is believed to have originated in Ireland with the ancient Celtic festival known as Samhain (pronounced säwėn), which is celebrated on November 1st. However, on the night before Samhain (October 31), the Celtic people believed that the dead returned as ghosts to roam the countryside. Villagers left food and wine on their doorsteps to keep the ghosts at bay, and when the villagers left their homes, they wore masks so the dead would mistake them for fellow ghosts.

In the 8th century, the Christian Church turned Samhain into All Saints’ Day. October 31, or All Saints’ Eve, evolved into Halloween or Hallowe’en, also known as Allhalloween or All Hallows’ Eve. The observances encompass All Saints’ Eve (Halloween), All Saints’ Day (All Hallows) and All Souls’ Day, and they run from October 31 to November 2 each year. All of these observances stem from Allhallowtide, a time to remember the dead, including martyrs, saints and all faithful departed Christians.

In medieval Britain, the tradition of “souling” began on All Souls’ Day (November 2nd), when the needy would beg for pastries known as soul cakes and, in return, would pray for people’s dead relatives. As time passed, the practice of “souling” evolved into “guising,” in which young people would dress up in costume and accept food, wine, money and other offerings in exchange for singing, reciting poetry or telling stories or jokes. In the 19th century, Irish immigrants brought the tradition of dressing up in costume to America, and in the 1950s trick-or-treating went mainstream with a whole new generation.

According to the National Retail Federation, Halloween is the second-highest-grossing holiday after Christmas, and Nielsen research reports that nearly 600 million pounds of candy are purchased each Halloween. Halloween spending also extends to costume purchases of nearly $2.6 billion: adult costumes account for nearly $1.22 billion, kids’ costumes for $1.04 billion, and millions are spent each year on pet costumes. Let’s not forget all the life-size skeletons, blow-up monsters, fake cobwebs, mantelpieces and other scary decorations, which average around $1.96 billion annually. We spend approximately $360 million on Halloween-related greeting cards, and there is an annual spike in alcohol purchases in the days preceding Halloween.

Want to have a little spooky fun? Try these Halloween related activities:

  • Halloween Name Generators:

http://en.vonvon.me/quiz/3684?utm_viral=2

https://fun.namerobot.com/name/halloween

http://witch.namegeneratorfun.com/

  • Not Too Scary Stories for Kids:

http://www.sheknows.com/parenting/articles/1016713/scary-halloween-stories-for-kids

  • Best Horror Movies of 2017

http://www.esquire.com/entertainment/movies/a56573/best-horror-movies-2017/

  • Best Horror Podcasts

https://www.thrillist.com/entertainment/nation/best-scary-podcasts-horror

  • Pinterest Best Halloween Pranks

https://www.pinterest.com/explore/halloween-pranks/?lp=true


What Every Credit Union IT Manager Needs to Know About Backup


With technology changing rapidly, it is difficult for credit union IT managers to keep up. The entire landscape has changed over the past 10 years as virtualization has become widely adopted. Additionally, with the rapid growth of data and the demand for uptime driven by internet and mobile banking, backup windows are shrinking.

Many credit unions are still backing up to tape. However, tape is unreliable and inefficient, and it raises serious compliance concerns. If you are one of the many credit unions still using tape, or are simply struggling with the issues above, this article might help you better navigate the complexities of backup technology.

In the world of tape backups, you copy all files and databases to tape. To back up more efficiently, you might perform an incremental backup. An incremental compares files against the prior backup and copies only the ones that have changed. If there is a database on the server and even a single record is written, an incremental will still need to back up the entire database. If a single word is added to an existing Word document, an incremental will need to back up the entire file. Should you need to recover, you must restore the original full backup and then each subsequent incremental backup.
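
To make that concrete, here is a minimal sketch of a file-level incremental job (the paths and the timestamp file are purely illustrative, not taken from any particular backup product). Anything modified since the previous run is recopied in its entirety, which is exactly why a one-record change to a large database file forces the whole file to be backed up again.

    import os
    import shutil
    import time

    # Hypothetical paths, purely for illustration.
    SOURCE_DIR = "/data"
    BACKUP_DIR = "/backup/incremental"
    STATE_FILE = os.path.join(BACKUP_DIR, ".last_backup_time")

    def incremental_backup() -> None:
        os.makedirs(BACKUP_DIR, exist_ok=True)

        # Timestamp of the previous run; 0 means "copy everything" (a full backup).
        last_run = 0.0
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                last_run = float(f.read().strip())

        for root, _dirs, files in os.walk(SOURCE_DIR):
            for name in files:
                src = os.path.join(root, name)
                # Any file modified since the last run is copied in full, which is
                # why a single new record recopies an entire multi-GB database file.
                if os.path.getmtime(src) > last_run:
                    dst = os.path.join(BACKUP_DIR, os.path.relpath(src, SOURCE_DIR))
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.copy2(src, dst)

        with open(STATE_FILE, "w") as f:
            f.write(str(time.time()))

    if __name__ == "__main__":
        incremental_backup()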

Deduplication

In order to understand deduplication, you need to forget everything you know about tape. Enterprise data is highly redundant, with identical files or data stored within and across systems. Traditional backup methods magnify this by storing all of that redundant data over and over again. Deduplication is the process of analyzing files and databases at the block level and storing only the unique blocks of data, eliminating the redundancy. Sounds easy, right? Well, not so fast. First, you need to know that not all deduplication is the same. There are two types: inline and post-process. Inline deduplication identifies duplicate blocks as they are written to disk. Post-process deduplication deduplicates data after it has been written to disk.
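
As a rough illustration of the block-level idea (the fixed 4 KB block size and the in-memory dictionary are simplifying assumptions; real products use more sophisticated chunking and on-disk indexes), the sketch below hashes each block and stores a block only the first time its content is seen.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative fixed block size

    def deduplicate(data: bytes, block_store: dict) -> list:
        """Split data into blocks, store only the unique ones, and return
        the 'recipe' (ordered list of block hashes) needed to rebuild it."""
        recipe = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Store a block only the first time its content is ever seen.
            if digest not in block_store:
                block_store[digest] = block
            recipe.append(digest)
        return recipe

    # Usage: two nearly identical files share almost all of their blocks,
    # so the second one adds very little to the store.
    store = {}
    recipe_v1 = deduplicate(b"A" * 40_000, store)
    recipe_v2 = deduplicate(b"A" * 40_000 + b"one new paragraph", store)
    print(f"{len(recipe_v1) + len(recipe_v2)} blocks referenced, "
          f"{len(store)} unique blocks stored")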

Inline deduplication is considered more efficient in terms of overall storage requirements because non-unique or duplicate blocks are eliminated before they’re written to disk. Because duplicate blocks are eliminated, you don’t need to allocate enough storage to write the entire data set for later deduplication. However, inline deduplication requires more processing power because it happens “on the fly”; this can potentially affect storage performance, which is a very important consideration when implementing deduplication on primary storage. On the other hand, post-process deduplication doesn’t have an immediate impact on storage performance because deduplication can be scheduled to take place after the data is written. However, unlike inline deduplication, post-process deduplication requires the allocation of sufficient data storage to hold an entire data set before it’s reduced via deduplication.

In order to remain competitive, many tape based backup software providers have stepped into the deduplication arena. Most write to disk just as they do to tape and then run post-process deduplication to minimize the disk footprint.

A primary concern with both inline and post-process deduplication is that they require streaming the data across the LAN or WAN to disk (the target), which consumes a considerable amount of bandwidth. As deduplication has evolved, a few vendors now offer source-based deduplication in addition to target-based deduplication. This is the process of deduplicating at the client (the source server) and then streaming only the unique blocks of data to the target (the backup server). Taking it a step further, once the data hits the target, the target can perform global (inline) deduplication, comparing incoming blocks with the blocks already written to disk and writing only those that are unique. Rather than performing inline deduplication on 100% of the data, the target may only need to compare 1% or less, eliminating the concern about processing power. As you can imagine, streaming and writing only the unique blocks of data significantly reduces the daily network bandwidth and storage required.
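
A simplified sketch of that exchange follows (the in-memory target store and fixed-size blocks are assumptions made for illustration; real products negotiate hashes over the network and typically use variable-length chunking). The source hashes its blocks locally, checks which hashes the target already holds, and transmits only the blocks the target has never seen.

    import hashlib

    BLOCK_SIZE = 4096  # illustrative block size

    def hash_blocks(data: bytes) -> dict:
        """Source side: chunk and hash locally; only hashes need to leave the client at first."""
        blocks = {}
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            blocks[hashlib.sha256(block).hexdigest()] = block
        return blocks

    def replicate(data: bytes, target_store: dict) -> None:
        """Send a backup to the target, transmitting only never-before-seen blocks."""
        local = hash_blocks(data)
        # Global deduplication: determine which hashes the target does not already hold.
        missing = [h for h in local if h not in target_store]
        for h in missing:  # only these blocks cross the LAN/WAN
            target_store[h] = local[h]
        print(f"{len(local)} unique blocks at source, {len(missing)} sent to target")

    # Usage: the second, nearly identical nightly backup sends almost nothing.
    target = {}
    replicate(b"member records " * 10_000, target)                      # initial seed
    replicate(b"member records " * 10_000 + b"one new record", target)  # next night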

Virtual Environments

VMware changes your server and application IT environment. Server utilization has commonly run as low as 5 percent to 20 percent. Because virtualization can make a single physical server act like multiple logical servers, it can improve server utilization by combining numerous computing resources on a single server. VMware allows users to run 10 or more virtual machines on a single server, increasing server utilization to 70 percent or more.

Virtual server backups can be accomplished using a traditional approach with conventional backup software. The backup software is simply installed and configured on each virtual machine, and backups will run normally to any conventional backup target, including tape drives, virtual tape libraries, or disk storage. However, applying traditional backup tactics to virtual server backups does have drawbacks. The most significant challenge is resource contention. Backups demand significant processing power, and the added resources needed to execute a backup may compromise the performance of that virtual machine and all virtual machines running on the system—constraining the VMware host server’s CPU, memory, disk, and network components—and often making it impossible to back up within available windows.

Backup processes have evolved to deliver greater efficiencies in your highly consolidated environment. How is this possible with larger workloads and shared resources?

The key to making VMware infrastructure backup as efficient as possible is source-based global deduplication.

Backing up at the source can quickly and efficiently protect virtual machines by sending only the changed segments of data on a daily basis, providing up to 500 times daily reduction in network resource consumption compared to traditional full backups. Source based deduplication also reduces the traditional backup load—from up to 200 percent weekly to as little as 2 percent weekly—dramatically reducing backup times.

Some of the more sophisticated backup solutions can back up at the guest level—an individual virtual machine—or at a VMware Consolidated Backup server. In addition, disk based deduplication software negates the need for transporting tapes to offsite repositories for disaster-recovery or compliance purposes by providing remote backup immediately offsite via the cloud.

Second, source-based deduplication provides the optimal granularity for finding changes anywhere within a virtual machine disk (VMDK) file, and this is where target-based deduplication alone fails to deliver.

Recovery

As I have stated in prior blogs, backups are the means to recovery. Once data has been deduplicated, performing a recovery requires what is called a rehydration process. This is the process of putting all of the pieces back together again, and as you can imagine, some software performs it much more efficiently than others.

Some target-based solutions store multiple full revisions before deduplicating so that, in the event of a recovery, they do not have to rehydrate at all, since the rehydration process can take so long. If you are considering backing up to the cloud, remember that even once your initial backup has been seeded (fully written to disk) and your daily backups are running reasonably fast, a recovery still requires rehydrating the entire backup and pulling it across the internet. This can add hours or even days to your recovery, depending on factors such as rehydration time and bandwidth. If it is your core system, waiting several hours or even days to recover is not an option.
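
Conceptually, rehydration just walks a file’s recipe of block references in order and stitches the original bytes back together from the deduplicated store, as in the hypothetical sketch below (the names and data are made up; real products add compression, encryption and parallel fetches). When that store sits on the other side of the internet, every fetched block adds to the recovery time.

    import hashlib

    def rehydrate(recipe: list, block_store: dict) -> bytes:
        """Recovery side: fetch every block the recipe references, in order,
        and reassemble the original bytes."""
        return b"".join(block_store[digest] for digest in recipe)

    # Usage with a tiny hand-built store (hashes as produced at backup time).
    blocks = [b"member #1001 ...", b"member #1002 ...", b"member #1001 ..."]
    store = {hashlib.sha256(b).hexdigest(): b for b in blocks}    # 2 unique blocks stored
    recipe = [hashlib.sha256(b).hexdigest() for b in blocks]      # 3 blocks referenced
    print(rehydrate(recipe, store) == b"".join(blocks))           # True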

For this very reason, many vendors are now offering a hybrid approach, which places a backup appliance locally (at the credit union) to allow for much faster recovery. The appliance then replicates off-site to the cloud provider.

Backing up to The Cloud

Credit unions have been slower than most to adopt the cloud, which is no surprise, since credit unions are by nature very conservative. However, we have passed the tipping point; more and more credit unions are moving services in that direction, and backups are no exception. When selecting a backup provider, it is important to understand how the majority of cloud providers price their service. Since deduplication creates a much smaller footprint, pricing is typically based on the amount of data stored in the cloud. The issue is that nobody truly knows what that number will be until you have backed up all of your data. This is where it gets complex.

There are two types of data: structured and unstructured. Unstructured data consists of typical file-system files, such as Word and Excel documents. Structured data is primarily databases: Exchange, domain controllers, SQL, Oracle and so on. On average, roughly 70% of the data at most businesses is unstructured, and unstructured data deduplicates much more efficiently than structured data. To estimate your deduplication footprint, the service provider must gather details about your data and calculate the percentage of structured versus unstructured data.

Additionally, retention is a key factor: once the seed is calculated, you have to factor in the average daily change rate and multiply it by your defined retention policies. You also need to factor in average annual data growth. As you can see, this becomes highly complex; if it is not calculated accurately, you can sign on expecting to pay one amount and end up paying another. Some software also deduplicates much more efficiently than other software, so although one vendor may quote a lower price per GB or TB, that vendor may end up storing two to three times more data, essentially costing you more. It is very important to demo the software before making a long-term commitment and, ideally, to choose a vendor that understands credit unions.
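
As a back-of-the-envelope illustration only (every figure below is a hypothetical assumption, not a quote from any provider or a measurement of your data), the stored footprint might be estimated from the deduplicated seed, the daily change rate, the retention window and annual growth.

    # Hypothetical inputs; replace them with figures from your own environment.
    raw_data_tb       = 10.0   # total protected data
    dedup_ratio       = 4.0    # varies widely with the structured/unstructured mix
    daily_change_rate = 0.02   # 2% of the deduplicated seed changes per day
    retention_days    = 90     # defined retention policy
    annual_growth     = 0.20   # 20% data growth per year

    seed_tb    = raw_data_tb / dedup_ratio
    changes_tb = seed_tb * daily_change_rate * retention_days
    year_one   = (seed_tb + changes_tb) * (1 + annual_growth)

    print(f"Estimated seed: {seed_tb:.1f} TB")
    print(f"Retained daily changes: {changes_tb:.1f} TB")
    print(f"Projected footprint after one year: {year_one:.1f} TB")

Small errors in the assumed deduplication ratio or change rate compound across the retention window, which is another reason to demo the software against your own data before committing.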

Disaster Recovery

One common challenge credit unions encounter after selecting a cloud backup provider is how to transport their data to their disaster recovery service provider in a timely manner. Also, will the DR provider know what to do with it once it arrives?

More and more disaster recovery service providers are offering backup solutions, and it just makes sense to have your data stored at the site where the recovery will be performed, avoiding a logistics nightmare. Not to mention, the last thing you want is a DR provider fumbling around trying to figure out how to use someone else’s software; they need to be experts with the tools they will use to perform the recovery. The key is to ensure that they are capable of meeting all of your recovery needs, that they are security conscious and that they undergo a regular SSAE examination.

As you can see, technology is changing rapidly, and backup software is evolving to keep pace. If you are still using tape, struggling with uptime or just dissatisfied overall with your current backup, I hope this article helps guide you in the right direction.