
Cloud Computing and Cloud Infrastructure Myths


Most common cloud computing questions

The most common question we hear about cloud computing is “What is the cloud?” There are a lot of terms, vendor-specific definitions, and confusion around cloud infrastructure, so we’ll first define cloud computing before moving on.

Solid Logic’s cloud computing definition: instantly scalable, programmatically controllable compute, storage, and networking resources.

This definition is also commonly referred to as Infrastructure-as-a-Service (IaaS). IaaS abstracts away the physical aspects of IT infrastructure and provides a set of application programming interfaces (APIs) to control every aspect of that infrastructure. This is very powerful: it lets you manage what amounts to a data center from a development environment or a software application.
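As a minimal sketch of what that looks like in practice, here is a server being launched and destroyed entirely through an API, using the AWS SDK for Python (boto3). The region, AMI ID, and instance type below are placeholders, not recommendations.

```python
import boto3

# Connect to the EC2 API -- the region is a placeholder.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a server with a single API call: no hardware to rack or cable.
response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder AMI ID
    InstanceType="t2.micro",  # placeholder instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")

# Tear it down just as programmatically when it's no longer needed.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The same API surface covers storage, networking, DNS, and monitoring, which is what makes the infrastructure “programmatically controllable” in the sense of our definition above.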

Many of the people we speak to have never used Amazon Web Services (AWS), Rackspace Cloud, or another IaaS cloud provider, for a variety of reasons. We’ve used IaaS for everything from high-performance computing to video hosting to low-cost development/test and other non-production infrastructure. That experience serves as a guide to which workloads fit well within IaaS and which do not, and it allows us to prescribe a customized, phased approach to cloud integration that minimizes cost and business risk.

The next comment that normally comes up when speaking to people about cloud infrastructure is: “The cloud sounds great, and I hear it saves a lot of money, but it’s just too risky/insecure/complex for us.”

Organizations that have not yet embraced IaaS, or “the cloud,” generally hold back for similar reasons. Most of those reasons center on perceptions that may be outdated or simply untrue, depending on the scenario.

In our experience, their reasons generally fall into one of the categories below:

  • Cloud Performance (CPU, disk, network, bandwidth, etc.) – “I heard cloud servers are slow. The disks are slow and unpredictable.”
  • Budgeting/cost modeling – “How do I know or estimate what my costs will be?” (A simple cost-model sketch follows this list.)
  • Cloud Security – “It can’t be secure. It’s called ‘public cloud.’ Can other people access my files or servers?”
  • Cloud Reliability – “Netflix went down, so it’s not reliable. What do I do if it goes down?”
  • Cloud Compliance – “No way, we can’t do it – we’re subject to ABC, DEF, or XYZ compliance requirements.”
  • Cloud Audit requirements – “No way, the auditors will never buy in to this.”
  • Employee training – “How do I find people to manage this?”
  • Steep learning curve – “How do I get started? It seems really complex.”
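
To make the budgeting question above concrete, here is a simplified cost model sketched in Python. Every rate in it is a hypothetical placeholder, not a quoted price; check your provider’s current price sheet before budgeting.

```python
HOURS_PER_MONTH = 730  # average hours in a calendar month

def monthly_cost(servers, hourly_rate, storage_gb, storage_rate,
                 egress_gb, egress_rate):
    """Estimate monthly spend for compute + storage + outbound bandwidth."""
    compute = servers * hourly_rate * HOURS_PER_MONTH
    storage = storage_gb * storage_rate          # $/GB-month
    bandwidth = egress_gb * egress_rate          # $/GB transferred out
    return compute + storage + bandwidth

# Example: 4 servers at $0.10/hr, 500 GB at $0.10/GB-month, and 200 GB of
# outbound transfer at $0.09/GB -- all placeholder rates.
print(f"${monthly_cost(4, 0.10, 500, 0.10, 200, 0.09):,.2f} per month")
```

The point is less the numbers than the structure: once usage is metered by the hour and the gigabyte, a budget becomes a short formula you can re-run as your estimates change.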

Cloud misperceptions abound

As the saying goes, perception is reality, and there are a lot of misconceptions that increase fear of the technology and prevent people from moving suitable workloads to the cloud.

Popular news sources perpetuate the myths about cloud computing. It seems that every time Amazon Web Services (AWS), which is by far the largest cloud provider, has any sort of hiccup or downtime, reporters pile on with claims that cloud infrastructure is unreliable and breaks too often. Here is a link to a Google News search for this: https://www.google.com/news?ncl=dvYSd5T83PVQigMPa1-2GMz-snaDM&q=aws+down&lr=English&hl=en


How we’re addressing these concerns

We’re going to address each of these concerns by sharing what we’ve learned along the way. We hope to shed some light on what seems to be an increasingly complicated market, with more terminology and complex jargon appearing every day.

  1. We’re working on a comprehensive cloud computing benchmarking report. The report will make an apples-to-apples comparison between cloud instance sizes and existing in-house infrastructure, using common benchmarking tests that anyone can replicate in their own environment (a small example appears below). It will allow organizations to make an informed business decision on whether they could benefit from integrating “the cloud” into their IT infrastructure and software development approach. Sign up here for a copy of the cloud computing benchmark report.
  2. We’re going to present cost models and budgets for common scenarios. We’ll integrate both tangible and intangible costs and benefits that we’ve searched for but haven’t seen included anywhere else. Contact us for a cost model for a specific use case.

In all, we’ll address each of the bullet points above in detail. Stay tuned…
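
In the meantime, here is a minimal sketch of the kind of test the report will use: a sequential disk write benchmark anyone can run on both in-house hardware and a cloud instance. For real comparisons use a dedicated tool such as fio or sysbench; this only illustrates the apples-to-apples idea.

```python
import os
import time

def sequential_write_mb_per_s(path, total_mb=256, block_kb=1024):
    """Write total_mb of data in block_kb chunks and report MB/s."""
    block = os.urandom(block_kb * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk before stopping the clock
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

print(f"{sequential_write_mb_per_s('bench.tmp'):.1f} MB/s sequential write")
```

Running the identical script on each environment is what makes the comparison apples-to-apples: same workload, same measurement, different hardware underneath.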



Hadoop – Latency = Google Dremel = Apache Drill???

Hadoop is one of the IT buzzwords of the day, and for good reason: it lets an organization extract meaning and actionable analysis from “big data” that was previously unusable because of its sheer size. This technology certainly solves a lot of problems, but…

What happens if your problem doesn’t easily fit into the Hadoop framework?

Most of the work we do in the financial sector falls into this category: it just doesn’t make sense to re-write existing code to fit the Hadoop paradigm. Example case study here and blog post here.
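To see why a rewrite is non-trivial, it helps to look at the shape MapReduce forces on a computation. Below is the canonical word-count example, sketched with the mrjob Python library (our choice for illustration here; production Hadoop jobs are often raw Java). Every problem has to be decomposed into independent map and reduce steps, and much existing financial code simply isn’t structured that way.

```python
from mrjob.job import MRJob

class MRWordCount(MRJob):
    """Canonical MapReduce example: count word occurrences across a corpus."""

    def mapper(self, _, line):
        # Map step: emit (word, 1) for every word, independently per line.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Reduce step: sum the counts emitted for each distinct word.
        yield word, sum(counts)

if __name__ == "__main__":
    MRWordCount.run()
```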

As in any business, new ideas lose their edge while they sit on the shelf or stall in execution, primarily because of opportunity costs and the growing chance that a competitor builds a product around the same idea. The faster a concept can be brought to market, the larger the advantage for its creator. This is especially true in the financial trading tech sector, where advancements are measured in minutes, hours, or days rather than weeks or months. Because of this, we’re always looking for new and creative ways to solve data and “big data” problems more quickly.

Enter Apache Drill

One of the more interesting articles we came across recently focuses on a new Apache project that aims to reduce the time it takes to get answers out of a large data set. The project is named Apache Drill, and here is a quick overview slide deck.

The Apache Drill project aims to create a tool similar to Google’s Dremel to enable faster queries across large datasets. Here is another take on the announcement from Wired. We’re excited about this because of the direct impact it will have on our work, specifically the workloads that require real-time or near-real-time answers.
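Drill was only just announced, so there is no finished API to show yet. As a purely hypothetical sketch of the experience it aims to deliver (interactive SQL over raw files, no ETL into a warehouse first), here is what querying a local Drill-like service over an assumed HTTP endpoint might look like; the URL, payload shape, and file path are all illustrative assumptions, not a documented interface.

```python
import requests

# Hypothetical: ad-hoc SQL directly over a raw JSON file of trade records.
query = "SELECT symbol, AVG(price) FROM dfs.`/data/trades.json` GROUP BY symbol"

resp = requests.post(
    "http://localhost:8047/query.json",         # assumed endpoint
    json={"queryType": "SQL", "query": query},  # assumed payload shape
)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)
```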

[Video: Apache Drill overview]
