2012 Second Presidential Debate Word Cloud

We like to do some ad-hoc text analysis from time to time to break things up a bit and work with new tools and software. We've done similar work with Twitter hashtag text analysis in our post Michigan Lean Startup Conf. Twitter Visualizations.

In the spirit of the upcoming election and debates, I thought it would be interesting to put out something to summarize the words used by both of the candidates in the 2012 Second Presidential Debate on October 16, 2012. We grabbed the text from here. We're not diving into anything overly complex here, but it does put last night's debate in a different context that we found interesting.

The way the graphic turned out is interesting; the most prominent words are: president, governor, jobs, that's, people.
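As a rough illustration of how top terms like these can be identified, here is a minimal Python sketch. The transcript file name and the stopword list are assumptions for illustration, not our production code.

```python
import re
from collections import Counter

# Load the debate transcript (file name is hypothetical).
with open("debate_transcript.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Simple tokenization: keep alphabetic words (and apostrophes),
# then drop a handful of very common stopwords.
stopwords = {"the", "and", "to", "of", "a", "in", "that", "i", "we", "is", "it"}
words = [w for w in re.findall(r"[a-z']+", text) if w not in stopwords]

# Print the ten most frequent remaining words.
for word, count in Counter(words).most_common(10):
    print(word, count)
```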

Link to the WordCloud: http://solidlogic.com/wp-content/uploads/2012/10/wordcloud_debate_transcript.png

2012 Second Presidential Debate Word Cloud


How to build a word cloud

The easiest way to build a word cloud is to use one of the great free online tools like Wordle to build the graphic. If you need a more customized approach or need to create something like this in software, several tools make it a lot easier. More details to come on the methods and code behind this later on, but it's based on Python and R, both of which we use quite a bit for data analysis and development projects. The code for this was created by me and our CIO, Michael Bommarito. It's based on some of the work he's previously made available here: Wordcloud of the Arizona et al. v. United States opinion and Archiving Tweets with Python.
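If you'd rather script the graphic than use Wordle, here is a minimal Python sketch using the open-source wordcloud package. The package choice and file names are assumptions for illustration; they are not the exact Python/R pipeline described above.

```python
from wordcloud import WordCloud, STOPWORDS

# Read the transcript text (file name is hypothetical).
with open("debate_transcript.txt", encoding="utf-8") as f:
    text = f.read()

# Build the cloud; WordCloud handles tokenization, stopword removal,
# and frequency-based sizing internally.
cloud = WordCloud(width=1200, height=800,
                  background_color="white",
                  stopwords=STOPWORDS).generate(text)

# Write the image to disk.
cloud.to_file("wordcloud_debate.png")
```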


To get customized analysis like this, or to ask us anything else, please contact us.

 

Event: AWS Michigan Meetup (Presenting) – 10/09/12

Legal Informatics w/ CloudSearch & High-Performance Financial Market Apps

Solid Logic's CEO, Eric Detterman, and CIO, Mike Bommarito, will be presenting at the AWS Michigan Meetup (http://www.awsmichigan.org/events/85530922/) at Tech Brewery (map) in Ann Arbor, MI. I'll be presenting on how we use Amazon Web Services (AWS) in the quantitative financial trading space, with a case study and more.

Mike will be presenting on Legal Informatics using AWS CloudSearch. He will also be demonstrating an early prototype of a private enterprise information search and e-discovery application we're creating. Mike also has a copy of his presentation available here.

Event Date: Tuesday, October 9th, 2012 @ 6:30pm

Event: AWS Michigan Meetup

More info: http://www.awsmichigan.org/events/85530922/

Below is a copy of my presentation so you can view it at your convenience.


Hadoop – Latency = Google Dremel = Apache Drill???

Hadoop is one of the current IT buzzwords of the day, and for good reason: it allows an organization to get meaningful, actionable analysis out of "big data" that was previously unusable because of its sheer size. This technology certainly solves a lot of problems, but...

What happens if your problem doesn't easily fit into the Hadoop framework?

Most of the work that we do in the financial sector falls into this category. It just doesn't make sense to rewrite existing code to fit the Hadoop paradigm. Example case study here and blog post here.

As in any business, new ideas lose their 'edge' as they sit on the shelf or get delayed in execution, primarily because of opportunity costs and the increased chance of a competitor building a product around the idea. The faster a concept can be brought to market, the larger the advantage for its creator. This is especially true in the financial trading tech sector, where advancements are measured in minutes, hours, or days rather than weeks or months. Because of this, we're always looking for new and creative ways to solve data and "big data" problems more quickly.

Enter Apache Drill

One of the more interesting articles we came across recently focused on a new Apache project that aims to reduce the time it takes to get answers out of a large data set. The project is named Apache Drill, and here is a quick overview slide deck.

The Apache Drill project aims to create a tool similar to Google's Dremel to facilitate faster queries across large datasets. Here is another take on the announcement from Wired. We're excited about this because of the direct impact it will have on our work, specifically on workloads that require real-time or near-real-time answers.

Apache Drill Video Overview
