Wednesday, 17 May 2017

UWP MetroLog using JsonPostTarget & HTTP Server


This blog post deals with MetroLog, a lightweight logging framework designed for Windows Store and UWP apps; you can learn more about it on GitHub.


This framework supports several logging targets, and most of them are covered by documents on the internet and in the official documentation. For the JSON target, however, support is very limited at the time of writing. Using JsonPostTarget we can stream log messages back to an HTTP/HTTPS endpoint of our own design.


So here is some UWP application code to showcase how we can use JSON target logging.
Add the MetroLog dependency in project.json as "MetroLog": "1.0.1" (the version may change in the future); you can also install MetroLog from the NuGet package gallery.
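For reference, the dependencies section of project.json would look something like this (only the MetroLog entry matters here; any other packages are whatever your project already uses):

```json
{
  "dependencies": {
    "MetroLog": "1.0.1"
  }
}
```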

Note: enable the Private Networks (Client & Server) or Internet (Client & Server) capability in the project's Package.appxmanifest.
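In the manifest XML those capabilities look like this (you need only one of the two; both are shown for illustration):

```xml
<Capabilities>
  <Capability Name="internetClientServer" />
  <Capability Name="privateNetworkClientServer" />
</Capabilities>
```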

In the code we have two major modules:
  • HTTP Server
  • Logger
The HTTP server uses Windows.Networking.Sockets.StreamSocketListener to receive the JSON payloads that JsonPostTarget posts over the socket.

The logger section initializes MetroLog with a JsonPostTarget, giving it the URL to post to, the range of log levels to save, and the batching threshold, as shown below.

using MetroLog;
using MetroLog.Targets;
using System;

public class JsonLogger
{
    public void InitializeJsonLog(int listenerPort)
    {
        var configuration = new LoggingConfiguration();

        // Post every entry to the local HTTP server.
        var url = new Uri(string.Format("http://localhost:{0}/WriteLog", listenerPort));

        // The first argument is the batching threshold: post after every 1 entry.
        var jsonPostTarget = new JsonPostTarget(1, url);

        configuration.AddTarget(LogLevel.Trace, LogLevel.Fatal, jsonPostTarget);
        configuration.IsEnabled = true;

        LogManagerFactory.DefaultConfiguration = configuration;
    }
}

The HTTP server has two methods to start and stop the socket listener, plus private methods to parse the request content and save the log message to the local folder.

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
using Windows.Networking.Sockets;
using Windows.Storage;

public class HTTPServer : IDisposable
{
    private readonly int port;
    private readonly StreamSocketListener listener;
    private readonly SemaphoreSlim sSlim;

    public HTTPServer(int listenerPort)
    {
        port = listenerPort;
        listener = new StreamSocketListener();
        listener.ConnectionReceived += (s, e) => ProcessRequestAsync(e.Socket);
        sSlim = new SemaphoreSlim(1);
    }

    public async void StartServerAsync()
    {
        await listener.BindServiceNameAsync(port.ToString());
    }

    public async void StopServerAsync()
    {
        await listener.CancelIOAsync();
    }

    public void Dispose()
    {
        listener.Dispose();
        sSlim.Dispose();
    }

    private void ProcessRequestAsync(StreamSocket socket)
    {
        string content;
        using (var input = socket.InputStream)
        using (var reader = new StreamReader(input.AsStreamForRead()))
        {
            // Skip the request line, then parse the headers and body.
            var requestHeader = reader.ReadLine();
            content = ParseRequest(reader, new Dictionary<string, string>());
        }

        if (!string.IsNullOrWhiteSpace(content))
        {
            WriteToLog(content);
        }
    }

    private string ParseRequest(StreamReader reader, Dictionary<string, string> headers)
    {
        bool finishedParsingHeaders = false;
        while (true)
        {
            string line = "";
            if (!finishedParsingHeaders)
            {
                line = reader.ReadLine();
                if (line == null)
                {
                    break;
                }
            }
            else
            {
                // Headers are done: read the body using Content-Length.
                int contentLength = headers.ContainsKey("Content-Length") ? int.Parse(headers["Content-Length"]) : 0;
                if (contentLength > 0)
                {
                    char[] body = new char[contentLength];
                    reader.ReadBlock(body, 0, contentLength);
                    return new string(body);
                }
                break;
            }

            if (String.IsNullOrWhiteSpace(line))
            {
                // A blank line separates the headers from the body.
                finishedParsingHeaders = true;
            }
            else
            {
                var splitHeader = line.Split(new char[] { ':' }, 2);
                headers[splitHeader[0].Trim()] = splitHeader[1].Trim();
            }
        }
        return string.Empty;
    }

    private async void WriteToLog(string content)
    {
        await sSlim.WaitAsync();
        try
        {
            var filename = String.Format("Log - {0:yyyyMMdd}.log", DateTime.Now);
            var logfile = await ApplicationData.Current.LocalFolder.CreateFileAsync(filename, CreationCollisionOption.OpenIfExists);
            await FileIO.AppendTextAsync(logfile, content);
        }
        finally
        {
            sSlim.Release();
        }
    }
}

From the application, first start the HTTP server, then log some messages and check the local storage folder for the log file and its content.
The log location will be C:\Users\<user>\AppData\Local\Packages\<AppName>\LocalState\

Main page code in the application:

Below is sample code to initialize the logger and the HTTP server.

public sealed partial class MainPage : Page
{
    private HttpServer.HTTPServer httpServer;
    private int listenerPort = 8085;

    public MainPage()
    {
        this.InitializeComponent();
        this.Loaded += MainPage_Loaded;
    }

    private void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        httpServer = new HttpServer.HTTPServer(listenerPort);
        new Logger.JsonLogger().InitializeJsonLog(listenerPort);
    }

    private void btnStart_Click(object sender, RoutedEventArgs e)
    {
        httpServer.StartServerAsync();
    }

    private void btnStop_Click(object sender, RoutedEventArgs e)
    {
        httpServer.StopServerAsync();
    }

    private void btnLog_Click(object sender, RoutedEventArgs e)
    {
        var logger = LogManagerFactory.DefaultLogManager.GetLogger(typeof(MainPage));
        logger.Debug("test debug");
        logger.Info("test info");
        logger.Warn("test warn");
        logger.Error("error message", new Exception("test exception"));
        logger.Fatal("fatal message", new Exception("test fatal exception"));
    }
}

To try out the above code:

  1. Run the application.
  2. Click Start button to start the HTTP Server.
  3. Click the Click Me To Log Sample button to write some sample log information.
  4. Click Stop button to stop the HTTP Server.
  5. Now check the log file in the above mentioned location.
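If you would rather verify from code than from the file system, here is a minimal sketch that reads the day's log back. It assumes the same file-name pattern the WriteToLog method uses, and it must run in an async context:

```csharp
var filename = string.Format("Log - {0:yyyyMMdd}.log", DateTime.Now);
var file = await ApplicationData.Current.LocalFolder.GetFileAsync(filename);
string text = await FileIO.ReadTextAsync(file);

// Dump the accumulated log entries to the debugger output window.
System.Diagnostics.Debug.WriteLine(text);
```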

You can also download the source code from here.

Thanks for Reading!! Happy Coding!!

Thursday, 23 January 2014

HTTP POST with Speedway Connect Software Using ASP.NET C#

Good Day!!

This blog is all about an RFID reader program. For my project I needed to use the Speedway Connect software to post data from the Speedway Revolution R420 reader to a given URL. To learn about the Speedway Connect software, please click here.

The Speedway Connect blog above only provides an HTTP POST sample in PHP. I googled for a .NET sample but unfortunately couldn't find one, so I tried on my own and after some time reached the goal. I would like to share it in my blog so that it will be useful to others.

In this blog I am just going to show my piece of code, not discuss the Speedway Connect software itself; readers can learn about it from the link above. Let's dive into the code.

Create a simple ASP.NET website. In that project I use the Default.aspx page; in the code-behind, get the HTTP POST variables (posted from the reader) and write them to a text file. Below is a screenshot of the code.
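In case the screenshot is not visible, a rough sketch of that code-behind could look like this. The form field name `field_values` is an assumption for illustration only; use the variable names Speedway Connect actually posts:

```csharp
using System;
using System.IO;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // "field_values" is a hypothetical key; substitute the
        // field names configured in Speedway Connect.
        string tagData = Request.Form["field_values"];

        if (!string.IsNullOrEmpty(tagData))
        {
            // Append each POST from the reader to a text file.
            File.AppendAllText(Server.MapPath("~/App_Data/TagReads.txt"),
                               tagData + Environment.NewLine);
        }
    }
}
```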

After it compiles successfully, host the website in your IIS and add the link in the Speedway Connect software on the reader UI as shown below.

Now start your reader and check the text file; hopefully it will have the tag reads recorded.

Thanks for learning.

Happy Coding!!!

Prabhakaran Soundarapandian

Monday, 26 August 2013


SignalR .net

In this post I am sharing some magical stuff I learned recently that will be very useful. Let's dive in!

SignalR is a groundbreaking open-source project from the .NET community. It offers real-time communication to a variety of platforms, is easy to use and set up, and scales immensely.

Initially it was developed for ASP.NET, and it was later extended to Silverlight and WPF for real-time notification/push scenarios. Pushing data from the server to the client (not just browser clients) has always been a tough problem. SignalR makes it dead easy and handles all the heavy lifting for you.

What is ASP.NET SignalR?

ASP.NET SignalR is a new library for ASP.NET developers that makes it incredibly simple to add real-time web functionality to your applications. What is "real-time web" functionality? It's the ability to have your server-side code push content to the connected clients as it happens, in real-time.

You may have heard of WebSockets, a new HTML5 API that enables bi-directional communication between the browser and server. SignalR will use WebSockets under the covers when it's available, and gracefully fallback to other techniques and technologies when it isn't, while your application code stays the same.

SignalR also provides a very simple, high-level API for doing server to client RPC (call JavaScript functions in your clients' browsers from server-side .NET code) in your ASP.NET application, as well as adding useful hooks for connection management, e.g. connect/disconnect events, grouping connections, authorization.

What can you do with ASP.NET SignalR?

SignalR can be used to add any sort of "real-time" web functionality to your ASP.NET application. While chat is often used as an example, you can do a whole lot more. Any time a user refreshes a web page to see new data, or the page implements Ajax long polling to retrieve new data, is a candidate for using SignalR.

It also enables completely new types of applications that require high-frequency updates from the server, e.g. real-time gaming. The ShootR game is a great example.

For more information and about the latest news you can go to SignalR.

I have done a beginner hands-on exercise inspired by the instruction video. The example moves a rectangular shape in one client, and the move is reflected in all the connected clients (browsers). The server- and client-side code live in the same project: the client code is in a JavaScript file and the server code is in C#. Before trying out the sample, please install the SignalR package into your project; to install SignalR, refer to Getting SignalR-Ready.

Step 1:

Create a new empty web application, right-click the project, and click Manage NuGet Packages. Search for SignalR and install the ASP.NET SignalR sample; the sample stock-ticker application will be downloaded into your application and will compile successfully. Now create a folder MoveShapeDemo and create the three files shown in the second image.

Manage Nuget Package

MoveShapeDemo Files

Step 2:

Now you can write your server-side code in the MoveShape.cs file as shown in the image below. Please see the comments in the code for better understanding.
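In case the image does not load, the server-side hub was along these lines. This is a sketch following the MoveShape demo's conventions, not the exact code from the screenshot:

```csharp
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

[HubName("moveShape")]
public class MoveShape : Hub
{
    // Called by a client whenever the user drags the shape.
    public void UpdateModel(int x, int y)
    {
        // Broadcast the new position to every connected client
        // except the one that sent it.
        Clients.Others.shapeMoved(x, y);
    }
}
```

On the client, `shapeMoved` is the JavaScript callback each browser registers to reposition its copy of the shape.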

Step 3:

Now you can write your client-side code in the MoveShape.js file as shown in the image below.
Please see the comments in the code for better understanding.

Step 4 :

Now you can write your view (HTML) page in the MoveShape.htm file as shown in the image below.
Please see the comments in the code for better understanding.

That's it! Run your application in at least two browsers (IE > 8), then try moving the square in one browser; the movement should be reflected in the other clients (browsers) too.

Hope you enjoyed this post.

Thanks for reading. Happy coding!!

Tuesday, 26 March 2013

A CPU Friendly Infinite loop in C#

To run an infinite loop that does some processing, we typically use a while loop, System.Threading, System.Timers, and so on. Each of these has its own pros and cons, and some of them will trash CPU performance.

In this blog I am going to post a simple technique that utilizes less CPU. To achieve this we use EventWaitHandle from the System.Threading namespace. The following example shows the usage.

C# Console app Example :

private static void Main(string[] args)
{
    EventWaitHandle waithandler = new EventWaitHandle(false, EventResetMode.AutoReset, Guid.NewGuid().ToString());
    do
    {
        // Block for one second without burning CPU, then do the work.
        waithandler.WaitOne(TimeSpan.FromSeconds(1));
        Console.WriteLine("Entered");
        // ToDo: Something else if desired.
    } while (true);
}

The above example will print the word "Entered" every second, infinitely.

If you want to do the same asynchronously, there are lots of ways to do it; you can find one method in the example below.

C# Console app Example(Asynchronously) :

//Declare a delegate for the async operation.
public delegate void AsyncMethodCaller();

class Program
{
    private static void Main(string[] args)
    {
        // Create the delegate.
        AsyncMethodCaller caller = new AsyncMethodCaller(Program.AsyncKeepAlive);

        // Initiate the asynchronous call.
        IAsyncResult result = caller.BeginInvoke(null, null);

        // Keep the main thread alive so the async loop can run.
        Console.ReadLine();
    }

    private static void AsyncKeepAlive()
    {
        EventWaitHandle eventWaitHandle = new EventWaitHandle(false, EventResetMode.AutoReset, Guid.NewGuid().ToString());
        do
        {
            // Block for one second without burning CPU.
            eventWaitHandle.WaitOne(TimeSpan.FromSeconds(1));
            Console.WriteLine("Entered");
            // ToDo: Something else if desired.
        } while (true);
    }
}



Hope you enjoyed this post.
Thanks for reading. Happy coding!!

Friday, 1 March 2013


After a long time, I am back with some web-app stuff and got time to share my studies on Knockout JS, a JavaScript library.

Introduction to Knockout:

Knockout is a JavaScript library that helps you to create rich, responsive display and editor user interfaces with a clean underlying data model. Any time you have sections of UI that update dynamically (e.g., changing depending on the user’s actions or when an external data source changes), KO can help you implement it more simply and maintainably.

  • Elegant dependency tracking - automatically updates the right parts of your UI whenever your data model changes.
  • Declarative bindings - a simple and obvious way to connect parts of your UI to your data model. You can construct complex dynamic UIs easily using arbitrarily nested binding contexts.
  • Trivially extensible - implement custom behaviors as new declarative bindings for easy reuse in just a few lines of code.
  • Pure JavaScript library - works with any server or client-side technology
  • Can be added on top of your existing web application without requiring major architectural changes
  • Compact - around 13kb after gzipping
  • Works on any mainstream browser (IE 6+, Firefox 2+, Chrome, Safari, others)
  • Comprehensive suite of specifications (developed BDD-style) means its correct functioning can easily be verified on new browsers and platforms
For more information you can refer here.


This blog gives you a glance at what Knockout is, why to use it, and how to use it. If you are a newcomer to KO, or if you are juggling multiple JS libraries, this cheat sheet is a handy guide to get your KO karma flowing.

  • Knockout is a JavaScript Library (as opposed to Backbone.js which is a framework) that helps you improve the User Experience of your web application. 

  • Knockout provides two-way data binding using a ViewModel and DOM templating; it doesn’t deal with sending data to the server or routing.

  • You use Knockout as a drop-in enhancement library to improve usability and user experience, whereas a framework like Backbone is used ground-up in new applications (Single Page Apps are a good example).

  • You can create a KO ViewModel as a JavaScript object or as a function (known as a prototype). It is defined outside the jQuery document-ready.

  • ViewModel as a function
                   var myViewModel = function() {
                       this.Email = "";
                       this.Name = "Sumit";
                       this.LastName = "Maitra";
                       this.WebSite = "";
                   };

  • ViewModel as an object: you can follow the JavaScript Object Notation –
                var myViewModel = {
                    Email: "",
                    Name: "Sumit",
                    LastName: "Maitra"
                };

  • Observable Properties are functions: when dealing with observable values you have to end the property name with parentheses, like a function call, because observables are actually functions that are evaluated on bind.
  • Applying a ViewModel
              // In case the ViewModel is defined as a function
                    ko.applyBindings(new orderViewModel());
              // In case the ViewModel is defined as a JSON object
                    ko.applyBindings(myViewModel);

             Here ko is the global reference to Knockout that you get once you add the Knockout script reference to your page.

  • Observable Properties: KO has the concept Observable properties. If you define a property as an observable, then DOM elements bound to it, will be updated as soon as the property changes.
                   var myViewModel = {
                       Email: ko.observable("")
                   };

  • Observable Arrays: When you define an array of Json objects as Observable, KO refreshes DOM elements bound to it when items are added or removed from the Array.
                           var orderViewModels = function() {
                               // Binds to an empty array
                               this.Address = ko.observableArray([]);
                           };

          Note: when properties of an item in the array change, KO does not raise ‘modified’ events.

  • Assign value to observables: We saw how to declare observable properties above. When we have to assign a value to an observable in JavaScript, we assign it as if we were calling a function, for example: myViewModel.Email("john@example.com");

  • Simple Data Binding to properties
                  <input data-bind="value: Email" />
                  This binds the value of the input element to the Email property in the ViewModel.

  • Binding to Computed Values: binding KO to a function gives you the added flexibility of defining methods that do computation and return a computed value; KO can bind DOM elements to these methods as well.
       For example, if we wanted to bind the first name and last name and show them in a single DOM element, we could do something like this:
On the View Model

                        var myViewModel = function() {
                            this.Email = ko.observable("");
                            this.Name = ko.observable("Sumit");
                            this.LastName = ko.observable("Maitra");
                            // The computed reads the observables as function calls.
                            this.FullName = ko.computed(function () {
                                return this.LastName() + ", " + this.Name();
                            }, this);
                        };

       Binding to DOM Element

                        <label data-bind="text: FullName"></label>
  • Binding to an array of objects: KO can bind the DOM to an array in the view Model. We use KO’s foreach syntax as follows
                        <table>
                            <tbody data-bind="foreach: Address">
                            </tbody>
                        </table>

        You can do the same for an ordered <ol> or unordered <ul> list too. Once you have done the foreach binding, KO treats any DOM element inside it as part of a template that is repeated as many times as there are items in the list to which it is bound.
  • Binding elements of an array: Once you have bound an Array to a DOM element KO gives you each element in the array to bind against. So an Address object may be bound as follows:
                        <table>
                            <tbody data-bind="foreach: Address">
                                <tr><td>
                                    Street: <label data-bind="text: Street"></label>
                                    #: <label data-bind="text: Number"></label>
                                    City: <label data-bind="text: City"></label>
                                    State: <label data-bind="text: State"></label>
                                </td></tr>
                            </tbody>
                        </table>
  • Binding to properties other than text and value: Now let’s say we want to bind the WebSite element of the ViewModel an anchor tag
                        <a data-bind="attr: { href: WebSite }">
                            <span data-bind="text: Name"></span>
                        </a>

      What we are doing here is using Knockout’s attribute binding technique to bind the ‘href’ attribute to the URL in the WebSite property of the View Model. Using attribute binding we can bind to any HTML attribute we want to.

  • Getting KO Context in jQuery: Now let’s say we want to have a Delete button for each address. The following markup will add a button 
                        <table>
                            <tbody class="addressList" data-bind="foreach: Address">
                                <tr><td>
                                    Address: <label data-bind="text: $parent.AddressString(Street, Number, City)"></label>
                                </td><td>
                                    <button class="addressDeleter"></button>
                                </td></tr>
                            </tbody>
                        </table>

Now to handle the click event using jQuery: since the button is generated by the KO binding, we cannot use jQuery's normal click-handler assignment. Instead we use the parent container and assign a delegate as follows:

                        $(".addressList").delegate(".addressDeleter", "click",
                            function() {
                                var address = ko.dataFor(this);
                                // send the address to the server and delete it
                            });

As we can see above, the ko.dataFor(this) helper method in KO returns the object that was bound to that particular row of data, so it returns an Address object. If you need the entire ViewModel, you can use ko.contextFor(this).

Thanks for reading. Happy coding!!!!

Wednesday, 28 November 2012

A hotfix for the ASP.NET browser definition files in MS .NET Framework 4.0

This post describes a hotfix for the ASP.NET browser definition files that are included in the Microsoft .NET Framework 4.0.

This blog is mainly for ASP.NET developers who use IE 10, or Mozilla Firefox 4.0 or a later version of Firefox. There are two known issues so far, and you can get the current fix from Microsoft.

Issue 1

Consider the following scenario:
  • You use Windows Internet Explorer 10 to access an ASP.NET-based webpage.
  • The webpage starts a postback.

In this scenario, the postback fails, and you receive the following error message:
"Script Error encountered: '__doPostBack' is undefined"
Note: the webpage can start a postback in various ways. For example, a LinkButton control can start a postback.

Issue 2

Consider the following scenario:
  • You create an ASP.NET-based webpage that has the MaintainScrollPositionOnPostBack attribute set to True.
  • You use Mozilla Firefox 4.0 or a later version of Mozilla Firefox to open the webpage.
  • The webpage starts a postback.

In this scenario, the scroll position of the webpage is not maintained after postback.

This hotfix introduces updated definitions in the browser definition files for Internet Explorer and for Mozilla Firefox. The browser definition files are stored in one of the following folders, depending on the installed version of the Microsoft .NET Framework:

  • For 32-bit versions of the .NET Framework 4.0

  • For 64-bit versions of the .NET Framework 4.0
By default, ASP.NET uses sniffing technology for the user agent string to detect browsers. The browser definition files cover a certain range of browser versions. However, as the version numbers increase, ASP.NET might not recognize new versions of a browser by using the user agent string. In this case, ASP.NET might handle these versions as an unknown browser. For example, ASP.NET cannot recognize Windows Internet Explorer 10 that has the following user agent string:

Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
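For illustration, a custom browser definition that recognizes such user agent strings looks roughly like this (the id, parentID, and regular expression here are made up for the example; the hotfix ships its own corrected definitions):

```xml
<browsers>
  <browser id="IE10Example" parentID="Mozilla">
    <identification>
      <userAgent match="MSIE (?'version'\d+)" />
    </identification>
    <capabilities>
      <capability name="browser" value="IE" />
      <capability name="supportsCallback" value="true" />
    </capabilities>
  </browser>
</browsers>
```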

This hotfix is intended to correct only the problem described above. Apply it only to systems that are experiencing this specific problem.

After you apply this hotfix, you might have to restart the computer if any affected files are being used when you install this hotfix. 
To avoid a restart of the computer, shut down all web server applications for ASP.NET websites before you apply this hotfix.

Note : This hotfix does not replace a previously-released hotfix.

Courtesy : Microsoft

Thanks & Regards
Prabhakaran Soundarapandian

Sunday, 28 October 2012

Big Data Right Now: Five Trendy Open Source Technologies

Today I am going to share a new trend in technology. I got the information below from TechCrunch and thought of sharing it with everyone.

Big Data is on every CIO’s mind this quarter, and for good reason. Companies will have spent $4.3 billion on Big Data technologies by the end of 2012.
But here’s where it gets interesting. Those initial investments will in turn trigger a domino effect of upgrades and new initiatives that are valued at $34 billion for 2013, per Gartner. Over a five-year period, spend is estimated at $232 billion.
What you’re seeing right now is only the tip of a gigantic iceberg.
Big Data is presently synonymous with technologies like Hadoop, and the “NoSQL” class of databases including Mongo (document stores) and Cassandra (key-values).  Today it’s possible to stream real-time analytics with ease. Spinning clusters up and down is a (relative) cinch, accomplished in 20 minutes or less. We have table stakes.
But there are new, untapped advantages and non-trivially large opportunities beyond these usual suspects.
Did you know that there are over 250K viable open source technologies on the market today? Innovation is all around us. The increasing complexity of systems, in fact, looks something like this:
We have a lot of…choices, to say the least.
What’s on our own radar, and what’s coming down the pipe for Fortune 2000 companies? What new projects are the most viable candidates for production-grade usage? Which deserve your undivided attention?
We did all the research and testing so you don’t have to. Let’s look at five new technologies that are shaking things up in Big Data. Here is the newest class of tools that you can’t afford to overlook, coming soon to an enterprise near you.

Storm and Kafka


Storm and Kafka are the future of stream processing, and they are already in use at a number of high-profile companies including Groupon, Alibaba, and The Weather Channel.
Born inside of Twitter, Storm is a “distributed real-time computation system”. Storm does for real-time processing what Hadoop did for batch processing. Kafka for its part is a messaging system developed at LinkedIn to serve as the foundation for their activity stream and the data processing pipeline behind it.
When paired together, you get the stream, you get it in-real time, and you get it at linear scale.
Why should you care?
With Storm and Kafka, you can conduct stream processing at linear scale, assured that every message gets processed in real-time, reliably. In tandem, Storm and Kafka can handle data velocities of tens of thousands of messages every second.
Stream processing solutions like Storm and Kafka have caught the attention of many enterprises due to their superior approach to ETL (extract, transform, load) and data integration.
Storm and Kafka are also great at in-memory analytics, and real-time decision support. Companies are quickly realizing that batch processing in Hadoop does not support real-time business needs. Real-time streaming analytics is a must-have component in any enterprise Big Data solution or stack, because of how elegantly they handle the “three V’s” — volume, velocity and variety.
Storm and Kafka are the two technologies on the list that we’re most committed to at Infochimps, and it is reasonable to expect that they’ll be a formal part of our platform soon.

Drill and Dremel


Drill and Dremel make large-scale, ad-hoc querying of data possible, with radically lower latencies that are especially apt for data exploration. They make it possible to scan over petabytes of data in seconds, to answer ad hoc queries and presumably, power compelling visualizations.
Drill and Dremel put power in the hands of business analysts, and not just data engineers. The business side of the house will love Drill and Dremel.
Drill is the open source version of what Google is doing with Dremel (Google also offers Dremel-as-a-Service with its BigQuery offering). Companies are going to want to make the tool their own, which is why Drill is the thing to watch most closely. Although it’s not quite there yet, strong interest by the development community is helping the tool mature rapidly.
Why should you care?
Drill and Dremel compare favorably to Hadoop for anything ad-hoc. Hadoop is all about batch processing workflows, which creates certain disadvantages.
The Hadoop ecosystem worked very hard to make MapReduce an approachable tool for ad hoc analyses. From Sawzall to Pig and Hive, many interface layers have been built on top of Hadoop to make it more friendly, and business-accessible. Yet, for all of the SQL-like familiarity, these abstraction layers ignore one fundamental reality – MapReduce (and thereby Hadoop) is purpose-built for organized data processing (read: running jobs, or “workflows”).
What if you’re not worried about running jobs? What if you’re more concerned with asking questions and getting answers — slicing and dicing, looking for insights?
That’s “ad hoc exploration” in a nutshell — if you assume data that’s been processed already, how can you optimize for speed? You shouldn’t have to run a new job and wait, sometimes for considerable lengths of time, every time you want to ask a new question.
In stark contrast to workflow-based methodology, most business-driven BI and analytics queries are fundamentally ad hoc, interactive, low-latency analyses. Writing Map Reduce workflows is prohibitive for many business analysts. Waiting minutes for jobs to start and hours for workflows to complete is not conducive to an interactive experience of data, the comparing and contrasting, and the zooming in and out that ultimately creates fundamentally new insights.
Some data scientists even speculate that Drill and Dremel may actually be better than Hadoop in the wider sense, and a potential replacement, even. That’s a little too edgy a stance to embrace right now, but there is merit in an approach to analytics that is more query-oriented and low latency.
At Infochimps we like the Elasticsearch full-text search engine and database for doing high-level data exploration, but for truly capable Big Data querying at the (relative) seat level, we think that Drill will become the de facto solution.

R


R is an open source statistical programming language. It is incredibly powerful: over two million (and counting) analysts use R. It’s been around since 1997, if you can believe it. It is a modern version of the S language for statistical computing that originally came out of Bell Labs. Today, R is quickly becoming the new standard for statistics.
R performs complex data science at a much smaller price (both literally and figuratively). R is making serious headway in ousting SAS and SPSS from their thrones, and has become the tool of choice for the world’s best statisticians (and data scientists, and analysts too).
Why should you care?
Because it has an unusually strong community around it, you can find R libraries for almost anything under the sun — making virtually any kind of data science capability accessible without new code. R is exciting because of who is working on it, and how much net-new innovation is happening on a daily basis. the R community is one of the most thrilling places to be in Big Data right now.
R is also a wonderful way to future-proof your Big Data program. In the last few months alone, thousands of new features have been introduced, replete with publicly available knowledge bases for every type of analysis an organization might want to do.
Also, R works very well with Hadoop, making it an ideal part of an integrated Big Data approach.
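The statistical workflow described above is easiest to see in miniature. This post contains no R code, so here is a tiny stand-in sketched in plain Python (standard library only) for the kind of descriptive statistics and line-fitting that R makes routine; the sample data is invented for illustration.

```python
import statistics

# Invented sample: ad spend (x) vs. sales (y) over six weeks
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 4.3, 6.2, 8.1, 9.9, 12.2]

print("mean sales:", statistics.mean(y))
print("stdev:", statistics.stdev(y))

# Ordinary least-squares fit, y ≈ slope * x + intercept
mx, my = statistics.mean(x), statistics.mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
print(f"fit: y = {slope:.2f}x + {intercept:.2f}")
```

In R the fit above is a one-liner (`lm(y ~ x)`), which is precisely the ergonomic advantage the section describes.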
To keep an eye on: Julia is an interesting and growing alternative to R, because it combats R’s notoriously slow language interpreter problem. The community around Julia isn’t nearly as strong right now, but if you have a need for speed…


Gremlin and Giraph help empower graph analysis, and are often coupled with graph databases like Neo4j or InfiniteGraph, or, in the case of Giraph, used with Hadoop. GoldenOrb is another high-profile example of a graph-based project picking up steam.
Graph databases are pretty cutting edge. They have interesting differences from relational databases, which mean that sometimes you might want to take a graph approach rather than a relational approach from the very beginning.
The common analogue for graph-based approaches is Google’s Pregel, of which Gremlin and Giraph are open source alternatives. In fact, here’s a great read on how mimicry of Google technologies is a cottage industry unto itself.
Why should you care?
Graphs do a great job of modeling computer networks, and social networks, too — anything that links data together. Another common use is mapping, and geographic pathways — calculating shortest routes for example, from place A to place B (or to return to the social case, tracing the proximity of stated relationships from person A to person B).
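The shortest-route idea above can be sketched with a plain breadth-first search. This is a minimal Python illustration of the traversal that a graph database like Neo4j or a language like Gremlin optimizes at scale; the social-network data here is made up.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: return the shortest chain of hops from start to goal."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route exists

# A tiny hypothetical social network: who knows whom
friends = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
print(shortest_path(friends, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

A dedicated graph engine does far more (indexing, distribution, pattern matching), but the data model (nodes plus linked pathways) is exactly this.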
Graphs are also popular for bioscience and physics use cases for this reason — they can chart molecular structures unusually well, for example.
Big picture, graph databases and analysis languages and frameworks are a great illustration of how the world is starting to realize that Big Data is not about having one database or one programming framework that accomplishes everything. Graph-based approaches are a killer app, so to speak, for anything that involves large networks with many nodes, and many linked pathways between those nodes.
The most innovative scientists and engineers know to apply the right tool for each job, making sure everything plays nice and can talk to each other (the glue in this sense becomes the core competence).


SAP Hana is an in-memory analytics platform that includes an in-memory database and a suite of tools and software for creating analytical processes and moving data in and out, in the right formats.
Why should you care?
SAP is going against the grain of most entrenched enterprise mega-players by providing a very powerful open source product. And it's not only that — SAP is also creating meaningful incentives for startups to embrace Hana as well. They are authentically fostering community involvement and there is uniformly positive sentiment around Hana as a result.
Hana highly benefits any applications with unusually fast processing needs, such as financial modeling and decision support, website personalization, and fraud detection, among many other use cases.
The biggest drawback of Hana is that "in-memory" means the working data set lives in RAM, which has clear speed advantages but is much more expensive per gigabyte than conventional disk storage.
For organizations that don't mind the added operational cost, Hana delivers incredible speed for very low-latency Big Data processing.


D3 doesn’t make the list quite yet, but it’s close, and worth mentioning for that reason.
D3 is a JavaScript document-visualization library that revolutionizes how powerfully and creatively we can visualize information and make data truly interactive. It was created by Michael Bostock and grew out of his work at The New York Times, where he is a Graphics Editor.
For example, you can use D3 to generate an HTML table from an array of numbers. Or, you can use the same data to create an interactive bar chart with smooth transitions and interaction.
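D3 itself is JavaScript and does this binding directly against the browser's DOM, but the underlying data-to-markup transformation is easy to picture. Here is a hedged Python sketch of the table-from-an-array idea (this is not D3's API, just the same transformation in miniature).

```python
def numbers_to_html_table(numbers):
    """Build a one-column HTML table from a list of numbers,
    mimicking the data-join idea D3 applies in the browser."""
    rows = "".join(f"<tr><td>{n}</td></tr>" for n in numbers)
    return f"<table>{rows}</table>"

print(numbers_to_html_table([4, 8, 15, 16, 23, 42]))
```

D3's real value is that the join is live: change the array and the document updates, with transitions, rather than being regenerated from scratch.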
Here’s an example of D3 in action, making President Obama’s 2013 budget proposal understandable, and navigable.
With D3, programmers can create dashboards galore. Organizations of all sizes are quickly embracing D3 as a superior visualization platform to the heads-up displays of yesteryear.
Reference: TechCrunch

Thanks & Regards 
Prabhakaran Soundarapandian