Wednesday, 28 November 2012

A hotfix for the ASP.NET browser definition files in MS .NET Framework 4.0

This post describes a hotfix for the ASP.NET browser definition files that are included in the Microsoft .NET Framework 4.0.

This post is mainly for ASP.NET developers whose users browse with Internet Explorer 10, or with Mozilla Firefox 4.0 or a later version. Two issues have been reported so far, and you can get the current fix from Microsoft.


Issue 1

Consider the following scenario:
  • You use Windows Internet Explorer 10 to access an ASP.NET-based webpage.
  • The webpage starts a postback.

In this scenario, the postback fails, and you receive the following error message:
"Script Error encountered", "'__doPostBack' is undefined"
Note: The webpage can start a postback in various ways. For example, a LinkButton control can start a postback.
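
If you cannot apply the hotfix right away, one commonly used interim workaround (my own illustrative sketch, not part of the hotfix) is to force ASP.NET to render the page for an up-level browser from the code-behind, so that the standard postback script is still emitted for the unrecognized user agent:

protected void Page_PreInit(object sender, EventArgs e)
{
    // Treat the unrecognized browser as an up-level browser so that
    // ASP.NET renders the __doPostBack client script.
    this.ClientTarget = "uplevel";
}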

Issue 2

Consider the following scenario:
  • You create an ASP.NET-based webpage that has the MaintainScrollPositionOnPostBack attribute set to True.
  • You use Mozilla Firefox 4.0 or a later version of Mozilla Firefox to open the webpage.
  • The webpage starts a postback.

In this scenario, the scroll position of the webpage is not maintained after postback.
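
For reference, here is a minimal sketch of my own (not from the hotfix description) of the setting that Issue 2 refers to, applied from code-behind instead of the @ Page directive:

protected void Page_Load(object sender, EventArgs e)
{
    // Equivalent to MaintainScrollPositionOnPostback="true" in the @ Page directive.
    Page.MaintainScrollPositionOnPostBack = true;
}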



This hotfix introduces updated definitions in the browser definition files for Internet Explorer and for Mozilla Firefox. The browser definition files are stored in one of the following folders, depending on the installed version of the Microsoft .NET Framework:

  • For 32-bit versions of the .NET Framework 4.0
    %WinDir%\Microsoft.NET\Framework\v4.0.30319\CONFIG\Browsers

  • For 64-bit versions of the .NET Framework 4.0
    %WinDir%\Microsoft.NET\Framework64\v4.0.30319\CONFIG\Browsers
By default, ASP.NET uses sniffing technology for the user agent string to detect browsers. The browser definition files cover a certain range of browser versions. However, as the version numbers increase, ASP.NET might not recognize new versions of a browser by using the user agent string. In this case, ASP.NET might handle these versions as an unknown browser. For example, ASP.NET cannot recognize Windows Internet Explorer 10 that has the following user agent string:

Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
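
If you want to verify how ASP.NET has classified a given browser on your server, you can inspect the resolved capabilities at run time; this is a small diagnostic sketch of my own, using the standard Request.Browser (HttpBrowserCapabilities) properties:

protected void Page_Load(object sender, EventArgs e)
{
    // Shows what the browser definition files resolved for the current request.
    System.Web.HttpBrowserCapabilities caps = Request.Browser;
    Response.Write("Browser: " + caps.Browser + ", Version: " + caps.Version);
}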

This hotfix is intended to correct only the problem that is described above. Apply this hotfix only to systems that are experiencing this specific problem.

After you apply this hotfix, you might have to restart the computer if any affected files are being used when you install this hotfix. 
To avoid a restart of the computer, shut down all web server applications for ASP.NET websites before you apply this hotfix.

Note: This hotfix does not replace a previously released hotfix.

Courtesy : Microsoft

Thanks & Regards
Prabhakaran Soundarapandian

Sunday, 28 October 2012

Big Data Right Now: Five Trendy Open Source Technologies

Today I am going to share a new trend in technology. I came across the information below on TechCrunch and thought of sharing it with everyone.


Big Data is on every CIO’s mind this quarter, and for good reason. Companies will have spent $4.3 billion on Big Data technologies by the end of 2012.
But here’s where it gets interesting. Those initial investments will in turn trigger a domino effect of upgrades and new initiatives that are valued at $34 billion for 2013, per Gartner. Over a 5-year period, spend is estimated at $232 billion.
What you’re seeing right now is only the tip of a gigantic iceberg.
Big Data is presently synonymous with technologies like Hadoop, and the “NoSQL” class of databases including Mongo (document stores) and Cassandra (key-values).  Today it’s possible to stream real-time analytics with ease. Spinning clusters up and down is a (relative) cinch, accomplished in 20 minutes or less. We have table stakes.
But there are new, untapped advantages and non-trivially large opportunities beyond these usual suspects.
Did you know that there are over 250K viable open source technologies on the market today? Innovation is all around us. The increasing complexity of systems, in fact, looks something like this:
We have a lot of…choices, to say the least.
What’s on our own radar, and what’s coming down the pipe for Fortune 2000 companies? What new projects are the most viable candidates for production-grade usage? Which deserve your undivided attention?
We did all the research and testing so you don’t have to. Let’s look at five new technologies that are shaking things up in Big Data. Here is the newest class of tools that you can’t afford to overlook, coming soon to an enterprise near you.

STORM AND KAFKA

Storm and Kafka are the future of stream processing, and they are already in use at a number of high-profile companies including Groupon, Alibaba, and The Weather Channel.
Born inside of Twitter, Storm is a “distributed real-time computation system”. Storm does for real-time processing what Hadoop did for batch processing. Kafka for its part is a messaging system developed at LinkedIn to serve as the foundation for their activity stream and the data processing pipeline behind it.
When paired together, you get the stream, you get it in-real time, and you get it at linear scale.
Why should you care?
With Storm and Kafka, you can conduct stream processing at linear scale, assured that every message gets processed in real-time, reliably. In tandem, Storm and Kafka can handle data velocities of tens of thousands of messages every second.
Stream processing solutions like Storm and Kafka have caught the attention of many enterprises due to their superior approach to ETL (extract, transform, load) and data integration.
Storm and Kafka are also great at in-memory analytics, and real-time decision support. Companies are quickly realizing that batch processing in Hadoop does not support real-time business needs. Real-time streaming analytics is a must-have component in any enterprise Big Data solution or stack, because of how elegantly they handle the “three V’s” — volume, velocity and variety.
Storm and Kafka are the two technologies on the list that we’re most committed to at Infochimps, and it is reasonable to expect that they’ll be a formal part of our platform soon.

DRILL AND DREMEL

Drill and Dremel make large-scale, ad-hoc querying of data possible, with radically lower latencies that are especially apt for data exploration. They make it possible to scan over petabytes of data in seconds, to answer ad hoc queries and presumably, power compelling visualizations.
Drill and Dremel put power in the hands of business analysts, and not just data engineers. The business side of the house will love Drill and Dremel.
Drill is the open source version of what Google is doing with Dremel (Google also offers Dremel-as-a-Service with its BigQuery offering). Companies are going to want to make the tool their own, which is why Drill is the thing to watch most closely. Although it’s not quite there yet, strong interest by the development community is helping the tool mature rapidly.
Why should you care?
Drill and Dremel compare favorably to Hadoop for anything ad-hoc. Hadoop is all about batch processing workflows, which creates certain disadvantages.
The Hadoop ecosystem worked very hard to make MapReduce an approachable tool for ad hoc analyses. From Sawzall to Pig and Hive, many interface layers have been built on top of Hadoop to make it more friendly, and business-accessible. Yet, for all of the SQL-like familiarity, these abstraction layers ignore one fundamental reality – MapReduce (and thereby Hadoop) is purpose-built for organized data processing (read: running jobs, or “workflows”).
What if you’re not worried about running jobs? What if you’re more concerned with asking questions and getting answers — slicing and dicing, looking for insights?
That’s “ad hoc exploration” in a nutshell — if you assume data that’s been processed already, how can you optimize for speed? You shouldn’t have to run a new job and wait, sometimes for considerable lengths of time, every time you want to ask a new question.
In stark contrast to workflow-based methodology, most business-driven BI and analytics queries are fundamentally ad hoc, interactive, low-latency analyses. Writing Map Reduce workflows is prohibitive for many business analysts. Waiting minutes for jobs to start and hours for workflows to complete is not conducive to an interactive experience of data, the comparing and contrasting, and the zooming in and out that ultimately creates fundamentally new insights.
Some data scientists even speculate that Drill and Dremel may actually be better than Hadoop in the wider sense, and a potential replacement, even. That’s a little too edgy a stance to embrace right now, but there is merit in an approach to analytics that is more query-oriented and low latency.
At Infochimps we like the Elasticsearch full-text search engine and database for doing high-level data exploration, but for truly capable Big Data querying at the (relative) seat level, we think that Drill will become the de facto solution.

R

R is an open source statistical programming language. It is incredibly powerful. Over two million (and counting) analysts use R. It’s been around since 1997 if you can believe it. It is a modern version of the S language for statistical computing that originally came out of the Bell Labs. Today, R is quickly becoming the new standard for statistics.
R performs complex data science at a much smaller price (both literally and figuratively). R is making serious headway in ousting SAS and SPSS from their thrones, and has become the tool of choice for the world’s best statisticians (and data scientists, and analysts too).
Why should you care?
Because it has an unusually strong community around it, you can find R libraries for almost anything under the sun, making virtually any kind of data science capability accessible without new code. R is exciting because of who is working on it, and how much net-new innovation is happening on a daily basis. The R community is one of the most thrilling places to be in Big Data right now.
R is also a wonderful way to future-proof your Big Data program. In the last few months, literally thousands of new features have been introduced, replete with publicly available knowledge bases for every analysis type you’d want to do as an organization.
Also, R works very well with Hadoop, making it an ideal part of an integrated Big Data approach.
To keep an eye on: Julia is an interesting and growing alternative to R, because it combats R’s notoriously slow language interpreter problem. The community around Julia isn’t nearly as strong right now, but if you have a need for speed…

GREMLIN AND GIRAPH

Gremlin and Giraph help empower graph analysis, and are often coupled with graph databases like Neo4j or InfiniteGraph, or, in the case of Giraph, used with Hadoop. Golden Orb is another high-profile example of a graph-based project picking up steam.
Graph databases are pretty cutting edge. They have interesting differences with relational databases, which mean that sometimes you might want to take a graph approach rather than a relational approach from the very beginning.
The common analogue for graph-based approaches is Google’s Pregel, of which Gremlin and Giraph are open source alternatives. In fact, here’s a great read on how mimicry of Google technologies is a cottage industry unto itself.
Why should you care?
Graphs do a great job of modeling computer networks, and social networks, too — anything that links data together. Another common use is mapping, and geographic pathways — calculating shortest routes for example, from place A to place B (or to return to the social case, tracing the proximity of stated relationships from person A to person B).
Graphs are also popular for bioscience and physics use cases for this reason — they can chart molecular structures unusually well, for example.
Big picture, graph databases and analysis languages and frameworks are a great illustration of how the world is starting to realize that Big Data is not about having one database or one programming framework that accomplishes everything. Graph-based approaches are a killer app, so to speak, for anything that involves large networks with many nodes, and many linked pathways between those nodes.
The most innovative scientists and engineers know to apply the right tool for each job, making sure everything plays nice and can talk to each other (the glue in this sense becomes the core competence).

SAP HANA

SAP Hana is an in-memory analytics platform that includes an in-memory database and a suite of tools and software for creating analytical processes and moving data in and out, in the right formats.
Why should you care?
SAP is going against the grain of most entrenched enterprise mega-players by providing a very powerful open source product.  And it’s not only that — SAP is also creating meaningful incentives for startups to embrace Hana as well. They are authentically fostering community involvement and there is uniformly positive sentiment around Hana as a result.
Hana highly benefits any applications with unusually fast processing needs, such as financial modeling and decision support, website personalization, and fraud detection, among many other use cases.
The biggest drawback of Hana is that “in-memory” means that it by definition leverages access to solid state memory, which has clear advantages, but is much more expensive than conventional disk storage.
For organizations that don’t mind the added operational cost, Hana means incredible speed for very-low latency big data processing.

HONORABLE MENTION: D3

D3 doesn’t make the list quite yet, but it’s close, and worth mentioning for that reason.
D3 is a JavaScript document visualization library that revolutionizes how powerfully and creatively we can visualize information, and make data truly interactive. It was created by Michael Bostock and came out of his work at the New York Times, where he is the Graphics Editor.
For example, you can use D3 to generate an HTML table from an array of numbers. Or, you can use the same data to create an interactive bar chart with smooth transitions and interaction.
Here’s an example of D3 in action, making President Obama’s 2013 budget proposal understandable, and navigable.
With D3, programmers can create dashboards galore. Organizations of all sizes are quickly embracing D3 as a superior visualization platform to the heads-up displays of yesteryear.
Reference : TechCrunch

Thanks & Regards 
Prabhakaran Soundarapandian

Saturday, 13 October 2012

Evolution of C# - Part II


In this post we are going to see the continuation of Evolution of C# - Part I. This time I am not going to discuss the topics as in the previous post; instead, I am attaching a presentation that I prepared and presented to the associates in my firm.

Please go through it and let me know your feedback and doubts.


Hope this post is useful...

Thanks & Regards
Prabhakaran S Pandian

Saturday, 1 September 2012

Tech News - Indian cloud market to grow 70pc in 2012, over 2011


After being on the fringes for quite some time, cloud computing is set for a major leap. In these tough times, companies have been proactively looking at various 'disruptive technologies' that will ensure that technology is elastic enough to meet the business growth needs. Cloud models and the flexibility they bring are featuring high here. 

International Data Corporation (IDC) estimates the Indian Cloud market to be in the region of $535m in 2011, with a growth of more than 70 per cent expected for 2012 and almost 50 per cent growth forecasted for the next three years. It is a market that is fast maturing and seeing many new entrants with a broad range of investments/solutions taking key roles in the cloud ecosystem. Public cloud still lags way behind the private cloud adoption for a number of factors. 

IDC has just released a cloud research report, titled 'India Cloud Market Overview-2011-16'. The research provides insights on how the cloud market landscape is evolving and how companies are taking advantage of the new mode of IT usage. 

"We have definitely seen cloud cross the inflexion point in end 2011; use cases especially in IasS & SaaS areas provide testimony to that. With proper messaging from key vendors and due diligence of opportunities which exist in the cloud delivery models, the market will grow much faster in the coming years"" says Nirupam Chaudhuri, Research manager - Software & IT Services, IDC India, in a release. 

"Alliance with key channels and enablement will further intensify the growth for major cloud providers and gradually we will see even core applications moving to cloud much faster. Users need to feel much more comfortable with fewer inhibitions like security and ownership concerns" added Nirupam. 

Cloud providers also need to strengthen their capabilities to understand the business requirements of the organizations and come up with apt value propositions. "Organizations are more likely to work with firms that understand their business processes better and industry dynamics, and hence are better suited to overseeing the transition of the organization to a cloud environment without disruption of the business processes," says Sandeep Kumar Sharma, senior market analyst - IT services, IDC India. 

It is also imperative for the cloud providers to act as partners for the organizations in assessing their cloud readiness, and accordingly recommending a cloud adoption roadmap. This is absolutely essential for a seamless integration of the IT infrastructure into the cloud environment. "Organizations, even the larger ones, are on an increasing level feeling the pinching need to assess their cloud readiness and maturity levels. This would provide a boost to cloud consulting services in the coming 12-24 months. A direct corollary is that vendors need to have robust cloud consulting capabilities in place for making a foray into this space," Sandeep added. 


Tech News - Indian cloud market to grow 70pc in 2012, over 2011 | Techgig

Wednesday, 29 August 2012

Windows 8 Metro apps and the outside world

In this post I am sharing one of the new trends in the technology world.

Windows 8 Metro apps and the outside world: connecting with services and integrating the cloud


The attached presentation covers an overview and discussion of the following:

  • Accessing data behind a service
  • Working with WCF and ASMX services
  • Working with REST services
  • Accessing OData services
  • Syndication with RSS
  • Authenticating with Twitter and Facebook using the WebAuthBroker
  • Using the Live SDK in your Windows 8 apps
  • Using roaming data in your applications
  • Working with sockets
      – TCP sockets
      – WebSockets
  • Setting the correct capabilities for network communication

I have attached the presentation by Gill Cleeren.




Topic summary: Windows 8 Metro style apps aim to give users a great experience. Apps promise the user that they will be alive and connected, and being connected means interfacing with services and the cloud. But what options do we have to make the connection to the outside world?
In this discussion, we explore how WinRT apps can connect with services (WCF, REST...) and the cloud to offer a great experience.
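
To make the "connecting with services" part a little more concrete, here is a minimal sketch of my own (not from the attached presentation) that calls a hypothetical REST endpoint from a .NET-based Windows 8 app using System.Net.Http.HttpClient:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class RestClientSample
{
    // Fetches the raw response body from a hypothetical REST endpoint.
    public static async Task<string> GetLatestPostsAsync()
    {
        using (HttpClient client = new HttpClient())
        {
            client.BaseAddress = new Uri("https://example.com/api/");
            // GetStringAsync throws an HttpRequestException for non-success status codes.
            return await client.GetStringAsync("posts/latest");
        }
    }
}

The same asynchronous pattern applies to the other items in the list above; mainly the format of the response (XML, JSON, Atom/RSS) changes.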


Hope this shared content was useful.

Thanks & Regards
Prabhakaran S Pandian

Friday, 24 August 2012

Evolution of C# - Part I

Evolution of C#  - Part I

A quick reference on the evolution of C#: this post gives an insight into C# 1.0, 2.0, 3.5 and 4.0.
The image below gives a one-sentence overview of the main features of each C# version.



Evolution of C#

C# 2.0


Delegates


A delegate is a type-safe object that can point to another method (or possibly multiple methods) in the application, which can be invoked at a later time.

Example:


using System;

delegate string ConvertMethod(string inString);

public class DelegateExample
{
    public static void Main()
    {
        // Instantiate the delegate to reference the UppercaseString method
        ConvertMethod convertMeth = UppercaseString;
        string name = "Dakota";

        // Use the delegate instance to call the UppercaseString method
        Console.WriteLine(convertMeth(name));
    }

    private static string UppercaseString(string inputString)
    {
        return inputString.ToUpper();
    }
}


C# 2.0 Anonymous Delegates


Before C# 2.0, the only way to use delegates was to use named methods. In some cases, this results in the forced creation of classes only for use with delegates. In some cases, these classes and methods are never even invoked directly.

C# 2.0 offers an elegant solution for these methods described above. Anonymous methods allow declaration of inline methods without having to define a named method. 
This is very useful, especially in cases where the delegated function requires short and simple code. Anonymous methods can be used anywhere where a delegate type is expected.

An anonymous method declaration consists of the keyword delegate, an optional parameter list in parentheses, and a statement list enclosed in braces.

Event Handler (without using anonymous methods)

public partial class Form1 : Form
{
    // The constructor connects the event handler.
    public Form1()
    {
        InitializeComponent();
        button1.Click += new EventHandler(ButtonClick);
    }

    // This is the event handling method.
    private void ButtonClick(object sender, EventArgs e)
    {
        MessageBox.Show("You clicked the button.");
    }
}

Event Handler (using anonymous methods)

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        button1.Click += delegate(object sender, EventArgs e)
        {
            // The following code is part of an anonymous method.
            MessageBox.Show("You clicked the button, and this is an anonymous method!");
        };
    }
}

One more example:

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        // Declare the delegate that points to an anonymous method.
        EventHandler clickHandler = delegate(object sender, EventArgs e)
        {
            MessageBox.Show("You clicked the button, and this is an anonymous method!");
        };

        // Attach the delegate to two events.
        button1.Click += clickHandler;
        button2.Click += clickHandler;
    }
}


C# 2.0 Iterators


Iterators are one of the new features in C# 2.0. They provide a way to create classes that can be used with the foreach statement without manually implementing the IEnumerator and IEnumerable interfaces. When the compiler detects an iterator (a method that uses yield return), it automatically generates the Current property and the MoveNext and Dispose methods of the IEnumerator interface.

// With an iterator-enabled collection, the caller simply writes:
foreach (OrderItem item in catalog)
{
    // (Process item here.)
}

// ...which is equivalent to walking the enumerator by hand:
IEnumerator<OrderItem> e = catalog.GetEnumerator();
while (e.MoveNext())
{
    OrderItem item = e.Current;
    // (Process item here.)
}
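
As a minimal sketch of my own (assuming the OrderItem and catalog types used above), this is roughly all a class needs for the foreach version to work; the compiler generates the enumerator state machine from the yield return statement:

using System.Collections.Generic;

public class Catalog
{
    private List<OrderItem> items = new List<OrderItem>();

    public void Add(OrderItem item)
    {
        items.Add(item);
    }

    // An iterator: the compiler generates the IEnumerator<OrderItem>
    // implementation (Current, MoveNext, Dispose) behind the scenes.
    public IEnumerator<OrderItem> GetEnumerator()
    {
        foreach (OrderItem item in items)
        {
            yield return item;
        }
    }
}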

one more example:

    public class LinkInfo
    {
        public string FileName { get; set; }
        public string LinkContent { get; set; }
        public string LinkStatus { get; set; }
        public string NewLink { get; set; }
    }

    public static IEnumerable<LinkInfo> GetLinksFromListView(ListView lstLinkList)
    {
        foreach (ListViewItem li in lstLinkList.Items)
        {
            LinkInfo newLink = new LinkInfo();
            newLink.FileName = new string(li.Text.ToCharArray());
            newLink.LinkContent = new string(li.SubItems[1].Text.ToCharArray());
            newLink.LinkStatus = new string(li.SubItems[2].Text.ToCharArray());
            newLink.NewLink = new string(li.SubItems[3].Text.ToCharArray());
            yield return newLink;
        }
    }


C# 2.0 Partial Classes


Partial classes were introduced in C# 2.0. A class can be physically divided into multiple segments (typically across multiple files), but at compile time the segments are grouped into a single entity.


// Stored in file MyClass1.cs
public partial class MyClass
{
    public void MethodA()
    {
        // ...
    }
}

// Stored in file MyClass2.cs
public partial class MyClass
{
    public void MethodB()
    {
        // ...
    }
}


C# 2.0 Generics


Generics are one of the most powerful features in .NET. They allow you to define type-safe classes, collections, arrays and so on; this improves the performance of your code and leads to higher-quality code.


using System.Collections;

public class ObjectList<ItemType> : CollectionBase
{
    private ArrayList list = new ArrayList();

    public int Add(ItemType value)
    {
        return list.Add(value);
    }

    public void Remove(ItemType value)
    {
        list.Remove(value);
    }

    public ItemType this[int index]
    {
        get { return (ItemType)list[index]; }
        set { list[index] = value; }
    }
}




// Create the ObjectList instance, and 
// choose a type (in this case, string). 
ObjectList<string> list = new ObjectList<string>(); 

// Add two strings. 
list.Add("blue"); 
list.Add("green");

// The next statement will fail because it has the wrong type. 
// In fact, this line won't ever run, because the compiler
// notices the problem and refuses to build the application. 
list.Add(4);


C# 2.0 Generics constraints


Constraint                      Description
where T : struct                The type argument must be a value type.
where T : class                 The type argument must be a reference type.
where T : new()                 The type argument must have a public default constructor.
where T : <base class name>     The type argument must be the base class name or derived from the base class name.
where T : <interface name>      The type argument must be the interface or implement the interface. Multiple interfaces may be specified.
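
As a small sketch of my own (not from the original post), here is one of these constraints in use; the new() constraint lets a generic method create instances of its type parameter, enforced at compile time:

public static class Factory
{
    // T must expose a public default constructor.
    public static T CreateInstance<T>() where T : new()
    {
        return new T();
    }
}

// Usage, for example with the LinkInfo class shown earlier:
// LinkInfo link = Factory.CreateInstance<LinkInfo>();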


Hope this post is useful...
Part II will concentrate on C# 3.0/3.5 and 4.0.

Thanks & Regards
Prabhakaran S Pandian