Monday, November 12, 2007

Social Networking Features of MOSS 2007

Following is a list of some features provided by MOSS 2007 that help developers easily add social networking capabilities to MOSS-based, enterprise-wide applications.

Colleagues: One of the main Social Networking components of MOSS 2007 is the concept of "Colleagues".

Colleagues are a list of friends, team members and co-workers that are related to a specific person through the establishment of a user profile.

A list of colleagues for a user is set up at the time of the user's profile creation and can, of course, be updated later through various built-in web parts. The details about the colleagues can be obtained from the Active Directory, Exchange Server, Live Communications Server 2005 or Office Communications Server 2007 setups within the organization. Thus, the colleagues list is populated primarily based on the organization hierarchy; as a result, peers, supervisors and managers become part of the colleagues list. Additionally, MOSS can gather this information by mining data sources like Outlook 2007 and IM contacts. (Note that privacy could become an issue here.)

Through the list of colleagues, a user can find subject experts and key contacts within the organization, enabling increased and faster lines of communication.

Built-in Web Parts:

Colleagues Web Part: The colleagues web part allows users to present their mined and compiled colleagues to visitors. The colleagues list is a presentation of other organization members that the specified user works closely with in terms of organizational structure, interaction (i.e., email conversations and instant messaging contact lists) and group/site memberships. SharePoint can make recommendations regarding a colleague based on commonality of interactivity with these small groups, but users can also manually add and remove colleagues.

Colleagues Tracker Web Part: The colleague tracker web part allows organization members to privately view their list of compiled colleagues and to modify their views and inclusion in their colleagues list. The colleague tracker web part allows for the presentation of recommended colleagues and allows the user to modify colleague tracking by profile information. For example, users can modify the colleague tracker to present updated colleagues when anniversaries, profile properties, authored documents and blogs change. Additionally, scoping the presentation can occur when users choose to view colleagues specifically for the user's workgroup or organization-wide.

In Common With Web Part: Office SharePoint Server provides a summary view of information relating to the memberships, organizational managers and colleagues that a visitor has in common with the owner of a My Site.

Membership Web Parts, Links, Sharepoint Sites: These web parts provide the ability for users to view their own Office SharePoint Server site, group and mail list memberships and links as well as those that they have in common with others. Additionally, visitors can view a user's memberships, Office SharePoint Server Sites and distribution group memberships.

Web Parts: Colleague Tracker, Colleague Web Part, Membership Web Part

Presence Information: When coupled with Office Communications Server and Exchange Server, presence information indicating online instant messaging status, Out of Office messages and contact information is displayed whenever user information is presented (i.e., colleagues and colleague tracker web parts, etc.).

People Search: Office SharePoint Server supports the discovery of team members, colleagues and other individuals by exposing a search interface in which information workers can search on the organization's personnel. Results are returned to users and are presented in terms of social distance and relevance for grouping. The search can further be refined by user profile attributes including job title and alternatively be viewed based on search term relevance.

My Sites: Finally, "My Sites" is the place where the list of colleagues gets displayed. My Site allows users to present information about their skills, individuals they know and other social information to visitors.

Content from the link: http://blogs.msdn.com/sharepoint/archive/2007/10/24/enabling-and-managing-social-networks-for-business-use-with-microsoft-office-sharepoint-server-2007.aspx (contains some nice images / screenshots of the Sharepoint application)


 

Social Networking

If you are in the IT world, or are a so-called Net-savvy person, you must have heard the term "Social Networking". It sounds great, but what does it really mean? Is it restricted to sites like MySpace, Orkut and Facebook? Or can it be applied, and is it being applied, to intranet applications, for bringing people together within an organization?

This blog is a collection of information and ideas that I found, a high-level overview to shed some light on the very frequently used term "Social Networking".

What is a Social Network?

"A social network is a social structure made of nodes (which are generally individuals or organizations) that are tied by one or more specific types of interdependency, such as values, visions, idea, financial exchange, friends, kinship, dislike, conflict, trade, web links, disease transmission (epidemiology), or airline routes" - This is what Wikipedia has to say about "Social Networks". The concept of social networks and social network analysis has been around for quite some time, bringing about a major change in the fields of sociology, social psychology, economics, organizational studies etc., helping social scientists understand how organizations work and how networks or groups of people within an organization in turn influence the organization. To make this point a little more concrete: the size of a social network is a good indicator of its usefulness, its reach and the impact it will have. A small, very highly connected network will be less useful than a loosely connected network in which people have connections to networks or groups outside the home network. More open networks like the latter result in the introduction and sharing of newer ideas and opportunities, bringing together people with different facets and capabilities.

In simple words, a "social network" is an association of people drawn together by family, work or hobby. The term was first coined by professor J. A. Barnes in the 1950s, who defined the size of a social network as a group of about 100 to 150 people.

What is a Social Networking Site?

A social networking site is a web site that provides a virtual community for people interested in a particular subject or just to "hang out" together. Members communicate by voice, chat, instant message, videoconference and blogs, and the service typically provides a way for members to contact friends of other members. Examples of public social networking sites of course are Orkut, MySpace, Facebook etc.

These sites build on the Web 2.0 model, propagating the use of the Web as an application platform, helping members to share content of any form by forming online communities and are typically supported by an Ajaxified / rich / easy to use user interface.

Social Networking within Organizations

Public social networking sites are the "in thing" anyhow, but what is gaining momentum now is the use of social networking sites or applications within organizations: to help employees share data, content and information in the form of documents, videos, photos and presentations; to make finding relevant people within the organization easier; and to help automate many manual workflows and business processes by making use of a social networking platform.

Microsoft Office Sharepoint Server 2007 (MOSS 2007) fits this scenario and provides a lot of built-in features that make building social networking applications easier. More on this in the next blog.

Tuesday, May 22, 2007

Synchronizing my thoughts on Synchronized collection

I admit I was not exactly aware of how synchronized collections work and what it means exactly to get a synchronized version of a non-thread-safe collection. It all started with the following C# code that uses the non-synchronized Queue collection to maintain a queue of tasks:

Main Thread:

public void EnqueueTask()
{
    // Create a new task t
    // Enqueue it into the Queue
    JobQueue.Enqueue(t);
}

Consumer Thread:

public void DequeueTask()
{
    // Check if number of items in the queue is > 0 and dequeue the task if true
    if (JobQueue.Count > 0)
    {
        JobQueue.Dequeue();
    }
}

Note that it is known for sure that there is only one "Producer" thread that is adding tasks to the queue and there is exactly one "Consumer" thread that is removing tasks from the queue. There are not multiple consumers.

Questions that can arise when you see the code:

1. Is this code thread safe? After all, a shared Queue object is being accessed by two different threads - one adding to it and the other removing from it. So, our standard knowledge about threads and synchronization says that we need to do some synchronization here.

2. How to achieve synchronization, if it is needed?
a. Queue is by default not thread safe. Can I make use of the Synchronized Queue collection instead?
b. Synchronization and mutual exclusion can also be achieved by obtaining a lock and making sure only one of the two threads can access the Queue object at any time.

3. With this set up (knowing that there is only one producer and only one consumer), do I really need the Synchronization??

The 3rd question took most of my time today; the logic behind that thinking was this:

Even if there are two threads T1 and T2, accessing EnqueueTask and DequeueTask respectively, at any point of time only one thread can be active, using up the CPU time slice allotted to it. Further, shouldn't the "Enqueue" and "Dequeue" functions on the Queue be atomic? That would mean thread T1 can get preempted by thread T2 either before the call to the Enqueue method or after the call to Enqueue is over, but T1 cannot be preempted while it is in the process of enqueuing the task. Fair enough?

I searched a lot on whether these operations defined on the Queue object are atomic or reentrant, but never really found any concrete, to-the-point explanation. I am still not 100% sure I am right, but after some thinking I arrived at the following conclusions based on what I read, mostly on the Net:

1. The Queue class is said to be non-thread-safe by default. This most likely means that the operations defined on this object are not atomic or reentrant. That is, it is in fact possible that thread T1 gets preempted while its "Enqueue" call is in progress by thread T2, which then tries to call "Dequeue". In that case, since both functions work on the same object and memory locations, weird, wrong results might be obtained.

2. I found an implementation of a synchronized version of the Queue class. Following is how the "Enqueue" method is implemented in the synchronized Queue class, which wraps a non-synchronized Queue object:

public override void Enqueue(object obj)
{
    lock (syncRoot)
    {
        queue.Enqueue(obj);
    }
}

This implementation makes it apparent that the synchronized version in turn is obtaining a "lock" on the sync root.
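In the .NET Framework itself, such a wrapper is obtained through the static Queue.Synchronized method. A minimal sketch of its use:

```csharp
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        // Wrap a plain Queue in its thread-safe wrapper.
        Queue jobQueue = Queue.Synchronized(new Queue());

        // Each individual call is now serialized on an internal lock.
        jobQueue.Enqueue("task1");
        object t = jobQueue.Dequeue();
        Console.WriteLine(t); // task1
    }
}
```

Note that only individual calls are protected this way; a sequence of calls still needs an explicit lock, as discussed next.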

3. What the synchronized collection guarantees is that no two threads can call the operations on the synchronized collection object simultaneously. An interesting point to note is that it will, however, not guarantee thread safety during enumeration, or when a sequence of these operations needs to be executed as part of a critical section. What does this mean?

Consider the case when there can be multiple consumers in our earlier example, multiple threads trying to dequeue the tasks from our Queue. In that situation, using the Synchronized version of the Queue will not really help. The code:

if (queue.Count > 0)
{
    queue.Dequeue();
}


has to be executed atomically. Otherwise, two threads T1 and T2 can read a Count > 0 and try to Dequeue the tasks. In case there is only one task in the Queue, the thread that dequeues the task later will get an exception. This problem can only be solved by obtaining a lock before entering the critical section.
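A sketch of that fix for the multiple-consumer case (the lock object name is illustrative):

```csharp
private readonly object jobQueueLock = new object();

public void DequeueTask()
{
    lock (jobQueueLock)
    {
        // The Count check and the Dequeue now execute as one atomic unit,
        // so a second consumer cannot sneak in between them.
        if (JobQueue.Count > 0)
        {
            JobQueue.Dequeue();
        }
    }
}
```

The producer's Enqueue call must, of course, take the same lock.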

4. Should we use Synchronized collections?

C# and Java provide synchronized collections as helper classes, and developers will obviously be tempted to use them. But here are a few points to know and remember before, or while, using them:

a. A Synchronized collection DOES NOT solve all the synchronization problems for us. Point 3 above gives an example.

b. If your code has a sequence of operations on the collection that must execute atomically (as shown in Point 3), or you need to enumerate the collection, it is better not to use the synchronized collection. If you do, you would be unnecessarily adding the overhead of the synchronization done internally by the synchronized collection on top of your own lock.

c. I read in one of the articles that locking the collection object ourselves can give better results than using the synchronized wrapper. I have not tried this out myself, so cannot really comment on it.

d. The synchronized collection is, after all, a wrapper over the actual collection object. With the wrapper methods in place, every call to any method on the object will incur the overhead of synchronization.

Any comments / clarifications / corrections in this blog are welcome!

Friday, May 18, 2007

Weak References

By default, when we use a "new" operator, we get hold of a strong reference. A strong reference to an object prevents the Garbage Collector from removing the object from the Heap. The GC will not garbage collect any object as long as there are one or more strong references to it.

While this is the behavior we expect most of the times, there are times when we may not want to hold a strong reference to an object and prevent it from getting garbage collected.

A typical example of this is a cache. While implementing a home-grown cache mechanism, it is good to make use of what are called "weak references". A weak reference allows the garbage collector to collect the object while still allowing the application to access it. This means that in the window after the last strong reference to the object is gone, but before the garbage collector has actually collected the object from the heap, the object is still accessible via the weak reference.

Example where Weak Reference can be used:

Let's say there is a huge DataSet that we are maintaining in memory. The DataSet is displayed to the user on one page. Now the user visits another page of the application, so the DataSet maintained in memory is not really required. Before the user moves to the other page, we can nullify the strong reference to the DataSet while maintaining only a weak reference to it. While the user visits other pages of the application, if the GC runs and needs to free memory, the DataSet can be garbage collected; if the other pages do not need much memory, the GC will not run and the DataSet stays in memory. If the user visits the same page again and the DataSet is still in memory, it can be retrieved through the weak reference. Thus, a weak reference makes sure that we do not unnecessarily hold on to big objects in memory and prevent the GC from collecting them.

The basic pattern for the use of a WeakReference would look like this:

// Create a weak reference to the DataSet ds.
private WeakReference Data = new WeakReference(ds);

// GetData method that makes use of the weak reference
public DataSet GetData()
{
    DataSet data = (DataSet)Data.Target;
    if (data != null)
    {
        return data;
    }

    // Load the data and keep a weak reference to it for later use...
    data = GetBigDataSet();
    Data.Target = data;
    return data;
}
It would be clear by now how Weak References help when implementing a Cache. A cache can always maintain weak references to objects, which means that if memory needs to be cleaned up, objects in the cache that are no longer strong referenced can be picked up by GC freeing up the memory.

Note: While a weak reference is a good thing for saving on memory utilization, weak references should preferably be used only for large objects, because with small objects the WeakReference object itself could be larger than the object it tracks.

Sunday, May 13, 2007

Improving ASP .NET Performance - Part I

Read a very good, comprehensive article on "Improving ASP .NET Performance" on MSDN. Following are some important and interesting points mentioned in it, particularly concentrating on improving the performance of ASP .NET pages:

1. Trim your page size

Not something that is at the top of our priority list when we think about improving performance! But large pages increase the response times experienced by the client and increase the consumption of network bandwidth, thereby also increasing the load on the CPU. To trim the page size:

a. Remove extra white spaces and tabs (though good coding practice would tell you to keep them for better readability of the code)

b. Use script includes for static javascripts so that they can be cached for subsequent requests.

c. Disable view state when you do not need it

d. Limit the use of graphics, consider using compressed graphics

e. Use CSS for styling to avoid sending same formatting directives to the client repeatedly

f. Avoid long control names! (Again something that contradicts good coding practice rules)

2. Use Page.IsPostBack check in the page code-behind to avoid execution of instructions that need to be executed only once when the page loads for the first time
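A typical sketch of this check (BindGrid is a hypothetical one-time initialization method):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        // Executed only on the initial request,
        // not on every subsequent postback.
        BindGrid();
    }
}
```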

3. Data Binding in pages:

a. Avoid using Page.DataBind method, since it is a page-level method. It internally invokes DataBind method on all the controls on the page that support data binding. Instead, as far as possible, call DataBind explicitly on required controls.

b. Minimize calls to DataBinder.Eval

DataBinder.Eval uses reflection to evaluate the arguments that are passed to it. This can be quite time consuming and expensive when there are a lot of rows and columns in the table. Instead, one can use explicit casting (cast to DataRowView class) or use ItemDataBound event when the record being bound contains a lot of fields.
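For example, in the ItemDataBound handler of a Repeater, the data item can be cast explicitly instead of going through DataBinder.Eval's reflection (the column name here is illustrative):

```csharp
protected void repeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item ||
        e.Item.ItemType == ListItemType.AlternatingItem)
    {
        // Explicit cast to DataRowView avoids the reflection cost of
        // DataBinder.Eval(e.Item.DataItem, "CustomerName").
        DataRowView row = (DataRowView)e.Item.DataItem;
        string name = (string)row["CustomerName"];
    }
}
```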

4. Partial Page or Fragment Caching

In cases when caching of the entire page is not possible by using the OutputCache directive (this could be the case when parts of the page are dynamic and change frequently), it is possible to enable fragment caching only for specific portions of a page. These portions need to be abstracted out into user controls. The user controls on a page can be cached independently of the page. Typical examples are headers and footers, navigation menus etc.

5. If the same user control is repeated on multiple pages, make the pages share the same instance of the user control by setting the "Shared" attribute of @ OutputCache directive of the user control to true. This will save a significant amount of memory.

Saturday, May 12, 2007

.NET ThreadPool - Pitfalls and gotchas

Multithreading is used extensively in user interface applications, mainly to perform some time consuming operations in the background, while keeping the user interface active at the same time and not having to block the user. While multithreading is good, having too many threads active at a point in time can adversely affect the performance instead of improving it, just because of the number of expensive context switches that need to be performed.

A middle way, therefore, is to make use of thread pools. .NET provides a readymade implementation of a thread pool in the form of the System.Threading.ThreadPool class. A single thread pool (default pool size of 25 threads) is maintained by the CLR for each process; asynchronous tasks can be performed by making use of the methods in this class, typically by calling the QueueUserWorkItem method, which queues user requests to be picked up by available threads in the pool.
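A minimal sketch of queuing a task this way (the work method is illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static void DoWork(object state)
    {
        // Runs on a thread borrowed from the pool.
        Console.WriteLine("Processing: " + state);
    }

    static void Main()
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "task1");

        // Give the pool thread a moment to run before the process exits
        // (pool threads are background threads).
        Thread.Sleep(1000);
    }
}
```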

While the use of the ThreadPool makes things a lot easier on the developer (all the intricacies of creating, managing and destroying a thread are hidden and happen behind the scenes) and also improves performance (a quick comparison between a manual Thread.Start() and ThreadPool.QueueUserWorkItem() shows a big difference), there are some pitfalls / points to remember / gotchas when it comes to using the ThreadPool. Following are some:

1. ThreadPool is leveraged by the .NET framework for a lot of tasks. ADO .NET, .NET Remoting, Timers, built-in delegate BeginInvoke methods - all of them internally make use of the ThreadPool. So this means, that the thread pool does not belong to your application alone, but is being used and loaded by the framework itself.

2. The tasks queued up using QueueUserWorkItem can remain in a wait state for a long time, but the actual work required for each task should be small and fast, in order to avoid blocking a pool thread for long while performing the task.

3. Once a task is submitted to the queue, there is no control over the thread that executes it, no way to get the state or set the thread's priority. It is not possible to create named threads using the ThreadPool class and therefore there is no way to track a particular thread. It is therefore best to use the ThreadPool only when you want to run independent tasks asynchronously, with no need to prioritize them, or make sure they run in a particular order.

4. One ThreadPool is created per process - which can possibly have multiple AppDomains. So, if one application using the ThreadPool behaves badly, another application in the same process runs the risk of getting affected!

5. It is critical to remember to write the code in such a way that deadlocks do not occur. While this is the very basic care one should take while using threads, it becomes pronounced with the use of ThreadPool because of point number 1 mentioned above. The catch is explained below:

Let us say there is a method called "ConnectTo" that opens and closes a socket using the "BeginConnect" and "EndConnect" methods of .NET, which internally make use of the ThreadPool. There is a task "WriteToSocket" that is submitted to the queue, to make use of the ThreadPool. Now imagine there are 2 such tasks created, with the pool size being 2. The situation is that the two threads in the ThreadPool are already blocked by the "WriteToSocket" tasks. Each of these tasks, however, calls "ConnectTo", which requires a thread from the ThreadPool in order to execute the asynchronous "BeginConnect" method. If you get the picture, what has happened in this case is the famous deadlock situation.
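A stripped-down sketch of that shape (the names and the socket details are illustrative, not the actual code):

```csharp
using System;

class SocketWorker
{
    delegate void ConnectDelegate();

    static void Connect()
    {
        // Opens and closes the socket (details omitted).
    }

    // Queued via ThreadPool.QueueUserWorkItem. With a pool size of 2 and
    // two of these tasks occupying both threads, no pool thread is left to
    // service BeginInvoke, so both WaitOne calls block forever: a deadlock.
    public static void WriteToSocket(object state)
    {
        ConnectDelegate d = new ConnectDelegate(Connect);
        IAsyncResult ar = d.BeginInvoke(null, null); // also needs a pool thread
        ar.AsyncWaitHandle.WaitOne();
        d.EndInvoke(ar);
    }
}
```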

Some rules of thumb to remember to avoid a situation as above:

a. Do not create any class whose synchronous methods wait for asynchronous functions, since this class could be called from a thread on the pool.

b. Do not use any class inside an asynchronous function if the class blocks waiting for asynchronous functions

c. Do not ever block a thread executed on the pool that is waiting for another function on the pool - so basically know which of the .NET built-in functions make use of the ThreadPool!

Friday, May 04, 2007

Showing a Sort Order Indicator in Header of GridView control of ASP .NET 2.0

It doesn't take a whole lot of effort to provide sorting on columns in a GridView control of ASP .NET 2.0. However, it does not have built-in support for showing an icon or an image to indicate the column on which the table is sorted and the order in which it is sorted.

To enable sorting and to show a sort order indicator in the column header of a GridView, the following things need to be done:

1. In the .aspx page, define event handlers for "OnSorting" and "OnRowCreated" events. The OnSorting event gets called whenever the grid is sorted by clicking on a column header and OnRowCreated event is called when a row in the grid gets created. Also set the "AllowSorting" attribute to true. The following code snippet shows the attributes of the GridView control:

AllowSorting="True" OnSorting="gridViewInvoiceSearchResult_OnSort" OnRowCreated="gridViewInvoiceSearchResult_OnRowCreated"

2. Write the event handlers for OnSorting and OnRowCreated events in the code-behind page.

3. The GridViewSortEventArgs parameter passed to the OnSorting event handler contains the sort direction (ascending or descending) and the sort expression, used to identify the column on which the sorting is done (a column in the GridView can be associated with a SortExpression when defining the binding of the column to a particular data field). Store these values in member variables declared within the code-behind page.
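A minimal OnSorting handler for this step might look like this (in a real page the sort state would typically be kept in ViewState so that it survives postbacks):

```csharp
// Member variables holding the current sort state.
private string m_SortExp = String.Empty;
private SortDirection m_SortDirection = SortDirection.Ascending;

protected void gridViewInvoiceSearchResult_OnSort(object sender, GridViewSortEventArgs e)
{
    // Remember which column was clicked and in which direction.
    m_SortExp = e.SortExpression;
    m_SortDirection = e.SortDirection;

    // Re-bind the grid with the new sort order here (data access omitted).
}
```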

4. In the OnRowCreated event, use the SortExpression and SortDirection values stored earlier to determine which image to add and which column to add it to. The following code snippet shows the OnRowCreated event handler:

protected void gridViewInvoiceSearchResult_OnRowCreated(object sender, GridViewRowEventArgs e)
{
    // Check whether the row is a header row
    if (e.Row.RowType == DataControlRowType.Header)
    {
        // m_SortExp is the sort expression stored in the OnSorting event handler
        if (String.Empty != m_SortExp)
        {
            // Based on the sort expression, find the index of the sorted column
            int column = GetSortColumnIndex(this.gridViewInvoiceSearchResult, m_SortExp);
            if (column != -1)
            {
                // Add an image to the sorted column header depending on the sort direction
                AddSortImage(e.Row, column, m_SortDirection);
            }
        }
    }
}

// Method to get the index of the sorted column based on SortExpression
private int GetSortColumnIndex(GridView gridView, String sortExpression)
{
    if (gridView == null)
        return -1;

    foreach (DataControlField field in gridView.Columns)
    {
        if (field.SortExpression == sortExpression)
        {
            return gridView.Columns.IndexOf(field);
        }
    }
    return -1;
}

// Method to add the sort icon to the column header
private void AddSortImage(GridViewRow headerRow, int column, SortDirection sortDirection)
{
    if (-1 == column)
    {
        return;
    }

    // Create the sorting image based on the sort direction.
    Image sortImage = new Image();
    if (SortDirection.Ascending == sortDirection)
    {
        sortImage.ImageUrl = "~/down.gif";
        sortImage.AlternateText = "Ascending order";
    }
    else
    {
        sortImage.ImageUrl = "~/up.gif";
        sortImage.AlternateText = "Descending order";
    }

    // Add the image to the appropriate header cell.
    headerRow.Cells[column].Controls.Add(sortImage);
}

Sunday, April 29, 2007

Retrieving multiple rows from Database Tables with ADO .NET

With ADO .NET there are two ways of retrieving multiple rows from a database table:

1. Using SqlDataAdapter to generate DataSet or DataTable
2. Using SqlDataReader to provide a read-only, forward-only data stream

The choice between the two approaches is one between performance and functionality. DataReader gives better performance, while DataAdapter approach provides additional functionality and flexibility. Following is a list of points telling you when to use which approach.

Use DataSet with SqlDataAdapter when:

1. You require a disconnected, memory-resident cache of data
2. You want to update some or all of the retrieved rows and use batch update facilities of the DataAdapter
3. You want to bind the data with a control that requires a data source that supports IList

Good-to-know points about SqlDataAdapter:

1. Fill method of SqlDataAdapter opens and closes the database connection for you. So you don't need to do connection management.
2. However, if you require the connection to remain open after the Fill method (for example, for subsequent commands), open the connection yourself before calling Fill; Fill will then reuse it, avoiding an unnecessary close and re-open of the connection.
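A sketch of that pattern (the connection string and query are placeholders):

```csharp
using (SqlConnection conn = new SqlConnection(connectionString))
{
    SqlDataAdapter adapter =
        new SqlDataAdapter("SELECT * FROM Customers", conn);

    // Open the connection ourselves; Fill will then reuse it
    // instead of opening and closing it on every call.
    conn.Open();

    DataSet ds = new DataSet();
    adapter.Fill(ds);

    // ... run further commands on the still-open connection here ...
} // the using block closes the connection
```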

Use SqlDataReader when:

1. You are dealing with volumes of data too large to maintain in a single in-memory cache.
2. You want to simply read the data and present to the user (read-only data)
3. You want to bind the data with a control that requires a data source that implements IEnumerable

Good-to-know points about SqlDataReader:

1. Remember to call Close on SqlDataReader as soon as possible, since the connection to the database remains open as long as the data reader is active.
2. The database connection can be closed implicitly by passing the CommandBehavior.CloseConnection value to the ExecuteReader method - it ensures that the connection is closed when the data reader is closed.
3. Use typed accessor methods (GetInt32, GetString etc.) if the column's data type is known while reading data from the reader. This reduces the amount of type conversion required, improving performance.
4. If you want to stop pulling data from the server to client, call Cancel method on SqlDataReader before calling Close method. Cancel method ensures that the data is discarded. Calling Close directly however, will make the reader pull all the data before closing the stream.
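The points above combine into the following sketch (the connection string and query are placeholders):

```csharp
SqlConnection conn = new SqlConnection(connectionString);
SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn);
conn.Open();

// CloseConnection ties the connection's lifetime to the reader's.
using (SqlDataReader reader =
       cmd.ExecuteReader(CommandBehavior.CloseConnection))
{
    while (reader.Read())
    {
        // Typed accessors avoid per-value type conversion.
        int id = reader.GetInt32(0);
        string name = reader.GetString(1);
    }
} // disposing the reader also closes the connection
```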

The main advantage of the SqlDataReader approach over the DataSet approach is that the former avoids the object creation overhead associated with a DataSet. Note that the DataSet object creation will result in creation of many other objects like DataTable, DataRow, DataColumn and the Collection objects used to hold all these sub-objects.

Saturday, April 28, 2007

Exception Handling Best Practices

Anyone who has ever been a programmer knows about exceptions, and also knows that there is no exception to having exceptions in one's code! Fresh programmers, however, often overlook the importance of having an exception management framework in their applications. A proper, well-defined exception handling framework, along with a logging framework, is absolutely essential for making sure that your program or application does not spring surprises on you later!

So, what are the best ways of handling Exceptions in the code? What should one do and what should one not do as regards Exceptions?

1. Catch Exceptions the right way

In C# or in Java, the way to handle exceptions is to put the code that could generate the exception into a try..catch..finally block. If there are multiple catch blocks, make sure that these are ordered from the most specific type to the most generic type. This ensures that the catch block for the most specific type of exception is considered first for any given exception, thus guaranteeing a specific treatment to that exception if required.
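For example, in C# (where the compiler actually rejects the reverse ordering, since the later blocks would be unreachable):

```csharp
try
{
    // code that could generate the exception
}
catch (System.IO.FileNotFoundException ex)
{
    // most specific type first
}
catch (System.IO.IOException ex)
{
    // broader I/O failures next
}
catch (Exception ex)
{
    // most generic type last
}
finally
{
    // clean-up that must run either way
}
```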

2. Catch Exceptions only if you know what to do with them!

Most of us, most of the time, blindly write try..catch blocks just because the IDE makes us do it. Because of that we end up having catch blocks as below:

catch (Exception exp)
{
    throw exp;
}

Now, the thing to note here is that you have not done anything by catching that exception; it has just been rethrown from the catch block (worse, in C#, "throw exp;" resets the stack trace, whereas a bare "throw;" preserves it). If you find such lines in your code, the first thing you should do is simply remove those catch blocks. Instead, let these exceptions propagate further to the caller methods, which perhaps know better what to do with them.

Catch an exception only if:

a. You want to log that exception - The exception message and the stack trace when logged give a good picture to the developer as to what has gone wrong.
b. You want to write some clean-up code - In this case you can rethrow the same exception if no extra information is needed
c. You want to write some code to recover from the exception
d. You want to add some relevant information to the exception - This is particularly necessary in multi-tier applications or end-user applications where some nice user-friendly message has to be shown to the user. As an exception is propagated up the call-stack, often the information associated with it becomes less relevant. In such cases, wrap the actual exception in a custom, application specific exception. Remember to store the actual exception in the "InnerException" property of .NET Exception class (This is possible in Java too). This ensures that the actual exception is never lost and can always be retrieved if required.

3. Use Exceptions sparingly

Throwing and catching exceptions is expensive. Never use exceptions to control the normal flow of operations in your code.
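To illustrate (a Java sketch with hypothetical helpers): the two methods below give the same answer, but the first drags exception machinery into the normal flow while the second uses a plain check:

```java
public class ControlFlow {
    // Anti-pattern: using an exception to detect a non-numeric string.
    public static boolean isNumberViaException(String s) {
        try {
            Integer.parseInt(s);
            return true;
        } catch (NumberFormatException e) {
            return false; // the exception IS the normal flow here - avoid this
        }
    }

    // Preferred: an ordinary check, no exception machinery in the common path.
    // (Simplified: ignores overflow, accepts an optional leading minus sign.)
    public static boolean isNumberViaCheck(String s) {
        if (s == null || s.isEmpty()) return false;
        int start = (s.charAt(0) == '-' && s.length() > 1) ? 1 : 0;
        for (int i = start; i < s.length(); i++) {
            if (!Character.isDigit(s.charAt(i))) return false;
        }
        return true;
    }
}
```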

4. Use Custom Exceptions

As mentioned in point 2 d., one of the things you can do after catching an exception is to wrap it in a more understandable exception carrying some extra information. This is where custom exception classes come in - exceptions specific to your application. In .NET, this can be done by extending the "ApplicationException" class (though note that current Microsoft design guidelines suggest deriving custom exceptions directly from "Exception" instead). The entire hierarchy of application exceptions can be bundled into a single assembly so that it can be referenced everywhere in your application.
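A minimal sketch of such a hierarchy in Java (the class names are hypothetical; in .NET the common base would sit in its own assembly as described above):

```java
public class CustomExceptions {
    // A small application-specific hierarchy: one common base class that the
    // whole application can reference, with more specific subtypes beneath it.
    public static class AppException extends Exception {
        public AppException(String message) { super(message); }
        public AppException(String message, Throwable cause) { super(message, cause); }
    }

    public static class DataAccessException extends AppException {
        public DataAccessException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    public static class ValidationException extends AppException {
        public ValidationException(String message) { super(message); }
    }

    // Callers can handle the whole family through the common base type.
    public static String describe(AppException e) {
        return e.getClass().getSimpleName() + ": " + e.getMessage();
    }
}
```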

A comprehensive article on Exception Management in .NET is available at http://msdn2.microsoft.com/en-us/library/ms954599.aspx

Friday, April 27, 2007

Simultaneous calls to server from UpdatePanel

The simplest way of Ajaxifying your ASP .NET web page using the ASP .NET Ajax library is to use its "UpdatePanel" control. It allows parts of your web page to be refreshed without a whole-page reload. The control causes asynchronous postbacks to the server; on receiving a response, only the components inside the UpdatePanel are re-rendered. With this asynchronous approach, the user interface remains active even after a request to the server has been initiated. This means the user can click other active buttons on the page and initiate further asynchronous calls (assuming those actions are also Ajaxified).

Does this mean that I can place multiple UpdatePanels on my web page, let the user perform many actions simultaneously, resulting in many asynchronous calls to the server and different parts of the page getting refreshed as each Ajax call completes? Well, the answer is NO, and to understand why, it is important to understand how an asynchronous postback is (and is not) different from a regular postback.

The asynchronous postback made by an UpdatePanel is exactly the same as a regular postback except for one important thing: the rendering. Asynchronous postbacks go through the same life-cycle events as regular pages. Only at the render phase do things differ, since only the components inside the UpdatePanel are re-rendered instead of the whole page.

What this means for us is: Assume that there are two UpdatePanels on a web page, with one button each. The user clicks Button 1 in the first UpdatePanel and the processing starts. Since the UI remains active after the first Ajax call, the user clicks Button 2 in the second UpdatePanel - assuming that both calls will go through and both updates will appear in due course. However, what actually happens is that the second button click invalidates the asynchronous postback initiated by the first. Effectively, only the second (and final) button click gets processed, instead of both!

Note: While this is the story of the ASP .NET Ajax UpdatePanel, another interesting thing is the limit of two simultaneous connections to a server imposed by most browsers today. This means that even with plain hand-rolled Ajax, only two requests can be processed simultaneously; all other requests are queued by the browser until a connection becomes free.

Sunday, April 22, 2007

New version of Visual Studio - code named "Orcas"

The successor to Visual Studio 2005, code named "Orcas" (named after an island - this knowledge courtesy of Wikipedia), had its Beta 1 release on April 19, 2007. This new Visual Studio version is Vista compatible and promises a host of new features, enhancements and bug fixes (ahem!) compared to Visual Studio 2005.

I have of course yet to try it out, but here are some of the features that caught my attention:

1. Support for WPF, WCF and a host of other .NET 3.0 technologies
2. Full integration with Visual Studio Tools for Office (VSTO)
3. Support for LINQ (Language Integrated Query)
4. Integrated support for ASP .NET Ajax Extensions

One sure thing to check out in the coming days for all the Microsoft Developers out there!

Thursday, April 05, 2007

Binding ASP .NET 2.0 GridView to a complex object using ObjectDataSource

The ObjectDataSource provided with ASP .NET 2.0 allows a GridView component to be bound to a list of user-defined objects. The columns in the GridView can be defined as "BoundFields", binding each column to a particular property of the user-defined object.

However, this works fine only as long as the object is flat. In the current project I am working on, we had the class structure given below:

class ShoppingCartItem
{
    private int requestedQuantity;   // exposed through a public property
    private ProductInfo product;
}

class ProductInfo
{
    private string productName;      // exposed through a public property
    private string brandName;
}

We wanted to show the Product Name, Brand Name and Requested Quantity of each product in the user's Shopping Cart in tabular format using the GridView control, binding the GridView through an ObjectDataSource to a list of ShoppingCartItem instances. The problem was that the product name and brand name were not directly accessible from the ShoppingCartItem class; they were properties of the ProductInfo class, which was itself a member of ShoppingCartItem.

Solutions:

1. This particular problem could have been solved using the TemplateField and DataBinder.Eval approach, but that meant changing a lot of things in the page.

2. Instead, a clean, quick and very simple solution was to add wrapper properties to the ShoppingCartItem class that return the product name and brand name from the contained ProductInfo instance. The GridView columns could then be bound to these wrappers just like any direct property.
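The wrapper idea can be sketched as follows - a Java rendition of the classes above, with wrapper accessors delegating to the contained ProductInfo (the names mirror the C# sketch; the actual data binding, of course, happens in ASP .NET):

```java
public class CartSketch {
    public static class ProductInfo {
        private final String productName;
        private final String brandName;
        public ProductInfo(String productName, String brandName) {
            this.productName = productName;
            this.brandName = brandName;
        }
        public String getProductName() { return productName; }
        public String getBrandName() { return brandName; }
    }

    public static class ShoppingCartItem {
        private final int requestedQuantity;
        private final ProductInfo product;
        public ShoppingCartItem(int requestedQuantity, ProductInfo product) {
            this.requestedQuantity = requestedQuantity;
            this.product = product;
        }
        public int getRequestedQuantity() { return requestedQuantity; }
        // Wrapper accessors: expose the nested object's values directly on
        // the item, so a flat binder can reach them without drilling down.
        public String getProductName() { return product.getProductName(); }
        public String getBrandName() { return product.getBrandName(); }
    }
}
```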