WCF Tracing

After deploying an application, a proper troubleshooting policy should be in place to support the maintenance phase of the project life cycle. Here we will look at the tools Microsoft provides within Visual Studio to manage, configure, debug and troubleshoot a WCF service after deployment. If you are unfamiliar with WCF, please read my WCF Tutorial first.

WCF Tracing can track all the events, or only specified events, in a program. Tracing is built upon the System.Diagnostics namespace of the .NET Framework; the Debug and Trace classes under this namespace are responsible for debugging and tracing.

Tracing is turned off by default and needs to be enabled before it can be used. We can enable tracing in two ways: through code or through the configuration file. Enabling tracing in configuration is usually the better option, since we can easily turn it on and off as required. When debugging is no longer required we should turn tracing off.

Enabling Tracing

The following code snippet will enable tracing through the configuration file:

<system.diagnostics>
  <trace autoflush="true" />
  <sources>
    <source name="System.ServiceModel"
            switchValue="All">
      <listeners>
        <add name="TraceListeners"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\Log\trace.svclog"/>
      </listeners>
    </source>
  </sources>
</system.diagnostics>

In this example, the name and type of the trace listener are specified. The listener is named TraceListeners, and the standard .NET Framework trace listener type, System.Diagnostics.XmlWriterTraceListener (whose output is the native format for the trace viewing tool), is added as the type. The initializeData attribute sets the name of the log file for that listener; here the fully qualified file name “c:\Log\trace.svclog” is used.
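
For completeness, the other option mentioned above, enabling tracing from code, is sketched below. The built-in System.ServiceModel source is normally configured in the config file as shown above; for your own diagnostic output, however, a trace source can be wired up programmatically. This is a minimal sketch, and the source name and file path are arbitrary examples:

using System.Diagnostics;

class TracingSetup
{
    static void Main()
    {
        // Create a trace source and attach an XmlWriterTraceListener,
        // mirroring what the configuration above does for System.ServiceModel.
        var source = new TraceSource("MyApplicationTraces", SourceLevels.All);
        var listener = new XmlWriterTraceListener(@"c:\Log\apptrace.svclog")
        {
            Name = "TraceListeners"
        };
        source.Listeners.Add(listener);
        Trace.AutoFlush = true;

        source.TraceEvent(TraceEventType.Information, 0, "Tracing is enabled.");
        source.Flush();
        source.Close();
    }
}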

Tracing Levels:
The tracing level is controlled by the switchValue setting. The available tracing levels are described below:

  1. Off: No traces are emitted.
  2. Critical: Tracks events such as out-of-memory exceptions, stack overflow exceptions, application start errors, system hangs and poison messages.
  3. Error: All exceptions are logged. Indicates that unexpected processing has happened: the application was not able to perform a task as expected, but it is still up and running.
  4. Warning: Tracks events such as timeouts exceeded, credentials rejected, throttling exceeded and the receiving queue nearing capacity. Indicates that a possible problem has occurred or may occur; the application still functions correctly, but may not continue to work properly.
  5. Information: Tracks events such as channels and endpoints being created, messages entering/leaving the transport, configuration being read, and other general helpful information. Records important and successful milestones of application execution, regardless of whether the application is working properly or not.
  6. Verbose: Used for debugging or application optimization. Low-level events for both user code and servicing are emitted.
  7. ActivityTracing: Tracks transfers, activity boundaries and start/stop events. Records flow events between processing activities and components.
  8. All: All of the listed events are emitted.

Trace Sources:

WCF defines a trace source for every assembly. Traces generated within an assembly are accessed by the listeners which are defined for that source. The below trace sources are defined:

  • System.ServiceModel: Logs all stages of WCF processing: whenever the configuration is read, a message is processed in transport, security processing occurs, a message is dispatched to user code, and so on.
  • System.ServiceModel.MessageLogging: Logs all messages that flow through the system.
  • System.IdentityModel: Logs data for authentication and authorization.
  • System.ServiceModel.Activation: Logs the activity of creating and managing service hosts.
  • System.IO.Log: Logging for the .NET Framework interface to the Common Log File System (CLFS).
  • System.Runtime.Serialization: Logs when objects are read or written.
  • CardSpace: Logs messages related to any CardSpace identity processing that occurs within WCF context.


WCF Performance Tuning

WCF was introduced to overcome the constraints of previous distributed technologies such as ASP.NET Web Services, WSE, .NET Enterprise Services and .NET Remoting, and to provide a performance boost in addition. For an introduction to WCF please read my first WCF article, WCF Tutorial.

Performance is a central goal for any web site or application, especially since Google now includes site responsiveness as a factor in its ranking algorithm. For ASP.NET optimization tips, please see my article titled 50 Tips to Boost ASP.NET Performance. In this article I will discuss WCF performance tuning techniques.

Use DataContractSerializer:

Serialization is the process of converting an object instance into a portable and transferable format. XML serialization is popular for its interoperability, while binary serialization is more useful for transferring objects between two .NET applications.

System.Runtime.Serialization.DataContractSerializer is designed for WCF but can also be used for general serialization. The DataContractSerializer has some benefits over XmlSerializer (see the sketch after this list):

  1. XmlSerializer can serialize only properties but DataContractSerializer can serialize fields in addition to  properties.
  2. XmlSerializer can serialize only public members but DataContractSerializer can serialize not only public members but also private and protected members.
  3. In performance terms,  DataContractSerializer is approximately 10% faster than XmlSerializer.
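
As a quick illustration of points 1 and 2 above, here is a minimal sketch (the Order type and its members are hypothetical) showing DataContractSerializer emitting a private field marked with [DataMember], something XmlSerializer cannot do:

using System;
using System.IO;
using System.Runtime.Serialization;
using System.Text;

[DataContract]
public class Order
{
    [DataMember]
    private int orderId;              // private field, still serialized

    [DataMember]
    public string Customer { get; set; }

    public Order(int id, string customer)
    {
        orderId = id;
        Customer = customer;
    }
}

class Program
{
    static void Main()
    {
        var serializer = new DataContractSerializer(typeof(Order));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, new Order(42, "Contoso"));
            // Dump the serialized XML so the private field can be seen in the output
            Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
        }
    }
}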

Select proper WCF binding:

System-provided WCF bindings are used to specify the transport protocols, encoding, and security details required for clients and services to communicate with each other. The following system-provided WCF bindings are available:

1. BasicHttpBinding:

A binding suitable for communication with WS-Basic Profile conformant Web Services, such as ASMX-based services. This binding uses HTTP as the transport and Text/XML for message encoding.

2. WSHttpBinding:

A secure and interoperable binding   suitable for non-duplex service contracts.

3. WSDualHttpBinding:

A secure and interoperable binding  suitable for duplex service contracts or communication through SOAP intermediaries.

4. WSFederationHttpBinding:

A secure and interoperable binding that supports the WS-Federation protocol, enabling organizations that are in a federation to efficiently authenticate and authorize users.

5. NetTcpBinding:

A secure and optimized binding suitable for cross-machine communication between WCF applications

6. NetNamedPipeBinding:

A  reliable, secure, optimized binding suitable for on-machine communication between WCF applications.

7. NetMsmqBinding:

A queued binding suitable for cross-machine communication between WCF applications.

8. NetPeerTcpBinding:

A binding which enables secure, multi-machine communication.

9. MsmqIntegrationBinding:

A binding that is suitable for cross-machine communication between a WCF application and existing MSMQ applications.

In this context Juval Lowy has presented a nice decision-making diagram:

[Diagram: WCF binding selection decision tree]

It should be noted that WCF also allows us to define our own custom bindings.
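
As a small illustration of how the binding choice surfaces in code (the contract, addresses and port numbers below are hypothetical), the same contract can be consumed over an interoperable HTTP binding or a faster binary TCP binding simply by swapping the binding passed to the ChannelFactory:

using System;
using System.ServiceModel;

// Hypothetical service contract used only for illustration.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string GetStatus(int orderId);
}

class BindingSelectionDemo
{
    static void Main()
    {
        // Interoperable HTTP endpoint, e.g. for non-WCF callers.
        var httpFactory = new ChannelFactory<IOrderService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/OrderService"));

        // Binary TCP endpoint for WCF-to-WCF, cross-machine calls.
        var tcpFactory = new ChannelFactory<IOrderService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8081/OrderService"));

        IOrderService httpClient = httpFactory.CreateChannel();
        IOrderService tcpClient = tcpFactory.CreateChannel();

        Console.WriteLine(httpClient.GetStatus(1));
        Console.WriteLine(tcpClient.GetStatus(1));

        ((IClientChannel)httpClient).Close();
        ((IClientChannel)tcpClient).Close();
        httpFactory.Close();
        tcpFactory.Close();
    }
}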

Use Tracing:

Tracing can track all the events, or only specified events, in a program. By default it is off. For debugging purposes we have to enable it explicitly, either through code or (preferably) through a config file setting. If debugging is not required we should disable tracing. For more details see my article titled “Tracing in WCF”.

Close Proxy:

The proxy represents a service contract. It provides the same operations as the service’s contract, along with some additional methods for managing the proxy life cycle and the connection to the service. It is a recommended best practice to always close the proxy when the client is finished using it. When we close the proxy, the session with the service is terminated and the connection is closed, which frees the server to process new requests.

It should be noted that calling a proxy within a using statement (see the code snippet below) is actually not the optimal or safest method.

using (ServiceProxy proxyClient = new ServiceProxy())
{
    proxyClient.SomeFunction();
}

The above code will be translated to something like the following:
ServiceProxy proxyClient = new ServiceProxy();
try
{
    proxyClient.SomeFunction();
}
finally
{
    if (proxyClient != null)
        ((IDisposable)proxyClient).Dispose();
}

The problem with this method is that proxyClient.Dispose() will throw an exception when the proxy is in a faulted state. So, to close the proxy even in the faulted state, the suggested approach is shown below:

ServiceProxy proxyClient = new ServiceProxy();
try
{
    proxyClient.SomeFunction();
    proxyClient.Close();
}
finally
{
    if (proxyClient.State != System.ServiceModel.CommunicationState.Closed)
    {
        proxyClient.Abort();
    }
}
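
The same pattern can be wrapped up once as an extension method so every call site stays small. This is only a sketch, not part of the WCF API itself; it works for any generated proxy because proxies implement ICommunicationObject:

using System;
using System.ServiceModel;

public static class ProxyExtensions
{
    public static void CloseSafely(this ICommunicationObject proxy)
    {
        try
        {
            if (proxy.State != CommunicationState.Faulted)
                proxy.Close();      // normal, graceful close
            else
                proxy.Abort();      // faulted: Close() would throw, so abort
        }
        catch (CommunicationException)
        {
            proxy.Abort();
        }
        catch (TimeoutException)
        {
            proxy.Abort();
        }
    }
}

With this in place, the client can simply call proxyClient.CloseSafely() in its finally block instead of repeating the state check everywhere.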

Throttling:

Throttling is a way of mitigating potential denial of service (DoS) attacks. Using ServiceThrottlingBehavior we can smooth loading and control resource allocation on the server. In WCF, there are three service-level throttles controlled by ServiceThrottlingBehavior.

1. MaxConcurrentCalls: The maxConcurrentCalls attribute lets us specify the maximum number of simultaneous calls for a service. When the maximum number of simultaneous calls has been met and a new call is placed, the call is queued and will be processed when the number of simultaneous calls is below the specified maximum number. The default value is 16.

2. MaxConcurrentSessions: The maxConcurrentSessions attribute specifies the maximum number of connections to a single service. Channels below the specified limit will be active/open. It should be noted that this throttle is effectively disabled for non-sessionful channels (such as the default BasicHttpBinding). The default value is 10.

3. MaxConcurrentInstances: The maxConcurrentInstances attribute specifies the maximum number of simultaneous service instances. If a new instance request is received when the maximum number has already been reached, the request is queued up and will be completed when the number of instances drops below the specified maximum. The default value is the total of the two attributes maxConcurrentSessions and maxConcurrentCalls.

From general feedback it has been noted that the default settings for the above three attributes are very conservative and insufficient in real production scenarios, so developers often need to increase those defaults.

Hence Microsoft has increased the default settings in WCF 4.0 as follows:

1. MaxConcurrentSessions: default is 100 * ProcessorCount

2. MaxConcurrentCalls: default is 16 * ProcessorCount

3. MaxConcurrentInstances: default is the total of MaxConcurrentSessions and MaxConcurrentCalls

We now have a new multiplier, ProcessorCount, in these settings. The main reason for this is so that we do not need to change the settings when deploying from a low-end system to a multi-processor system. The base value for MaxConcurrentSessions has also been increased from 10 to 100.
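
For reference, the same throttles can be set in code on the service host before it is opened. This is a sketch only; the contract, address and values are examples and should be tuned against real load tests:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IPingService
{
    [OperationContract]
    string Ping();
}

public class PingService : IPingService
{
    public string Ping() { return "pong"; }
}

class ThrottlingDemo
{
    static void Main()
    {
        var host = new ServiceHost(typeof(PingService),
            new Uri("http://localhost:8080/PingService"));
        host.AddServiceEndpoint(typeof(IPingService), new BasicHttpBinding(), "");

        // Reuse an existing throttling behavior if one was added via config,
        // otherwise add a new one.
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Mirror the WCF 4.0 defaults described above (example values only).
        throttle.MaxConcurrentCalls = 16 * Environment.ProcessorCount;
        throttle.MaxConcurrentSessions = 100 * Environment.ProcessorCount;
        throttle.MaxConcurrentInstances =
            throttle.MaxConcurrentCalls + throttle.MaxConcurrentSessions;

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}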

Quotas:

There are three types of quotas in WCF transports:

1. Timeouts: Timeouts are used to mitigate DoS attacks which rely on tying up resources for an extended period of time.

2. Memory allocation limits: Memory allocation limits prevent a single connection from exhausting the system resources and denying service to other connections.

3. Collection size limits: Collection size limits restrict the consumption of resources which indirectly allocate memory or are in limited supply.

As per MSDN, the transport quotas available for the standard WCF transports (HTTP(S), TCP/IP, and named pipes) are as follows:

  1. CloseTimeout (TimeSpan, minimum 0, default 1 min): Maximum time to wait for a connection to close before the transport will raise an exception.
  2. ConnectionBufferSize (Integer, minimum 0, default 8 KB): Size in bytes of the transmit and receive buffers of the underlying transport. Increasing this buffer size can improve throughput when sending large messages.
  3. ConnectionLeaseTimeout (TimeSpan, minimum 0, default 5 min): Maximum lifetime of an active pooled connection. After the specified time elapses, the connection will close once the current request is serviced. This setting only applies to pooled connections.
  4. IdleTimeout (TimeSpan, minimum 0, default 2 min): Maximum time a pooled connection can remain idle before being closed. This setting only applies to pooled connections.
  5. ListenBacklog (Integer, minimum 0, default 10): Maximum number of unserviced connections that can queue at an endpoint before additional connections are denied.
  6. MaxBufferPoolSize (Long, minimum 0, default 512 KB): Maximum amount in bytes of memory that the transport will devote to pooling reusable message buffers. When the pool cannot supply a message buffer, a new buffer is allocated for temporary use. Installations that create many channel factories or listeners can allocate large amounts of memory for buffer pools; reducing this buffer size can greatly reduce memory usage in this scenario.
  7. MaxBufferSize (Integer, minimum 1, default 64 KB): Maximum size in bytes of a buffer used for streaming data. If this transport quota is not set or the transport is not using streaming, then the quota value is the same as the smaller of the MaxReceivedMessageSize quota value and Integer.MaxValue.
  8. MaxInboundConnections (minimum 1, default 10): Maximum number of incoming connections that can be serviced. Increasing this collection size can improve scalability for large installations. Connection features such as message security can cause a client to open more than one connection; service administrators should account for these additional connections when setting this quota value. Connections waiting to complete a transfer operation can occupy a connection slot for an extended period of time, so reducing the timeouts for send and receive operations can free up connection slots more quickly by disconnecting slow and idle clients.
  9. MaxOutboundConnectionsPerEndpoint (Integer, minimum 1, default 10): Maximum number of outgoing connections that can be associated with a particular endpoint. This setting only applies to pooled connections.
  10. MaxOutputDelay (TimeSpan, minimum 0, default 200 ms): Maximum time to wait after a send operation for batching additional messages in a single operation. Messages are sent earlier if the buffer of the underlying transport becomes full. Sending additional messages does not reset the delay period.
  11. MaxPendingAccepts (Integer, minimum 1, default 1): Maximum number of channels that the listener can have waiting to be accepted. There is an interval of time between a channel completing an accept and the next channel beginning to wait to be accepted; increasing this collection size can prevent clients that connect during this interval from being dropped.
  12. MaxReceivedMessageSize (Long, minimum 0, default 64 KB): Maximum size in bytes of a received message, including headers, before the transport will raise an exception.
  13. OpenTimeout (TimeSpan, minimum 0, default 1 min): Maximum time to wait for a connection to be established before the transport will raise an exception.
  14. ReceiveTimeout (TimeSpan, minimum 0, default 1 min): Maximum time to wait for a read operation to complete before the transport will raise an exception.
  15. SendTimeout (TimeSpan, minimum 0, default 1 min): Maximum time to wait for a write operation to complete before the transport will raise an exception.

Without proper quota settings, exceptions may be raised which can cause the service to terminate. (A sketch of setting these quotas in code appears after the list of ReaderQuotas below.)

The ReaderQuotas property exposes additional quotas that can be used to restrict message complexity and provide protection from denial of service (DoS) attacks. These are:

  1. MaxDepth: The maximum nested node depth per read. The default is 32.
  2. MaxStringContentLength: The maximum string length allowed by the reader. The default is 8192.
  3. MaxArrayLength: The maximum allowed array length of data being received by WCF from a client. The default is 16384.
  4. MaxBytesPerRead: The maximum allowed bytes returned per read. The default is 4096.
  5. MaxNameTableCharCount: The maximum characters allowed in a table name. The default is 16384.
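
A sketch of setting the transport quotas and ReaderQuotas in code on a binding follows; the values shown are examples only, not recommendations:

using System;
using System.ServiceModel;

class QuotaSettings
{
    static BasicHttpBinding CreateBinding()
    {
        var binding = new BasicHttpBinding
        {
            OpenTimeout = TimeSpan.FromMinutes(1),
            CloseTimeout = TimeSpan.FromMinutes(1),
            SendTimeout = TimeSpan.FromMinutes(1),
            ReceiveTimeout = TimeSpan.FromMinutes(10),
            MaxReceivedMessageSize = 2 * 1024 * 1024,   // 2 MB instead of the 64 KB default
            MaxBufferSize = 2 * 1024 * 1024,
            MaxBufferPoolSize = 512 * 1024
        };

        // ReaderQuotas guard against overly complex messages (DoS protection).
        binding.ReaderQuotas.MaxDepth = 32;
        binding.ReaderQuotas.MaxStringContentLength = 64 * 1024;
        binding.ReaderQuotas.MaxArrayLength = 64 * 1024;
        binding.ReaderQuotas.MaxBytesPerRead = 4096;
        binding.ReaderQuotas.MaxNameTableCharCount = 16384;

        return binding;
    }
}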


Windows Communication Foundation (WCF) Tutorial

In this WCF tutorial we introduce the primary reasons for moving from other technologies to WCF, as well as how to get started using WCF. Prior to .NET 3.0 it was not easy to select a particular technology for communicating between systems, due to the number of technologies available from Microsoft. For example, users could have used Web Services to communicate between a Java-based application and a .NET application; WSE users could have taken advantage of some of the WS-* message protocols; MSMQ has the ability to queue messages, which helps intermittently connected solutions communicate; Enterprise Services (the successor of COM+) helps to build distributed applications easily; and .NET Remoting is a fast way to move messages between two .NET applications. All the above mentioned technologies have their pros and cons. Using WCF we can now take advantage of all the above distributed technologies in a unified manner, and WCF is the successor to all of these message distribution technologies.

Performance comparison between distributed technologies:

When we migrate distributed applications made with ASP.NET Web Services, WSE, .NET Enterprise Services and .NET Remoting to WCF, it will in almost all cases result in a performance boost:

  • ASP.NET Web Services: 25%–50% faster
  • .NET Remoting: 25% faster
  • WSE 2.0/3.0 implementations: 400% faster
  • .NET Enterprise Services: 100% faster, subject to the load

Whereas the other Microsoft distributed technologies run on most Windows operating systems without too many limitations, an application built with WCF can run only on Windows XP SP2, Windows Vista or Windows Server 2008.

In the next part of our WCF tutorial we take a more in-depth look at WCF and how to get started using it.

Programming Model

A WCF service is made up of three parts: the service, one or more endpoints and a hosting environment.

A service is basically a class, written in a .NET-compliant language, which contains some methods that are exposed through the WCF service. A service may have one or more endpoints; an endpoint is responsible for communication from the service to the client.
Endpoints have three parts, known as the ‘ABC’ of WCF: ‘A’ for Address, ‘B’ for Binding and ‘C’ for Contracts.

Address: The address specifies where to find the service.

Binding: The binding specifies how to interact with the service.

Contracts: The contract specifies what the service offers and how it is implemented.

Finally, there is a hosting environment in which the service is contained.
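
To make the ABC concrete, here is a minimal self-hosting sketch; the contract, implementation, port and address are hypothetical and used only for illustration. The address, binding and contract are supplied explicitly when the endpoint is added:

using System;
using System.ServiceModel;

// Hypothetical contract and implementation, used only to illustrate the A/B/C parts.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string SayHello(string name);
}

public class GreetingService : IGreetingService
{
    public string SayHello(string name) { return "Hello, " + name; }
}

class SelfHost
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(GreetingService)))
        {
            host.AddServiceEndpoint(
                typeof(IGreetingService),                     // C: contract
                new BasicHttpBinding(),                       // B: binding
                "http://localhost:8080/GreetingService");     // A: address

            host.Open();                                      // the hosting environment
            Console.WriteLine("Service is running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}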

WCF bindings: System-provided WCF bindings are used to specify the transport protocols, encoding, and security details required for clients and services to communicate with each other. As per MSDN, the following are the system-provided WCF bindings:

BasicHttpBinding: A binding that is suitable for communication with WS-Basic Profile conformant Web Services such as ASMX-based services. This binding uses HTTP as the transport and Text/XML as the message encoding.

WSHttpBinding:
A secure and interoperable binding that is suitable for non-duplex service contracts.

WSDualHttpBinding: A secure and interoperable binding that is suitable for duplex service contracts or communication through SOAP intermediaries.

WSFederationHttpBinding: A secure and interoperable binding that supports the WS-Federation protocol, enabling organizations that are in a federation to efficiently authenticate and authorize users.

NetTcpBinding:
A secure and optimized binding suitable for cross-machine communication between WCF applications

NetNamedPipeBinding:
A secure, reliable, optimized binding that is suitable for on-machine communication between WCF applications.

NetMsmqBinding:
A queued binding that is suitable for cross-machine communication between WCF applications.

NetPeerTcpBinding :
A binding that enables secure, multi-machine communication.

MsmqIntegrationBinding: A binding that is suitable for cross-machine communication between a WCF application and existing MSMQ applications.

It should be noted that WCF also allows us to define our own custom bindings.

Creating a WCF service

In order to create a WCF service in Visual Studio, select WCF>WCF Service Library from the New Project dialog.

This will create several files in a new project. Apart from the App.config file there are two more files, Service1.cs and IService1.cs. Service1.cs is an implementation of the IService1.cs interface.

Working with the Interface:

We need a service contract to create a new service. The service contract is the interface of the service. It consists of all the methods which are exposed along with input parameter(s) and return value.

Interface IService1.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace WcfServiceLibrary1
{
    [ServiceContract]
    public interface IService1
    {

        [OperationContract]
        int Addition(int x,int y);

        [OperationContract]
        Customer GetDataUsingDataContract(Customer cust);

    }

    [DataContract]
    public class Customer
    {
        String name;
        string contactNo;

        [DataMember]
        public string Name
        {
            get { return name; }
            set { name = value; }
        }

        [DataMember]
        public string ContactNo
        {
            get { return contactNo; }
            set { contactNo = value; }
        }
    }

}

In the following class we can see how the interface IService1 is implemented.
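
The implementation class is not reproduced in this excerpt, but a minimal Service1.cs consistent with the interface above might look like the following sketch:

using System;

namespace WcfServiceLibrary1
{
    // A sketch of the implementation; the real generated file may differ slightly.
    public class Service1 : IService1
    {
        public int Addition(int x, int y)
        {
            return x + y;
        }

        public Customer GetDataUsingDataContract(Customer cust)
        {
            if (cust == null)
                throw new ArgumentNullException("cust");

            cust.Name = cust.Name + " (processed)";
            return cust;
        }
    }
}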


ASP.NET Hosting Guide

The first choice to consider in selecting an ASP.NET Host is to select between Shared Hosting, VPS Hosting, and Dedicated Server Hosting:

ASP.NET Shared Hosting
(typical price $5 – $25 per month)

In shared hosting the web host allocates a portion of a server to your hosting plan. Typically you are given a fixed amount of disk space and bandwidth, but no other resources (such as memory or CPU) are dedicated to your plan. Shared hosting will not allow any access to the root OS, therefore you will not be able to install programs (such as backup software) on the Windows Server OS that runs the site, nor will you be able to configure IIS to your exact needs (IIS tasks can often be handled by the control panel you will be given, but note that some IIS functions such as dynamic compression may not be available on your shared hosting plan).

There should also be at least one SQL Server database included in the package (be sure to check the cost of adding additional SQL Server databases, as this cost varies a lot between hosts). Also note the size of the database, which should be 100MB for a starter plan. Always check with your host whether you will be able to connect to your SQL Server database via SQL Server Management Studio (SSMS), as this is very important in administering the database. Note that when you set up your SQL Server database, be sure to set the Recovery Model to Simple and Auto-Shrink to True in Properties > Options of the database in SQL Server Management Studio. This will dramatically cut the size of your database.

Shared hosting will normally be administered through a control panel. Plesk is generally considered to be the best of the bunch, although it is often charged at a premium over other control panels. In my experience most control panels will do the job just fine, since you will only have the limited amount of control that any shared hosting can give over the operating system. Most of the time you will be connecting with your host by FTP’ing the files to the server.

Most shared hosts allow unlimited domains to be set up on your plan, although you should check the limit if you plan on running a large number of sites. Note that although hosts will allow you to register domains with them, you should always register the domain with a third party (such as GoDaddy), as it will be much easier to move hosts when the domain is with a separate registrar.

Shared hosting is the most difficult form of hosting to evaluate from just the specs provided by the hosting providers, since there are no resources dedicated to your individual plan and you actually have no idea how many sites are stuffed onto the server you share, what the server hardware is, or how the shared SQL Server is configured – in short, it is impossible to guess the quality of hosting you are being offered. The only way to evaluate shared hosting is to sign up for a month and check the performance of your ASP.NET web app. The paid version of Pingdom is a good tool for this as it provides a graph of the responsiveness of your app over time; if you don’t wish to use the paid version, you can enter a URL from your site in the Pingdom load time test (which is free) and then manually compare the response times.

One issue to consider in shared hosting is backups. Since you won’t have access to the OS you won’t be able to install third party backup software. In addition, SSMS may be restricted in making backups since they need to be stored on a physical directory on the server. This is one issue you will need to ask the web host provider about. Ideally you should be able to schedule a regular offsite backup of the site files and database.

ASP.NET VPS Hosting
(typical price $35 – $75 per month)

Virtual Private Server (VPS) hosting is the next step up on the ASP.NET Hosting ladder. Since the introduction of Hyper-V (Microsoft’s virtualization technology for Windows Server) ASP.NET VPS hosting has developed a lot and VPS Hosting is now the main entry point into ASP.NET Hosting. A VPS is essentially a fully isolated instance of a Windows Server operating system with its own dedicated resources which sits alongside other VPSs on the same server. Thus you would be fully isolated from other users on the same server and be able to Remote Desktop into your server and install apps such as backup utilities and setup IIS and SQL Server.

As such, a VPS hosting plan will provide a full complement of dedicated resources such as disk space, bandwidth, memory and CPU. Note, however, that the CPU allocation for a VPS is not always very clear – many providers simply quote the specs of the actual server CPU (such as 2 Dual Core Xeon processors) without quantifying how much is dedicated to each VPS plan (part of the problem is that there is no agreed-upon standard for quoting a CPU resource).

In reality the CPU allocation is often of little consequence, as most of the performance will come from the hard drive, the memory and the network on which the physical server sits. Since the quality of the network cannot be ascertained from the hosting provider’s specs, you should look at the memory and hard drive. In general, try to get the maximum amount of memory possible, especially if you will be running SQL Server. It is surprising how much memory can be consumed if it is available. On my previous (dedicated) host I ran an app on a 4GB RAM server, of which the system utilized about 3.6GB; when I moved host I rented a server with 8GB of memory, and 7.5GB is currently used by the exact same app, with almost all of the increase being taken by SQL Server (despite the app using caching where possible). Needless to say, the app’s responsiveness increased dramatically.

The disk specs are not always disclosed by the hosting provider; ideally this would be SAS storage, although SATA (hopefully 7200 RPM) would also do a good job.

VPS hosting is sometimes termed Virtual Dedicated Server (VDS) hosting, although normally it is the same principle. Just ensure that all the system resources are dedicated to your ‘VPS’.

ASP.NET Dedicated Server Hosting

A VPS plan will normally be suitable for a small to medium sized enterprise, but if your app(s) have a very high resource utilization you should consider a dedicated server. For example, I run SQL Server Performance.com, which serves about 30,000 pages a day, as well as DerivativesONE.com, which prices complex financial derivatives. Running both of these on a high-spec VPS gave generally poor results in terms of the responsiveness of both sites. A major limiting factor of a VPS is that although the memory is dedicated, the CPU and disk I/O are in reality shared with all the other VPS plans across the server farm. In most other respects a dedicated server is very similar to a VPS and should be evaluated in a similar manner.

One issue to be aware of with dedicated hosting is the contract. You should always look carefully at the contract clauses, especially those covering renewal – some of the smaller web hosting providers require signed annual contracts which automatically renew unless cancelled in advance.

ASP.NET Cloud Hosting

A comprehensive overview of ASP.NET hosting wouldn’t be complete without a mention of cloud hosting. Cloud hosting is the latest buzz in web hosting but unfortunately, in my experience, there is little to recommend it for hosting ASP.NET applications/sites. Most of the major cloud hosting providers, such as RackSpace Cloud or Amazon Web Services (AWS), are focused on providing solutions for developers building on the LAMP stack, and so there are major omissions in their cloud offerings – for example, neither RackSpace nor AWS offers an easy install of SQL Server.
Windows Azure is Microsoft’s Windows cloud hosting offering, but it is not fully compatible with existing ASP.NET web apps and so is more suitable for apps which are built from the ground up to run on Azure.

SQL Server

SQL Server is the most expensive part of the Windows stack which will run your web app, so it deserves careful attention. SQL Server is licensed on a per-processor basis, so when selecting your server specs always prefer a single more powerful Quad-Core to two Dual-Cores, since the Quad-Core will halve your SQL Server cost.

Also, if you are using a VPS or dedicated server, ensure that the provider offers SQL Server Web Edition. Several do not publicize this, since it costs only about $25 per month as opposed to over $300 for the Workgroup edition, but in most cases it works equally well in a web environment.

If your host charges by the size of the database, make sure it is optimized for size (Auto-Shrink on, Simple Recovery model, etc.).

General Issues across all Hosting Plans

There are a few other factors which are common to any hosting plan which you should consider:

  • .NET Specialization : You should always prefer hosts which focus solely on .NET hosting. Hosts who specialize in ASP.NET will be much better able to support your issues and are usually quicker to upgrade to the latest versions of the .NET platform.
  • Support : Difficult to determine in advance since all hosts claim fantastic performance, but one thing that always comes in useful is 24/7 chat, which works great for quickly getting support on small issues such as reboots.
  • Test for a month first : Don’t commit for a year before you have tried the host for at least a month. As noted above, some factors such as the speed of the hosting provider’s network are not disclosed (or can’t be quantified).
  • IP Addresses : If you are running multiple sites, it is advisable to have each on a separate IP address; check how many IPs the plan comes with and the cost of additional IPs.

Anything I missed out? Any experiences to share? Please leave a comment and let us know.

ASP.NET MVC 3 First Look

ASP.NET MVC 3 Preview 1 has just been released and is now available for download here. Microsoft is now using Preview as the name for its early releases which roughly corresponds with the old CTP release type.

The first thing to note is that MVC 3 is backwards compatible with MVC 2 and can be installed side-by-side with MVC 2 – so you can use the current distribution for testing without impacting your current MVC 2 projects.

MVC 3 View Enhancements

MVC 3 introduces two improvements to the MVC view engine:

  • Ability to select the view engine to use. MVC 3 allows you to select from any of your installed view engines in Visual Studio by selecting Add > View (including the newly introduced ASP.NET “Razor” engine):
    [Image: Add View dialog showing the view engine selection]
  • Support for the new ASP.NET “Razor” syntax. The newly previewed Razor syntax is a concise, lightweight syntax.

MVC 3 Control Enhancements

  • Global Filters : ASP.NET MVC 3 allows you to specify a filter which applies globally to all Controllers within an app by adding it to the GlobalFilters collection. The RegisterGlobalFilters() method is now included in the default Global.asax class template and so provides a convenient place to do this, since it will then be called by the Application_Start() method:
    void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleLoggingAttribute());
        filters.Add(new HandleErrorAttribute());
    }
    void Application_Start()
    {
        RegisterGlobalFilters(GlobalFilters.Filters);
    }
  • Dynamic ViewModel Property : MVC 3 augments the ViewData API with a new “ViewModel” property on Controller which is of type “dynamic” – and therefore enables you to use the new dynamic language support in C# and VB to pass ViewData items using a cleaner syntax than the current dictionary API.
    public ActionResult Index()
    {
        ViewModel.Message = "Hello World";
        return View();
    }
  • New ActionResult Types : MVC 3 includes three new ActionResult types and corresponding helper methods (see the sketch after this list):
    1. HttpNotFoundResult – indicates that a resource which was requested by the current URL was not found. HttpNotFoundResult will return a 404 HTTP status code to the calling client.
    2. PermanentRedirects – The HttpRedirectResult class contains a new Boolean “Permanent” property which is used to indicate that a permanent redirect should be done. Permanent redirects use an HTTP 301 status code. The Controller class includes three new methods for performing these permanent redirects: RedirectPermanent(), RedirectToRoutePermanent(), and RedirectToActionPermanent(). All of these methods return an instance of the HttpRedirectResult object with the Permanent property set to true.
    3. HttpStatusCodeResult – used for setting an explicit response status code and its associated description.
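
As a quick illustration (the controller, its in-memory product list and the action names below are hypothetical), the new HttpNotFound() and RedirectToActionPermanent() helpers can be used like this:

using System.Collections.Generic;
using System.Web.Mvc;

public class ProductsController : Controller
{
    // A hypothetical in-memory store, just to keep the example self-contained.
    private static readonly Dictionary<int, string> products =
        new Dictionary<int, string> { { 1, "Widget" }, { 2, "Gadget" } };

    public ActionResult Details(int id)
    {
        string name;
        if (!products.TryGetValue(id, out name))
            return HttpNotFound();              // new in MVC 3: sends a 404 to the client

        return Content("Product: " + name);     // placeholder for returning a real view
    }

    public ActionResult OldDetailsUrl(int id)
    {
        // New in MVC 3: 301 permanent redirect to the current action
        return RedirectToActionPermanent("Details", new { id });
    }
}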

MVC 3 AJAX and JavaScript Enhancements

MVC 3 ships with built-in JSON binding support which enables action methods to receive JSON-encoded data and then model-bind it to action method parameters.
For example, jQuery client-side JavaScript could define a “save” event handler which will be invoked when the save button is clicked on the client. The code in the event handler constructs a client-side JavaScript “product” object, with its field values retrieved from HTML input elements. Finally, it uses jQuery’s .ajax() method to POST a JSON-based request containing the product to a /theStore/UpdateProduct URL on the server:

$('#save').click(function () {
    var product = {
        ProdName: $('#Name').val(),
        Price: $('#Price').val()
    };

    $.ajax({
        url: '/theStore/UpdateProduct',
        type: "POST",
        data: JSON.stringify(product),
        dataType: "json",
        contentType: "application/json; charset=utf-8",
        success: function () {
            $('#message').html('Saved').fadeIn();
        },
        error: function () {
            $('#message').html('Error').fadeIn();
        }
    });
    return false;
});

MVC allows you to implement the /theStore/UpdateProduct URL on the server using an action method like the one below. The UpdateProduct() action method accepts a strongly-typed Product object as a parameter. MVC 3 can now automatically bind the incoming JSON post values to the .NET Product type on the server without you having to write any custom binding logic.

[HttpPost]
public ActionResult UpdateProduct(Product product) {
    // save logic here
    return null;
}

MVC 3 Model Validation Enhancements

MVC 3 builds on the MVC 2 model validation improvements by adding   support for several of the new validation features within the System.ComponentModel.DataAnnotations namespace in .NET 4.0:

Using Custom Data Types in ASP.NET Profiles

ASP.NET Profiles can also accept custom data types, and they are relatively easy to implement.

The first step is to create a class which wraps the information you require. In the class you may use public member variables, but the preferred choice is full-fledged property procedures, which allow the class to support data binding or other complex logic.

For example, the below code shows an  Address class which should be placed in the App_Code directory of the web app:

[Serializable()]
public class Address
{
    private string fullName;
    public string FullName { get { return fullName; } set { fullName = value; } }

    private string streetNumber;
    public string StreetNumber { get { return streetNumber; } set { streetNumber = value; } }

    private string cityCode;
    public string CityCode { get { return cityCode; } set { cityCode = value; } }

    private string zip;
    public string Zip { get { return zip; } set { zip = value; } }

    private string stateCode;
    public string StateCode { get { return stateCode; } set { stateCode = value; } }

    private string countryCode;
    public string CountryCode { get { return countryCode; } set { countryCode = value; } }

    public Address(string fullName, string streetNumber, string cityCode,
        string zip, string stateCode, string countryCode)
    {
        FullName = fullName;
        StreetNumber = streetNumber;
        CityCode = cityCode;
        Zip = zip;
        StateCode = stateCode;
        CountryCode = countryCode;
    }

    public Address()
    { }
}

Next, add a property to the web.config file to declare it:

<properties>
<add name="CustomerAddress" type="Address" />
</properties>

Now you can use the Profile in your code.


To assign values to the Profile:

Profile.CustomerAddress.Zip = txtZip.Text;

To access the Profile data:

string zipStr = Profile.CustomerAddress.Zip;

Automatic Saves

The ASP.NET Profiles feature cannot detect changes in complex data types (i.e. anything other than strings, Boolean values, simple numeric types, etc.). So if the Profile includes complex data types, ASP.NET will save the complete profile information at the end of every request which accesses the Profile. This behavior has an obvious performance cost. Therefore, to optimize Profile performance when using complex types, you can set the profile property to be read-only (in the event it never changes).

Alternatively, you can disable the autosave behavior by setting the automaticSaveEnabled attribute on the <profile> element to false. If you do this you will need to call Profile.Save() to explicitly save changes to the Profile. This approach is normally preferred, as the parts of the code which modify a Profile are easy to spot and you can easily add Profile.Save() to the end of the code block:

Profile.CustomerAddress = new Address(txtName1.Text, txtStreet1.Text, txtCity1.Text,
    txtZip1.Text, txtState1.Text, txtCountry1.Text);
Profile.Save();

Optimizing ASP.NET Profiles Performance

ASP.NET Profiles were introduced to assist developers in persisting user information. Previous methods of persistence all had limitations in how they stored user data: Session state is only held in memory and is lost once the user’s session ends, a query-string is only useful for a particular page and has to be recreated on each new page, and cookies are only available on a single user machine. Profiles address all these difficulties by providing a simple persistent store which plugs into ASP.NET Membership. Profiles are ideal for storing user info such as preferences for a web app; besides being convenient, they are very simple to use – just create them in the web.config file and access them anywhere in the application using Profile.ProfileName.

But with the convenience and power of Profiles comes a price – performance. Profiles are stored in a database, and therefore if used without caution can have a major performance cost.

To understand how best to use Profiles, first we will look at how they work under the hood. Profiles plug into the life-cycle of the page at two points:

  • The first time the Profile object is accessed in your code, ASP.NET retrieves all the profile data for the current user from the database. If the profile data is used more than once in the same request, ASP.NET reads it only once and then reuses it.
  • If profile data is updated, that update is deferred until the page has finished processing (i.e. after the PreRender, PreRenderComplete, and Unload events have completed). At that point the profile data is written to the database, so multiple changes are updated in a single batch.

Thus, using Profiles can result in two extra database hits per request (if Profile data is read and then updated) or one extra database hit (for simply reading the Profile data). It should be noted that Profiles do not have a caching mechanism, so every read or update of Profile data requires a database connection.

Thus from a performance viewpoint, Profiles are best when:

  • There are a relatively small number of pages which access the Profile data.
  • Profiles only store small amounts of data (since accessing Profiles always results in the retrieval of all the Profile data for that user, it can result in quite large payloads).

Therefore, to optimize performance when using ASP.NET Profiles it is best to combine Profiles with other methods of state management. For example, a web app could first check whether there is a cookie stored on the user’s machine for the user’s date format preference; if not available, this data could be retrieved from the Profile (which would then add the cookie). This saves a database round trip each time just to check the preferences (session state could also be used for this). A sketch of this approach is shown below.
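
A minimal sketch of that cookie-first approach for a page’s code-behind; the cookie name and the DateFormat profile property are assumptions made for illustration:

// Assumes a string profile property named "DateFormat" is defined in web.config
// and the strongly-typed Profile accessor is available in the code-behind.
protected string GetDateFormatPreference()
{
    HttpCookie cookie = Request.Cookies["DateFormat"];
    if (cookie != null)
        return cookie.Value;                       // no database hit

    string format = Profile.DateFormat;            // one database hit to load the Profile
    Response.Cookies.Add(new HttpCookie("DateFormat", format));
    return format;
}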

Getting Started Using ASP.NET Profiles

ASP.NET Profiles are a very useful tool for  persisting user data. Most other methods of state management do not easily persist the data across user visits, but Profiles plug seamlessly into the ASP.NET Membership database to provide a convenient persistent store.

Defining Profile Properties

The first step to using Profiles is to define them in the web.config file. This is done by adding the <profile> section to the web.config file and then adding each property using an <add> element nested inside the <properties> element:

<configuration>
<system.web>
...
<profile>
<properties>
<add name="Language"/>
<add name="NumberFormat"/>
<add name="JoinedDate"/>
</properties>
</profile>
</system.web>
...
</configuration>

In addition to name, the <add> element accepts several attributes which should be used. By default the type of a Profile property is String, but it can be set to any data type; for example, the JoinedDate property above should have a type attribute added with the associated data type:

<add name="JoinedDate" type="System.DateTime" />

defaultValue is another useful attribute, which sets the default value of the Profile property. For example, this could be used to set the initial language in a user’s preferences:

<add name="Language" defaultValue="en" />

There are several additional attributes, namely:

  • serializeAs : The format to use for serializing this Profile property (String, XML, Binary, or ProviderSpecific).
  • readOnly : A boolean which sets whether the Profile property can be updated.
  • allowAnonymous : A boolean which sets whether the Profile property can be used with anonymous profiles.
  • provider : The profile provider that is used to manage this property.

Access Profiles

Profile access is very simple. Just use Profile.ProfileName anywhere in an app to get the profile value for the user. For example:

string langStr = Profile.Language;

Update Profiles

Updating ASP.NET  Profiles is also a simple procedure, just assign the value to the Profile and it will be stored:

Profile.Language = langTxtBox.Text;

Note that the Profile will not actually be written to the database (and therefore not persisted) until the page life-cycle is complete; until then, only the old value is stored in the database.

Be aware that Profiles do not come without issues: there is a performance cost to using Profiles inappropriately – see ASP.NET Profile Performance for more details.

Run External Applications From ASP.NET

To run an external application from your ASP.NET web application, use the System.Diagnostics.Process.Start method to launch the application.

The first step in running an external application from an ASP.NET app is to create a ProcessStartInfo object and pass it the name of the application to run, as well as any command-line parameters it might require. In the sample code below we use the Java runtime to execute a Java program named ExternalJavaProgram; in this case the name of the application to run is java and the only command-line parameter required is the name of the Java program, ExternalJavaProgram.

In the page’s code-behind class :

  1. Import the System.Diagnostics namespace.
  2. Create a ProcessStartInfo object and then pass the name of the external app to run as well as all required command-line parameters.
  3. Set the working directory to the external app’s location.
  4. Start the external app process by calling the Start method of the Process class  and pass the ProcessStartInfo object.


Code Example

private void Page_Load(object sender, System.EventArgs e)
{
    Process proc = null;
    ProcessStartInfo si = null;

    // create a new start info object with the program to execute
    // and the required command-line parameters
    si = new ProcessStartInfo("java", "ExternalJavaProgram");

    // set the working directory to the location of the legacy program
    si.WorkingDirectory = Server.MapPath(".");

    // start a new process using the start info object
    proc = Process.Start(si);

    // wait for the process to complete before continuing
    proc.WaitForExit();
}  // Page_Load

Working with ADO.NET Transactions

A transaction is a group of operations combined into a logical unit of work that is either guaranteed to be executed as a whole or rolled back. Transactions help the database satisfy the ACID properties (Atomicity, Consistency, Isolation and Durability). Transaction processing is an indispensable part of ADO.NET. It guarantees that a block of statements will either be executed in its entirety or rolled back (i.e., none of the statements will be executed). Transaction processing has improved a lot in ADO.NET 2.0. This article discusses how we can work with transactions in both ADO.NET 1.1 and 2.0.

Implementing Transactions in ADO.NET

Note that in ADO.NET, transactions are started by calling the BeginTransaction method of the connection class. This method returns an object of type SqlTransaction.
Other ADO.NET connection classes, such as OleDbConnection and OracleConnection, have similar methods. Once you are done executing the necessary statements within the transaction unit/block, make a call to the Commit method of the given SqlTransaction object, or roll back the transaction using the Rollback method, depending on your requirements (for example, if any error occurs while the transaction unit/block is executed).
To work with transactions in ADO.NET, you require an open connection instance and a transaction instance; then you invoke the necessary methods as described later in this article. Transactions are supported in ADO.NET by the SqlTransaction class, which belongs to the System.Data.SqlClient namespace.

The two main properties of this class are as follows:

  • Connection: This indicates the SqlConnection instance that the transaction instance is associated with
  • IsolationLevel: This specifies the IsolationLevel of the transaction

The following are the methods of this class that are noteworthy:
  • Commit(): This method is called to commit the transaction.
  • Rollback(): This method can be invoked to roll back a transaction. Note that a transaction can only be rolled back before it has been committed.
  • Save(): This method creates a save point in the transaction. This save point can be used to roll back a portion of the transaction at a later point in time.

The following are the steps to implement transaction processing in ADO.NET:

  • Connect to the database
  • Create a SqlCommand instance with the necessary parameters
  • Open the database connection using the connection instance
  • Call the BeginTransaction method of the Connection object to mark the beginning of the transaction
  • Execute the sql statements using the command instance
  • Call the Commit method of the Transaction object to complete the
    transaction, or the Rollback method to cancel or abort the transaction
  • Close the connection to the database

The following code snippet shows how we can implement transaction processing using ADO.NET in our applications.

string connectionString = ...; //Some connection string
SqlConnection sqlConnection = new SqlConnection(connectionString);
sqlConnection.Open();

SqlTransaction sqlTransaction = sqlConnection.BeginTransaction();

SqlCommand sqlCommand = new SqlCommand();
sqlCommand.Connection = sqlConnection;
sqlCommand.Transaction = sqlTransaction;

try
{
sqlCommand.CommandText = "Insert into Employee (EmpCode, EmpName) VALUES (1, 'Joydip')";
sqlCommand.ExecuteNonQuery();
sqlCommand.CommandText = "Insert into Dept (DeptCode, DeptName, EmpCode) VALUES (9, 'Software', 1)";
sqlCommand.ExecuteNonQuery();
sqlTransaction.Commit();
//Usual code
}

catch(Exception e)
{
sqlTransaction.Rollback();
//Usual code
}

finally
{
sqlConnection.Close();
}

The next piece of code illustrates how we can use the “using” statement with the above code. According to MSDN, the “using” statement “defines a scope, outside of which an object or objects will be disposed.”
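
That continuation is not included in this excerpt, but a sketch of the using-based version, reusing the connection string and SQL from the example above, might look like this:

// requires: using System.Data.SqlClient;
string connectionString = ...; //Some connection string

using (SqlConnection sqlConnection = new SqlConnection(connectionString))
{
    sqlConnection.Open();

    using (SqlTransaction sqlTransaction = sqlConnection.BeginTransaction())
    using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
    {
        sqlCommand.Transaction = sqlTransaction;
        try
        {
            sqlCommand.CommandText = "Insert into Employee (EmpCode, EmpName) VALUES (1, 'Joydip')";
            sqlCommand.ExecuteNonQuery();

            sqlCommand.CommandText = "Insert into Dept (DeptCode, DeptName, EmpCode) VALUES (9, 'Software', 1)";
            sqlCommand.ExecuteNonQuery();

            sqlTransaction.Commit();
        }
        catch (Exception)
        {
            sqlTransaction.Rollback();
            throw;
        }
    }
} // Dispose closes the connection even if an exception escapes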