Tuesday, 8 December 2009

Connection Pooling

Several connection-pooling-related properties can be set in the connection string.
Some of them, with their default values, are:
* Connect Timeout - controls how long to wait, in seconds, when a new connection is requested; if this timeout expires, an exception is thrown. The default is 15 seconds.
* Max Pool Size - specifies the maximum size of your connection pool. The default is 100. Most Web sites do not use more than 40 connections under the heaviest load, but it depends on how long your database operations take to complete.
* Min Pool Size - the initial number of connections that will be added to the pool upon its creation. The default is zero; however, you may choose to set this to a small number such as 5 if your application needs consistent response times even after it has been idle for hours. In that case the first user requests won't have to wait for those database connections to be established.
* Pooling - controls whether connection pooling is on or off. The default, as you may have guessed, is true. Read on to see when you might use the Pooling=false setting.

Also, always close the connection after using it; otherwise the connection pool keeps growing until it reaches its maximum size.
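
As a minimal sketch (the server, database, and values below are placeholders, not recommendations), the settings above combine into a connection string like this, and the using block guarantees the connection is closed and returned to the pool:

using System.Data.SqlClient;

class PoolingExample
{
    static void Run()
    {
        // Hypothetical connection string combining the pooling-related settings above.
        string connectionString =
            "Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=True;" +
            "Connect Timeout=15;Min Pool Size=5;Max Pool Size=100;Pooling=true";

        // 'using' calls Dispose/Close even if an exception is thrown,
        // so the underlying physical connection goes back to the pool.
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // ... execute commands here ...
        }
    }
}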

For more information about these properties, see MSDN: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring.aspx

Monday, 7 December 2009

How accurate is DateTime.Now?

Copied from here http://www.red-gate.com/products/ants_performance_profiler/dotnet_challenge_question_4_v2.htm?utm_source=simpletalk&utm_medium=email&utm_content=dotnetchallengeq4&utm_campaign=antsperformanceprofiler
DateTime.Now is not intended to be a high-precision timer. For Windows NT 3.5, Windows 2000 and later, it has a resolution of approximately 10–15 milliseconds. Resolution may be lower on other platforms. For more accurate timing, the System.Diagnostics.Stopwatch class is available. Other high-precision timers include the kernel32.dll QueryPerformanceCounter and QueryPerformanceFrequency native calls. If you are trying to time the execution speed of code, an even better solution is just to use ANTS Performance Profiler.
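
A minimal sketch of timing a block of code with Stopwatch (the DoWork call is just a placeholder for the code being measured):

using System;
using System.Diagnostics;

class TimingExample
{
    static void Main()
    {
        // Stopwatch uses the high-resolution performance counter when one is available.
        Stopwatch stopwatch = Stopwatch.StartNew();
        DoWork(); // placeholder for the code being measured
        stopwatch.Stop();
        Console.WriteLine("Elapsed: {0} ms", stopwatch.ElapsedMilliseconds);
    }

    static void DoWork() { /* ... */ }
}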

Wednesday, 2 December 2009

WCF Best Practices: Configuring ClickOnce Trusted Publishers

This article is from MSDN. Its address is http://msdn.microsoft.com/en-us/library/ms996418.aspx

Configuring ClickOnce Trusted Publishers

Brian Noyes
Microsoft MVP

April 2005

Applies to:
Visual Studio 2005

Summary: ClickOnce security allows you to take advantage of the runtime security protections provided by Code Access Security, while still allowing a dynamic determination of permissions for a particular application at the point where the application is deployed through ClickOnce. (11 printed pages)

Contents

Trusted Publishers and ClickOnce Application Signing 101
ClickOnce Security Checks at Launch
Get Into the Zone
ClickOnce Trusted Publishers in Action
Automating the Process
Conclusion
About the Author

ClickOnce security allows the automatic elevation of privileges for a ClickOnce-deployed application based on either user prompting or trusted publishers. When you deploy an application with ClickOnce, the operations that application performs or the resources it tries to access may require Code Access Security (CAS) permissions greater than what it would be granted based on the current policy. If that is the case, by default the .NET Framework runtime on the client machine will prompt the user and ask them whether they want to install the application and grant it elevated trust.

In an enterprise environment where administrators own the desktop and configuration control over each desktop is possible, it is generally preferable to avoid having to prompt the user for trust decisions. Most users do not have the sophistication to understand the implications of their trust decisions, and do not know when they should or should not grant application permissions. ClickOnce gives you control over this problem by allowing ClickOnce applications to automatically elevate their own privileges without user prompting—if the application manifests have been signed by a trusted publisher.

Trusted Publishers and ClickOnce Application Signing 101

So what constitutes a trusted publisher? First, you must always sign the ClickOnce deployment and application manifests with a publisher certificate. Next, the certificate used to sign a ClickOnce application must be configured in the Trusted Publishers certificate store on the user's machine. And finally, the certificate authority that issued the certificate must be configured in the Trusted Root Certificate Authority certificate store on the user's machine. I'll peel back the layers of each of these three pieces in turn.

When you first create a Windows Forms application in Visual Studio 2005 and publish it with ClickOnce, Visual Studio will automatically generate a publisher certificate for you and use it to sign your application when it is published. When it does this, it generates a personal certificate file (.pfx file) and adds it to your Visual Studio project with a default file naming convention of <ProjectName>_TemporaryKey.pfx. Visual Studio will also add this certificate to your personal certificate store, and will enable the project settings that set this certificate as the one to be used to sign ClickOnce application manifests. Because this all happens automatically, you might not even be aware it is occurring.

Note Beta 1 allowed you to strong name your manifests using a strong name key file (.snk file by convention). Beta 2 and RTM no longer support this, and you must sign your manifests with a publisher certificate, typically a .pfx file that may or may not be password protected.

The signing process uses the public and private keys in the certificate to apply an XML Digital Signature to the deployment and application XML manifest files generated for a ClickOnce application. This digital signature approach ensures that you know who signed a given ClickOnce application deployment based on the public key that gets embedded in the manifest file, and that the file has not been tampered with or its contents changed in any way since it was signed. This prevents a malicious party from adding unintended settings or files to a ClickOnce application after it is published by a trusted authority.

Publisher certificates come in two flavors—self-generated or third-party–verified (by Verisign, for example). A certificate is issued by a certificate authority, which itself has a certificate that identifies it as a certificate issuing authority. A self-generated certificate is one that you create for development purposes, and you basically become both the certificate authority and the publisher that the certificate represents. To be used for production purposes, you should be using a certificate generated by a third party, either an external company like Verisign or an internal authority such as your domain administrator in an enterprise environment.

To be considered a trusted publisher, the publisher certificate must be installed in the Trusted Publishers certificate store on the user's machine, and the issuing authority of the publisher certificate must have their own certificate installed in the Trusted Root Certification Authority certificate store. You can use the certmgr.exe certificate management console in Windows to manage and install certificates in the stores on your machine, and you can also install them using Visual Studio 2005. I'll step you through the process of using Visual Studio later in this article.

ClickOnce Security Checks at Launch

When a ClickOnce application is being launched on a user's desktop the first time, the .NET Framework runtime will first check to ensure that the application manifests have not been tampered with since they were signed with whatever publisher certificate was used for signing. If they pass that check, the runtime will then look into the Trusted Root Certification Authority store and see if the certificate for the issuer of the publisher's certificate is installed in that store. It will then look at who the publisher on the certificate is, and see if their certificate is in the Trusted Publishers store. If those two things are true, then by default the user will not be prompted, and the application will be granted whatever privileges are specified in the application manifest file. An application trust for this application will be added to the user's .NET Framework security policy, the app will then just launch and run, and if the permissions were specified correctly in the application manifest, the user should never see a prompt or a security exception.

If both the issuer of the certificate and the publisher represented by the certificate are unknown on the client machine (based on the certificates installed in the stores), then the user will be prompted with the dialog box shown in Figure 1 and they can decide whether to allow the application to obtain the required privileges, depending on which zone the application is being launched from. If they click on the More Information... link at the bottom, they will get the dialog box shown in Figure 2, which gives the user a little more detail on what is about to happen if they click the Install button, but in general is probably just going to scare them—like most security dialogs do—because it gives very little information about what is really going on.

Figure 1. Untrusted publisher and certificate authority user prompt

Figure 2. More Information dialog

If the certificate used to sign the application manifests is generated by a trusted root certificate authority, but the specific publisher certificate is not in the trusted publishers store, then the user will still be prompted, but with a slightly friendlier prompt than when the issuer of the publisher certificate is unknown (see Figure 3). The more friendly prompt will indicate the publisher organization, because with the Authenticode certificate technology and trusted roots, you can at least trust that the publishing organization is who they say they are according to the issuer of the certificate. If you trust the issuing authority, then you can trust that the publisher is not pretending to be someone they are not.

Figure 3. Trusted certificate authority user prompt

After an application trust has been created for a given application, either due to automatic configuration based on a trusted publisher certificate, or based on the user being prompted and allowing the application to install, subsequent versions of the same application will not need to prompt again unless the requested security permissions change.

Get Into the Zone

There are five built-in security zones that are used in CAS for origin-based trust decisions: MyComputer, LocalIntranet, Internet, TrustedSites, and UntrustedSites. These same zones are used to determine what kind of prompting should be allowed for users with respect to elevating ClickOnce application permissions. Each zone corresponds to the context from which a ClickOnce application is launched, which is determined by the full address or path that was used to launch the deployment manifest (.application file) for the ClickOnce application.

Table 1 shows some examples of launch zones based on the address used for the deployment manifest. What it basically breaks down to is if the address is a local file path to a non-networked drive, the application will be launched in the MyComputer zone. If the address uses a network protocol (http or UNC file share) and the server portion of the address is a single machine name, it will be evaluated to be coming from the LocalIntranet zone. If the server name portion of the address contains dots, it is evaluated as coming from the Internet zone. TrustedSites and UntrustedSites depend on individual addresses that are configured as part of Internet Explorer's Trusted Sites and Restricted Sites security settings.

Table 1. ClickOnce Launch Zone Examples

Launch Address Launch Zone
http://deploymentserver/MyClickOnceApp/MyClickOnceApp.application LocalIntranet
\\deploymentserver\MyClickOnceApp\MyClickOnceApp.application LocalIntranet
http://some.dotted.servername/Apps/MyClickOnceApp.application Internet
\\127.0.0.1\sharefolder\MyClickOnceApp.application Internet
C:\inetpub\wwwroot\MyClickOnceApp\MyClickOnceApp.application MyComputer

By default, the MyComputer, LocalIntranet, and TrustedSites zones are configured to allow user prompting to elevate security privileges of a ClickOnce application if that application is not signed by a trusted publisher. The Internet zone default is that if the application manifest is signed by a publisher certificate issued by a trusted root authority, but the publisher certificate is not also installed in the Trusted Publishers store, then that application can prompt the user for elevated permissions if needed. If an Internet-launched application is not signed with a certificate issued by a trusted root authority, the application will not be allowed to run. The UntrustedSites zone default is that if the application is not signed by a trusted publisher certificate issued by a trusted root authority, the application will not be allowed to run (in other words, no user prompting is allowed).

These settings can be modified if desired for your enterprise by configuring an obscure registry key that will be checked by ClickOnce to determine the user prompting policy. Each of the behaviors described above corresponds to a value you can set for each of the zones through this registry key.

The registry key \HKLM\Software\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel is the one that allows you to customize the prompting behavior. This key is not present by default after a .NET Framework 2.0 installation, so you will have to create it manually if you want to customize these settings.

Under that registry key, you can add any of 5 string values, named MyComputer, LocalIntranet, Internet, TrustedSites, and UntrustedSites. These correspond to their respective zones. As a value for these, you can set one of three strings: Enabled, Disabled, or AuthenticodeRequired. Enabled is the default for the MyComputer, LocalIntranet and TrustedSites zones. The Internet default is AuthenticodeRequired, and the UntrustedSites default is Disabled. Table 2 shows the values that you can set for each zone and their effects. Figure 4 shows the registry key values set to their default behavior, but keep in mind this key does not exist by default so you will typically only create it if you are going to set them to different values than the defaults.
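
If you need to script the key creation (for example, from an installer), a minimal sketch using the .NET registry API follows; it must run with administrative rights, and the values shown simply restate the defaults described above:

using Microsoft.Win32;

class PromptingLevelSetup
{
    static void Configure()
    {
        // Creates (or opens) the PromptingLevel key and writes the five zone values.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(
            @"Software\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel"))
        {
            key.SetValue("MyComputer", "Enabled");
            key.SetValue("LocalIntranet", "Enabled");
            key.SetValue("TrustedSites", "Enabled");
            key.SetValue("Internet", "AuthenticodeRequired");
            key.SetValue("UntrustedSites", "Disabled");
        }
    }
}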

Table 2. PromptingLevel Registry Key Value Launch Effects

Value                | Not Trusted Root Authority | Certificate Issued by Trusted Root Authority | Trusted Root Authority and Trusted Publisher Certificate
Enabled              | Unfriendly user prompt     | Friendly user prompt                         | No prompt; permissions granted and app launches
AuthenticodeRequired | Application disabled       | Friendly user prompt                         | No prompt; permissions granted and app launches
Disabled             | Application disabled       | Application disabled                         | No prompt; permissions granted and app launches

Figure 4. User Prompting Registry key values

ClickOnce Trusted Publishers in Action

To test this, you will need to configure certificates on your development machine. The first step is to have a certificate to use to sign your ClickOnce apps and to configure that certificate in the desired certificate stores on your development or test machine. As mentioned before, Visual Studio will generate a new certificate for each ClickOnce project unless you configure a certificate to use for signing ClickOnce manifests before you first publish the application. I recommend generating a new test certificate, saving it to a known location, and then using that to sign all of your developmental ClickOnce projects so that you don't have to litter up your certificate stores with a bunch of test certificates. The certificate that is automatically generated is not password protected, and I strongly recommend you only use certificate files that are password protected.

To generate a new certificate file in Visual Studio 2005 that is password protected, go to the project properties window (double-click on the Properties node in Solution Explorer, or right-click on the project node and select Properties from the context menu). Select the Signing tab, check the Sign the ClickOnce Manifests check box, and click the Create Test Certificate... button (see Figure 5). You will be prompted for a password, and a new pfx file with a default name will be added to your project. This certificate will also be set as the certificate used to sign the manifests, and it will be installed in your personal certificate store in Windows. You can then rename the file, copy it to a reusable location, and then configure that certificate as the cert for any application by pressing the Select From File... button in the Signing tab.

Figure 5. Signing project properties

Once you have a certificate and have identified which one to use for signing your ClickOnce manifests in the Signing project properties, you can publish your application from Visual Studio and the manifests will be signed with that certificate. If you happen to have a "real" publisher certificate (that is, a Verisign one, or one that your development organization uses signed by some other trusted root authority), you can use that instead, either from a file as described above, or by pointing to the certificate in your personal store of certificates using the Select From Store... button in the Signing project properties.

To see how you can avoid a user prompt with a trusted publisher deployment, you need to configure the publisher certificate on the machine where the app will be launched with ClickOnce, which is often your development machine for first trials and development purposes. If you generated the certificate yourself as described above (or using the makecert.exe command-line utility that comes with Visual Studio), you will need to add that certificate to the Trusted Root Certification Authorities store. This is because you are not only the publisher, but you are also the issuer of the certificate. You will then also want to install the same certificate into the Trusted Publishers store, which is the final step that allows the application to launch without prompting.

To make this all concrete, let's step through an example by the numbers. Start a new Windows Application project in Visual Studio 2005 and name it ClickOnceTrustedPub. After the project is created, go to the project properties by double-clicking the Properties node in the Solution Explorer tree under the project node, and select the Signing tab.

Next, select the box to Sign the ClickOnce manifests. Press the Create Test Certificate... button, and enter a password for the certificate. The file that will be created and added to the project will be named ClickOnceTrustedPub_TemporaryKey.pfx. Rename it to devcert.pfx in Solution Explorer. This would also be a good time to copy the file to some common development folder on your machine so that you can reuse it for subsequent projects and not have to keep regenerating and configuring your certificates. When creating the certificate file, Visual Studio also installed it in your Personal store of certificates.

To add this certificate to the Trusted Root Certificate Authority and Trusted Publisher stores, click on the More Details button in the Signing project properties tab. This brings up the certificate information dialog (see Figure 6). Click the Install Certificate... button at the bottom of the General tab, and you will be presented with the Certificate Import Wizard.

Figure 6. Certificate information dialog

In the second step of the wizard, select the radio button to Place all certificates in the following store, then press the Browse button (see Figure 7). This will open a dialog box where you can select from the list of certificate stores (see Figure 8).

Figure 7. Certificate Wizard store selection

Figure 8. Certificate Store selection dialog box

The first time through this process, select the Trusted Root Certificate Authorities store, click Next, and then Finish in the wizard. You will be prompted with a verbose security-warning dialog about the hazards of installing a root authority certificate. Go ahead and click Yes or you will not be able to try out the trusted publisher functionality of ClickOnce, but make sure you understand the risk that it is describing. If someone else obtained your certificate, signed an application with it, and then launched it on your machine, Windows would treat that application as having been published by a company that has been verified by a trusted authority.

Repeat the process described starting with the More Details button, but this time install the same certificate into the Trusted Publishers store.

Once you have done this, you can publish your application with ClickOnce. To do this, select Publish from the Build menu in Visual Studio, and click Finish in the wizard that pops up. This will build your app, publish it with the default ClickOnce publishing settings, and will present you with a Web page from which you can test the installation as a client by clicking on the Install button on the Web page. If you click on that button, the application should download and run on the desktop without any form of prompting. The default permissions requested by a ClickOnce application are unrestricted (full trust), and the default Install button link presented uses a LocalIntranet zone URL. So if you repeated this same process without having configured the trusted publisher certificate, you would have been prompted with the dialog box shown in Figure 1.

Automating the Process

In an operational environment with lots of user machines to maintain, you are not going to have Visual Studio available on each machine to configure publisher certificates, so you will need to use the certificate management console (certmgr.exe) included in Windows. If you just run certmgr.exe with no arguments from a command line, a Microsoft Management Console (MMC) window appears that will allow you to add or remove certificates from any of the stores on the local machine. But even with that, you may not want to have to go touch every machine to configure the certificates. The process can also be automated using certmgr.exe with some command-line parameters.

You first need to export the public portion of a certificate into a certificate file (.cer) from certmgr using the Export button:

Figure 9. Certmgr.exe exporting certificates

After you have done that, you can copy that certificate file to a target machine and run certmgr.exe on the command line. You will need to pass it the file name along with which store to place it in as command-line parameters with the appropriate switches, and that will install the certificate on the machine:

certmgr -add alice.cer -s Root
certmgr -add alice.cer -s TrustedPublisher

All of this can be scripted or added to a custom installer through a Visual Studio Setup and Deployment project (or some other form of installer), and the resulting Windows Installer package (.msi file) can be added to the bootstrapper for your ClickOnce application. For more information on the bootstrapper, see Sean Draine's article Use the Visual Studio 2005 Bootstrapper to Kick-Start Your Installation in the October 2004 issue of MSDN Magazine.

Conclusion

ClickOnce security allows you to take advantage of the runtime security protections provided by Code Access Security, while still allowing a dynamic determination of permissions for a particular application at the point where the application is deployed through ClickOnce. However, this flexibility comes at a price—you have to decide whether to allow the user to be the one responsible for elevating application permissions through prompting, and whether you want that prompting to be based on where the publisher certificate came from. The default behavior of ClickOnce is the easiest to understand. Either an application is going to automatically elevate its permissions because it is being deployed from a trusted publisher, or it is going to prompt the user to let them decide whether to trust the publisher. In more controlled environments, you may want to restrict user prompting, and this article has spelled out how you can do that using the PromptingLevel registry key and configuring publisher and trusted root authority certificates on the user's machine. Understanding the effects of the various values and how they behave with different certificate store configurations is important to properly employing the security protections of ClickOnce.

About the Author

Brian Noyes is a Microsoft MVP and well-known speaker, trainer, writer, and consultant with IDesign, Inc. (www.idesign.net). He speaks at TechEd US and Malaysia, Visual Studio Connections, VSLive!, DevEssentials, and other conferences, and is one of the top-rated speakers on the INETA Speakers Bureau. He has published numerous articles on .NET Framework development for MSDN Magazine, Visual Studio Magazine, asp.netPRO, The Server Side .NET, CoDe Magazine, .NET Developer's Journal, and other publications. His latest book, Data Binding in Windows Forms 2.0, part of the Addison-Wesley .NET Development Series, will hit the shelves in the fall of 2005. Brian got started programming to stimulate his brain while flying F-14 Tomcats in the Navy, applying his skills and interest to programming aircraft and avionics simulations, prototypes, and support applications while stimulating his adrenal glands attending Top Gun and U.S. Naval Test Pilot School.

WCF Best Practices: Load Balancing

This article is from MSDN. Its address is http://msdn.microsoft.com/en-us/library/ms730128.aspx

Load Balancing

One way to increase the capacity of Windows Communication Foundation (WCF) applications is to scale them out by deploying them into a load-balanced server farm. WCF applications can be load balanced using standard load balancing techniques, including software load balancers such as Windows Network Load Balancing as well as hardware-based load balancing appliances.

The following sections discuss considerations for load balancing WCF applications built using various system-provided bindings.

Load Balancing with the Basic HTTP Binding

From the perspective of load balancing, WCF applications that communicate using the BasicHttpBinding are no different than other common types of HTTP network traffic (static HTML content, ASP.NET pages, or ASMX Web Services). WCF channels that use this binding are inherently stateless, and terminate their connections when the channel closes. As such, the BasicHttpBinding works well with existing HTTP load balancing techniques.

By default, the BasicHttpBinding sends a connection HTTP header in messages with a Keep-Alive value, which enables clients to establish persistent connections to the services that support them. This configuration offers enhanced throughput because previously established connections can be reused to send subsequent messages to the same server. However, connection reuse may cause clients to become strongly associated to a specific server within the load-balanced farm, which reduces the effectiveness of round-robin load balancing. If this behavior is undesirable, HTTP Keep-Alive can be disabled on the server using the KeepAliveEnabled property with a CustomBinding or user-defined Binding. The following example shows how to do this using configuration.

<configuration>
  <system.serviceModel>
    <services>
      <service
          name="Microsoft.ServiceModel.Samples.CalculatorService"
          behaviorConfiguration="CalculatorServiceBehavior">
        <endpoint address=""
            binding="customBinding"
            bindingConfiguration="HttpBinding"
            contract="Microsoft.ServiceModel.Samples.ICalculator" />
        <!-- ... -->
      </service>
    </services>
    <bindings>
      <customBinding>
        <!-- Custom HTTP binding with Keep-Alive turned off via the transport element. -->
        <binding name="HttpBinding">
          <textMessageEncoding />
          <httpTransport keepAliveEnabled="false" />
        </binding>
      </customBinding>
    </bindings>
    <behaviors>
      <serviceBehaviors>
        <behavior name="CalculatorServiceBehavior">
          <!-- ... -->
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Load Balancing with the WSHttp Binding and the WSDualHttp Binding

Both the WSHttpBinding and the WSDualHttpBinding can be load balanced using HTTP load balancing techniques provided several modifications are made to the default binding configuration.

  • Turn off Security Context Establishment: this can be accomplished by setting the EstablishSecurityContext property on the WSHttpBinding to false (a code sketch follows this list). Alternatively, if security sessions are required, it is possible to use stateful security sessions as described in the Secure Sessions topic. Stateful security sessions enable the service to remain stateless as all of the state for the security session is transmitted with each request as a part of the protection security token. Note that to enable a stateful security session, it is necessary to use a CustomBinding or user-defined Binding as the necessary configuration settings are not exposed on WSHttpBinding and WSDualHttpBinding that are provided by the system.

  • Do not use reliable sessions. This feature is off by default.
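
For the first bullet, a minimal code sketch (the same setting is available in configuration as establishSecurityContext="false" on the message security element):

using System.ServiceModel;

class WsHttpLoadBalancingExample
{
    static WSHttpBinding CreateBinding()
    {
        // Turn off the secure conversation session so each request stands alone
        // and any server in the farm can process it.
        WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
        binding.Security.Message.EstablishSecurityContext = false;
        return binding;
    }
}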

Load Balancing the Net.TCP Binding

The NetTcpBinding can be load balanced using IP-layer load balancing techniques. However, the NetTcpBinding pools TCP connections by default to reduce connection latency. This is an optimization that interferes with the basic mechanism of load balancing. The primary configuration value for optimizing the NetTcpBinding is the lease timeout, which is part of the Connection Pool Settings. Connection pooling causes client connections to become associated with specific servers within the farm. As the lifetime of those connections increases (a factor controlled by the lease timeout setting), the load distribution across the various servers in the farm becomes unbalanced. As a result, the average call time increases. So when using the NetTcpBinding in load-balanced scenarios, consider reducing the default lease timeout used by the binding. A 30-second lease timeout is a reasonable starting point for load-balanced scenarios, although the optimal value is application-dependent. For more information about the channel lease timeout and other transport quotas, see Transport Quotas.
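
A minimal sketch of lowering the lease timeout in code, using the 30-second starting point suggested above (in configuration, the same value is set on the connectionPoolSettings element of the tcpTransport binding element):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

class NetTcpLeaseTimeoutExample
{
    static Binding CreateBinding()
    {
        // Wrap the standard binding in a CustomBinding to reach the transport element.
        CustomBinding binding = new CustomBinding(new NetTcpBinding());
        TcpTransportBindingElement tcpTransport =
            binding.Elements.Find<TcpTransportBindingElement>();
        tcpTransport.ConnectionPoolSettings.LeaseTimeout = TimeSpan.FromSeconds(30);
        return binding;
    }
}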

For best performance in load-balanced scenarios, consider using NetTcpSecurity (either Transport or TransportWithMessageCredential).

WCF Best Practices: Controlling Resource Consumption and Improving Performance

This article is from MSDN. Its address is http://msdn.microsoft.com/en-us/library/bb463275.aspx

Controlling Resource Consumption and Improving Performance

This topic describes various properties in different areas of the Windows Communication Foundation (WCF) architecture that work to control resource consumption and affect performance metrics.

Properties that Constrain Resource Consumption in WCF

Windows Communication Foundation (WCF) applies constraints on certain types of processes for either security or performance purposes. These constraints come in two main forms: quotas and throttles. Quotas are limits that, when reached or exceeded, trigger an immediate exception at some point in the system. Throttles are limits that do not immediately cause an exception to be thrown. Instead, when a throttle limit is reached, processing continues but within the limits set by that throttle value. This limited processing might trigger an exception elsewhere, but this depends upon the application.

In addition to the distinction between quotas and throttles, some constraining properties are located at the serialization level, some at the transport level, and some at the application level. For example, the quota System.ServiceModel.Channels.TransportBindingElement.MaxReceivedMessageSize, which is implemented by all system-supplied transport binding elements, is set to 65,536 bytes by default to hinder malicious clients from engaging in denial-of-service attacks against a service by causing excessive memory consumption. (Typically, you can increase performance by lowering this value.)

An example of a serialization quota is the System.Runtime.Serialization.DataContractSerializer.MaxItemsInObjectGraph property, which specifies the maximum number of objects that the serializer serializes or deserializes in a single ReadObject method call. An example of an application-level throttle is the System.ServiceModel.Dispatcher.ServiceThrottle.MaxConcurrentSessions property, which by default restricts the number of concurrent sessionful channel connections to 10. (Unlike the quotas, if this throttle value is reached, the application continues processing but accepts no new sessionful channels, which means that new clients cannot connect until one of the other sessionful channels is ended.)

These controls are designed to provide an out-of-the-box mitigation against certain types of attacks or to improve performance metrics such as memory footprint, start-up time, and so on. However, depending on the application, these controls can impede service application performance or prevent the application from working at all. For example, an application designed to stream video can easily exceed the default System.ServiceModel.Channels.TransportBindingElement.MaxReceivedMessageSize property. This topic provides an overview of the various controls applied to applications at all levels of WCF, describes various ways to obtain more information about whether a setting is hindering your application, and describes ways to correct various problems. Most throttles and some quotas are available at the application level, even when the base property is a serialization or transport constraint. For example, you can set the System.Runtime.Serialization.DataContractSerializer.MaxItemsInObjectGraph property using the System.ServiceModel.ServiceBehaviorAttribute.MaxItemsInObjectGraph property on the service class.
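
For example, a sketch of raising that serialization quota at the application level through the service behavior attribute (the service class name and the quota value are purely illustrative):

using System.ServiceModel;

[ServiceBehavior(MaxItemsInObjectGraph = 131072)]
public class OrderService
{
    // ... service operations go here ...
}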

Note:
If you have a particular problem, you should first read the WCF Troubleshooting Quickstart to see whether your problem (and a solution) is listed there.

Properties that restrict serialization processes are listed in Security Considerations for Data. Properties that restrict the consumption of resources related to transports are listed in Transport Quotas. Properties that restrict the consumption of resources at the application layer are the members of the ServiceThrottle class.

Detecting Application and Performance Issues Related to Quota Settings

The defaults of the preceding values have been chosen to enable basic application functionality across a wide range of application types while providing basic protection against common security issues. However, different application designs might exceed one or more throttle settings although the application otherwise is secure and would work as designed. In these cases, you must identify which throttle values are being exceeded and at what level, and decide on the appropriate course of action to increase application throughput.

Typically, when writing the application and debugging it, you set the System.ServiceModel.Description.ServiceDebugBehavior.IncludeExceptionDetailInFaults property to true in the configuration file or programmatically. This instructs WCF to return service exception stack traces to the client application for viewing. This feature reports most application-level exceptions in such a way as to display which quota settings might be involved, if that is the problem.
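
A sketch of the programmatic form when self-hosting (the OrderService type and the address are hypothetical; in configuration this is the includeExceptionDetailInFaults attribute of the serviceDebug behavior element):

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class DebugHostExample
{
    static void Host()
    {
        ServiceHost host = new ServiceHost(typeof(OrderService),
            new Uri("http://localhost:8000/orders"));

        // ServiceDebugBehavior is present by default; flip the flag for debugging only.
        ServiceDebugBehavior debug =
            host.Description.Behaviors.Find<ServiceDebugBehavior>();
        debug.IncludeExceptionDetailInFaults = true;

        host.Open();
    }
}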

Some exceptions happen at run time below the visibility of the application layer and are not returned using this mechanism, and they might not be handled by a custom System.ServiceModel.Dispatcher.IErrorHandler implementation. If you are in a development environment like Microsoft Visual Studio, most of these exceptions are displayed automatically. However, some exceptions can be masked by development environment settings such as the Just My Code settings in Visual Studio 2005.

Regardless of the capabilities of your development environment, you can use capabilities of WCF tracing and message logging to debug all exceptions and tune the performance of your applications. For more information, see Using Tracing to Troubleshoot Your Application.

Performance Issues and XmlSerializer

Services and client applications that use data types that are serializable using the XmlSerializer generate and compile serialization code for those data types at run time, which can result in slow start-up performance.

Note:
Pre-generated serialization code can be used only in client applications and not in services.

The ServiceModel Metadata Utility Tool (Svcutil.exe) can improve start-up performance for these applications by generating the necessary serialization code from the compiled assemblies for the application. For more information, see How to: Improve the Startup Time of WCF Client Applications using the XmlSerializer.

Performance Issues When Hosting WCF Services Under ASP.NET

When a WCF service is hosted under IIS and ASP.NET, the configuration settings of IIS and ASP.NET can affect the throughput and memory footprint of the WCF service. For more information about ASP.NET performance, see http://msdn.microsoft.com/en-us/library/ms998549.aspx. One setting that might have unintended consequences is MinWorkerThreads, which is a property of the ProcessModelSection. If your application has a fixed or small number of clients, setting MinWorkerThreads to 2 might provide a throughput boost on a multiprocessor machine that has a CPU utilization close to 100%. This increase in performance comes with a cost: it will also cause an increase in memory usage, which could reduce scalability.

WCF Best Practices: Data Contract Versioning

This article is from MSDN. Its address is http://msdn.microsoft.com/en-us/library/ms733832.aspx

Best Practices: Data Contract Versioning

This topic lists the best practices for creating data contracts that can evolve easily over time. For more information about data contracts, see the topics in Using Data Contracts.

Note on Schema Validation

In discussing data contract versioning, it is important to note that the data contract schema exported by Windows Communication Foundation (WCF) does not have any versioning support, other than the fact that elements are marked as optional by default.

This means that even the most common versioning scenario, such as adding a new data member, cannot be implemented in a way that is seamless with regard to a given schema. The newer versions of a data contract (with a new data member, for example) do not validate using the old schema.

However, there are many scenarios in which strict schema compliance is not required. Many Web services platforms, including WCF and XML Web services created using ASP.NET, do not perform schema validation by default and therefore tolerate extra elements that are not described by the schema. When working with such platforms, many versioning scenarios are easier to implement.

Thus, there are two sets of data contract versioning guidelines: one set for scenarios where strict schema validity is important, and another set for scenarios when it is not.

Versioning When Schema Validation Is Required

If strict schema validity is required in all directions (new-to-old and old-to-new), data contracts should be considered immutable. If versioning is required, a new data contract should be created, with a different name or namespace, and the service contract using the data type should be versioned accordingly.

For example, a purchase order processing service contract named PoProcessing with a PostPurchaseOrder operation takes a parameter that conforms to a PurchaseOrder data contract. If the PurchaseOrder contract has to change, you must create a new data contract, that is, PurchaseOrder2, which includes the changes. You must then handle the versioning at the service contract level, for example, by creating a PostPurchaseOrder2 operation that takes the PurchaseOrder2 parameter, or by creating a PoProcessing2 service contract where the PostPurchaseOrder operation takes a PurchaseOrder2 data contract.

Note that changes in data contracts that are referenced by other data contracts also extend to the service model layer. For example, in the previous scenario the PurchaseOrder data contract does not need to change. However, it contains a data member of a Customer data contract, which in turn contains a data member of the Address data contract, which does need to be changed. In that case, you would need to create an Address2 data contract with the required changes, a Customer2 data contract that contains the Address2 data member, and a PurchaseOrder2 data contract that contains a Customer2 data member. As in the previous case, the service contract would have to be versioned as well.

Although in these examples names are changed (by appending a "2"), the recommendation is to change namespaces instead of names by appending new namespaces with a version number or a date. For example, the http://schemas.contoso.com/2005/05/21/PurchaseOrder data contract would change to the http://schemas.contoso.com/2005/10/14/PurchaseOrder data contract.

For more information, see Best Practices: Service Versioning.

Occasionally, you must guarantee strict schema compliance for messages sent by your application, but cannot rely on the incoming messages to be strictly schema-compliant. In this case, there is a danger that an incoming message might contain extraneous data. The extraneous values are stored and returned by WCF and thus result in schema-invalid messages being sent. To avoid this problem, the round-tripping feature should be turned off. There are two ways to do this: do not implement the IExtensibleDataObject interface, or apply the ServiceBehaviorAttribute attribute to your service class with the IgnoreExtensionDataObject property set to true.

For more information about round-tripping, see Forward-Compatible Data Contracts.

Versioning When Schema Validation Is Not Required

Strict schema compliance is rarely required. Many platforms tolerate extra elements not described by a schema. As long as this is tolerated, the full set of features described in Data Contract Versioning and Forward-Compatible Data Contracts can be used. The following guidelines are recommended.

Some of the guidelines must be followed exactly in order to send new versions of a type where an older one is expected or send an old one where the new one is expected. Other guidelines are not strictly required, but are listed here because they may be affected by the future of schema versioning.

  1. Do not attempt to version data contracts by type inheritance. To create later versions, either change the data contract on an existing type or create a new unrelated type.

  2. The use of inheritance together with data contracts is allowed, provided that inheritance is not used as a versioning mechanism and that certain rules are followed. If a type derives from a certain base type, do not make it derive from a different base type in a future version (unless it has the same data contract). There is one exception to this: you can insert a type into the hierarchy between a data contract type and its base type, but only if it does not contain data members with the same names as other members in any possible versions of the other types in the hierarchy. In general, using data members with the same names at different levels of the same inheritance hierarchy can lead to serious versioning problems and should be avoided.

  3. Starting with the first version of a data contract, always implement IExtensibleDataObject to enable round-tripping. For more information, see Forward-Compatible Data Contracts. If you have released one or more versions of a type without implementing this interface, implement it in the next version of the type. (A sketch illustrating this and guideline 8 follows the list.)

  4. In later versions, do not change the data contract name or namespace. If changing the name or namespace of the type underlying the data contract, be sure to preserve the data contract name and namespace by using the appropriate mechanisms, such as the Name property of the DataContractAttribute. For more information about naming, see Data Contract Names.

  5. In later versions, do not change the names of any data members. If changing the name of the field, property, or event underlying the data member, use the Name property of the DataMemberAttribute to preserve the existing data member name.

  6. In later versions, do not change the type of any field, property, or event underlying a data member such that the resulting data contract for that data member changes. Keep in mind that interface types are equivalent to Object for the purposes of determining the expected data contract.

  7. In later versions, do not change the order of the existing data members by adjusting the Order property of the DataMemberAttribute attribute.

  8. In later versions, new data members can be added. They should always follow these rules:

    1. The IsRequired property should always be left at its default value of false.

    2. If a default value of null or zero for the member is unacceptable, a callback method should be provided using the OnDeserializingAttribute to provide a reasonable default in case the member is not present in the incoming stream. For more information about the callback, see Version-Tolerant Serialization Callbacks.

    3. The Order property on the DataMemberAttribute should be used to make sure that all of the newly added data members appear after the existing data members. The recommended way of doing this is as follows: None of the data members in the first version of the data contract should have their Order property set. All of the data members added in version 2 of the data contract should have their Order property set to 2. All of the data members added in version 3 of the data contract should have their Order set to 3, and so on. It is permissible to have more than one data member set to the same Order number.

  9. Do not remove data members in later versions, even if the IsRequired property was left at its default value of false in prior versions.

  10. Do not change the IsRequired property on any existing data members from version to version.

  11. For required data members (where IsRequired is true), do not change the EmitDefaultValue property from version to version.

  12. Do not attempt to create branched versioning hierarchies. That is, there should always be a path in at least one direction from any version to any other version using only the changes permitted by these guidelines.

    For example, if version 1 of a Person data contract contains only the Name data member, you should not create version 2a of the contract adding only the Age member and version 2b adding only the Address member. Going from 2a to 2b would involve removing Age and adding Address; going in the other direction would entail removing Address and adding Age. Removing members is not permitted by these guidelines.

  13. You should generally not create new subtypes of existing data contract types in a new version of your application. Likewise, you should not create new data contracts that are used in place of data members declared as Object or as interface types. Creating these new classes is allowed only when you know that you can add the new types to the known types list of all instances of your old application. For example, in version 1 of your application, you may have the LibraryItem data contract type with the Book and Newspaper data contract subtypes. LibraryItem would then have a known types list that contains Book and Newspaper. Suppose you now add a Magazine type in version 2 which is a subtype of LibraryItem. If you send a Magazine instance from version 2 to version 1, the Magazine data contract is not found in the list of known types and an exception is thrown.

  14. You should not add or remove enumeration members between versions. You should also not rename enumeration members, unless you use the Name property on the EnumMemberAttribute attribute to keep their names in the data contract model the same.

  15. Collections are interchangeable in the data contract model as described in Collection Types in Data Contracts. This allows for a great degree of flexibility. However, make sure that you do not inadvertently change a collection type in a non-interchangeable way from version to version. For example, do not change from a non-customized collection (that is, without the CollectionDataContractAttribute attribute) to a customized one or a customized collection to a non-customized one. Also, do not change the properties on the CollectionDataContractAttribute from version to version. The only allowed change is adding a Name or Namespace property if the underlying collection type's name or namespace has changed and you need to make its data contract name and namespace the same as in a previous version.
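
To make guidelines 3 and 8 concrete, here is a minimal sketch (the type and member names are hypothetical, not from this topic):

using System.Runtime.Serialization;

[DataContract(Name = "PurchaseOrder",
              Namespace = "http://schemas.contoso.com/2005/05/21")]
public class PurchaseOrder : IExtensibleDataObject
{
    // Version 1 member: Order property deliberately not set (guideline 8.3).
    [DataMember]
    public string CustomerName { get; set; }

    // Added in version 2: optional (IsRequired left false) and ordered after
    // every version 1 member by setting Order = 2.
    [DataMember(IsRequired = false, Order = 2)]
    public string DeliveryNotes { get; set; }

    // Guideline 3: implemented from the first version to round-trip unknown data.
    public ExtensionDataObject ExtensionData { get; set; }
}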

Some of the guidelines listed here can be safely ignored when special circumstances apply. Make sure you fully understand the serialization, deserialization, and schema mechanisms involved before deviating from the guidelines.

Static constructor

From MSDN.
http://msdn.microsoft.com/en-us/library/k9x6w0hc(VS.80).aspx

Static constructors have the following properties:

  • A static constructor does not take access modifiers or have parameters.

  • A static constructor is called automatically to initialize the class before the first instance is created or any static members are referenced.

  • A static constructor cannot be called directly.

  • The user has no control over when the static constructor is executed in the program.

  • A typical use of static constructors is when the class is using a log file and the constructor is used to write entries to this file.

  • Static constructors are also useful when creating wrapper classes for unmanaged code, when the constructor can call the LoadLibrary method.
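
A minimal sketch of the syntax (the Logger class and its log-file path are made up for illustration):

using System.IO;

public class Logger
{
    static readonly StreamWriter log;

    // Runs once, automatically, before the first instance is created
    // or any static member is referenced; it cannot be called directly.
    static Logger()
    {
        log = File.AppendText("app.log");
        log.WriteLine("Logger initialized");
        log.Flush();
    }
}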

WCF Best Practices - Service Versioning

The following article is copied from MSDN.
Address for this article is http://msdn.microsoft.com/en-us/library/ms731060.aspx

Service Versioning

After initial deployment, and potentially several times during their lifetime, services (and the endpoints they expose) may need to be changed for a variety of reasons, such as changing business needs, information technology requirements, or to address other issues. Each change introduces a new version of the service. This topic explains how to consider versioning in Windows Communication Foundation (WCF).

Four Categories of Service Changes

The changes to services that may be required can be classified into four categories:

  • Contract changes: For example, an operation might be added, or a data element in a message might be added or changed.

  • Address changes: For example, a service moves to a different location where endpoints have new addresses.

  • Binding changes: For example, a security mechanism changes or its settings change.

  • Implementation changes: For example, when an internal method implementation changes.

Some of these changes are called "breaking" and others are "nonbreaking." A change is nonbreaking if all messages that would have been processed successfully in the previous version are processed successfully in the new version. Any change that does not meet that criterion is a breaking change. This topic describes mechanisms for making nonbreaking changes in contracts, addresses, and bindings.

Service Orientation and Versioning

One of the tenets of service orientation is that services and clients are autonomous (or independent). Among other things, this implies that service developers cannot assume that they control or even know about all service clients. This eliminates the option of rebuilding and redeploying all clients when a service changes versions. This topic assumes the service adheres to this tenet of service orientation and therefore must be changed or "versioned" independent of its clients.

In cases where a breaking change is unexpected and cannot be avoided, an application may choose to ignore this tenet and require that clients be rebuilt and redeployed with a new version of the service. That scenario does not receive further discussion here.

Contract Versioning

The contracts used by a client do not need to be the same as the contracts used by the service; they need only be compatible.

For service contracts, compatibility means new operations exposed by the service can be added but existing operations cannot be removed or changed semantically.

For data contracts, compatibility means new schema type definitions can be added but existing schema type definitions cannot be changed in breaking ways. Breaking changes might include removing data members or changing their data type incompatibly. This feature allows the service some latitude in changing the version of its contracts without breaking clients. The next two sections explain nonbreaking and breaking changes that can be made to WCF data and service contracts.

Data Contract Versioning

This section deals with data versioning when using the DataContractSerializer and DataContractAttribute classes.

Strict Versioning

In many scenarios when changing versions is an issue, the service developer does not have control over the clients and therefore cannot make assumptions about how they would react to changes in the message XML or schema. In these cases, you must guarantee that the new messages will validate against the old schema, for two reasons:

  • The old clients were developed with the assumption that the schema will not change. They may fail to process messages that they were never designed for.

  • The old clients may perform actual schema validation against the old schema before even attempting to process the messages.

The recommended approach in such scenarios is to treat existing data contracts as immutable and create new ones with unique XML qualified names. The service developer would then either add new methods to an existing service contract or create a new service contract with methods that use the new data contract.

It will often be the case that a service developer needs to write some business logic that should run within all versions of a data contract plus version-specific business code for each version of the data contract. The appendix at the end of this topic explains how interfaces can be used to satisfy this need.

Lax Versioning

In many other scenarios, the service developer can make the assumption that adding a new, optional member to the data contract will not break existing clients. This requires the service developer to verify that existing clients are not performing schema validation and that they ignore unknown data members. In these scenarios, it is possible to take advantage of data contract features for adding new members in a nonbreaking way. The service developer can make this assumption with confidence if the data contract features for versioning were already used for the first version of the service.

WCF, ASP.NET Web Services, and many other Web service stacks support lax versioning: that is, they do not throw exceptions for new unknown data members in received data.

It is easy to mistakenly believe that adding a new member will not break existing clients. If you are unsure that all clients can handle lax versioning, the recommendation is to use the strict versioning guidelines and treat data contracts as immutable.

For detailed guidelines for both lax and strict versioning of data contracts, see Best Practices: Data Contract Versioning.

Distinguishing Between Data Contract and .NET Types

A .NET class or structure can be projected as a data contract by applying the DataContractAttribute attribute to the class. The .NET type and its data contract projections are two distinct matters. It is possible to have multiple .NET types with the same data contract projection. This distinction is especially useful in allowing you to change the .NET type while maintaining the projected data contract, thereby maintaining compatibility with existing clients even in the strict sense of the word. There are two things you should always do to maintain this distinction between .NET type and data contract:

  • Specify a Name and Namespace. You should always specify the name and namespace of your data contract to prevent your .NET type’s name and namespace from being exposed in the contract. This way, if you decide later to change the .NET namespace or type name, your data contract remains the same.

  • Specify Name. You should always specify the name of your data members to prevent your .NET member name from being exposed in the contract. This way, if you decide later to change the .NET name of the member, your data contract remains the same.
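
A minimal sketch of both points (the type, namespace, and member names are illustrative): the .NET type and member names can be refactored later without changing what goes on the wire, because the contract names are pinned explicitly.

using System.Runtime.Serialization;

[DataContract(Name = "Customer",
              Namespace = "http://schemas.contoso.com/2005/05/21")]
public class CustomerEntity           // the .NET type name is not exposed in the contract
{
    [DataMember(Name = "PhoneNumber")]
    public string Phone { get; set; } // the .NET member name is not exposed either
}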

Changing or Removing Members

Changing the name or data type of a member, or removing data members, is a breaking change even if lax versioning is allowed. If this is necessary, create a new data contract.

If service compatibility is of high importance, you might consider ignoring unused data members in your code and leave them in place. If you are splitting up a data member into multiple members, you might consider leaving the existing member in place as a property that can perform the required splitting and re-aggregation for down-level clients (clients that are not upgraded to the latest version).

Similarly, changes to the data contract’s name or namespace are breaking changes.

Round-Trips of Unknown Data

In some scenarios, there is a need to "round-trip" unknown data that comes from members added in a new version. For example, a "versionNew" service sends data with some newly added members to a "versionOld" client. The client ignores the newly added members when processing the message, but it resends that same data, including the newly added members, back to the versionNew service. The typical scenario for this is data updates where data is retrieved from the service, changed, and returned.

To enable round-tripping for a particular type, the type must implement the IExtensibleDataObject interface. The interface contains one property, ExtensionData, which returns an ExtensionDataObject. The property is used to store any data from future versions of the data contract that is unknown to the current version. This data is opaque to the client, but when the instance is serialized, the content of the ExtensionData property is written out with the rest of the data contract members' data.

It is recommended that all your types implement this interface to accommodate new and unknown future members.
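
A minimal sketch of the pattern (the type name and contract namespace are illustrative):

using System.Runtime.Serialization;

[DataContract(
    Name = "PurchaseOrder",
    Namespace = "http://examples.microsoft.com/WCF/2005/10/PurchaseOrder")]
public class PurchaseOrder : IExtensibleDataObject
{
    [DataMember]
    public string OrderId { get; set; }

    // Members from future versions of the data contract that this version
    // does not know about are captured here during deserialization and
    // written back out when the instance is serialized again.
    public ExtensionDataObject ExtensionData { get; set; }
}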

Data Contract Libraries

There may be libraries of data contracts where a contract is published to a central repository, and service and type implementers implement and expose data contracts from that repository. In that case, when you publish a data contract to the repository, you have no control over who creates types that implement it. Thus, you cannot modify the contract once it is published, rendering it effectively immutable.

When Using the XmlSerializer

The same versioning principles apply when using the XmlSerializer class. When strict versioning is required, treat data contracts as immutable and create new data contracts with unique, qualified names for the new versions. When you are sure that lax versioning can be used, you can add new serializable members in new versions but not change or remove existing members.

Note:
The XmlSerializer uses the XmlAnyElementAttribute and XmlAnyAttributeAttribute attributes to support round-tripping of unknown data.
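
A sketch of the equivalent XmlSerializer pattern, using hypothetical member names: unknown elements and attributes received from a newer version are captured in the catch-all members and re-emitted when the object is serialized again.

using System.Xml;
using System.Xml.Serialization;

public class PurchaseOrder
{
    public string OrderId;

    // Unknown elements from a newer version land here on deserialization
    // and are written back out on serialization.
    [XmlAnyElement]
    public XmlElement[] UnknownElements;

    // Same idea for unknown attributes.
    [XmlAnyAttribute]
    public XmlAttribute[] UnknownAttributes;
}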

Message Contract Versioning

The guidelines for message contract versioning are very similar to versioning data contracts. If strict versioning is required, you should not change your message body but instead create a new message contract with a unique qualified name. If you know that you can use lax versioning, you can add new message body parts but not change or remove existing ones. This guidance applies both to bare and wrapped message contracts.

Message headers can always be added, even if strict versioning is in use. The MustUnderstand flag may affect versioning. In general, the versioning model for headers in WCF is as described in the SOAP specification.
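
As an illustration (the contract and member names are hypothetical), a wrapped message contract with an added header might look like this; the header can be introduced even under strict versioning, whereas the body member must not be changed or removed.

using System.ServiceModel;

[MessageContract]
public class SubmitOrderRequest
{
    // Added in a later version. MustUnderstand is left false so that
    // receivers that do not recognize the header simply ignore it.
    [MessageHeader(MustUnderstand = false)]
    public string ApiKey { get; set; }

    // Existing body part; changing or removing it is a breaking change.
    [MessageBodyMember]
    public string OrderId { get; set; }
}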

Service Contract Versioning

Similar to data contract versioning, service contract versioning also involves adding, changing, and removing operations.

Specifying Name, Namespace, and Action

By default, the name of a service contract is the name of the interface. Its default namespace is "http://tempuri.org", and each operation’s action is "http://tempuri.org/contractname/methodname". It is recommended that you explicitly specify a name and namespace for the service contract, and an action for each operation to avoid using "http://tempuri.org" and to prevent interface and method names from being exposed in the service’s contract.
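
A sketch of a contract following this guidance (the names and URIs are illustrative), so that neither the CLR interface name nor "http://tempuri.org" appears on the wire:

using System.ServiceModel;

[ServiceContract(
    Name = "OrderService",
    Namespace = "http://examples.microsoft.com/WCF/2005/10/OrderService")]
public interface IOrderService
{
    // Explicit Action and ReplyAction keep the method name out of the
    // contract and avoid the tempuri.org defaults.
    [OperationContract(
        Action = "http://examples.microsoft.com/WCF/2005/10/OrderService/SubmitOrder",
        ReplyAction = "http://examples.microsoft.com/WCF/2005/10/OrderService/SubmitOrderResponse")]
    string SubmitOrder(string orderId);
}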

Adding Parameters and Operations

Adding service operations exposed by the service is a nonbreaking change because existing clients need not be concerned about those new operations.

Note:
Adding operations to a duplex callback contract is a breaking change.

Changing Operation Parameter or Return Types

Changing parameter or return types generally is a breaking change unless the new type implements the same data contract implemented by the old type. To make such a change, add a new operation to the service contract or define a new service contract.

Removing Operations

Removing operations is also a breaking change. To make such a change, define a new service contract and expose it on a new endpoint.

Fault Contracts

The FaultContractAttribute attribute enables a service contract developer to specify information about faults that can be returned from the contract's operations.

The list of faults described in a service's contract is not considered exhaustive. At any time, an operation may return faults that are not described in its contract. Therefore, changing the set of faults described in the contract is not considered breaking; for example, adding a new fault to the contract using the FaultContractAttribute, or removing an existing fault from the contract, is allowed.
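
For illustration, a fault declaration on a hypothetical contract looks like the following; because the declared set of faults is not exhaustive, this declaration can later be added or removed without breaking clients.

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract(Namespace = "http://examples.microsoft.com/WCF/2005/10/Faults")]
public class OrderFault
{
    [DataMember]
    public string Reason { get; set; }
}

[ServiceContract]
public interface IOrderProcessing
{
    // Declares in metadata that SubmitOrder may return an OrderFault.
    [OperationContract]
    [FaultContract(typeof(OrderFault))]
    void SubmitOrder(string orderId);
}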

Service Contract Libraries

Organizations may have libraries of contracts where a contract is published to a central repository and service implementers implement contracts from that repository. In this case, when you publish a service contract to the repository you have no control over who creates services that implement it. Therefore, you cannot modify the service contract once published, rendering it effectively immutable. WCF supports contract inheritance, which can be used to create a new contract that extends existing contracts. To use this feature, define a new service contract interface that inherits from the old service contract interface, then add methods to the new interface. You then change the service that implements the old contract to implement the new contract and change the "versionOld" endpoint definition to use the new contract. To "versionOld" clients, the endpoint will continue to appear as exposing the "versionOld" contract; to "versionNew" clients, the endpoint will appear to expose the "versionNew" contract.
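
A sketch of this inheritance approach, with hypothetical contract names:

using System.ServiceModel;

// versionOld contract, published to the repository and now effectively immutable.
[ServiceContract(
    Name = "PurchaseOrderService",
    Namespace = "http://examples.microsoft.com/WCF/2005/10")]
public interface IPurchaseOrderServiceV1
{
    [OperationContract]
    string GetOrderStatus(string orderId);
}

// versionNew contract extends the old one by inheritance. The service is
// changed to implement this interface, and the existing endpoint is
// switched to expose it; versionOld clients still see the old operations.
[ServiceContract(
    Name = "PurchaseOrderService",
    Namespace = "http://examples.microsoft.com/WCF/2006/02")]
public interface IPurchaseOrderServiceV2 : IPurchaseOrderServiceV1
{
    [OperationContract]
    string GetOrderShipDate(string orderId);
}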

Address and Binding Versioning

Changes to endpoint address and binding are breaking changes unless clients are capable of dynamically discovering the new endpoint address or binding. One mechanism for implementing this capability is a Universal Description, Discovery, and Integration (UDDI) registry and the UDDI Invocation Pattern, where a client attempts to communicate with an endpoint and, upon failure, queries a well-known UDDI registry for the current endpoint metadata. The client then uses the address and binding from this metadata to communicate with the endpoint. If this communication succeeds, the client caches the address and binding information for future use.

Appendix

The general data contract versioning guidance when strict versioning is needed is to treat data contracts as immutable and create new ones when changes are required. A new class needs to be created for each new data contract, so a mechanism is needed to avoid having to take existing code that was written in terms of the old data contract class and rewrite it in terms of the new data contract class.

One such mechanism is to use interfaces to define the members of each data contract and write internal implementation code in terms of the interfaces rather than the data contract classes that implement the interfaces. The following code for version 1 of a service shows an IPurchaseOrderV1 interface and a PurchaseOrderV1:

public interface IPurchaseOrderV1
{
    string OrderId { get; set; }
    string CustomerId { get; set; }
}

[DataContract(
    Name = "PurchaseOrder",
    Namespace = "http://examples.microsoft.com/WCF/2005/10/PurchaseOrder")]
public class PurchaseOrderV1 : IPurchaseOrderV1
{
    [DataMember(...)]
    public string OrderId {...}

    [DataMember(...)]
    public string CustomerId {...}
}

While the service contract’s operations would be written in terms of PurchaseOrderV1, the actual business logic would be in terms of IPurchaseOrderV1. Then, in version 2, there would be a new IPurchaseOrderV2 interface and a new PurchaseOrderV2 class as shown in the following code:

public interface IPurchaseOrderV2
{
    DateTime OrderDate { get; set; }
}

[DataContract(
    Name = "PurchaseOrder",
    Namespace = "http://examples.microsoft.com/WCF/2006/02/PurchaseOrder")]
public class PurchaseOrderV2 : IPurchaseOrderV1, IPurchaseOrderV2
{
    [DataMember(...)]
    public string OrderId {...}

    [DataMember(...)]
    public string CustomerId {...}

    [DataMember(...)]
    public DateTime OrderDate {...}
}

The service contract would be updated to include new operations that are written in terms of PurchaseOrderV2. Existing business logic written in terms of IPurchaseOrderV1 would continue to work for PurchaseOrderV2 and new business logic that needs the OrderDate property would be written in terms of IPurchaseOrderV2.

Wednesday, 28 October 2009

Are static methods thread safe?

A static method is thread safe only if it does not modify shared state. Since a static method cannot access the instance members of its class, it is generally safe as long as it does not read and write static fields (or other external state) without synchronization. You do not need to worry about local variables inside a static method, because each thread gets its own copy of them.
Static methods are also convenient because they can be called without creating an instance of the class.
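
A small C# sketch of the difference, using a hypothetical class:

public static class Counter
{
    private static int total;                         // shared state
    private static readonly object gate = new object();

    // Not thread safe: the read-modify-write on the static field can
    // interleave between threads and lose increments.
    public static void IncrementUnsafe()
    {
        total = total + 1;
    }

    // Thread safe: the shared field is only touched inside the lock.
    public static void IncrementSafe()
    {
        lock (gate)
        {
            total = total + 1;
        }
    }

    // Thread safe without locking: only local variables are used, and
    // each thread gets its own copy of them on its own stack.
    public static int Square(int value)
    {
        int result = value * value;
        return result;
    }
}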

First post

Thanks to Takashi and Thura, I got a new blog.