
Synergex Blog


Exposing WCF services using Synergy .NET Interop

By Steve Ives, Posted on September 2, 2011 at 11:20 am

In this post I will explain how a WCF service can be created using a Synergy .NET Interop project. In my earlier post I explained how to use xfNetLink .NET to create a WCF service, which provided an easy way to create a WCF service for developers who were already using xfNetLink .NET and xfServerPlus. Now I want to show you an alternate approach that is somewhat similar in nature, but which does not require the continued use of xfNetLink .NET and xfServerPlus in the final solution. Of course this might not be what you want if your Synergy server is a non-Windows system.

This is the fourth in a series of posts relating to the various ways in which Synergy developers can make use of Windows Communication Foundation (WCF) when building their applications. The posts in the series are:

  1. Building Distributed Apps with Synergy/DE and WCF
  2. Exposing WCF Services using xfNetLink .NET
  3. Hosting WCF Services in an ASP.NET Web Application
  4. Exposing WCF services using Synergy .NET Interop (this post)
  5. Self-Hosting WCF Services
  6. Exposing WCF services using Synergy .NET

What is Synergy .NET Interop?

Before we get started, we'd better make sure you know what a Synergy .NET Interop project is. In a nutshell, an Interop project is a way of creating a .NET assembly containing various classes, where the external interface of those classes is defined by a collection of Traditional Synergy xfServerPlus methods, and by the Synergy records that are referenced by the parameters of those methods. The idea is that the classes exposed by the assembly created by an Interop project have exactly the same external interface that would be created if those same methods were defined in a Method Catalog and exposed via xfServerPlus and xfNetLink .NET. So the primary use case for an Interop assembly is to allow developers with existing applications which use xfNetLink .NET and xfServerPlus to factor those tools out of their solution, resulting in a pure .NET application.

In the same way that GENCS has an optional –w command line switch which essentially transforms the resulting .NET assembly into a WCF service, the Synergy .NET Interop project has a project option which does exactly the same thing.

So let’s get started. Here’s how to do it. Remember, an Interop project allows you to create a Synergy .NET assembly based on one or more Traditional Synergy methods and records, but there are some prerequisites:

  1. The Synergy methods must make use of the recently introduced attributes and documentation comments, so that the external interface of the method is entirely defined within the code. The Interop project does not use a Synergy Method Catalog. For more information on this you can refer to a blog post that I wrote back in 2009 called Wave Goodbye to the MDU.
  2. The records that are referenced by the parameters of the methods must be defined in a repository, but this would always have been the case if you were using xfServerPlus.
  3. Generally the data files that are accessed by your methods should be moved to the Windows server where the WCF service will be hosted, so that the files can be accessed locally. If your data files need to stay on a non-Windows server then of course you could use xfServer to access the files remotely.
  4. You’ll need to have a copy of the source code for the Synergy methods available on the Windows development system where you are using Synergy .NET.

These are the steps that you’ll need to complete in order to create a Synergy .NET Interop assembly from your existing Synergy methods and repository structure definitions:

Create an Interop Project

The first step in the process is to create a new Synergy .NET Interop Assembly project:

  • Open Visual Studio and create a new Synergy .NET Interop Project.
  • From the menu, select File > New > Project.
  • In the Installed Templates list select Synergy/DE > Interop.
  • Set the name and location of the new project as required (here I’m naming the project MyInteropAssembly and setting the location to my c:\temp folder).
  • Click the OK button to create the new project.

 


When you create an Interop project a new source file called SynergyRoutines.dbl is automatically included in the project. This file contains various utility routines that are used by the interop project in order to expose the same interface that xfNetLink .NET does. You shouldn’t need to be concerned with this code, and shouldn’t modify it. Simply close the file.

Add Synergy Method Source Files

Having created the Interop project the next step is to add the source code for your Synergy methods to the project. Remember that the methods that you include in an Interop project are Traditional Synergy subroutines and functions that include whatever {xfMethod} and {xfParameter} attributes are required in order to completely document the external interface of the methods within the source files.

For clarity when creating an Interop project I also like to include my method source files in a folder within the project, and I generally name the folder “Methods”. This is entirely optional though.

  • In Solution Explorer, right-click on your Interop project and from the context menu select Add > New Folder.
  • Change the name of the folder to Methods and press Enter.
  • Right-click on your new Methods folder and from the context menu select Add > Existing Item….
  • Use the resulting open file dialog to browse to the location of your Synergy method routines, select them all, and then click the Add button to add the routines to your project.

In Solution Explorer your project should now show a Methods folder containing your method source files.

Before we move on, let me make a couple of things clear. First, it is important to know that when you add existing files to a project using Add > Existing Item, those files are COPIED into your project folders. If you prefer to leave the source files in their existing location then you can use Add > Reference Existing Item instead.

The second thing that you need to understand is that the methods I am using in this example are relatively simple; they don’t have external dependencies on other routines in libraries, don’t use include files, and so on. If your methods do have such dependencies then you will need to decide how best to resolve those dependencies.

Set Environment Variables

It is likely that you will need to set some environment variables in order for your code to compile and run. At a minimum you will almost certainly need to set the environment variables that allow your repository to be located and accessed. In the case of the code that I am using, an environment variable named DAT is also used at runtime to specify the location of the data files that are accessed by the code.

Synergy .NET development projects include the ability to set environment variables in the development environment via properties of the project:

  • In Solution Explorer, right click on your project and from the context menu select Properties.
  • In the project properties dialog, to the left side, select the tab named Environment Variables.
  • Set whatever environment variables are required in order for your code to compile and run.
  • Save your changes by selecting File > Save from the menu.
  • Close the project properties dialog.

In the case of the code I am using for this example I needed to set the following environment variables:

[Screenshot: the Environment Variables page of project properties, listing the repository variables and DAT]

By the way, the tools in Synergy .NET also honor the same mechanisms for setting environment variables as Traditional Synergy on the Windows platform. So if the environment variables that you need are already present in your system environment, or are provided by the synergy.ini or synuser.ini files then you won’t need to set them here.

Build an Interop Assembly

You should now be able to build your interop assembly.

  • In Solution Explorer, right-click on your project and from the context menu select Build.
  • Check the output window; you should see something like this:
------ Build started: Project: MyInteropAssembly, Configuration: Debug Any CPU ------
========== Build: 1 succeeded or up-to-date, 0 failed, 0 skipped ==========

 

If your assembly did not build then you’ll need to resolve any errors before you proceed. The most likely cause of errors at this point is external dependencies that have not been catered for.

Check out what happened in your project. If you look in Solution Explorer you will notice that a new folder called GeneratedCode was created, and some new source files were added to the folder. These new files are at the center of how Interop projects work.

The method source files that you added to the project, in the Methods folder, were Traditional Synergy subroutines and functions. The job of an Interop project is to expose the functionality of those methods in the same way that xfNetLink .NET’s GENCS utility does. GENCS creates .NET classes that represent the Synergy methods, and the data structures exposed by those methods.

The Interop project essentially does the same thing, except that where GENCS created C# code which uses xfNetLink .NET and xfServerPlus to call the Synergy methods, the Interop project creates Synergy .NET code which calls the Synergy methods directly.

In the code used in my example the Synergy methods that I added to the project were all part of an interface named “SynergyServer”. This was defined by the {xfMethod} attributes that were present in each of the source files that I included into the project. You will notice that the GeneratedCode folder now includes a source file named SynergyServer.dbl, and if you were to look in the source file you would see that it contains a class named SynergyServer, which in turn contains methods named GetAddressForUpdate, GetAllCustomers and so on. These methods correspond to the Synergy methods that I originally included in the project.

Additionally you will notice that the GeneratedCode folder also contains several source files which represent the record definitions which are exposed by the parameters of my methods. For each record that is exposed the interop project has created a class to represent that data structure. These classes are named Address, Address_type, Contact and so on.

However, if you were to look in the source files that have been generated into the GeneratedCode folder you might notice that these generated classes do not contain any ServiceContract, OperationContract, DataContract or DataMember attributes. In other words, the classes can’t currently be exposed via WCF.

Note: You should not make changes to any of the code in the GeneratedCode folder, because this code will be recreated each time you build the project. 

Expose a WCF Service

As I briefly discussed in an earlier post, in order for classes to be exposed via WCF it is necessary to apply various attributes to those classes:

  • Class containing callable methods: ServiceContract
  • Callable method: OperationContract
  • Class defining a data structure: DataContract
  • Accessible member in a data structure: DataMember
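
To make this concrete, here is a minimal sketch of what a class looks like once these attributes have been applied. I’ve written it in C# purely for illustration, and the class and member names are hypothetical; the attributes that the Interop project adds to its generated Synergy .NET code play exactly the same roles:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Customer
    {
        // Only members marked as DataMember are serialized over the wire
        [DataMember]
        public string CustomerId { get; set; }

        [DataMember]
        public string Name { get; set; }
    }

    [ServiceContract]
    public class CustomerService
    {
        // Only methods marked as OperationContract are callable via WCF
        [OperationContract]
        public Customer GetCustomer(string customerId)
        {
            return new Customer { CustomerId = customerId, Name = "Example" };
        }
    }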

 

In an Interop project, these attributes can be automatically added to the generated classes by enabling a project option called “Generate WCF Contracts”.

  • In Solution Explorer, right-click your project and select Properties.
  • In the project properties dialog, to the left side, select the tab named Interop.
  • Check the Generate WCF Contracts option.
  • In most cases it’s also a good idea to set the Generate out parameters property to out instead of the default value of “ref”.
  • Save your changes by selecting File > Save from the menu.
  • Close the project properties dialog.
  • In Solution Explorer, right-click on your project and from the context menu select Build.
  • Check the output window and make sure that your assembly built correctly.

 

Note: The default behavior of generating out parameters as ref maintains compatibility with what GENCS does by default, but when exposing a new WCF service to a new client application you would always want output parameters defined as type out. The sketch below shows the difference as a client sees it.
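
In C# terms, the two settings produce signatures like these (hypothetical names; two separate interfaces are shown because C# does not allow overloading on ref versus out alone):

    public class Address { }

    public interface ISynergyServerDefault
    {
        // Default (GENCS-compatible): the output parameter surfaces as ref,
        // so the caller must pass in an initialized value
        void GetAddressForUpdate(string customerId, ref Address address);
    }

    public interface ISynergyServerOut
    {
        // With "Generate out parameters" set to out: no incoming value is needed
        void GetAddressForUpdate(string customerId, out Address address);
    }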

 

Now take a look at the various source files in the GeneratedCode folder. You should notice that the ServiceContract, OperationContract, DataContract and DataMember attributes are present as required.

Host the Service

That’s it, you just used a Synergy .NET Interop project to define and create a WCF service. Of course before you can actually do anything with the service you will need to host it somewhere. One option is to host the service in an ASP.NET Web Application as described in my earlier post.

However, sometimes developers don’t want to have to set up and maintain an IIS web server simply to host a WCF service. Luckily there are other ways of hosting a WCF service that do not require IIS, and that will be the topic of the next post in this series.


Hosting WCF Services in an ASP.NET Web Application

By Steve Ives, Posted on July 1, 2011 at 3:39 pm

There are several ways that you can use Synergy/DE tools to expose WCF services, but regardless of which way you decide to create a service, the next decision you have to make is how to host it. Hosting a WCF service is what exposes it and makes it possible to use it from other applications. In this post I will introduce you to one way that WCF services may be hosted, via an ASP.NET web application and an IIS web server. In a later post I will show you how to host WCF services without the requirement for an IIS web server.

This is the third in a series of posts relating to the various ways in which Synergy developers can make use of Windows Communication Foundation (WCF) when building their applications. The posts in the series are:

  1. Building Distributed Apps with Synergy/DE and WCF
  2. Exposing WCF Services using xfNetLink .NET
  3. Hosting WCF Services in an ASP.NET Web Application (this post)
  4. Exposing WCF Services using Synergy .NET Interop
  5. Self-Hosting WCF Services
  6. Exposing WCF services using Synergy .NET

 

A WCF service is simply a collection of one or more classes in an assembly, but the whole point of implementing a WCF service is to make those classes available to be used by other applications. And those applications will often be located on different networked systems from the service itself. So, in order for a WCF service to be useful, it needs to be “hosted” somewhere, and that somewhere needs to be available on a network.

There are three basic ways that WCF services can be hosted:

  1. Inside a Web application located on a Web server.
  2. Using the Windows Process Activation Service (WAS). This mechanism was introduced in Windows Server 2008 and provides a way of hosting WCF services without requiring a Web server.
  3. Services can be “self” hosted by some custom .NET application. For example, developers often host WCF services inside a Windows Service application. (A minimal sketch follows this list.)
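
To give you a taste of the third option, here is a minimal self-hosting sketch in C#. The service class and address are hypothetical, and it relies on the default endpoints that .NET 4.0 adds when none are configured; I’ll cover self-hosting properly in a later post:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public class OrderServices
    {
        [OperationContract]
        public string Ping() { return "Pong"; }
    }

    class Program
    {
        static void Main()
        {
            // Host the service at a hypothetical base address
            using (var host = new ServiceHost(typeof(OrderServices),
                new Uri("http://localhost:8000/OrderServices")))
            {
                host.Open();
                Console.WriteLine("Service is running; press ENTER to stop.");
                Console.ReadLine();
            }
        }
    }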

In this post I will show you the basics of hosting a WCF service in an ASP.NET web application. In a future post I will go on to show how to self-host services.

Setting up basic hosting for a WCF service inside an ASP.NET Web application is very easy, but as Synergy .NET doesn’t have support for creating Web applications you’ll have to use another .NET language, either C# or VB.NET. However, don’t worry about that too much, because there won’t actually be any code in the Web application!

  • Start Visual Studio 2010 and create a new ASP.NET application:
    • From the menu, select File > New > Project.
    • Under Installed Templates, select Visual C# > Web > ASP.NET Empty Web Application.
    • Choose the name and location for your new Web application.
    • Click the OK button to create the new project.
  • Add a new WCF Service to the project:
    • From the menu, select Project > Add New Item.
    • Under Installed Templates, select Visual C# > Web > WCF Service.
    • Enter the name for your new service. The name will be part of the service endpoint URI that the client connects to, so pick something meaningful. For example, if your service allows orders to be placed then you might name the service something like OrderServices.svc.
    • Click the Add button to add the new service to the project.

You will notice that three files were added to the project.


In addition to the service (.svc) file that you named, you will see two C# source files. Visual Studio added these because it assumed you would be writing a brand new WCF service and coding it in this project. But what we’re trying to do is host a WCF service that already exists in another .NET assembly.

  • Delete the two C# source files by right-clicking on each and selecting Delete.

In order to expose a WCF service that exists in another assembly, the first thing we need to do is add a reference to that assembly, to make the classes it contains available within the new Web application.

  • From the menu select Project > Add Reference and then select Browse.
  • Locate and select the assembly containing your WCF service and add a reference to your project.
  • If the assembly you selected was created using xfNetLink .NET then you will also need to add a reference to the xfnlnet.dll assembly, as is usual when using xfNetLink .NET.

You’ll need to know the namespace and class name of the WCF service in your assembly. If you can’t remember them, double-click on the assembly that you just referenced and use the Object Browser to drill into it and figure out the names. All that remains is to “point” the service file at your WCF service class.

  • Double-click on the service (.svc) file to edit it. You should see something like this:

<%@ ServiceHost Language="C#" Service="..." CodeBehind="..." %>

Edit the file as follows:

  • Change Language=”C#” to Language=”Synergy”.
  • Change Service=”…” to Service=”YourNamespace.YourWcfClass” (obviously replacing YourNamespace and YourWcfClass with the namespace and class name of your WCF service).
  • Remove the CodeBehind=”…” item … there is no code-behind, the code is all in the other assembly.
  • Save the file.

So you should now have something like this:

<%@ ServiceHost Language="Synergy" Service="YourNamespace.YourWcfClass" %>

If your service is based on an assembly created using xfNetLink .NET then you have one final configuration step to perform: you need to tell xfNetLink .NET how to connect to xfServerPlus. You can do this by using the Synergy xfNetLink .NET Configuration Utility (look in the Windows Start Menu) to edit the new web application’s Web.config file. If your xfServerPlus service is not on the same machine as your new web application then you must specify a host name or IP address, and if xfServerPlus is not using the default port (2356) then you must also specify a port number. When you’re done, the Web.config file should contain a section like this:

[Screenshot: the xfNetLink .NET section of Web.config, specifying the xfServerPlus host and port]

That’s it, you’re done. You should now have a working WCF service!

  • To see if the service has really been hosted, right-click on the service (.svc) file and select View In Browser.

You should see the service home page, which will look something like this:

[Screenshot: the WCF service home page]

If you see this page then your service is up and running. By the way, the URI in your browser’s address bar is the URI that you would use to add a “Service Reference” to the service in a client application … but we’ll get on to that in a later post.

Of course, all we have actually done here is set up a web application to host a WCF service, and we’re using Visual Studio’s built-in development web server (Cassini) to actually host it. If you were doing this for real then you’d need to deploy your new web application onto a “real” IIS web server.

We’re not done with the subject of hosting WCF services; in fact we’ve only really scratched the surface. One step at a time!

In my next post in this series I will show you how to expose a WCF service using Synergy .NET Interop.


Exposing WCF Services using xfNetLink .NET

By Steve Ives, Posted at 1:40 pm

Following a recent enhancement to the product in Synergy 9.5.1, developers can now use xfNetLink .NET to create and expose Windows Communication Foundation (WCF) services. In this post I will describe how to do so.

This is the second in a series of posts relating to the various ways in which Synergy developers can make use of Windows Communication Foundation (WCF) when building their applications. The posts in the series are:

  1. Building Distributed Apps with Synergy/DE and WCF
  2. Exposing WCF Services using xfNetLink .NET (this post)
  3. Hosting WCF Services in an ASP.NET Web Application
  4. Exposing WCF Services using Synergy .NET Interop
  5. Self-Hosting WCF Services
  6. Exposing WCF services using Synergy .NET

 

The basic approach is pretty easy; in fact it’s very easy. In 9.5.1 we added a new -w command line switch to the gencs utility, and when you use the new switch you wind up with a WCF service. I told you it was easy!

So, what changes when you use the gencs –w switch? Well, the first things to change are the generated procedural classes, the ones that correspond to the interfaces and methods in your method catalog. They change in the following ways:

  1. Three new namespaces (System.Runtime.Serialization, System.ServiceModel and System.Collections.Generic) are imported.
  2. The class is decorated with the ServiceContract attribute. This identifies the class as providing methods which can be exposed via WCF.
  3. Methods in the class are decorated with the OperationContract attribute. This identifies the individual methods as being part of the service contract exposed by the class, making the methods callable via WCF.
  4. If any methods have collection parameters, the data type of those parameters is changed from System.Collections.ArrayList to System.Collections.Generic.List. Changing the collection data type is not strictly necessary in order to use WCF, but results in a more strongly prototyped environment which is easier to use, and offers more benefits to developers in terms of increased IntelliSense, etc.

The next things to change are the generated data classes, the ones that correspond to the repository structures that are exposed by the parameters of your methods. These data classes change in the following ways (a combined sketch follows the list):

  1. Two new namespaces are imported. These namespaces are System.Runtime.Serialization and System.ServiceModel.
  2. Each data class (which corresponds to one of your repository structures) that is generated is decorated with the DataContract attribute. This identifies the data class as a class which can be used in conjunction with a WCF service, and which can be serialized for transmission over the wire.
  3. Each public property (which corresponds to a field in the structure) is decorated with the DataMember attribute. This attribute identifies each property within the class as one which will be serialized, and as a result will be visible to client applications which use the WCF service.
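
Taken together, the generated code ends up with a shape something like the following C# sketch. The names are hypothetical, and the body of the method is omitted; in the real generated code it dispatches to your Synergy method via xfNetLink .NET and xfServerPlus:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class Order
    {
        // One public property per repository field, each marked as a DataMember
        [DataMember]
        public string OrderNumber { get; set; }

        [DataMember]
        public decimal OrderTotal { get; set; }
    }

    [ServiceContract]
    public class OrderServices
    {
        // Note the generic list parameter type, rather than ArrayList
        [OperationContract]
        public List<Order> GetOrders(string customerId)
        {
            // The real generated code calls the Synergy method here
            throw new NotImplementedException();
        }
    }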

By the way, if you use a Workbench “.NET Component Project” to create your xfNetLink .NET assemblies then there isn’t currently an option in the component information dialog to cause the new –w switch to be used. So, for the time being, what you can do is add the switch to the Workbench “Generate C# Classes” tool command line. To do this, go into project properties for the component project, go to the Tools tab, select the “Generate C# Classes” tool, and then add –w at the end of the command line.


We’ll be adding a new checkbox to the component information dialog in 9.5.3 later this year.

By the way, by using the gencs –w switch you make it possible to use the resulting classes as a WCF service, but you don’t have to do so … you can continue to use the classes as a regular xfNetLink .NET assembly also. Why would you want to do that? Well, I sometimes use gencs –w just to turn my collection parameters into generic lists!

I have submitted a working example to the Synergy/DE CodeExchange, contained within the zip file called ClientServerExamples.zip. Within the example, the following folders are relevant to exposing and consuming a WCF Service using xfNetLink .NET:

  • xfServerPlus: Contains the Workbench development environment for xfServerPlus and the exposed Synergy methods. If you wish to actually execute the examples then you must follow the setup instructions in the readme.txt file.
  • xfNetLink_wcf: Contains the Workbench project used to create the xfNetLink .NET client assembly with the WCF attributes.
  • 3_xfpl_xfnl_wcf_service: Contains the Visual Studio 2010 solution containing an ASP.NET web application used to host the WCF service, as well as three example client applications, written in Synergy .NET, C# and VB.

So, the gencs –w command line switch makes it possible to use the resulting classes to directly expose a WCF service, but there is still the matter of how to “host” the service in order to make it accessible to remote client applications, and that’s what I will begin to address in my next post.


Search Engine Optimization (SEO)

By Steve Ives, Posted on June 14, 2010 at 7:08 pm

I’ve been writing web applications for years, but I’ve never really had to put too much thought into whether search engines such as Google and Bing were finding the sites that I have worked on, or whether they were deducing appropriate information about those sites and giving them their appropriate ranking in search results. The reason for this is that most of the web development that I have been involved with has tended to be web “applications”, where the bulk of the interesting stuff is hidden away behind a login; without logging in there’s not much to look at, and in many cases the content of the site isn’t something you would want search engines to look at anyway … so who cares about SEO!

However, if you have a web site that promotes your company, products and services then you probably do care about SEO. Or if you don’t, you probably should! Improving your ranking with the search engines could have a positive impact on your overall business, and in these economic times we all need all the help we can get.

It turns out that the basic principles of SEO are pretty straightforward; really it’s mainly about making sure that your site conforms to certain standards and doesn’t contain errors. Sounds simple, right? You’d be surprised how many companies pay little, if any, attention to this potentially important subject, and suffer the consequences of doing so … probably without even realizing it.

You have little or no control over when search engine robots visit your site, so all you can do is try to ensure that everything that you publish on the web is up to scratch. Here are a few things that you can do to help improve your search engine ratings:

  • Ensure that the HTML that makes up your site is properly formatted. Robots take a dim view of improperly formatted HTML, and the more errors that are found, the lower your ratings are likely to be.
  • Don’t assume that HTML editors will always produce properly formatted HTML, because it’s not always the case!
  • Try to limit the physical size of each page. Robots have limits regarding the physical amount of data that they will search on any given page. After reading a certain amount of data from a page a robot may simply give up, and if there is important information at the bottom of a large page, it may never get indexed. Unfortunately these limits may be different from robot to robot, and are not published.
  • Ensure that every page has a title specified with the <TITLE> tag, and that the title is short and descriptive. Page titles are very important to search engine robots.
  • Use HTML headings carefully. Robots typically place a lot of importance on HTML heading tags, because it is assumed that the headings will give a good overall description of what the page is about. It is recommended that a page only has a single <H1> tag, and doesn’t make frequent use of subheadings (<H2>, <H3> etc.).
  • Use meta tags in each page. In particular use the meta keywords and meta description tags to describe what the page content is about, but also consider adding other meta tags like meta author and meta copyright. Search engine robots place high importance on the data in meta tags. (Examples of this and several other items follow this list.)
  • Don’t get too deep! Search engines have (undocumented) rules about how many levels deep they will go when indexing a site. If you have important content that is buried several levels down in your site it may never get indexed.
  • Avoid having multiple URLs that point to the same content, especially if you have external links in to your site. The number of external links pointing to your content is an important indicator of how relevant other sites consider yours to be, and having multiple URLs pointing to the same content could dilute the search engine crawler’s view of how relevant your content is to others.
  • Be careful how much use is made of technologies like Flash and Silverlight. If a site’s UI is composed entirely of pages which make heavy use of these technologies then there will be lots of <OBJECT> tags in the site that point the browser to the Flash or Silverlight content, but not much else! Robots don’t look at <OBJECT> tags; there’s no point, because they would not know what to do with the binary content anyway. So if you’re not careful you can create a very rich site that looks great in a browser … but has absolutely no content that a search engine robot can index!
  • If your pages do make a lot of use of technologies like Flash and Silverlight, consider using a <NOSCRIPT> tag to add content for search engine robots to index. The <NOSCRIPT> tag is used to hold content to display in browsers that don’t support JavaScript, but these days pretty much all browsers do. However, search engine robots DO NOT support JavaScript, so they WILL see the content in a <NOSCRIPT> section of a page!
  • Related to the previous item, avoid having content that is only available via the execution of JavaScript – the robots won’t execute any JavaScript code, so your valuable content may be hidden.
  • Try to get other web sites, particularly “popular” web sites, to have links to your content. Search engine robots consider inbound links to your site as a good indicator of the relevance and popularity of your content, and links from sites which themselves have high ratings are considered even more important.
  • Tell search engine robots what NOT to look at. If you have content that should not be indexed, for any reason, you can create a special file called robots.txt in the root folder of your site, and you can specify rules for what should be ignored by robots. In particular, make sure you exclude any binary content (images, videos, documents, PDF files, etc.) because these things are relatively large and may cause a robot to give up indexing your entire site! For more information about the robots.txt file refer to http://www.robotstxt.org.
  • Tell search engines what content they SHOULD look at by adding a sitemap.xml file to the root folder of your site. A sitemap.xml file contains information about the pages that you DO want search engine robots to process. For more information refer to http://www.sitemaps.org.
  • Ensure that you don’t host ANY malware on your site. Search engine robots are getting pretty good at identifying malware, and if they detect malware hosted on your site they are likely to not only give up processing the site, but also blacklist the site and never return.
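
To make a few of these items concrete, here are some minimal sketches; all of the names and URLs are hypothetical. First, a page head with a short, descriptive title and the meta tags mentioned above:

    <head>
      <title>Acme Widgets - Product Catalog</title>
      <meta name="description" content="The Acme catalog of widgets, with specifications and pricing" />
      <meta name="keywords" content="widgets, acme, catalog" />
      <meta name="author" content="Acme Corporation" />
    </head>

Next, a robots.txt file that excludes folders of binary content from indexing and points robots at the sitemap:

    User-agent: *
    Disallow: /images/
    Disallow: /downloads/
    Sitemap: http://www.example.com/sitemap.xml

And finally, a minimal sitemap.xml listing the pages that you do want indexed:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>
      </url>
      <url>
        <loc>http://www.example.com/products.html</loc>
      </url>
    </urlset>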

Getting to grips with all of these things can be a real challenge, especially on larger sites, but there are tools out there to help. In particular I recently saw a demo of a new free tool from Microsoft called the SEO Toolkit. This is a simple application that you can use to analyze a web site in much the same way that search engine robots do, and the tool then produces detailed reports and suggestions as to what can be done to improve the SEO ratings for the site. You can also use the tool to compare changes over time, so you can see whether changes you make to the site have improved or worsened your likely SEO rating. For more information refer to http://www.microsoft.com/web/spotlight/seo.aspx.

This article only scratches the surface of what is an extensive and complex subject, but hopefully armed with the basics you can at least be aware of some of the basic rules, and start to improve the ratings for your site.


Counting on Synergy

By Richard Morris, Posted on May 16, 2010 at 9:13 pm

One of our customers, Ibcos Computers, has used Synergy/DE to develop a web server. It may sound a bit bland, but it’s actually quite cool. In simple terms they have a server program running, written in Synergy, which accepts inbound resource requests. Each request then causes a child process (again in Synergy) to be created to honour the request and return the required data. That data may be a web page, an image, or data produced by executing some existing Synergy logic from their application.

Well, so far so good. The web server application works a treat. They can host complete web sites, and execute tried and tested logic from their Synergy application accessing their Synergy DBMS data. However, the way Synergy/DE licensing works means that every time a child process is created it consumes a Synergy license. This is a perfectly valid situation – you are running a Synergy process, so you consume a Synergy license. But that process may simply be locating and returning a resource associated with a web page, for example an image. When a web browser is loading a page from a web server it will asynchronously request many page resources at the same time, which means that for a single page the web server may have many child processes executing so that the page can load quicker. It can be argued that although there are a number of live processes, each consuming a Synergy license, there is actually only one user, or client: the web browser.

Ibcos also have additional execution models for their web server. They have client applications that “log-in” to the site, execute Synergy logic to process data, then “log-out”. For this model, each connection does, and should, consume a Synergy runtime.

The challenge here is to reliably and accurately count the number of users actually accessing the web server. From a licence enforcement aspect, each resource is allocating a Synergy runtime license. However, Ibcos require a license entitlement model that would allow them to monitor individual types of connections, and be able to identify when pools of connections actually relate to an individual user.

For a solution Ibcos looked towards Synergex. I wrote a specification and they commissioned us to write the required code to utilise the Synergy/DE Licensing Toolkit to monitor license usage. Ibcos can use the Synergy/DE Licensing Toolkit to create and register licensed products within the Synergy/DE License Manager. As the different user types access the Web server, license slots are allocated and released, as required, around the logic being processed. A background process has been written that monitors license usage every five seconds. This information is stored and a program to report on license usage levels has been created. Synergex can then utilise this information to assist Ibcos with their license entitlement requirements.

Ibcos Computers, based in Poole, England, provides software solutions, written in Synergy, to the Agricultural, Ground Care, Construction, Material Handling, Commercial Vehicle and Industrial Dealer markets. Ibcos Gold, their main Synergy/DE UI Toolkit based application, has an installed base of over 500 customers and utilises the latest version 9.3 capabilities.

If you would like more details about license enforcement and entitlement, and the Synergy Licensing Toolkit, you can comment on this blog, email me at richard.morris@synergex.com, or visit the resource pages at www.synergyde.com.


Web Browser “Session Merging”

By Steve Ives, Posted on December 8, 2009 at 5:11 pm

I just realized something about modern web browsers, and as a long time web developer it kind of took me by surprise! Maybe it shouldn’t have, maybe I should have figured this out a long time ago, but I didn’t.

What I realized is that Internet Explorer 8 shares cookies between multiple tabs that are open to the same web application. In fact it’s worse than that … it also shares those cookies between tabs in multiple instances of the browser! And as if that’s not bad enough, research shows that Firefox, Google Chrome and Apple’s Safari all do the same thing! Internet Explorer 7 on the other hand shares cookies between multiple tabs, but not between browser windows.

If you’re an ASP[.NET] developer you’ve probably figured out by now why I am so concerned, but if you’re not then I’ll try to explain.

ASP, ASP.NET, and in fact most other server-side web development platforms (including Java’s JSP) have the concept of a “current user session”. This is essentially a “context” inside the web server which represents the current user’s “state”. The easiest way to think about this is to picture a UNIX or OpenVMS system; when a user logs in a new process is created, and (usually) the process goes away when the user logs out. A web application’s user session is not a process as such, but it sometimes helps to think of it that way.

Web developers in these environments can, and very often do, make use of this current user session. They use it to store state: information about the current user, application information about what they are doing or have done, and possibly even cached application data or resources, to avoid having to repeatedly allocate and free those resources, or to avoid having to obtain a piece of data over and over again when the data doesn’t change.
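
As an example, here is a hypothetical ASP.NET WebForms page (in C#) that caches a list in the current user session so that an expensive lookup only happens once per session; LoadCustomersFromDatabase is a made-up placeholder for that lookup:

    using System;
    using System.Collections.Generic;
    using System.Web.UI;

    public partial class CustomerList : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Load the data once, then reuse it on every subsequent request
            // made by the same user session
            if (Session["Customers"] == null)
                Session["Customers"] = LoadCustomersFromDatabase();

            var customers = (List<string>)Session["Customers"];
            // ... bind customers to a grid, etc.
        }

        private List<string> LoadCustomersFromDatabase()
        {
            // Placeholder for an expensive database or xfServerPlus call
            return new List<string> { "Customer A", "Customer B" };
        }
    }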

Now, at this point I want to make it clear that I’m not saying this is a good thing to do, or a bad thing to do. Some would say it’s a bad thing to do because it increases server-side resource utilization and hence impacts the scalability of the application; and they would be correct. Others would say that by caching data or resources it is possible to avoid repeated round-trips to a database, or to an xfServerPlus service, and hence helps improve runtime performance; and they would also be correct. As with many things in software development … it’s a trade-off. Better runtime performance and a little easier to code, but at the cost of lower scalability.

In reality, unless a web application needs to routinely deal with large numbers of concurrent users, the scalability issue isn’t that important. As a result, good practice or not, many web developers are able to enjoy the luxury of using current user session state without significantly impacting anything … and they do!

So … what’s the problem?

Well, the problem is that the web (HTTP) is a connectionless environment. When a user types a URI into a web browser the browser connects to the web server, requests the resource(s) identified by the URI, and then disconnects. In order to have the concept of a user “session” the web server needs to have a way of recognizing that a subsequent “request” is coming from a browser that has already used the application, and is attempting to continue to use the application. The way that web applications usually do this is to send a “cookie” containing a unique “session ID” to the browser, and the nature of HTTP cookies is that if this happens, the same cookie will be returned to the web server during subsequent connections. Web applications can then detect this cookie, extract the session ID, and re-associate the browser with their existing “current user session”.
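
In ASP.NET’s case the exchange looks something like this (the session ID value here is invented, but ASP.NET_SessionId is the default cookie name). The first response from the server includes a header like:

    Set-Cookie: ASP.NET_SessionId=lnl4bizfqkbuq145procnm45; path=/; HttpOnly

and the browser then returns that cookie on every subsequent request to the same site:

    Cookie: ASP.NET_SessionId=lnl4bizfqkbuq145procnm45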

This is how most server-side web applications work; this is how they make it possible to have the concept of an on-going user session, despite of the fact that the interaction with the browser is actually just a set of totally unrelated requests.

So now the problem may be becoming clear. Early web browsers would keep these “session cookies” private to a single instance of the browser. Each new browser instance would receive and return its own cookie, so if a user opened two browsers and logged in to the same web application twice, the web server would recognize them as two separate “logins”, would allocate two separate “current user sessions”, and what the user did in one browser window would be totally separate from what they did in the other.

But … now that modern browsers share these session ID cookies between multiple tabs in the browser, and more recently between multiple instances of the browser window itself, server-side web applications can no longer rely on the session ID to identify unique instances of the client application!

At least that is the case if the application relies on HTTP cookies for the persistence of the session ID. In ASP.NET there is a workaround for the problem … but it’s not pretty. It is possible to have the session ID transmitted to and from the browser via the URI (see the sketch below). This would work around the multiple-instance problem, but it has all kinds of other implications, because now the session ID is part of the URI in the browser, which affects the ability to create bookmarks etc.
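
For reference, here is a minimal sketch of the Web.config setting involved, assuming ASP.NET 2.0 or later:

    <configuration>
      <system.web>
        <!-- Embed the session ID in the URL instead of in a cookie -->
        <sessionState cookieless="UseUri" />
      </system.web>
    </configuration>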

This whole thing is potentially a huge problem for a lot of server-side web applications. In fact, if your web applications do rely on session state in this way, and have ever encountered “weird random issues”, this could very well explain why!

So what’s the answer for existing applications? Well, I honestly don’t know what to tell you. Passing the session ID around via the URI may be an easy fix for you, but it may not be! If it’s not, then the only solution I can currently offer is … don’t use session state; and if you do use session state, then transitioning away from it is probably a very large task!

By the way, today I read that the proposed HTML5 standard includes a solution to this very issue. Apparently it’s called “Session Storage”. Great news! The problem is that, according to some Microsoft engineers who are definitely “in the know”, it is very possible that the HTML5 standard may not be fully ratified until the year 2022! Seriously!


Silverlight – What’s it all About?

By Steve Ives, Posted on November 18, 2009 at 5:16 pm

Day two at PDC09, and with between ten and twelve concurrent tracks it’s pretty tough to decide which sessions to attend. Luckily there are three of us attending the conference, so with a little planning we can at least attempt to maximize our coverage. But still … so many choices!

I’ve been involved with Web development for a long time now … in fact I’ve been involved with Web development ever since there was such a thing as Web development, so I decided to spend the day trying to make sense of the choices that are now available to Web developers in the Microsoft space.

If you look across the entire industry there are many and varied Web development platforms available, some well proven, some relatively new. When Microsoft first introduced Active Server Pages (ASP) back in 1996 they changed the game. Web development was taken to a whole new level, and in my humble opinion they have lead the game ever since.

Until recently the choice was clear. The Web development platform of choice was ASP.NET’s WebForms environment. Thanks in part to very rich developer support in Visual Studio, and an absolutely vast array of excellent third-party plug-ins and controls, the ASP.NET WebForms environment was, and still is, absolutely dominant in the industry.

However, things are changing. In 2007 Microsoft unveiled a new technology called Silverlight, and while it is fair to say that initial adoption was slow, Silverlight could today be considered a key part of Microsoft’s vision for the future of computing in general!

Silverlight is essentially a browser plug-in that allows Web browsers to render rich user interfaces. It started out as a relatively simple plug-in which allowed browsers to display streamed video content, much like Adobe Flash, but that is no longer the case today. The technology is barely two years old at this point, and today Microsoft announced the beta for the fourth major release of the product in that short time. Clearly a lot of dedicated effort has been put into Silverlight … there must be a “bigger picture” here!

Let’s take a step back. A browser plug-in is a software component that allows a Web browser to do something that is not inherently supported by the Web (i.e. by HTTP and HTML). The issue here is that HTTP and HTML were designed as a mechanism for a client system (a browser) to safely display “content” from a server system. That server system could be located anywhere, and be owned and operated by anyone. In that situation, how do you trust the publisher of the content? The simple answer is … you can’t. For that reason early browsers didn’t allow Web “applications” to interact with the client system in any way … they simply displayed static content.

Of course clever software developers soon found ways around that restriction, but only if the user of the client system gave their permission. That permission is typically granted by allowing the installation of third-party plug-ins on the client system. Once a plug-in is present it can be detected by the browser, and in turn detected by the Web server, which can then take advantage of the capabilities of the plug-in. And because a plug-in is a piece of software that was explicitly allowed to be installed on the client system by the user of the client system, it is not subject to the normal “restrictions” (sandbox) placed on the Web browser … plug-ins can do anything!

Silverlight isn’t the first product that Microsoft has introduced in this arena. Many years ago they introduced support for ActiveX controls to be embedded within Web pages, and the technology was very cool. It allowed very rich and highly interactive user interfaces to be rendered within a web browser, it allowed the web “application” to interact with the “resources” of the client system, and for a while ActiveX in the browser showed a lot of promise. The problem was that the ActiveX technology was only available in the Internet Explorer browser, and at that time IE didn’t have a big enough market penetration for ActiveX in the browser to become a serious player.

Today though, things are different. Internet Explorer is by far the most dominant Web browser in use (although Firefox, Safari and Google Chrome continually eat away at that market lead). But this time around Microsoft took a different approach … they made Silverlight plug-ins available for Firefox … and Safari … and today even announced the development of a plug-in for Google Chrome in the upcoming Silverlight 4 release. This is pretty impressive, because the current version of Chrome doesn’t even support plug-ins!

Today during the PDC09 keynote presentations Silverlight was front and center, and I’ll be completely honest here … the demos that I saw totally blew me away! Silverlight can now be used to render fabulous interactive user interfaces which can equal anything that can be achieved in a desktop application, and with appropriate permissions from the user of the client system Silverlight applications can fully interact with the local client system as well. It’s even possible (in the current release) to execute Silverlight applications outside of a traditional web browser, allowing them to appear to the user exactly as desktop applications would … but with no installation required (other than the Silverlight plug-in).

So why is Silverlight so important to Microsoft? And why might it be so important to all of us as software developers, and as software users? Well, the answer to that question is related to my last post to this blog … it’s all about the Cloud! The perfect model for a Cloud application is a Web application, but even with all of the advances in Web technologies it’s still really hard to make a web application that is as feature-rich and capable as a modern desktop application … unless you use a tool like Silverlight.

Now don’t get me wrong, ASP.NET WebForms is still a fabulous technology, and still very much has a place. If you have an ASP.NET WebForms application today … don’t panic, you’re in good shape too. The point here is that you now have options. In fact, there is a third option also … it’s called ASP.NET MVC, and I’ll talk about that in another blog post soon.

If you want to see examples of Silverlight in action then check out the Silverlight Showcase Samples (of course you’ll need the Silverlight plug-in installed, but the site will offer it to you if you don’t have it), and you can also get more information about the PDC09 Silverlight 4 (beta) demos.

