
Synergex Blog


Exposing WCF Services using xfNetLink .NET

By Steve Ives, Posted on July 1, 2011 at 1:40 pm


Following a recent enhancement to the product in Synergy 9.5.1, developers can now use xfNetLink .NET to create and expose Windows Communication Foundation (WCF) services. In this post I will describe how to do so.

This is the second in a series of posts relating to the various ways in which Synergy developers can make use of Windows Communication Foundation (WCF) when building their applications. The posts in the series are:

  1. Building Distributed Apps with Synergy/DE and WCF
  2. Exposing WCF Services using xfNetLink .NET (this post)
  3. Hosting WCF Services in an ASP.NET Web Application
  4. Exposing WCF Services using Synergy .NET Interop
  5. Self-Hosting WCF Services
  6. Exposing WCF services using Synergy .NET

 

The basic approach is pretty easy; in fact it’s very easy. In 9.5.1 we added a new -w command line switch to the gencs utility, and when you use the new switch you wind up with a WCF service. I told you it was easy!

So, what changes when you use the gencs -w switch? Well, the first thing that changes is the generated procedural classes, the ones that correspond to the interfaces and methods in your method catalog. These classes change in the following ways (a hand-written sketch follows the list):

  1. Three new namespaces (System.Runtime.Serialization, System.ServiceModel and System.Collections.Generic) are imported.
  2. The class is decorated with the ServiceContract attribute. This identifies the class as providing methods which can be exposed via WCF.
  3. Methods in the class are decorated with the OperationContract attribute. This identifies the individual methods as being part of the service contract exposed by the class, making the methods callable via WCF.
  4. If any methods have collection parameters, the data type of those parameters is changed from System.Collections.ArrayList to System.Collections.Generic.List. Changing the collection data type is not strictly necessary in order to use WCF, but results in a more strongly prototyped environment which is easier to use, and offers more benefits to developers in terms of increased IntelliSense, etc.
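To make that concrete, here is a small hand-written C# sketch of roughly what an attributed procedural class might look like; this is not actual gencs output, and the class, method and structure names are invented:

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Minimal stub; the attributed version of a data class is sketched after the next list.
public class CustomerData { }

// ServiceContract marks the wrapper class as defining a WCF service contract.
[ServiceContract]
public class CustomerMethods
{
    // OperationContract makes this method part of the service contract,
    // so it can be called via WCF.
    [OperationContract]
    public List<CustomerData> GetCustomersByRegion(string region)
    {
        // The real generated class would route this call to the exposed
        // Synergy method; this sketch just returns an empty, strongly typed list.
        return new List<CustomerData>();
    }
}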

The next thing that changes is the generated data classes, the ones that correspond to the repository structures exposed by the parameters of your methods. These data classes change in the following ways (again, a sketch follows the list):

  1. Two new namespaces are imported. These namespaces are System.Runtime.Serialization and System.ServiceModel.
  2. Each data class (which corresponds to one of your repository structures) that is generated is decorated with the DataContract attribute. This identifies the data class as a class which can be used in conjunction with a WCF service, and which can be serialized for transmission over the wire.
  3. Each public property (which corresponds to a field in the structure) is decorated with the DataMember attribute. This attribute identifies each property within the class as one which will be serialized, and as a result will be visible to client applications which use the WCF service.
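Again as a rough, hand-written C# illustration (not actual gencs output; the names are invented), a generated data class with the WCF attributes might look something like this:

using System.Runtime.Serialization;
using System.ServiceModel;

// DataContract marks the class (one per repository structure) as serializable for WCF.
[DataContract]
public class CustomerData
{
    // DataMember marks each public property (one per field) for serialization,
    // making it visible to client applications that use the service.
    [DataMember]
    public string CustomerId { get; set; }

    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public decimal CreditLimit { get; set; }
}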

By the way, if you use a Workbench “.NET Component Project” to create your xfNetLink .NET assemblies then there isn’t currently an option in the component information dialog to cause the new -w switch to be used. So, for the time being, what you can do is add the switch to the Workbench “Generate C# Classes” tool command line. To do this you would go into project properties for the component project, go to the Tools tab, select the “Generate C# Classes” tool, then add the -w at the end of the command line, like this:

[Screenshot: component project properties, Tools tab, with -w added to the end of the “Generate C# Classes” command line]

We’ll be adding a new checkbox to the component information dialog in 9.5.3 later this year.

Note that by using the gencs -w switch you make it possible to use the resulting classes as a WCF service, but you don’t have to do so … you can continue to use the classes as a regular xfNetLink .NET assembly also. Why would you want to do that? Well, I sometimes use gencs -w just to turn my collection parameters into generic lists!

I have submitted a working example to the Synergy/DE CodeExchange, contained within the zip file called ClientServerExamples.zip. Within the example, the following folders are relevant to exposing and consuming a WCF Service using xfNetLink .NET:

xfServerPlus – Contains the Workbench development environment for xfServerPlus and the exposed Synergy methods. If you wish to actually execute the examples then you must follow the setup instructions in the readme.txt file.

xfNetLink_wcf – Contains the Workbench project used to create the xfNetLink .NET client assembly with the WCF attributes.

3_xfpl_xfnl_wcf_service – Contains the Visual Studio 2010 solution containing an ASP.NET web application used to host the WCF service, as well as three example client applications, written in Synergy .NET, C# and VB.

So, the gencs -w command line switch makes it possible to use the resulting classes to directly expose a WCF service, but there is still the matter of how to “host” the service in order to make it accessible to remote client applications, and that’s what I will begin to address in my next post.


Optimizing your Visual Studio 2010 Experience

By Steve Ives, Posted on June 22, 2011 at 3:52 pm


Visual Studio 2010 is one of the best IDEs I have ever seen and worked with, and provides, out of the box, a truly impressive array of tools to make developers’ lives easier and more productive. But, as good as it already is, there is still room for improvement.

Luckily Microsoft has provided an extensible platform and an easy way for people who develop extensions to be able to publish those extensions to developers. Check out the “Extension Manager” utility that you will find on the Tools menu.

The extension manager, as suggested by its name, provides a simple UI which allows you to locate, download and install all kinds of Visual Studio extensions. The actual extensions are hosted on-line at a web site called the Visual Studio Gallery. Some extensions are written by Microsoft, some by third parties. Some provide additional functionality throughout Visual Studio, while some are specific to a particular language. Some add new project or item templates to Visual Studio, while others add complex capabilities in a range of scenarios.

Go take a look, there are a lot of extensions to choose from. If you’re looking for recommendations then personally I think that there are two extensions that no Visual Studio 2010 developer should be without … apart from the Synergy Language Integration for Visual Studio extension of course. You’ll find these extensions in the gallery, and if you sort the list by “Highest Ranked” then they should be close to the top of the list … they are VERY popular! My favorite extensions are both published by Microsoft, and both are free. They are called:

· Productivity Power Tools

· PowerCommands for Visual Studio 2010

I won’t even attempt to go into detail about what these extensions do, because the list of enhancements is a long one. You can click on the hyperlinks above and read all about them in the gallery. Suffice it to say that if for some reason I wind up working on a system other than my own laptop, and these extensions are not installed, I miss them both terribly!


Building Distributed Apps with Synergy/DE and WCF

By Steve Ives, Posted at 1:22 pm


At the recent SPCs in Chicago and Oxford, several of the sessions that I presented were focused on networking, and in particular on the various tools and technologies that are available to Synergy developers to allow them to build and deploy distributed software applications. I also spent quite a bit of time talking about a technology called Windows Communication Foundation.

This is the first in a series of posts relating to the various ways in which Synergy developers can make use of Windows Communication Foundation (WCF) when building their applications. The posts in the series are:

  1. Building Distributed Apps with Synergy/DE and WCF (this post)
  2. Exposing WCF Services using xfNetLink .NET
  3. Hosting WCF Services in an ASP.NET Web Application
  4. Exposing WCF Services using Synergy .NET Interop
  5. Self-Hosting WCF Services
  6. Exposing WCF services using Synergy .NET

What is WCF?

WCF is a part of the .NET framework that provides a powerful and flexible way to implement network communication between the various parts of a distributed software application. For many years Synergy developers have been able to build distributed applications using xfServerPlus and xfNetLink. Many have done so, and have deployed a wide variety of applications based on these technologies. When developing a new .NET client application developers would use xfNetLink .NET to gain access to their Synergy business logic and data.

This approach has served us well over the years, will continue to do so, and is still the most appropriate solution in some scenarios. But in some other scenarios using WCF might now be more appropriate, and could help developers to broaden the capabilities of their applications in several key areas, some of which are:

WAN Communication

Because the communication between xfServerPlus and xfNetLink uses a custom protocol which operates on a “non-standard” port, it may not be appropriate for use over wide area networks and the Internet. The primary problem here is that there is a high chance that this communication would be blocked by firewalls, some of which may be outside of your control. WCF supports multiple network transport protocols, the most basic of which is HTTP. This means that it is designed with the Internet in mind, and is very suitable for implementing applications which communicate over the Internet.

Multiple Endpoints

Another advantage of WCF is that the same WCF service can be exposed via multiple “endpoints”, and each of these endpoints can be configured to use different communication protocols and mechanisms. For example a single service could be exposed via one endpoint which uses HTTP and SOAP which would be suitable for a wide variety of client applications to be able to access from anywhere (including via the Internet), while at the same time being exposed by a second endpoint which uses low-level TCP/IP communications and binary messaging protocols suitable for use by applications running on the same local network as the service, at much higher performance. A single client application could even be configured to dynamically switch between these endpoints based on the current location of the user.
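As a rough illustration of the multiple-endpoint idea, the following hand-written C# self-hosting sketch (not generated by the Synergy tools; the service, contract names and addresses are invented) publishes a single service over both HTTP/SOAP and binary TCP:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderStatus
{
    [OperationContract]
    string GetStatus(string orderId);
}

public class OrderStatusService : IOrderStatus
{
    public string GetStatus(string orderId) { return "SHIPPED"; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderStatusService));

        // Endpoint 1: HTTP + SOAP, usable by almost any client, including over the Internet.
        host.AddServiceEndpoint(typeof(IOrderStatus),
            new BasicHttpBinding(), "http://localhost:8000/OrderStatus");

        // Endpoint 2: binary messaging over TCP, for high-performance clients on the local network.
        host.AddServiceEndpoint(typeof(IOrderStatus),
            new NetTcpBinding(), "net.tcp://localhost:8001/OrderStatus");

        host.Open();
        Console.WriteLine("Service is running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}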

More Client Options

When using xfServerPlus and xfNetLink, client applications must be written in a programming language which is able to use one of the supported xfNetLink clients (COM, Java or .NET). But because WCF services can be accessed via “standard” protocols such as HTTP and SOAP, there are many more client application possibilities; almost all development environments and programming languages are able to use these basic protocols.

Asynchronous Support

WCF also provides the ability to easily support asynchronous communication between clients and servers. When a client application calls an xfServerPlus method the client must wait for the execution of the method to complete, and for any resulting data to be returned, before performing any other processing. Using WCF it’s easy to support asynchronous method calls, where the client application can call a method and then continue with other processing, later receiving an event when the remote method has completed its operation and any resulting data is ready for use. This approach can make client applications appear much more dynamic and responsive.
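For example, a WCF client proxy generated with asynchronous operations enabled exposes an event-based calling pattern along these lines (a hand-written C# sketch; “OrderStatusClient”, “GetStatusAsync” and the order number are invented names used purely for illustration):

using System;

class AsyncClientSketch
{
    static void CheckOrderStatus()
    {
        // "OrderStatusClient" stands in for a proxy generated by
        // "Add Service Reference" with asynchronous operations enabled.
        var client = new OrderStatusClient();

        // Subscribe to the completion event before starting the call.
        client.GetStatusCompleted += (sender, e) =>
        {
            // Runs later, once the remote method has finished executing.
            Console.WriteLine("Order status: " + e.Result);
        };

        // Returns immediately; the application is free to do other work
        // (or keep its UI responsive) while the service call is in flight.
        client.GetStatusAsync("ORDER-1001");
    }
}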

Bi-Directional Communication

In many distributed software applications interaction between the client and the server is initiated by some event in the client application which requires some subsequent processing in the server application. But in some situations it is necessary for the server application to be able to, at will, initiate some processing in the client application. Unfortunately xfServerPlus and xfNetLink don’t really address this use case, but it can be addressed with WCF, because WCF service endpoints can be “self-hosted” within any type of .NET application. This means that as well as a client application having the ability to interact with a WCF service on some server somewhere, the client can also expose a WCF service, which the server application (or in fact any other application) could interact with.

WCF is an extensive subject area, and in this BLOG I’m not even going to attempt to teach you about the specifics of using WCF. Instead I’m simply going to try to make you aware of some of the reasons that you might want to explore the subject further, and make sure that you know about the various ways that you can get started.

There are several ways that Synergy developers can utilize WCF in their applications, and I’ll briefly describe each of these mechanisms below.

WCF via xfNetLink .NET

In xfNetLink .NET 9.5.1 a new feature was introduced which gives developers a lot of additional flexibility in the way that they build their applications, or rather the way that they communicate between the various parts of the application. This new feature is enabled through a new command line parameter to the gencs utility (-w). When you use this new option, two things change.

When you use gencs -w, the first thing that changes is that various attributes are added to the wrapper classes that are created by gencs. These attributes in turn allow those generated classes (and the assembly that they are compiled into) to be directly exposed as a WCF service. Previously, if a Synergy developer wanted to expose a WCF service which in turn exposed their Synergy business logic and data, they had to manually “wrap” their xfNetLink .NET procedural and data classes in other classes which in turn exposed a web or WCF service.

The second change when using gencs -w is that any collections which are exposed by the application API are transformed from un-typed “ArrayList” collections, to more useful and flexible “Generic List” collections. This change will require a small amount of re-coding in any existing client applications, but results in a significantly enhanced development experience.
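The client-side change is typically small. Using the hypothetical CustomerMethods/CustomerData names from the earlier sketches, the before-and-after difference looks roughly like this:

using System.Collections;
using System.Collections.Generic;

public static class ClientSnippets
{
    // Before (classic gencs): the collection is an untyped ArrayList,
    // so each element must be cast back to its data class.
    public static string FirstCustomerNameOld(ArrayList customers)
    {
        var first = (CustomerData)customers[0];
        return first.Name;
    }

    // After (gencs -w): the collection is a generic List, so no cast is
    // needed and IntelliSense knows exactly what the list contains.
    public static string FirstCustomerNameNew(List<CustomerData> customers)
    {
        return customers[0].Name;
    }
}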

This mechanism is likely to be of interest to developers who already have an existing solution using xfNetLink .NET and xfServerPlus, and want to take advantage of some of the additional capabilities of WCF, or to developers who wish to expose a new WCF service where the server application exposes code running on a UNIX, Linux or OpenVMS server.

WCF via Synergy .NET Interop

Another way that Synergy developers can expose WCF services is via Synergy .NET’s “Interop” project. The primary purpose of an Interop project is to allow developers with existing xfServerPlus applications to migrate their existing Synergy methods to a native .NET environment. By adding their method source code to an interop project they are able to execute those methods directly in the .NET environment, instead of using xfNetLink .NET and xfServerPlus to execute the methods with traditional Synergy on a remote server.

The Interop project works by generating Synergy .NET “wrapper classes” for your traditional Synergy methods (subroutines and functions) and data structures (records). The external interface of these wrapper classes is very similar to that of the C# wrapper classes produced by xfNetLink .NET’s gencs utility. The primary goal of the Interop project is to allow developers to migrate existing applications from a mixed .NET / traditional Synergy environment to a 100% .NET environment, with only minor changes to the code of existing client applications.

When you create an Interop project in Visual Studio, one of the options that you have (via the project properties dialogs) is to “Generate WCF contracts”, and as discussed earlier in this article, the effect of this is to add various attributes to the generated wrapper classes so that they can be directly exposed as a WCF service.

So again, for Synergy developers, this approach can provide a quick-and-easy way to expose a WCF service based on existing code. Again, this approach is likely to be primarily of interest to those who already have xfServerPlus based applications running on the Windows platform.

WCF via Synergy .NET

Both of the previous approaches were primarily focused on those who already use xfServerPlus, and both provide an easy way to get started with WCF. However in both of the previous scenarios, the external interface of your WCF service is directly defined by the interface to your underlying traditional Synergy methods (subroutines and functions), and the WCF data contracts that are exposed are defined by the repository structures exposed by the parameters of those methods. So, while the WCF capabilities provided by xfNetLink .NET and by the Synergy .NET Interop project are easy to use, you don’t get full control over the exposed interface and data. If you want that finer level of control, or if you’re just getting started with server application development and don’t have existing xfServerPlus methods, there is a better way to go.

The third way that Synergy developers can leverage WCF is by using Synergy .NET to create an all-new WCF service with native .NET code. This task was possible in 9.5.1 when Synergy .NET was first introduced, but has been made much easier through the introduction of a new “WCF Service Library” project template in 9.5.1a.

Creating a new WCF service via a WCF Service Library is the way to go for developers who want to write an all-new service application, or who don’t mind a little extra effort to re-work some existing code in order to factor out some of the “wrapper” layers involved with the previous two approaches. As a result you will have full control of the resulting WCF service, and the contracts that it exposes.
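To illustrate the kind of control you gain, a hand-authored contract lets you choose the exposed names and decide exactly which data members are serialized. The sketch below is written in C# purely for illustration (in a Synergy .NET WCF Service Library the equivalent contracts would be written in Synergy), and all of the names are invented:

using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract(Name = "OrderTracking", Namespace = "http://example.com/orders")]
public interface IOrderTracking
{
    [OperationContract(Name = "GetStatus")]
    OrderStatus GetOrderStatus(string orderNumber);
}

[DataContract(Name = "OrderStatus")]
public class OrderStatus
{
    [DataMember(Name = "OrderNumber")]
    public string OrderNumber { get; set; }

    [DataMember(Name = "Shipped")]
    public bool Shipped { get; set; }

    // No DataMember attribute, so this property is never exposed to clients.
    internal string WarehouseCode { get; set; }
}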

Wrapping Up – For Now

So, as you can see, Synergy/DE is now providing you with all kinds of new and exciting things that you can do in your Synergy applications, and I firmly believe that WCF is one of the most important, and powerful. In the near future I will be writing several additional BLOG posts which will explain these three ways of exposing WCF services in more detail, and I’ll also try to provide more detail about WCF generally. So for now, be thinking about how your existing distributed applications are built, and what things you might wish to improve about them. Or be thinking about that all-new distributed application that you’ve wanted to get started on. If you’re thinking of enhancing or implementing a distributed application, then using WCF is almost certainly the way to go.


Airline Baggage Roulette

By Steve Ives, Posted on August 8, 2010 at 7:07 am


Well it's another day in United Airlines' "friendly skies" for me. I'm setting out on a week-long trek around the East Coast and Mid-West to visit several Synergy/DE customers with one of our sales account managers, Nigel David. It takes a LOT to make me smile when leaving home at 4 a.m. on a Sunday morning, but today that's exactly what happened.

I walked into the airport terminal at Sacramento "International" Airport, and I couldn't help noticing that one of the baggage carousels had been somewhat "jazzed up" (see photo). I just had to smile … I don't often check bags but it seems like every time I do there is some kind of issue. Now, at least, it seems like this airport has decided to be honest about the state of things … checking a bag with the airlines is somewhat like a game of roulette! OK, maybe the odds of getting a checked bag back on time may be a little better than one would expect in a casino … but not much!


Starting Services on Linux

By Steve Ives, Posted on July 24, 2010 at 3:14 pm


For a while now I’ve been wondering about what the correct way is to start boot time services such as the Synergy License Manager, xfServer and xfServerPlus on Linux systems. A few years ago I managed to “cobble something together” that seemed to work OK, but I had a suspicion that I only had part of the solution. For example, while I could make my services start at boot time, I’m not sure that they were getting stopped in a “graceful” way during system shutdown. I also wondered why my “services” didn’t show up in the graphical service management tools.

My cobbled together solution involved placing some appropriately coded scripts into the /etc/rc.d/init.d folder, then creating symbolic links to those files in the appropriate folders for the run levels that I wanted my services started in, for example /etc/rc.d/rc5.d.

This week, while working on a project on a Linux system, I decided to do some additional research and see if I couldn’t fill in the blanks and get things working properly.

My previous solution, as I mentioned, involved placing an appropriately coded script in the /etc/rc.d/init.d folder. Turns out that part of my previous solution was correct. For the purposes of demonstration, I’ll use the Synergy License Manager as an example; my script to start, stop, restart and determine the status of License Manager looked like this:

#
# synlm - Start and stop Synergy License Manager
#

. /home/synergy/931b/setsde

case "$1" in

start)
        echo -n "Starting Synergy License Manager"
        synd
        ;;

stop)
        echo -n "Stopping Synergy License Manager"
        synd -q
        ;;

restart)
        $0 stop
        $0 start
        ;;

status)
        if ps ax | grep -v grep | grep -v rsynd | grep synd > /dev/null
        then
            echo "License Manager is running (pid is `pidof synd`)"
        else
            echo "License Manager is NOT running"
        fi
        ;;

*)
        echo $"Usage: synlm {start|stop|restart|status}"
        exit 1
esac

exit 0

If you have ever done any work with UNIX shell scripts then this code should be pretty self-explanatory. The script accepts a single parameter of start, stop, restart or status, and takes appropriate action. The script conforms to the requirements of the old UNIX System V init subsystem, and if placed in an appropriate location will be called by init as the system runlevel changes. As mentioned earlier, I had found that if I wanted the “service” to start, for example when the system went to runlevel 5, I could create a symbolic link to the script in the /etc/rc.d/rc5.d folder, like this:

ln -s /etc/rc.d/init.d/synlm /etc/rc.d/rc5.d/S98synlm

Init seems to process files in a run level folder alphabetically, and the existing scripts in the folder all seemed to start with S followed by a two digit number. So I chose the S98 prefix to ensure that License Manager would be started late in the system boot sequence.

This approach seemed to work pretty well, but it was kind of a pain having to create all those symbolic links … after all, on most UNIX and LINUX systems, run levels 2, 3, 4 and 5 are all multi-user states, and probably required License Manager to be started.

Then, almost by accident, I stumbled across a command called chkconfig. Apparently this command is used to register services (or more accurately init scripts) to be executed at various run levels. PERFECT … I thought! I tried it out:

# chkconfig --level 2345 synlm on

service synlm does not support chkconfig

Oh! … back to Google… Turns out I was missing something really critical in my script, and believe it or not, what I was missing was a bunch of comments! After doing a little more research I added these lines towards the top of the script:

# chkconfig: 2345 98 20
# description: Synergy/DE License Manager
# processname: synd

Lo and behold, this was the missing piece of the puzzle! Comments … you gotta love UNIX! So now all I have to do to start License Manager at boot time, and stop it at system shutdown, is use the chkconfig command to “register” the service.

And there’s more … With License Manager registered as a proper service, you can also use the service command to manipulate it. For example, to manually stop the service you can use the command:

# service synlm stop

And of course you can also use similar commands to start, restart, or find the status of the service. Basically, whatever operations are supported by the init script that you provide.

Oh, by the way, because License Manager is now running as a proper service it also shows up in the graphical management tools, and can be manipulated by those tools … very cool!

Of course License Manager is just one of several Synergy services that you could use this same technique with. There’s also xfServer, xfServerPlus and the SQL OpenNet server.


Visual Studio 2008 SP1 Hangs After Office Upgrade

By Steve Ives, Posted on July 22, 2010 at 5:55 pm


Just in case you run into the same issue…

This week I had to revert back to using Visual Studio 2008 while working on a customer project, and I pretty quickly found that I had a problem. I was working on an ASP.NET web project, and found that each time I opened a web page for editing, Visual Studio would appear to hang. Clicking anywhere on the Visual Studio window resulted in the ubiquitous Windows “beep” sound.

On researching the problem in the “Universal Documentation System” (Google) I quickly found that I was not alone in my frustrations … in fact it seems like this is a common issue right now.

Turns out that the problem is related to the fact that I recently updated from Office 2007 to Office 2010. I guess Visual Studio 2008 uses some components from Office 2007 when editing HTML and ASPX pages, and I guess that component got screwed up by the Office 2010 upgrade. If you encounter this problem you will likely find that when Visual Studio 2008 hangs it has started a SETUP.EXE process, but that process never seems to complete. Apparently it’s attempting to do a repair of the “Microsoft Visual Studio Web Authoring Component”, but for some reason can’t.

The solution seems to be to manually run the setup program and select “Repair”. On my system the setup program was C:\Program Files (x86)\Common Files\microsoft shared\OFFICE12\Office Setup Controller\Setup.exe. My system runs a 64-bit O/S … if you’re using a 32-bit O/S you’ll presumably just need to drop the (x86) part.

The repair took about two or three minutes, and lo and behold I have my Visual Studio 2008 installation working just fine again!


Linux ls Color Coding

By Steve Ives, Posted on July 20, 2010 at 4:32 pm


It’s always driven me CRAZY the way that RedHat, Fedora, and presumably other Linux systems apply color coding to various types of files and directories in the output of the ls command. It wouldn’t be so bad, but it seems like the default colors for various file types and protection modes are just totally unreadable … for example black on dark green doesn’t show up that well!.

Well, today I finally got around to figuring out how to fix it … my preference being to just turn the feature off. Turns out it was pretty easy to do, open a terminal, su to root, and edit /etc/DIR_COLORS. Towards the top of the file there is a command that was set to COLOR tty, and to disable the colorization all I had to do was change it to COLOR none. Problem solved!

Of course if you look further down in the file you’ll see that there are all kinds of settings for the color palettes to be used for various file types, file protection modes, etc. You could spend time “refining” the colors that are used … but personally I’m happier with the feature just GONE!


“Applemania” and iPhone 4

By Steve Ives, Posted on July 1, 2010 at 1:22 pm


So I finally did what I said I would never do … I set out from home, in the wee hours of the morning, to stand in line for hours in order to buy something!  The venue? … my local AT&T store. The event? … the first in store availability of iPhone 4. In my lifetime I have never done this before, but I figured … what the heck! I grabbed my iPad for a little entertainment while in line and headed out around 3am.

I stopped off at the 24 hour Starbucks drive through on the way there and stocked up with a large black coffee and a sandwich, and by 3.15am I had staked my place in line. I was surprised that there were only around 20 people ahead of me in line, I was expecting more. Apparently the first guy in the line had been there since 7pm the previous evening … a full twelve hours before the store was due to open at 7am!

So how was the wait? Actually it was kind of fun. There were all kinds of people in line, from technology geeks like me, to teens with too much money on their hands, to families, and even a few retired seniors. Everyone was chatting away, and the time passed pretty quickly. It was a beautiful night out, not too cold, not too hot, and before we knew it the sun was rising at 5.30am. By this time the line had grown considerably longer, and by the time the store opened at 7 had probably grown to two or three hundred people! I remember thinking to myself that if the same thing was being repeated at every AT&T store in the country then there were a LOT of people standing in line.

Opening hour arrived and within a few minutes I was walking out of the store with my new phone and heading off for a day in the office.

So … was the new iPhone worth the wait? … ABSOLUTELY! I’ve been using an iPhone 3G for quite a while now and I was already in love with the thing. I’d skipped the whole 3GS iteration of the device, so the differences between my old phone and my new one were … staggering!

The new Retina display, with a resolution of 960 x 640 (vs. the 480 x 320 of earlier models) means that there are four times the number of pixels packed into the same amount of screen real estate. This translates to a screen which looks fabulous, and photos and videos which look considerably better.

Speaking of photos and videos, the upgrade to a 5MP camera and the addition of an LED flash finally make it possible to take reasonably good pictures and videos with an iPhone. There is also a new camera on the front of the phone; it is a much lower resolution (only VGA in fact) but actually that's perfect if you want to take a quick photo, or record a short video and then email it out to someone (especially if you're on the new 200MB data plan … but more about that later).

The iPhone 4, like the iPad, uses Apple’s new proprietary A4 (computer on a chip) silicon, and I must say, the performance gain does seem to be considerable, even compared to the more recent 3GS models. Another benefit of this is that, despite the fact that the new device is smaller than previous iPhones, there is more room inside for a bigger battery! This is great news, because battery endurance has never been one of the iPhone’s strong points to date.

Of course one of the coolest new features is FaceTime … video calling between iPhone 4 users. I haven’t had a chance to try this out yet, but I’m looking forward to doing so soon. Apparently FaceTime only works over Wi-Fi networks, which is probably a good thing both from a performance point of view, and also potentially a cost point of view … which brings me to the subject of data plans.

In the past, in the US at least with AT&T, all iPhone users had to cough up $30/month for their data plan, and in return were able to use an unlimited amount of data. This was great because it meant that you could happily use your shiny iPhone to the full extent of its considerable capabilities and not have to worry about how much bandwidth you were actually consuming. But now … things have changed!

New iPhone customers now have a choice of two data plans. There is a $15/month plan which allows for 200MB of data transfer, or a $25 plan providing 2GB. AT&T claim that the 200MB plan will cover the data requirements of 65% of all iPhone users, which may or may not be true. Even if you opt for the more expensive 2GB plan you still have a cap, and may need to be careful. Personally I don’t think I’d be very happy on the 200MB plan, mainly because of things outside my control, like email attachments, which iPhone users don’t really have any control of.

I have been trying to find out what happens when you reach your monthly limit, but so far without success. One AT&T employee told me that on reaching your data limit the account will simply be charged for another “block” of data, without any requirement for the user to “opt in”. Another AT&T employee told me essentially the opposite; that network access would be suspended until the user opts in to purchase more data (similar to the way the iPad works). What I do know is that as you draw close to your limit you should receive free text messages (three I believe, at various stages) warning you of the issue. All I can suggest right now is … watch out for those text messages!

For existing iPhone customers, the good news is that your existing unlimited plan will be “grandfathered in” at the same rate that you currently pay, so we can all continue to consume as much bandwidth as we like and not worry too much about it!

Apple seems to have done a pretty nice job with the implementation of the recently introduced iOS 4. The platform finally has multi-tasking capabilities, which some may not immediately appreciate the benefit of, but it just makes the whole user experience so much more streamlined.  Also the new folders feature makes it easy to organize your apps logically without having to flip through endless screens of icons. Pair the advances in the operating system with the significant advances in the hardware of the new device and the overall impact is really quite significant.

Overall, I think Apple did a good job with the iPhone 4, but there are a couple of things I don't like. The main one is … well, with its "squared off" edges … the new device just doesn't feel as good in your hand as the older models. Also, no doubt you'll have heard all the hype about lost signal strength if the device is held in a certain way … well, I must say that it seems like there could be something to that. Unfortunately, when using the device for anything other than making a call, I reckon that most people hold the phone in a way that causes the problem! Of course Apple has offered two solutions to the problem … 1) don't hold the device that way … and 2) purchase a case!

But on balance I think the upgrade was worth it. There are so many cool new things about iPhone 4 but I’m not going to go into more detail here … there are hundreds of other blogs going into minute detail about all the features, and if you want to find out more a good place to start is http://www.apple.com/iphone/features.


Windows Live SkyDrive

By Steve Ives, Posted on June 28, 2010 at 5:13 pm


Have you ever wished there was an easy way to view and edit your documents on different computers, in different locations, in fact … from anywhere, and without having to carry USB thumb drives or log in to a VPN? Well there is, and it’s free.

For some time now Microsoft have offered a free service called Office Live Workspace (http://workspace.officelive.com), which went part of the way to solving the problem. Office Live Workspace essentially provides 5GB of free on-line storage, and a web-based portal, which allows you to upload, manage and view your files. It’s primarily designed to deal with Microsoft Office files, although other files can be stored there also.

Office Live Workspace worked pretty well, but it did have some restrictions, which meant that the experience was somewhat less than optimal. For example, when viewing a document it would be converted to an HTML representation of the actual document and displayed in the browser. You do have the option to edit the document of course, but doing so required you to have a recent copy of Microsoft Office installed on the computer that you were using. This is probably fine if you are using your own system, but was likely a problem if you were using a public computer in a hotel or an airline lounge.

On the positive side, if you did happen to be working on a system with a recent copy of Microsoft Office, and had the Windows Live Workspace extensions installed, it was possible to interact with your on-line storage directly from within the Office applications, similar to the way that you work with files on a SharePoint server, and this worked really well.

So, using Office Live Workspace from within Microsoft Office was a good experience, and at least you could get to, view and download your files from any Internet browser.

There is also another interesting product called Windows Live Sync, which kind of approaches the problem from another angle. Sync allows you to synchronize the files in one or more shared folders with one or more other computers. If you add a file on one computer it is replicated, pretty much instantly, to the other computers that “subscribe” to the shared folder. This is a very different approach, because although your documents clearly flow over the network (securely of course), they don’t get stored on network servers. So this is a great solution if you want to be able to edit a document at home, and have it magically appear in a folder at work so you can work on it the next day. But there is no access to the files via a web browser on some other computer.

Enter Windows Live SkyDrive (http://windowslive.com/online/skydrive), which seems to combine the concepts of both Office Live Workspace and also Windows Live Sync … and then adds even more.

SkyDrive is a free service providing 25GB of on-line storage. Like Office Live Workspace it has a web-based UI, which allows files to be uploaded, viewed, downloaded, etc. It is also possible, of course, to edit your files directly using your local Microsoft Office applications. So far so good … so what’s different?

Well, perhaps the main difference is that as well as allowing documents to be viewed in your web browser, SkyDrive also integrates Microsoft’s new Office Web applications. So, not only can you edit your Word Documents, Excel Spreadsheets and PowerPoint presentations locally, you can also do so directly in the web browser! You can even create new Office documents directly on the server in the same way.

Of course the new Office Web applications are somewhat cut-down versions of their desktop counterparts, in fact they only have a fraction of the capabilities of the full products, but nevertheless they are very usable, and allow you to do most of the routine editing tasks that you are likely to need for day-to-day work on your documents. Remember, this is all for free – pretty cool!

But there’s more … SkyDrive also provides Sync capabilities also. Not for the full 25BG of on-line storage, but there is also a 2GB “bucket” that you can use to setup synchronization of documents between computers … the difference is that the documents are also available on the SkyDrive. So now you can edit your documents locally at home, or at work … on your own computers, but still have access to them via a web interface when away from your own systems. Unfortunately the Office Web apps can’t be used on these synchronized files (hopefully that will change at some point), but you do have access to them from any browser.

By default everything that you upload or Sync through any of these products can only be accessed via your own Windows Live login … but you can setup shares and give others access to all or part of your storage too. And there is specific support for creating shared on-line photo albums too.

Oh, I almost forgot, if like me you use a combination of Windows and Mac computers then all of these products work just great on Mac too. In fact, personally I think the Office Live Workspace experience is actually better on the Mac than the PC! I have just finished testing SkyDrive on the Mac too, including Sync, and it works wonderfully well.

SkyDrive is currently a beta service, but is in the process of transitioning to full production use about now. I’ve been playing with it for a little while now, and it seems to work extremely well. Check it out.


Search Engine Optimization (SEO)

By Steve Ives, Posted on June 14, 2010 at 7:08 pm


I’ve been writing web applications for years, but I’ve never really had to put too much thought into whether search engines such as Google and Bing were finding the sites that I have worked on, or whether they were deducing appropriate information about those sites and giving them their appropriate ranking in search results. The reason for this is that most of the web development that I have been involved with has tended to be web “applications”, where the bulk of the interesting stuff is hidden away behind a login; without logging in there’s not much to look at, and in many cases the content of the site isn’t something you would want search engines to look at anyway … so who cares about SEO!

However, if you have a web site that promotes your company, products and services then you probably do care about SEO. Or if you don’t, you probably should! Improving your ranking with the search engines could have a positive impact on your overall business, and in these economic times we all need all the help we can get.

Turns out that the basic principles of SEO are pretty straightforward; really it’s mainly about making sure that your site conforms to certain standards and doesn’t contain errors. Sounds simple, right? You’d be surprised how many companies pay little, if any, attention to this potentially important subject, and suffer the consequences of doing so … probably without even realizing it.

You have little or no control over when search engine robots visit your site, so all you can do is try to ensure that everything that you publish on the web is up to scratch. Here are a few things that you can do to help improve your search engine ratings:

  • Ensure that the HTML that makes up your site is properly formatted. Robots take a dim view of improperly formatted HTML, and the more errors that are found, the lower your ratings are likely to be.
  • Don’t assume that HTML editors will always produce properly formatted HTML, because it’s not always the case!
  • Try to limit the physical size of each page. Robots have limits regarding the physical amount of data that they will search on any given page. After reading a certain amount of data from a page a robot may simply give up, and if there is important information at the bottom of a large page, it may never get indexed. Unfortunately these limits may be different from robot to robot, and are not published.
  • Ensure that every page has a title specified with the <TITLE> tag, and that the title is short and descriptive. Page titles are very important to search engine robots.
  • Use HTML headings carefully. Robots typically place a lot of importance on HTML heading tags, because it is assumed that the headings will give a good overall description of what the page is about. It is recommended that a page only has a single <H1> tag, and doesn’t make frequent use of subheadings (<H2>, <H3> etc.).
  • Use meta tags in each page. In particular use the meta keywords and meta description tags to describe what the page content is about, but also consider adding other meta tags like meta author and meta copyright. Search engine robots place high importance on the data in meta tags.
  • Don’t get too deep! Search engines have (undocumented) rules about how many levels deep they will go when indexing a site. If you have important content that is buried several levels down in your site it may never get indexed.
  • Avoid having multiple URLs that point to the same content, especially if you have external links in to your site. How many external links point to your content is an important indicator of how relevant your site is considered to be by other sites, and having multiple URLs pointing to the same content could dilute the search engine crawler’s view of how relevant your content is to others.
  • Be careful how much use is made of technologies like Flash and Silverlight. If a site’s UI is comprised entirely of pages which make heavy use of these technologies then there will be lots of <OBJECT> tags in the site that point the browser to the Flash or Silverlight content, but not much else! Robots don’t look at <OBJECT> tags; there’s no point because they would not know what to do with the binary content anyway, so if you’re not careful you can create a very rich site that looks great in a browser … but has absolutely no content that a search engine robot can index!
  • If your pages do make a lot of use of technologies like Flash and Silverlight, consider using a <NOSCRIPT> tag to add content for search engine robots to index. The <NOSCRIPT> tag is used to hold content to display in browsers that don’t support JavaScript, but these days pretty much all browsers do. However, search engine robots DO NOT support JavaScript, so they WILL see the content in a <NOSCRIPT> section of a page!
  • Related to the previous item, avoid having content that is only available via the execution of JavaScript – the robots won’t execute any JavaScript code, so your valuable content may be hidden.
  • Try to get other web sites, particularly “popular” web sites, to have links to your content. Search engine robots consider inbound links to your site as a good indicator of the relevance and popularity of your content, and links from sites which themselves have high ratings are considered even more important.
  • Tell search engine robots what NOT to look at. If you have content that should not be indexed, for any reason, you can create a special file called robots.txt in the root folder of your site, and you can specify rules for what should be ignored by robots (a minimal example appears after this list). In particular, make sure you exclude any binary content (images, videos, documents, PDF files, etc.) because these things are relatively large and may cause a robot to give up indexing your entire site! For more information about the robots.txt file refer to http://www.robotstxt.org.
  • Tell search engines what content they SHOULD look at by adding a sitemap.xml file to the root folder of your site. A sitemap.xml file contains information about the pages that you DO want search engine robots to process. For more information refer to http://www.sitemaps.org.
  • Ensure that you don’t host ANY malware on your site. Search engine robots are getting pretty good at identifying malware, and if they detect malware hosted on your site they are likely to not only give up processing the site, but also blacklist the site and never return.

Getting to grips with all of these things can be a real challenge, especially on larger sites, but there are tools out there to help. In particular I recently saw a demo of a new free tool from Microsoft called the SEO Toolkit. This is a simple application that you can use to analyze a web site in a similar way to the way that search engine robots look at a site, and the tool then produces detailed reports and suggestions as to what can be done to improve the SEO ratings for the site. You can also use the tool to compare changes over time, so you can see whether changes you make to the site have improved or worsened your likely SEO rating. For more information refer to http://www.microsoft.com/web/spotlight/seo.aspx.

This article only scratches the surface of what is an extensive and complex subject, but hopefully armed with the basics you can at least be aware of some of the basic rules, and start to improve the ratings for your site.


Preparing for Windows Phone 7

By Steve Ives, Posted on June 10, 2010 at 7:59 pm


By Steve Ives, Senior Consultant, Synergex Professional Services Group

Later this year, probably, Microsoft are releasing a new version of their phone operating system, and it’s going to be a BIG change for developers who have created applications for the earlier Windows Mobile operating systems. The new O/S is called “Windows Phone 7”, and although under the covers it’s really still Windows CE, on the surface things will look VERY different.

Perhaps the largest single change will be the user interface of Windows Phone 7 devices, which will be entirely driven by Microsoft Silverlight. That’s potentially great news for existing Silverlight or WPF developers, but will of course mean a total re-write of the UI for developers with existing applications which were essentially based on a subset of Windows Forms.

Using Silverlight will mean that we can expect some dazzling UI from applications, and indeed the O/S and the standard applications provided with it already look pretty cool, but there will definitely be a learning curve for anyone who has not developed Silverlight applications before.

Part of the good news is that the basic tools that you need to develop Windows Phone 7 applications are free. You can download Visual Studio Express Phone Edition and have pretty much what you need to develop applications. At the time of writing though, these tools are in a “pre-beta” form, and as such you can probably expect some issues, and need to update the tools pretty regularly.

There is, in my humble opinion at least, also some bad news, not least of which is that Microsoft seem to have turned the Windows Phone platform into, essentially, another iPhone! While developers can use free development tools (or full versions of Visual Studio) to create their applications (just like with the iPhone) they will have to sign up for a $99 annual “Windows Phone Developer” subscription in order to have the ability to deploy their application to their physical phone for testing (just like with the iPhone).

It will no longer be possible to deploy applications via “CAB file” installations; in fact for anything other than developer testing, the ONLY way to get an application onto a Windows Phone 7 device will be via the Microsoft “Windows Phone Marketplace” (just like with the iPhone). When a developer publishes an application to the marketplace they can choose whether the application is free, or is to be charged for. With iPhone development developers can submit an unlimited number of free applications, and many do. With Windows Phone 7, developers can only submit five free applications, and after that there will be a charge to submit further free applications. If an application is submitted for sale, Microsoft will take a 30% cut of any proceeds (just like with the iPhone).

Applications submitted for inclusion in the marketplace will be subject to “testing and approval” by Microsoft (just like iPhone apps), and apps may be rejected if they don’t meet the guidelines set by Microsoft (just like with iPhone apps). This inevitably means that some types of applications won’t be allowed. For example, with the iPhone it is not possible (in the US at least) to use “tethering” to enable you to plug your iPhone into your laptop in order to access the Internet via the cell phone network, and I would imagine we’re now going to see similar restrictions on Windows Phone 7 applications.

iPhone applications execute in a very strictly defined sandbox, and while this does afford a lot of protection for the platform (because, for example, one application can in no way interact with the data of another application), it can also seriously limit what applications can do. For example, on the iPhone it is not possible to save an email attachment (say a PDF file) and subsequently open that PDF file in another application, Acrobat Reader for example. While I understand the protections offered by the sandbox approach, as a user of the device I feel that it restricts too far what I can do with the device. The Windows Phone 7 platform is essentially exactly the same.

Other restrictions in the Windows Phone 7 platform that developers will have to come to terms with are:

  • No access to TCP/IP sockets
  • No access to Bluetooth communication
  • No access to USB connections to a host computer
  • No Windows Forms UI’s
  • No SQL Express access
  • No ability to execute native code via pinvoke (except device drivers, which must be approved by Microsoft)
  • No customization of O/S features (e.g. no alternate phone dialers)

One thing that strikes me as kind of strange is that, apparently, the web browser on Windows Phone 7 will not support Flash, and apparently will not support Silverlight either! The Flash thing is kind of expected, both Apple and Microsoft seem to do everything they can to keep Flash off THEIR devices, but not supporting Silverlight (on an O/S where the entire UI is Silverlight) was a surprise … at first. Then I realized that if the browser supported Silverlight there would be a way for developers to circumvent all of the application approval and marketplace restrictions that I talked about earlier!

Another surprise was that, like all versions of the iPhone until iOS 4.0, Windows Phone 7 devices will only execute a single user application at a time. This is one of the main things that iPhone users have complained about through the versions, and Apple just learned the lesson, but it seems that Microsoft have decided not to. For developers this means that it is imperative that applications save their data and state frequently, because the application could be terminated (with notification and the ability to clean up of course) at any time.

One thing is for sure … Microsoft seem to be betting the company on “The Cloud”, and Windows Phone 7 falls straight into this larger scale objective. The vision is that this new device will be a gateway to The Cloud in the palm of your hand. It is expected that many applications may execute directly from The Cloud (rather than being installed locally on the device) and that the device will have the ability to store (and synchronize) data in The Cloud. Apparently these features will be included for free, with a limited (not announced) amount of on-line storage, and presumably fee-based options for increasing the amount of storage available. Of course using things in The Cloud is all well and good, until you find yourself in a "roaming" situation, paying $20/MB, or more!

On the bright side, Windows Phone 7 devices will be available from a number of different manufacturers, so there will be choice and competition in the marketplace. Windows Phone 7 devices will (in the US at least) be available from a number of cell phone carriers, unlike Apple’s exclusive deal with AT&T.

While there is no doubt that Windows Phone 7 promises to be a seriously cool new device, and I have no doubt will sell in larger numbers than any of the predecessor Windows Mobile devices ever did, it remains to be seen whether it will have what it takes to be a serious competitor to the mighty iPhone. I can’t help wishing that Microsoft had done at least some things a little bit differently.


Web Browser “Session Merging”

By Steve Ives, Posted on December 8, 2009 at 5:11 pm


I just realized something about modern web browsers, and as a long time web developer it kind of took me by surprise! Maybe it shouldn’t have, maybe I should have figured this out a long time ago, but I didn’t.

What I realized is that Internet Explorer 8 shares cookies between multiple tabs that are open to the same web application. In fact it’s worse than that … it also shares those cookies between tabs in multiple instances of the browser! And as if that’s not bad enough, research shows that Firefox, Google Chrome and Apple’s Safari all do the same thing! Internet Explorer 7 on the other hand shares cookies between multiple tabs, but not between browser windows.

If you’re an ASP[.NET] developer you’ve probably figured out by now why I am so concerned, but if you’re not then I’ll try to explain.

ASP, ASP.NET, and in fact most other server-side web development platforms (including Java’s JSP) have the concept of a “current user session”. This is essentially a “context” inside the web server which represents the current user’s “state”. The easiest way to think about this is to picture a UNIX or OpenVMS system; when a user logs in a new process is created, and (usually) the process goes away when the user logs out. A web application’s user session is not a process as such, but it sometimes helps to think of it that way.

Web developers in these environments can, and very often do make use of this current user session. They use it to store state; information about the current user, or application information about what they are doing or have done, and possibly even to cache certain application data or resources to avoid having to repeatedly allocate and free those resources, or to avoid having to obtain a piece of data over and over again when the data doesn’t change.

Now, at this point I want to make it clear that I’m not saying this is a good thing to do, or a bad thing to do. Some would say it’s a bad thing to do because it increases server-side resource utilization and hence impacts the scalability of the application; and they would be correct. Others would say that by caching data or resources it is possible to avoid repeated round-trips to a database, or to an xfServerPlus service, and hence helps improve runtime performance; and they would also be correct. As with many things in software development … it’s a trade-off. Better runtime performance and a little easier to code, but at the cost of lower scalability.

In reality, unless a web application needs to routinely deal with large numbers of concurrent users, the scalability issue isn’t that important. As a result, good practice or not, many web developers are able to enjoy the luxury of using current user session state without significantly impacting anything … and they do!

So … what’s the problem?

Well, the problem is that the web (HTTP) is essentially a stateless environment. When a user types a URI into a web browser the browser connects to the web server, requests the resource(s) identified by the URI, and then disconnects. In order to have the concept of a user “session” the web server needs a way of recognizing that a subsequent request is coming from a browser that has already used the application, and is attempting to continue using it. The way that web applications usually do this is to send a “cookie” containing a unique “session ID” to the browser; the nature of HTTP cookies is that the same cookie will then be returned to the web server during subsequent connections. Web applications can detect this cookie, extract the session ID, and re-associate the browser with their existing “current user session”.
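If you want to see both halves of that handshake in ASP.NET, a quick (and again hypothetical) WebForms sketch like the following shows the server-side session ID alongside the cookie the browser sends back, which by default is named "ASP.NET_SessionId".

using System;
using System.Web;
using System.Web.UI;

public partial class Diagnostics : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The ID of the "current user session" as the server sees it.
        string serverSessionId = Session.SessionID;

        // The cookie the browser returned with this request (null on the
        // very first request, before the cookie has been issued).
        HttpCookie cookie = Request.Cookies["ASP.NET_SessionId"];

        // For an established session the two values match, and that match is
        // what re-associates this request with the existing session.
        bool sameSession = (cookie != null && cookie.Value == serverSessionId);
    }
}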

This is how most server-side web applications work; it is how they make it possible to have the concept of an on-going user session, despite the fact that the interaction with the browser is actually just a series of otherwise unrelated requests.

So now the problem may be becoming clear. Early web browsers kept these “session cookies” private to a single browser instance. Each new browser instance received and returned its own cookies, so if a user opened two browsers and logged in to the same web application twice, the web server would recognize them as two separate “logins”, would allocate two separate “current user sessions”, and what the user did in one browser window would be totally separate from what they did in the other.

But … now that modern browsers share these session ID cookies between multiple tabs, and more recently between multiple instances of the browser window itself, server-side web applications can no longer rely on the session ID to identify unique instances of the client application!

At least that is the case if the application relies on HTTP cookies for the persistence of the session ID. In ASP.NET there is a workaround for the problem … but it’s not pretty. It is possible to have the session ID transmitted to and from the browser via the URI. This works around the multiple-instance problem, but it has all kinds of other implications, because the session ID then becomes part of the URI displayed in the browser, which among other things affects the ability to create bookmarks.
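For reference, that workaround is configured in web.config rather than in code; a sketch (not a recommendation) looks something like this, telling the ASP.NET runtime to embed the session ID in the URI instead of issuing the ASP.NET_SessionId cookie:

<configuration>
  <system.web>
    <!-- Carry the session ID in the URI instead of in a cookie. -->
    <sessionState cookieless="UseUri" timeout="20" />
  </system.web>
</configuration>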

This whole thing is potentially a huge problem for a lot of server-side web applications. In fact, if your web applications do rely on session state in this way, and have ever encountered “weird random issues”, this could very well explain why!

So what’s the answer for existing applications? Well, I honestly don’t know what to tell you. Passing the session ID around via the URI may be an easy fix for you, but it may not be! If it’s not, then the only solution I can currently offer is … don’t use session state; and if you do use session state today, transitioning away from it is probably a very large task!

By the way, today I read that the proposed HTML5 standard includes a solution to this very issue. Apparently it’s called “Session Storage”. Great news! The problem is that, according to some Microsoft engineers who are definitely “in the know”, it is very possible that the HTML5 standard may not be fully ratified until the year 2022! Seriously!


PDC09 On-Line Content

By Steve Ives, Posted on November 19, 2009 at 5:15 pm


If you've been reading all about our experiences at PDC09 and would like to watch some of the sessions for yourself, you can!

Videos of many sessions are posted on-line 24 hours after the completion of the actual presentation. You can find the videos here, and many have the associated slide presentations also.


Silverlight – What’s it all About?

By Steve Ives, Posted on November 18, 2009 at 5:16 pm


Day two at PDC09, and with between ten and twelve concurrent tracks it’s pretty tough to decide which sessions to attend. Luckily there are three of us attending the conference, so with a little planning we can at least attempt to maximize our coverage. But still … so many choices!

I’ve been involved with Web development for a long time now … in fact I’ve been involved with Web development ever since there was such a thing as Web development, so I decided to spend the day trying to make sense of the choices that are now available to Web developers in the Microsoft space.

If you look across the entire industry there are many and varied Web development platforms available, some well proven, some relatively new. When Microsoft first introduced Active Server Pages (ASP) back in 1996 they changed the game. Web development was taken to a whole new level, and in my humble opinion they have led the game ever since.

Until recently the choice was clear. The Web development platform of choice was ASP.NET’s WebForms environment. Thanks in part to very rich developer support in Visual Studio, and an absolutely vast array of excellent third-party plug-ins and controls, the ASP.NET WebForms environment was, and still is, absolutely dominant in the industry.

However, things are changing. In 2007 Microsoft unveiled a new technology called Silverlight, and while it is fair to say that initial adoption was slow, Silverlight could today be considered a key part of Microsoft's vision for the future of computing in general!

Silverlight is essentially a browser plug-in that allows Web browsers to render rich user interfaces. It started out as a relatively simple plug-in which allowed browsers to display streamed video content, much like Adobe Flash, but that is no longer the case today. The technology is less than two years old at this point, and today Microsoft announced the beta of the fourth major release of the product in that short time. Clearly a lot of dedicated effort has been put into Silverlight … there must be a “bigger picture” here!

Let’s take a step back. A browser plug-in is a software component that allows a Web browser to do something that is not inherently supported by the Web (i.e. by HTTP and HTML). The issue here is that HTTP and HTML were designed as a mechanism for a client system (a browser) to safely display “content” from a server system. That server system could be located anywhere, and owned and operated by anyone. In that situation, how do you trust the publisher of the content? The simple answer is … you can’t. So for that reason early browsers didn’t allow the Web “applications” to interact with the client system in any way … they simply displayed static content.

Of course clever software developers soon found ways around that restriction, but only if the user of the client system gave their permission. That permission is typically granted by allowing the installation of third-party plug-ins on the client system. Once a plug-in is present it can be detected by the browser, and in turn detected by the Web server, which can then take advantage of the capabilities of the plug-in. And because a plug-in is a piece of software that was explicitly allowed to be installed on the client system by the user of the client system, it is not subject to the normal “restrictions” (sandbox) placed on the Web browser … plug-ins can do anything!

Silverlight isn’t the first product that Microsoft has introduced in this arena. Many years ago they introduced support for ActiveX controls to be embedded within Web pages, and the technology was very cool. It allowed very rich and highly interactive user interfaces to be rendered within a web browser, it allowed the web “application” to interact with the “resources” of the client system, and for a while ActiveX in the browser showed a lot of promise. The problem was that the ActiveX technology was only available in the Internet Explorer browser, and at that time IE didn’t have a big enough market penetration for ActiveX in the browser to become a serious player.

Today though, things are different. Internet Explorer is by far the most dominant Web browser in use (although Firefox, Safari and Google Chrome continually eat away at that market lead). But this time around Microsoft took a different approach … they made Silverlight plug-ins available for Firefox … and Safari … and today even announced the development of a plug-in for Google Chrome in the upcoming Silverlight 4 release. This is pretty impressive, because the current version of Chrome doesn’t even support plug-ins!

Today during the PDC09 keynote presentations Silverlight was front and center, and I’ll be completely honest here … the demos that I saw totally blew me away! Silverlight can now be used to render fabulous interactive user interfaces which can equal anything that can be achieved in a desktop application, and with appropriate permissions from the user of the client system Silverlight applications can fully interact with the local client system as well. It’s even possible (in the current release) to execute Silverlight applications outside of a traditional web browser, allowing them to appear to the user exactly as desktop applications would … but with no installation required (other than the Silverlight plug-in).

So why is Silverlight so important to Microsoft? And why might it be so important to all of us as software developers, and as software users? Well, the answer to that question is related to my last post to this blog … it’s all about the Cloud! The perfect model for a Cloud application is a Web application, but even with all of the advances in Web technologies it’s still really hard to make a web application that is as feature-rich and capable as a modern desktop application … unless you use a tool like Silverlight.

Now don’t get me wrong, ASP.NET WebForms is still a fabulous technology, and still very much has a place. If you have an ASP.NET WebForms application today … don’t panic, you’re in good shape too. The point here is that you now have options. In fact, there is a third option also … it’s called ASP.NET MVC, and I’ll talk about that in another blog post soon.

If you want to see examples of Silverlight in action then check out the Silverlight Showcase Samples (of course you'll need the Silverlight plug-in installed, but the site will offer it to you if you don't have it), and you can also get more information about the PDC09 Silverlight 4 (beta) demos.


Cloudy Days at PDC09

By Steve Ives, Posted on November 17, 2009 at 5:19 pm


Last week we heard about the experiences of Richard Morris and Tod Phillips when they visited the Microsoft TechEd conference in Berlin. Well, it's a whole new week, and now there’s a different conference to tell you about. This week Roger Andrews, William Hawkins and I are attending PDC09, which is Microsoft’s Professional Developer Conference, held at the Los Angeles Convention Center. As is usual in Southern California for most of the year the skies are blue, but for some reason everyone seems to be talking about Clouds!

Of course the reason for this is that during PDC08 Microsoft made several very significant announcements, and two of the biggest were Windows 7 and the Windows Azure platform.

Of course Windows 7 is already with us now, and by all accounts is proving to be extremely popular with developers and users alike. Windows Azure on the other hand is still an emerging technology, and is causing quite a buzz around here! If you don't already know, Windows Azure is Microsoft’s upcoming Windows operating system for Cloud computing. It's been in CTP (community technical preview) since PDC08, and is on the verge of transitioning to full production use on February 1st 2010.

Much of the content at PDC09 is focused on Windows Azure, on the wider subject of Cloud computing generally, and on the many Microsoft tools and technologies that enable developers to start developing new applications for deployment in the Cloud. Of course there is also a considerable amount of discussion about how to modify existing applications, either for Cloud deployment, or to allow those applications to interact with other components or entities that are Cloud based.

When the Windows Azure platform was announced twelve months ago it was in its infancy, and it showed. But in the last year it seems that Microsoft have really gone to town, extending the capabilities of the platform and other related services, and adding the APIs and tools that will make Azure easier to embrace. Many of these tools and technologies are being developed and delivered as part of .NET 4, and the new Visual Studio 2010 (currently in beta 2) includes many new features to help developers in this and many other areas. It's certainly going to be very interesting to see how developers embrace the concept of Cloud computing platforms (Azure, and others). The possibilities are incredible, and the potential for increased efficiencies and cost savings is considerable, but for sure there will be challenges to be addressed and overcome.

There have also been lots of sessions detailing how applications can leverage many new capabilities in the Windows 7 platform. Most of us were just delighted to get our hands on a version of Windows that boots faster, looks better, is more robust and easier to use, and doesn’t exhibit many of the frustrating traits that we had to endure with Windows Vista. But as it turns out there are also lots of new features in the Windows 7 platform that application developers can leverage in their products. In a recent post Richard mentioned some of the new features associated with Explorer (such as jump lists), and there are also some interesting capabilities that allow applications to easily interact with “sensors” that may be present on the PC. Many modern PC systems already have some of these sensors present, such as ambient light meters, temperature sensors, accelerometers, and location-awareness sensors such as GSM interfaces and GPS devices. It is already possible to write code to use the information published by these sensors, and a new API in the .NET Framework 4 will make it very easy to do so.
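As one example of the kind of thing this enables, here is a small sketch using the System.Device.Location types that ship with .NET Framework 4 (in System.Device.dll). It's illustrative only; it assumes a machine that actually has a location sensor or some other location provider available.

using System;
using System.Device.Location;   // new in .NET Framework 4

class LocationDemo
{
    static void Main()
    {
        var watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.Default);

        // Fires whenever the platform reports a new position.
        watcher.PositionChanged += (sender, e) =>
        {
            GeoCoordinate location = e.Position.Location;
            if (!location.IsUnknown)
                Console.WriteLine("Lat {0}, Long {1}", location.Latitude, location.Longitude);
        };

        // TryStart returns false if no location provider becomes available
        // within the timeout.
        if (!watcher.TryStart(false, TimeSpan.FromSeconds(5)))
            Console.WriteLine("No location data is available on this machine.");

        Console.ReadLine();   // keep the demo alive while events arrive
        watcher.Stop();
    }
}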

When it comes to user interface it's now clearly all about WPF, and its Web counterpart, Silverlight. The visual presentation that can be achieved with these technologies is truly impressive, and it seems that the developer productivity tools such as Expression Blend have finally caught up with the capabilities of these underlying subsystems. This will make it so much easier for people learning these new UI technologies to get up and running more effectively.

My brain hurts! Well, those are my thoughts after my first interesting (and exhausting) day at PDC09, and there are two more action-packed days still to experience. It really is remarkable what can be achieved by leveraging the .NET environment, and it makes me look forward even more to the introduction of Synergy/DE for .NET. There are exciting times ahead, and so many possibilities!

