
Synergex Blog


Using Workbench to Build Applications on Remote Servers

By William Hawkins, Posted on February 18, 2010 at 4:21 pm

I recently went to a customer site to help them integrate Workbench into their OpenVMS development environment. As a source code editor, the “integration” is relatively simple: you just need to have NFS/CIFS/Samba installed, and use it to make your OpenVMS (or UNIX) drives look like they’re actually Windows drives. However, when you want to compile or link your application, you either need to go back to your telnet window and build it there, or you can download the RemoteBuild utility from the SynergyDE CodeExchange. There are currently two versions for OpenVMS, both provided by Chris Blundell from United Natural Foods Inc.

Synergex PSG decided that we wanted to provide a remote build facility that could talk to multiple environments using information from a single Workbench project. We wanted to minimize any potential firewall issues (for companies that have internal firewalls), and we also wanted to build on the SynPSG.System classes (distributed with ChronoTrack, our SPC 2009 demonstration application) for the network communication. For those of you unfamiliar with the SynPSG.System classes, they are a partial implementation of the Microsoft System classes written in Synergy (so they work on all supported platforms), but when we have Synergy for .NET available, we'll be able to use the native .NET Framework System classes without modifying code. (OK, we'll have to change the import statements, but that should be all.)

So PSG has posted our flavor of a remote building application into CodeExchange – it's called remoteServer. There is a client component to install into Workbench, and a server component (written in Synergy) that runs on each development server. If you have both Samba (or equivalent) and remoteServer installed and configured, you can compile your application on the remote system, and in the unlikely event of a compile error (I know, you never have any coding errors) you can double-click on the error in the output window and go straight to that line of code in the source file on your remote server.

If you work in a multi-platform development environment, I would encourage you to download any of the remote build offerings in CodeExchange and start using the power of Workbench to help improve the productivity of your developers.


To Print or not To Print

By , Posted on February 12, 2010 at 10:10 pm

The Synergy Windows Printing API is a collection of routines that allow you to fully control printing on the Windows platform, and make use of extended printer features.  The API records the print information in a Windows enhanced metafile which can then be played back to devices such as a printer or the print preview window.

Many of you will be using this API to bring true Windows printing capabilities to your applications. So, when you upgraded to version 9.3 of Synergy and rebuilt your applications, where did your prints disappear to? You may have been caught out by a new feature added to the API. In version 9.3, two new integer fields were introduced to the font specifications structure to enable you to change the orientation and escapement of the font. The orientation allows you to specify the angle between the baseline of a character and the page's horizontal axis. The escapement specifies the angle between the baseline of a string of text and the page's horizontal axis. For full details, see the version 9.3 on-line manuals. The default value for both of these new fields is zero, meaning that no rotation of the text will occur.

To set the required font characteristics you use the DWP_FONT sub-function of the %WPR_SETDEVICE() function. The font characteristics are defined within a structure called “font_specs”, included in the DBLDIR:winprint.def header file. This structure is used to allocate memory to store the font specifications and provide them to the %WPR_SETDEVICE() function. You allocate the required memory using the %MEM_PROC() function, for example:

    fontHandle = %mem_proc(DM_ALLOC+DM_STATIC, ^size(font_specs))

You can then set the required font details:

    ^m(font_specs.face_name, fontHandle) = "Courier"

However, if you don’t specify values for all of the fields defined within the “font_specs” structure, those values will be undefined – and for the integer fields, most likely non-zero. So the two new fields will actually contain values, and orientation and escapement settings will be passed to the %WPR_SETDEVICE() function. Things will no longer print as you expect!
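If you hit this, the most direct (if not future-proof) patch is to zero the two new fields yourself after allocating the memory. The sketch below makes two assumptions: that the new fields in the “font_specs” structure are named orientation and escapement (as the description above suggests), and that, like the underlying Windows LOGFONT structure, they are expressed in tenths of a degree.

    ; Sketch only - the field names orientation and escapement are
    ; assumptions based on the description above
    ^m(font_specs.orientation, fontHandle) = 0    ;no character rotation
    ^m(font_specs.escapement, fontHandle) = 0     ;no baseline rotation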

One way to ensure that the integer data is initialised when you allocate dynamic memory is to use the DM_NULL qualifier. This ensures that any integer data is initialised correctly. For example:

    fontHandle = %mem_proc(DM_ALLOC+DM_STATIC+DM_NULL, ^size(font_specs))

However, this does not correctly initialise any non-integer data, and it is not future-proof: if the structure is modified to include new alpha, decimal or implied-decimal fields in the future, those fields would then contain incorrect values. An alternative is to initialise a local copy of the font specifications structure and then assign that to the allocated dynamic memory. The INIT statement ensures that fields within a record or structure are initialised correctly based on the field type. First, create a structfield – a field defined as a structure type.

.ifdef DBLV9
record
    tmpFont    ,font_specs
endrecord
.endc

The code to allocate the memory for the font specification remains the same:

    fontHandle = %mem_proc(DM_ALLOC+DM_STATIC+DM_NULL, ^size(font_specs))

Now we use the structfield to correctly initialise the dynamic memory:

.ifdef DBLV9
    init tmpFont    ;ensures individual fields correctly initialised
    ^m(font_specs, fontHandle) = tmpFont
.endc

This same coding structure can be applied to the other structure specifications defined in the DBLDIR:winprint.def header file.  The code is backward compatible with earlier versions of Synergy, and will prevent any similar issues in the future.

Alternatively, from version 9.1 onwards you can remove the need to allocate memory and simply use the structfield. For example:

record
    textFont    ,font_specs
endrecord

And then use the structfield to define the font characteristics:

    init textFont    ;ensures individual fields are correctly initialised
    textFont.face_name = "Courier"
    textFont.weight = 700
    wpr_setdevice(rptHandle, DWP_FONT, textFont)
    wpr_print(rptHandle, DWP_WRITEOUT, x, y, "Hello Bloggers!")

This second approach has two advantages. First, you no longer need to allocate and clean up any dynamic memory. Second, you get full IntelliSense within Workbench, listing the available fields within the font structfield.


What’s in my library?

By William Hawkins, Posted on February 4, 2010 at 4:22 pm

The obvious answer that springs to mind is "books", but some may respond "what sort of library?".  Of course, in this context, I'm really referring to a library containing Synergy object code. 

When referring to Synergy subroutines and functions, on both Windows & Unix, you can perform a "listdbo" or "dblibr -t" on the object file/object library and peruse the output to see the names of your routines.  However, when referring to methods (in classes/namespaces), the name used is mangled.  This mangling process takes the fully qualified name of the method, the return type, and all the parameter types, and reduces it down to a mangled name.  In a lot of cases, you can look at the mangled name and stand a chance of actually recognizing the name of the routine, but decoding the parameters in your head may require the use of illegal drugs.

For example, if you see a mangled name of '7SYNPSG5CORE11UTILITIES3CC6SYNCC11GETCCNAME_O7SYSTEM7STRINGI', you could intuitively see that it's probably this routine: 'SYNPSG.CORE.UTILITIES.CC.SYNCC.GETCCNAME(I)@SYSTEM.STRING'. 
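Lining the mangled and unmangled forms up makes the encoding easier to spot. The digits appear to act as length counts for the name segments that follow, with the return type after the underscore and the parameter types at the end (the trailing I being the integer parameter) – a reading inferred from these examples rather than a documented specification:

    7SYNPSG 5CORE 11UTILITIES 3CC 6SYNCC 11GETCCNAME _O7SYSTEM7STRING I
    SYNPSG .CORE .UTILITIES   .CC .SYNCC .GETCCNAME  @SYSTEM.STRING  (I)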

But what about this one: '7SYNPSG5CORE11UTILITIES3CC6SYNCC11GETCCNAME_O7SYSTEM7STRINGSP0P1P2P39CARDTYPE'?  It’s the "same" overloaded routine, but it has a SYNPSG.CORE.UTILITIES.CC.CARDTYPE parameter instead of an integer parameter.  Similarly, if you saw '7SYNPSG5CORE11UTILITIES8WORKING9SHOWFORM_XO7SYSTEM7STRING', you could probably see that it's 'SYNPSG.CORE.UTILITIES.WORKING.SHOWFORM(@SYSTEM.STRING)'.  Now, what if you saw this: '7SYNPSGCR1UTLTESWRKNGSHW9HTGB13'?   Well, it's surprisingly the same SHOWFORM method, but it's been mangled beyond recognition.  The term used here at Synergex is "crushed".  In fact, if you have a crushed name, it's basically impossible to determine the original method name. If the mangled name of the method is too long for the environment, it is crushed down to the maximum size permissible.  OpenVMS has a limit of 31 characters, 64-bit systems have a 188-character limit, and 32-bit systems have a 200-character limit. Actually, the limit is one character less, because we use the rule that if a name is exactly the maximum size, it must be a crushed name.  And as the last 7 characters of a crushed name are a checksum, on OpenVMS you’re only left with 24 characters for a human-“readable” name.

So how, exactly, does a mangled name become a crushed name? Well, characters are removed, one by one, until the name is exactly the correct length. First non-alphanumeric characters are removed, then vowels, then letters from the name of your first-born child, then random letters based on the cycle of the moon, until you eventually get a name that fits. So, with only 31 characters for method names, the OpenVMS users out there will have to become accustomed to seeing crushed (i.e. indecipherable) method names inside shared image libraries.  If you need to create an OpenVMS shared image library, I would recommend creating an object library and using the make_share.com file to convert it to a shared image library.  However, you may need to review the use of the MATCH qualifier on your shared image libraries, as method names can change with the modification of a parameter (or return) type. So changing a method to (for example) have an additional optional parameter will cause a new method name to be created.  Unless you rebuild your application to see (and use) the new name, you could find that the application starts giving “routine not found” errors.
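If you’ve not created one before, the sequence is something like this (a sketch only – the library and object file names here are hypothetical, and you should check make_share.com itself for the exact arguments it expects):

    $ LIBRARY /CREATE /OBJECT MYLIB.OLB MYMETHODS.OBJ
    $ @MAKE_SHARE MYLIB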

You might think that not knowing the name of the routine would be a problem – it's not really, because the compiler has a consistent supply of the same high-quality drugs and, given a constant method signature, will always generate the same crushed name.  So it really doesn't care that your code said "object.showform(1)", because it'll know that you really want to call the method '7SYNPSGCR1UTLTESWRKNGSHW9HTGB13' from your library.

For most developers out there, the actual name of the method inside a library is unimportant, but I thought the more curious among you would be interested in this.


Picture This!

By , Posted on January 28, 2010 at 11:29 pm

In days of old, carrying your trusty, heavyweight camera around your neck, you’d take the perfect snap.  You’d then continue snapping away until the film was full, which, for the impatient among us, meant taking a large number of “I was just finishing off the film, darling” type shots. On returning home you’d quickly rewind the film back into its cartridge – unless you had one of those fancy modern “advanced” film cameras of course, where the camera did it for you – and then pop it into your “postage free” envelope.  And off you sent it, in the hope that your “once in a lifetime” shot would be processed and returned to you post-haste.

And what of the results?  Normally a hazy, slightly out-of-focus batch of glossy pictures that really don’t do your artistic prowess justice.  After all, the subject must have moved, because you’d never intentionally crop the top of the bride’s head just above the eye line – and why was the gentleman third from the right picking his nose?

If this sounds like your photography experiences, the chances are – if you live in the UK – that the people processing your film were a company called Harrier LLC.  You may know them better as “TRUPRINT”.  At their peak they were processing 85,000 rolls of film per day!  Today, however, they process no more than 1,000 films.  Not really a great statistic if film processing is your business.  But Harrier saw the potential of the digital world and has embraced the processing of the digital image.  Today they average over 200,000 prints a day, which can rise to over 1,000,000 prints at times like Christmas.  However, although this figure is significantly lower than the volumes they handled in the heyday of film, printing is now a very small part of their product portfolio.  The key to success was to diversify.  Today, in this digital age, people want more than just a glossy print of their blurred, half-cropped pictures.  They want the t-shirt, a coffee mug and of course the family calendar, all adorned with their own artistic compositions.  Today, with a few clicks of a mouse you can upload your pictures and have them delivered to your door on anything from coffee mugs and placemats to full-size framed canvases.  You can even have your prized picture delivered to you in lots of tiny pieces – in the form of a jigsaw!

So where does Synergy fit into their IT strategy?  Their OpenVMS-based Synergy/DE applications manage the order processing and management of every item they produce.  Once an order is accepted through one of the many portals – including a host of web sites, major supermarket chains, leading pharmacies, and of course the post – the Synergy application takes control.  It manages the processing of the required prints, storybooks or mugs (to name but a few of their more than 500 product lines) through to despatch and successful delivery to the customer.  The Synergex Professional Services Group is assisting Harrier in evaluating the work needed to migrate their Synergy/DE applications from the OpenVMS platform to Microsoft Windows.


Web Browser “Session Merging”

By Steve Ives, Posted on December 8, 2009 at 5:11 pm

I just realized something about modern web browsers, and as a long time web developer it kind of took me by surprise! Maybe it shouldn’t have, maybe I should have figured this out a long time ago, but I didn’t.

What I realized is that Internet Explorer 8 shares cookies between multiple tabs that are open to the same web application. In fact it’s worse than that … it also shares those cookies between tabs in multiple instances of the browser! And as if that’s not bad enough, research shows that Firefox, Google Chrome and Apple’s Safari all do the same thing! Internet Explorer 7 on the other hand shares cookies between multiple tabs, but not between browser windows.

If you’re an ASP[.NET] developer you’ve probably figured out by now why I am so concerned, but if you’re not then I’ll try to explain.

ASP, ASP.NET, and in fact most other server-side web development platforms (including Java’s JSP) have the concept of a “current user session”. This is essentially a “context” inside the web server which represents the current user’s “state”. The easiest way to think about this is to picture a UNIX or OpenVMS system: when a user logs in a new process is created, and (usually) the process goes away when the user logs out. A web application’s user session is not a process as such, but it sometimes helps to think of it that way.

Web developers in these environments can, and very often do, make use of this current user session. They use it to store state: information about the current user, or application information about what they are doing or have done – and possibly even to cache certain application data or resources, to avoid having to repeatedly allocate and free those resources, or to avoid having to obtain a piece of data over and over again when the data doesn’t change.

Now, at this point I want to make it clear that I’m not saying this is a good thing to do, or a bad thing to do. Some would say it’s a bad thing to do because it increases server-side resource utilization and hence impacts the scalability of the application; and they would be correct. Others would say that by caching data or resources it is possible to avoid repeated round-trips to a database, or to an xfServerPlus service, and hence helps improve runtime performance; and they would also be correct. As with many things in software development … it’s a trade-off. Better runtime performance and a little easier to code, but at the cost of lower scalability.

In reality, unless a web application needs to routinely deal with large numbers of concurrent users, the scalability issue isn’t that important. As a result, good practice or not, many web developers are able to enjoy the luxury of using current user session state without significantly impacting anything … and they do!

So … what’s the problem?

Well, the problem is that the web (HTTP) is a connectionless environment. When a user types a URI into a web browser the browser connects to the web server, requests the resource(s) identified by the URI, and then disconnects. In order to have the concept of a user “session” the web server needs to have a way of recognizing that a subsequent “request” is coming from a browser that has already used the application, and is attempting to continue to use the application. The way that web applications usually do this is to send a “cookie” containing a unique “session ID” to the browser, and the nature of HTTP cookies is that if this happens, the same cookie will be returned to the web server during subsequent connections. Web applications can then detect this cookie, extract the session ID, and re-associate the browser with their existing “current user session”.
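For example, with an ASP.NET application the exchange looks something like this (the cookie value here is made up, but ASP.NET’s session cookie really is named ASP.NET_SessionId). First the server issues the cookie:

    HTTP/1.1 200 OK
    Set-Cookie: ASP.NET_SessionId=2b3dqsyl45grurhw45xw4a55; path=/; HttpOnly

Then the browser sends it back on every subsequent request, allowing the server to re-associate the request with the existing session:

    GET /app/orders.aspx HTTP/1.1
    Cookie: ASP.NET_SessionId=2b3dqsyl45grurhw45xw4a55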

This is how most server-side web applications work; this is how they make it possible to have the concept of an on-going user session, despite the fact that the interaction with the browser is actually just a set of totally unrelated requests.

So now the problem may be becoming clear. Early web browsers would keep these “session cookies” private to a single instance of a browser. A new browser instance would receive and return different cookies, so if a user opened two browsers and logged in to the same web application twice, the web server would recognize them as two separate “logins”, would allocate two separate “current user sessions”, and what the user did in one browser window would be totally separate from what they did in the other.

But … now that modern browsers share these session ID cookies between multiple tabs in the browser, and more recently between multiple instances of the browser window itself, server-side web applications can no longer rely on the session ID to identify unique instances of the client application!

At least, that is the case if the application relies on HTTP cookies for the persistence of the session ID. In ASP.NET there is a workaround for the problem … but it’s not pretty. It is possible to have the session ID transmitted to and from the browser via the URI. This works around the multiple-instance problem, but it has all kinds of other implications, because now the session ID is part of the URI in the browser, which affects the ability to create bookmarks, etc.
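For the curious, in ASP.NET the switch is configuration rather than code – something like this in web.config (a sketch; see the sessionState documentation for the full set of options):

    <configuration>
      <system.web>
        <sessionState cookieless="true" timeout="20" />
      </system.web>
    </configuration>

With that in place, ASP.NET 2.0 embeds the session ID in the URI in a form like /app/(S(lit3py55t21z5v55vlm25s55))/orders.aspx … which is exactly why bookmarks become a problem.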

This whole thing is potentially a huge problem for a lot of server-side web applications. In fact, if your web applications do rely on session state in this way, and have ever encountered “weird random issues”, this could very well explain why!

So what’s the answer for existing applications? Well, I honestly don’t know what to tell you. Passing the session ID around via the URI may be an easy fix for you, but it may not be! If it’s not, then the only solution I can currently offer is … don’t use session state; and if you do use session state, then transitioning away from it is probably a very large task!

By the way, today I read that the proposed HTML5 standard includes a solution to this very issue. Apparently it’s called “Session Storage”. Great news! The problem is that, according to some Microsoft engineers who are definitely “in the know”, it is very possible that the HTML5 standard may not be fully ratified until the year 2022! Seriously!


Testing Times

By , Posted on November 30, 2009 at 11:35 pm

Today, I am a Net Ninja!  Well that’s what my freebie t-shirt, procured from TechEd, says I am :).

To be honest I’m not sure what a Net Ninja is, but my understanding of a “Ninja” is a warrior, a fighter – someone who battles against adversity in pursuit of perfection.  Well, I feel a little like that at the moment.  Battle-scarred and bruised.  Not in the physical sense, you understand.  I’ve been on beta testing duties, which brings both frustration and joy in equal amounts.

We currently have two products out in beta test.  The 9.3 beta contains some really cool new features, including encryption and a new “select” class.  The data encryption allows you to store your data in your SDBMS/RMS files in a form that can’t be read, even if you unload (or, for you VMS guys, edit!) your files.  You can also encrypt your data between client and server for both xfServer and xfServerPlus.  The latter two ensure that any data you are transferring between server and client is “un-sniff-able”!

Encryption is becoming more and more important in today’s world, and the ability to simply “switch on” encryption within your client/server Synergy applications is a really powerful capability.  I was stung earlier this year by identity theft.  I’d ordered a product off a web site, secure from the outside world because I’d ensured it was all done over HTTPS!  Suddenly I noticed credit card transactions that I didn’t know about – first port of call, my wife!  She knew nothing of the transactions, and after some investigation I figured the only way my information could have been got at was from within the company – their employees!  Version 9.3 offers you the ability to encrypt your data at field, record, routine (xfServerPlus) or file level.  It’s extremely flexible.  When 9.3 is released, ChronoTrack will have been updated to provide examples of using encryption, and we’ll post it onto CodeExchange at the same time.

The “select” class is a cool and mega-efficient way – especially over xfServer – of selecting data from a file that matches your selection criteria: your where clause!  I think Tod is preparing a blog about this as we speak :).
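I won’t steal his thunder, but to give you a flavor, usage is along these lines. I’ll stress that this is a rough sketch based on the beta – the class names, operator syntax and signatures shown here are assumptions and may not match the final release:

    ; Rough sketch only - Select/From/Where usage as seen in the 9.3 beta;
    ; names and operator syntax are assumptions, not final documentation
    open(ch, i:i, "DAT:customer.ism")
    foreach custRec in new Select(new From(ch, custRec), (Where)(custRec.state == "CA"))
    begin
        ; only matching records are returned - and when the file lives
        ; behind xfServer, only matching records cross the wire
    end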

The other area of testing I’ve been doing is with our beta version of Synergy for .NET (not sure of the official title yet :)), which was released last month.  This is really where the pain and ecstasy belong.  It’s such a cool product.  Being able to build applications in Visual Studio and have all the code in Synergy is very reassuring.  And my test bed?  You guessed it: ChronoTrack!  It’s taken a lot of effort (the pain) to get to a point where ChronoTrack will build and run (the ecstasy) in the .NET framework, but we got there last week!  Our development team has worked tirelessly to build a product that’s going to allow our users to take full advantage of the .NET environment.  And what, I hear you ask, are the changes to the existing Synergy code?  Well, to be honest, if the code didn’t interact with the UI then I changed no code.  I had to comment out a few references to statements (like INIT) that are not quite supported yet, but other than that the code remains the same.  It really is an endorsement of Synergy that we can take code that could have been written twenty years ago and run it in the latest and greatest environments without change.

If you want to beta test either version 9.3 or our .NET products, then please sign up and sharpen your blade.  It can be tough, but the rewards are worth it.  And you do get ROI – you’ll have coded all your routines to support encryption and the select class before we even release the product, meaning you’ll be ready to take full advantage when the product is official!

If you are interested in seeing ChronoTrack running under Synergy for .NET then please let me know.  I may even produce a video about it!

I’m not sure I have made the grade as a “Ninja”, but my testing duties are complete.  I’m out and about visiting customers in the UK and Ireland this week with Nigel David.  If we encounter anything noteworthy, I’ll keep you posted.


Application Design Model

By William Hawkins, Posted on November 24, 2009 at 4:39 pm

During the past few years, the process of designing an application has gone through another revolution of terms.  When I started out in computing, data was in records and you wrote programs that had subroutines to perform repetitive tasks.  Then there was the short-lived foray into 4GLs.  More recently, with the introduction of OO-based languages, we had to learn about data in structures, instance objects, methods, enumerations and a whole host of new terms. At the same time, we got into client/server and N-tier application design.  Mostly, these were just variations on what we were already familiar with, although making the most out of the new terminology can require a new way of thinking.  In the past few years, the way you design applications has sort of been through another evolution.  I say “sort of” because what’s happened is that some new terms have entered common usage, one of which is design patterns.  Design patterns are just what they say: patterns that you use when designing an application.  They define/redefine some of the practices that we have been following for years.  Two design patterns I want to highlight are MVC and MVVM.

MVC is Model-View-Controller.  This is where the Model (business logic & data) is separated from the View (what you see) and the Controller (dispatch logic).  The Controller monitors both the Model and View components and acts as the communicator between them.  When you are using a well-designed UI Toolkit application, you’re probably using an MVC design.  If you have your real business logic abstracted away from the UI logic (which UI Toolkit doesn’t really help you implement), you have the Controller logic separated from your View, which is part of the MVC pattern.  At a very simplistic level, the Synergy UI Toolkit list processor is the Controller part of an MVC pattern, the load method is the Model, and the list window is the View.  The UI Toolkit code is partially agnostic to the actual data being processed – you just pass it the data as a parameter, and the forms processing inside UI Toolkit takes care of rendering the data on screen.  In a recent project, I created wrappers for the Synergy UI Toolkit logic in order to implement a formal MVC design, such that almost all of the logic required to drive the UI was abstracted into standard routines. So as far as the application developer was concerned, all they had to provide were the window scripts that defined the View and the various business logic routines that were registered with the Controller.
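To make that separation concrete, here’s a rough sketch (the load method’s argument list is abbreviated – see the UI Toolkit manual for the full signature – and the routine name is hypothetical):

    ; The View: a list window defined in a window script
    ; The Model: a load method that only knows how to supply data
    subroutine customer_load
        a_listid    ,n      ;list id
        a_request   ,n      ;load request
        a_data      ,a      ;record to be loaded
    proc
        ; fetch the next customer record into a_data - no UI logic here
        xreturn
    end
    ; The Controller: UI Toolkit's list processor, which dispatches
    ; between the two once customer_load is registered against the list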

MVVM is Model-View-ViewModel.  This is a variation of MVC, where the Controller is replaced by the ViewModel.  The ViewModel component instantiates instances of the Model component, and it exposes public properties that the View component consumes. In an MVVM design, the Model is oblivious to the ViewModel and the View, and the ViewModel is oblivious to the View.  MVVM seems to be the design pattern of choice for WPF applications, and it was used by Microsoft when they developed Expression Blend.  Because the View is separate from the ViewModel and Model components, it’s really easy to apply a new skin to an application that is implemented with this design.  Synergy applications have been moving in this direction since the release of xfNetLink/xfServerPlus, so it’s a natural evolution to consider this design practice when updating the UI of an application.
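As a tiny illustration of the idea in Synergy OO terms (a sketch only – the CustomerModel class and its Name field are hypothetical, and this leaves out the change notification a real WPF binding would need):

    namespace Example
        public class CustomerViewModel
            private mModel  ,@CustomerModel
            public method CustomerViewModel
            proc
                mModel = new CustomerModel()   ;the ViewModel owns the Model
            endmethod
            ;the View binds to this property and never sees the Model
            public property DisplayName, string
                method get
                proc
                    mreturn mModel.Name
                endmethod
            endproperty
        endclass
    endnamespace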

In addition to the two I mentioned, there are other variations in application design patterns (e.g. MVP). In the course of reading up on the various application design models, I came across a reference to a blog by Josh Smith on The Code Project, where he states, “If you put ten software architects into a room and have them discuss what the Model-View-Controller pattern is, you will end up with twelve different opinions.” In a lot of cases, you’ll have an in-house design pattern that you’ve been using for years, but as you continue to develop your Synergy application, you should consider reassessing all the available design patterns to determine which is the most appropriate for future-proofing your application.


ChronoTrack goes virtual!

By , Posted on November 20, 2009 at 11:36 pm

Did you attend SPC2009?  If you did, then you’ll know all about ChronoTrack.  If you didn’t: ChronoTrack is an application that the PSG team developed to showcase the latest technologies available to Synergy developers.  It’s a UI Toolkit application that’s had a facelift!  It’s a cool, slick web site – oh, and a mobile app – and did I mention the system tray monitor and dashboard?  It uses the latest OO capabilities within Synergy and enhances the user experience by hosting .NET WinForms.  It exposes Synergy data through traditional functions via xfServerPlus/xfNetLink.NET to both a web service and a fully functional web site.

One customer who attended SPC2009 was so impressed with the new capabilities of Synergy that ChronoTrack demonstrates that they wanted to present it to their own team!  So, I set the recorder going and recorded a ten-minute overview video.  And what was the customer’s response?  “Perfect, I’ll be presenting this at next week’s steering committee!”

Fancy a peek?  Take your virtual tour of the ChronoTrack Windows application at http://media.synergex.com/chronotrack/ChronoTrackInAction.html.

There’ll be more videos available soon – we’re just working on topics.  If you have any requests, please let me know by commenting on this blog.

And remember, ChronoTrack is available on CodeExchange, so if you want to see the code-behind, log in to the Synergy resource centre, click the CodeExchange link and search for ChronoTrack.


PDC09 On-Line Content

By Steve Ives, Posted on November 19, 2009 at 5:15 pm

If you've been reading all about our experiences at PDC09 and would like to watch some of the sessions for yourself, you can!

Videos of many sessions are posted on-line 24 hours after the completion of the actual presentation. You can find the videos here, and many also have the associated slide presentations.


Silverlight – What’s it all About?

By Steve Ives, Posted on November 18, 2009 at 5:16 pm

Day two at PDC09, and with between ten and twelve concurrent tracks it’s pretty tough to decide which sessions to attend. Luckily there are three of us attending the conference, so with a little planning we can at least attempt to maximize our coverage. But still … so many choices!

I’ve been involved with Web development for a long time now … in fact I’ve been involved with Web development ever since there was such a thing as Web development, so I decided to spend the day trying to make sense of the choices that are now available to Web developers in the Microsoft space.

If you look across the entire industry there are many and varied Web development platforms available, some well proven, some relatively new. When Microsoft first introduced Active Server Pages (ASP) back in 1996, they changed the game. Web development was taken to a whole new level, and in my humble opinion they have led the game ever since.

Until recently the choice was clear. The Web development platform of choice was ASP.NET’s WebForms environment. Thanks in part to very rich developer support in Visual Studio, and an absolutely vast array of excellent third-party plug-ins and controls, the ASP.NET WebForms environment was, and still is, absolutely dominant in the industry.

However, things are changing. In 2007 Microsoft unveiled a new technology called Silverlight, and while it is fair to say that initial adoption was slow, Silverlight could today be considered a key part of Microsoft's vision for the future of computing generally!

Silverlight is essentially a browser plug-in that allows Web browsers to render rich user interfaces. It started out as a relatively simple plug-in which allowed browsers to display streamed video content, much like Adobe Flash, but that is no longer the case today. The technology is less than two years old at this point, and today Microsoft announced the beta for the fourth major release of the product in that short time. Clearly a lot of dedicated effort has been put into Silverlight … there must be a “bigger picture” here!

Let’s take a step back. A browser plug-in is a software component that allows a Web browser to do something that is not inherently supported by the Web (i.e. by HTTP and HTML). The issue here is that HTTP and HTML were designed as a mechanism for a client system (a browser) to safely display “content” from a server system. That server system could be located anywhere, and owned and operated by anyone. In that situation, how do you trust the publisher of the content? The simple answer is … you can’t. So for that reason early browsers didn’t allow the Web “applications” to interact with the client system in any way … they simply displayed static content.

Of course clever software developers soon found ways around that restriction, but only if the user of the client system gave their permission. That permission is typically granted by allowing the installation of third-party plug-ins on the client system. Once a plug-in is present it can be detected by the browser, and in turn detected by the Web server, which can then take advantage of the capabilities of the plug-in. And because a plug-in is a piece of software that was explicitly allowed to be installed on the client system by the user of the client system, it is not subject to the normal “restrictions” (sandbox) placed on the Web browser … plug-ins can do anything!

Silverlight isn’t the first product that Microsoft has introduced in this arena. Many years ago they introduced support for ActiveX controls to be embedded within Web pages, and the technology was very cool. It allowed very rich and highly interactive user interfaces to be rendered within a web browser, it allowed the web “application” to interact with the “resources” of the client system, and for a while ActiveX in the browser showed a lot of promise. The problem was that the ActiveX technology was only available in the Internet Explorer browser, and at that time IE didn’t have a big enough market penetration for ActiveX in the browser to become a serious player.

Today though, things are different. Internet Explorer is by far the most dominant Web browser in use (although Firefox, Safari and Google Chrome continually eat away at that market lead). But this time around the difference is that Microsoft took a different approach … they made Silverlight plug-ins available for Firefox … and Safari … and today even announced the development of a plug-in for Google Chrome in the upcoming Silverlight 4 release. This is pretty impressive, because the current version of Chrome doesn’t even support plug-ins!

Today during the PDC09 keynote presentations Silverlight was front and center, and I’ll be completely honest here … the demos that I saw totally blew me away! Silverlight can now be used to render fabulous interactive user interfaces which can equal anything that can be achieved in a desktop application, and with appropriate permissions from the user of the client system Silverlight applications can fully interact with the local client system as well. It’s even possible (in the current release) to execute Silverlight applications outside of a traditional web browser, allowing them to appear to the user exactly as desktop applications would … but with no installation required (other than the Silverlight plug-in).

So why is Silverlight so important to Microsoft? And why might it be so important to all of us as software developers, and as software users? Well, the answer to that question is related to my last post to this blog … it’s all about the Cloud! The perfect model for a Cloud application is a Web application, but even with all of the advances in Web technologies it’s still really hard to make a web application that is as feature-rich and capable as a modern desktop application … unless you use a tool like Silverlight.

Now don’t get me wrong, ASP.NET WebForms is still a fabulous technology, and still very much has a place. If you have an ASP.NET WebForms application today … don’t panic, you’re in good shape too. The point here is that you now have options. In fact, there is a third option also … it’s called ASP.NET MVC, and I’ll talk about that in another blog post soon.

If you want to see examples of Silverlight in action, then check out the Silverlight Showcase Samples (of course you'll need the Silverlight plug-in installed, but the site will offer it to you if you don't have it), and you can also get more information about the PDC09 Silverlight 4 (beta) demos.


Cloud and Sunlight at PDC09

By William Hawkins, Posted on at 4:26 pm

PDC09 is my first Microsoft conference, and I wasn't quite sure what to expect: some excitement at seeing some of the leading-edge technologies being demonstrated by resident experts, some trepidation at being presented with a huge variety of different technology and buzzwords.

The first session yesterday was a two-hour keynote based around some of the non-UI technologies being presented – Windows Azure and the Cloud. Today's keynote was based on the UI side of the equation – Silverlight and SharePoint. There were some great demos from developers on how they had used Silverlight to develop new UIs that leverage the technology being provided: using web cameras to import pictures directly from the device, playing videos, and using multitouch to rearrange/resize on-screen items. Part of the keynote discussed how Microsoft's employees don't really get involved in the hardware that is the platform for their great software, so they decided to get involved in the design of a PC. After a short discussion on the features of the PC, they announced that every PDC09 attendee was to get one of these laptops. Imagine the reaction from 5,000 attendees when the clouds parted and we realized that Microsoft was giving us a laptop to take home.

Of course, a cynical person would say that in order for Microsoft to get developers to write software for the Cloud and/or with Silverlight, they need the appropriate hardware, so a multitouch-enabled tablet PC with webcam is a great way to do this. Not one to look a gift horse in the mouth, I'm trying out my new laptop by writing this blog 🙂 We had already started to discuss what Synergex PSG will have in store for our conference attendees at SPC 2010, and the ability to run Synergy applications based upon multitouch, WCF & Silverlight has been a topic of conversation. As laptops like this become more available, be prepared to leverage the hardware with your Synergy application.

The past two days have been a real eye-opener for me, as I've seen the Microsoft technology that is coming down the pipe in the next few months. Of course, the real trick for Synergex is to take this great technology and work out how it can be applied in the Synergy environment. While Microsoft only seem to see "Cloud and Silverlight" in their future direction, I can see both Cloud and sunlight in the future for Synergy applications.


Cloudy Days at PDC09

By Steve Ives, Posted on November 17, 2009 at 5:19 pm

Last week we heard about Richard Morris’s and Tod Phillips’ experiences when they visited the Microsoft TechEd conference in Berlin. Well, it's a whole new week, and now there’s a different conference to tell you about. This week Roger Andrews, William Hawkins and I are attending PDC09, which is Microsoft’s Professional Developer Conference, held at the Los Angeles Convention Center. As is usual in Southern California for most of the year, the skies are blue, but for some reason everyone seems to be talking about Clouds!

Of course the reason for this is that during PDC08 Microsoft made several very significant announcements, and two of the biggest were Windows 7 and the Windows Azure platform.

Of course Windows 7 is already with us now, and by all accounts is proving to be extremely popular with developers and users alike. Windows Azure, on the other hand, is still an emerging technology, and is causing quite a buzz around here! If you don't already know, Windows Azure is Microsoft’s upcoming Windows operating system for Cloud computing. It's been in CTP (community technology preview) since PDC08, and is on the verge of transitioning to full production use on February 1st, 2010.

Much of the content at PDC09 is focused on Windows Azure, on the wider subject of Cloud computing generally, and on the many Microsoft tools and technologies that enable developers to start developing new applications for deployment in the Cloud. Of course there is also a considerable amount of discussion about how to modify existing applications, either for Cloud deployment, or to allow those applications to interact with other components or entities that are Cloud based.

When the Windows Azure platform was announced twelve months ago it was in its infancy, and it showed. But in the last year it seems that Microsoft have really gone to town, extending the capabilities of the platform and other related services, and adding the APIs and tools that will make Azure easier to embrace. Many of these tools and technologies are being developed and delivered as part of .NET 4, and the new Visual Studio 2010 (currently in beta 2) includes many new features to help developers in this and many other areas. It's certainly going to be very interesting to see how developers are able to embrace the concept of Cloud computing platforms (Azure, and others). The possibilities are incredible, and the potential for increased efficiencies and cost savings is also considerable, but for sure there will be challenges to be addressed and overcome.

There have also been lots of sessions detailing how applications can leverage many new capabilities in the Windows 7 platform. Most of us were just delighted to get our hands on a version of Windows that boots faster, looks better, is more robust and easier to use, and doesn’t exhibit many of the frustrating traits that we endured with Windows Vista. But as it turns out, there are also lots of new features in the Windows 7 platform that application developers can leverage in their products. In a recent post Richard mentioned some of the new features associated with Explorer (such as jump lists), and there are also some interesting capabilities that allow applications to easily interact with “sensors” that may be present on the PC. Many modern PC systems already have some of these sensors present – sensors such as ambient light meters, temperature sensors, accelerometers, and location-awareness sensors such as GSM interfaces and GPS devices. It is already possible to write code to use the information published by these sensors, and a new API in the .NET Framework 4 will make it very easy to do so.

When it comes to user interface, it's now clearly all about WPF and its Web counterpart, Silverlight. The visual presentation that can be achieved with these technologies is truly impressive, and it seems that developer productivity tools such as Expression Blend have finally caught up with the capabilities of the underlying subsystems. This will make it so much easier for people learning these new UI technologies to get up and running effectively.

My brain hurts! Well, those are my thoughts after my first interesting (and exhausting) day at PDC09, and there are two more action-packed days still to experience. It really is remarkable what can be achieved by leveraging the .NET environment, and it makes me look forward even more to the introduction of Synergy/DE for .NET. There are exciting times ahead, and so many possibilities!


Live from TechED, Updated!

By , Posted on November 13, 2009 at 11:37 pm

The end of TechEd is nigh, and to be honest I'm sort of happy.  It's been an intense week of presentations, workshops and in-depth discussions with fellow developers, Microsoft techies and the guys at all the UI control vendors.  I know what our customers mean now after they have attended our SPC: info overload followed closely by brain fade!

As the vendors begin to pack up, Tod and I selflessly offered to lessen their packing toils by relieving them of any spare goodie-bag items they might have.  It's tough at times!  As we toil, we notice 6998 TechHeads filing off to catch a train that carries at maximum a few hundred people.  They must have all missed the multi-threading, delayed processing and smooth streaming sessions.  We, on the other hand, thread our way, via a routed URL, through the streets of Berlin, testing our interop skills with the locals.

On a more technical level, this week has shown just how determined Microsoft are to be market leaders, and with the tools on show this week it's difficult to see why they shouldn't be.  And the best news of all?  With our current capabilities to integrate with .NET and our emerging integration with Visual Studio, Synergex will be able to take full advantage of these latest technologies.  The future is looking good if you develop with Synergy.


More from TechEd Berlin – Windows 7 tips – It’s the little things…

By , Posted on November 12, 2009 at 11:38 pm

My last post from TechEd was sent from my Windows Mobile phone, while having lunch with another 7,000 TechHeads!

Currently I'm waiting for the next presentation to begin, so I have a spare few minutes.  No more "swag" has been blagged today, although I did win a flashing blue pen for getting all the Windows 7 questions right!  Tod won one as well (the questions were not that hard :) )

So, on the subject of Windows 7 – have you tried it?  If so, you'll have noticed some subtle changes to the taskbar.  Right-click a program icon and you now get the "jump list".  Hover over an icon and you get a frame view of all instances of that application.  The concept of the system tray and the ability to display notifications has changed too: you now change the taskbar icon itself.  Much of this can be controlled programmatically, and I plan to GenNet some Synergy wrappers and post some sample code onto CodeExchange when I'm back in the office, to show how we can implement this in Synergy.  It's the little things that can make a big difference.


Wave Goodbye to the MDU

By Steve Ives, Posted on at 5:41 pm

If you have ever developed with xfServerPlus and xfNetLink then, like me, you may have a “love-hate” relationship with the Method Definition Utility (MDU). You love it because it is an enabling technology … it is one step in the process of extending your Synergy applications with all types of cool client applications. But at the same time you hate it … because it’s an inconvenience to have to remember to run up the utility and update the method catalog each time you want to add a new method, or change the interface of an existing one. Today, using the MDU is a “necessary evil”. It has served us well over the years, but things are about to get a whole lot better.

Synergy/DE 9 introduced many new features to the language, and some of these new features could make entering information into the MDU kind of redundant. For example, rather than declaring a routine like this:

function get_product, ^val
  arg1 ,a ;Product code (passed in, a10)
  arg2 ,a ;Product record (returned, product record)

Synergy 9 allows us to define the routine like this:

function get_product, boolean
  required in productCode ,a10
  required out productRecord ,sProduct

As you can see, we can now specify much more information about the external interface of a routine actually in the source code … and this additional information is the same as the information that we specify when defining the routine in the MDU. But in 9.1 the picture wasn’t complete. There was still information that xfServerPlus (and tools like GENXML and GENCS) required from the method catalog, for which there was no language syntax to allow that information to be expressed in the actual code. For example, at a minimum we need to specify the name of the “interface” that the routine (method) will belong to, and the name of the library in which the routine is located.

Enter Synergy/DE 9.3, and an array of new features. One of those features is the introduction of support for “attributes”. Attributes are a mechanism that allows a programmer to “decorate” source code with additional information, or metadata, which provides information about the code. The metadata can then be used by compilers, or by other tools that may process the source code in order to extract information or take some other action.

We’re likely to see many and varied uses for attributes in Synergy/DE for .NET, but for now, as well as adding support for attributes in the language, 9.3 introduces the first use of attributes for Synergy/DE xfServerPlus developers. Attributes can be used to provide all of the remaining information needed to automatically populate the method catalog!

Here’s the same function that we looked at earlier, but with an attribute added:

{xfMethod(interface="MyRoutines",name="getProduct",elb="EXE:MyLibrary")}
function get_product, boolean
  required in productCode ,a10
  required out productRecord ,sProduct

By the way, this is a simple example. There are many properties that can be specified in the xfMethod attribute, and there is also an xfParameter attribute which allows you to provide information about the routine’s parameters.

It’s also possible to specify “documentation comments” within source code, and these comments can be used to populate the method, parameter and return value description fields in the method catalog. A routine with documentation comments would look something like this:

;;;<summary>Retrieves a product record.</summary>
;;;<returns>Returns true for success or false for failure.</returns>
;;;<param name="productCode">SKU of product to retrieve</param>
;;;<param name="productRecord">Returned product record</param>
{xfMethod(interface="MyRoutines",name="getProduct",elb="EXE:AttrTest")}
function get_product, boolean
  required in productCode ,a10
  required out productRecord ,sProduct

Once you have “decorated” your code with attributes and doc comments, it is possible that you may never have to interact with the MDU ever again! But wait a minute … how does all this work?

Well, there is a new utility called dbl2xml, which reads all of the information that is now contained in your source code and creates an XML file containing just that information. You can then load that XML file into the method catalog using a command-line invocation of the MDU program. Of course, you’ll probably automate these steps in the script that you already use to build your methods. The additional steps you’ll need will look something like this:

dbl2xml -out XFPL_SMCPATH:smc.xml SRC:AllMyMethods.dbl
dbr DBLDIR:mdu -i XFPL_SMCPATH:smc.xml

There is one “gotcha” with this new approach: you have to use the dbl2xml utility once, processing ALL of the code for all of the routines that are to be included in your method catalog. They don’t all have to be in the same source file, but the dbl2xml utility needs to process them all at once so that it can create the entire method catalog in a single pass. But that shouldn’t be a problem.

If you’ve ever done xfServerPlus development then I’m sure you’ll agree with me that this is a very nice new feature in Synergy/DE 9.3 … and it’s just one of many.

By the way 9.3 has been in beta for a while now. So if you want to check out the new features early, then why not help us to validate the release? Head on over to www.synergyde.com, log in to the Resource Center, and download the beta today.

