Synergex Blog

OpenVMS is alive and well

By Don Fillion, Posted on October 27, 2015 at 4:51 pm

I recently attended the OpenVMS Boot Camp in Nashua, New Hampshire. I am pleased to report (with a nod to Mark Twain) that rumors of the death of OpenVMS were greatly exaggerated! VMS Software Incorporated (VSI) has taken over the product and appears to have the situation well in hand.

There were over 100 companies in attendance at the Boot Camp, with Hewlett Packard a very visible participant. The conference was quite lively, with multiple tracks running from 8:00 to 6:00 daily, and events planned each evening. There was an undercurrent of optimism and energy at the conference, no doubt tied to the future of VMS. VSI has already released OpenVMS 8.4-1H1, which provides support for HP Integrity i4 server models based on the Intel® Itanium® 9500 series processors. Moving forward, VSI presented a rolling roadmap at the Boot Camp that provides for at least one release per year for the next few years, improving and extending the software on its current HP platforms—including new versions of TCP/IP and Java, a new file system, and CLI improvements. Concurrently, they are working on VSI OpenVMS 9, which will add support for x86-64 processors (slated for 2018). They plan to support select HP (Intel and AMD) servers first, then Dell and others as well. ARM support is slated to be considered after x86-64.

VSI has pledged at least 5 years of active product support per release, followed by a minimum of 2 years of prior-version support. With releases planned into 2018, this provides a viable, supported future for OpenVMS at least into 2025 and likely well beyond.

The future of OpenVMS is now being tended to by some very experienced engineers—many of whom came from HP and have been with the O/S through its various versions and changes of ownership.

So, VMS users, the immediate takeaway is to heed the words of the late, great Douglas Adams: “Don’t Panic!” OpenVMS is not going away anytime soon.

Updated PDF API

By Steve Ives, Posted on October 16, 2015 at 11:24 am

A few weeks ago I announced that a new API called SynPSG_PDF had been added to the code exchange. Today I am pleased to announce that the API has been updated and, in addition to Windows, is now also supported on systems running Linux (32-bit and 64-bit), OpenVMS (AXP and IA64) and Synergy .NET.

Also, as a direct result of recent customer feedback, I have added a mechanism that allows a PDF file to be easily created from an existing text file with just a few lines of code. This means that existing report programs that already produce plain text output can be easily modified to produce PDF output with a small amount of code like this:
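
The code sample below gives the general idea. It is only a minimal sketch: the import of the SynPSG.PDF namespace reflects the API name, but the method names used here (LoadFromTextFile, Save and View) are illustrative placeholders rather than the documented API, so check the help file that ships with the API for the actual names and signatures.

    import SynPSG.PDF

    main
    record
        pdf         ,@PdfFile
    proc
        ;Create a PDF document from an existing plain-text report file.
        ;Method names below are illustrative placeholders.
        pdf = new PdfFile()
        pdf.LoadFromTextFile("MYREPORT.LIS")    ;pull in the existing text output
        pdf.Save("MYREPORT.PDF")                ;write the PDF file to disk
        pdf.View()                              ;launch it in the default PDF viewer
    endmain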


If you would like to check out the API, you can download the code from the Synergy/DE Code Exchange.

Investing in the look and feel of your applications doesn’t matter…or does it?

By William Mooney, Posted on October 7, 2015 at 10:29 am

Years ago I used to say to our direct corporate end-user customers, “You’re lucky. It doesn’t matter what your application(s) looks like because you’re not selling to compete for new business—all that matters is that it works well and meets your business needs.” End-users plugged merrily along, content to focus on functionality and substance, often in the form of a green-screen front end. In fact, many of those customers claimed that a character-based/green-screen application was much more efficient than using a “cumbersome mouse”—especially when it came to data entry. In the ’90s, when Windows, GUIs, and the like came on the scene, our Independent Software Vendors (ISVs) had a different story—to be competitive, the ISVs suddenly had to worry about both how well their applications functioned AND how they looked. People and companies didn’t want to buy applications that weren’t shiny and new with a great user interface (UI)—even if a sophisticated UI didn’t always correlate with a sophisticated application under the hood. It became a game of how flashy you could make it as opposed to how well it functioned.

Fast-forward a few years, and now everyone has to play on the same field—ISVs and corporate end-users. In today’s world, even corporate end-users need to make the move to modernization. If they don’t, the next generation of decision makers will. And when that happens, it’s likely the existing, time-proven solution that has been customized and fine-tuned over the past 30+ years, the one that makes the business unique and competitive, the one that has solved—and continues to solve—everyday business issues, will not survive. Yep, this new generation of decision makers will judge the book by its cover and determine the value of the application based on the way it looks and not what it does. It makes sense, because this new generation grew up knowing only great-looking applications—applications that are generally simpler and more discrete in functionality than complete, integrated solutions that touch every part of the organization but appear less shiny and sophisticated.

So, the bottom line is that if your application doesn’t look great, it will be perceived as less than great, and when that new decision maker comes in—it may be too late to save what you’ve spent so many years perfecting. Needless to say, I strongly recommend that all customers invest in modernizing their application(s) with a great-looking UI and UX (user experience). As Billy Hollis affirmed at the recent Synergy DevPartner Conference, UX is equally important. It’s not just the look and feel, but also the experience of the user that’s critical. It’s important to emphasize here too that a great UI/UX design and a high-performing/highly productive solution are not mutually exclusive. Having a well-designed GUI-based application can only add to the functionality and power of your solution. So even if you feel your character-based solution is really the best one for your business, it’s rare for the look and feel to be overlooked in favor of substance. I can’t stress enough the importance of making this investment.

A significant benefit of having a Synergy-based application is that you can separate the UI from the logic and data. This means you can adopt future UIs without sacrificing the years of investment you have put into your business application. While the look and feel is what everyone sees, in reality the business logic is the true value. And once these two are separated, you can extend the life of your application(s) indefinitely, taking advantage of the ever-evolving UI trends that come along. Although it may take some effort initially to separate the UI from the back end, this is the path of least resistance and investment, and it will offer the largest and longest return.

At Synergex, our main focus is to develop solutions to help you advance and leverage your investment to take advantage of the latest modern technologies. In fact, with our recent release of Synergy DBL, we are venturing into the Universal Windows Platform (UWP), the latest UI experience. And while none of us can be certain what UI trends will be popular 10 years from now, just as none of us back in the ‘80s could have imagined what today’s UI would look like, I’m confident that we will be able to help you leverage your back-end and take advantage of whatever the future holds.

The Digital World … Going too Far?

By Steve Ives, Posted on October 1, 2015 at 7:32 pm

So Hilton’s latest thing is the “Digital Key”; while standing within 5 feet of the door to your hotel room it is now possible (in certain locations) to click a virtual button in the Hilton App on your smart phone and have the door to your hotel room unlock, as if by magic. The digital key also knows about other areas of the hotel that you have access to, such as the Executive lounge (I tried it, it works) and gymnasium (apparently) and provides access to those places too.

Last week I used the app to make my reservation. Yesterday I used the app to check in for my stay and also to select my room. And today, having already checked in electronically, I was able to totally bypass the reception desk and proceed directly to my room.

Tomorrow morning the credit card associated with my profile will be automatically charged, and I will walk out of the front door and drive to the airport a few exits down the freeway.

If it hasn’t dawned on you what my point is here, it is that I will have booked and totally completed a stay in a hotel … without ever needing to interact with a single other human being; all of which seems to me to be a pretty sad state of affairs! Maybe we’re taking this whole technology thing a little too far in some areas?

New Tools for Working with PDF Files

By Steve Ives, Posted on September 11, 2015 at 2:44 pm

For some time now, the Synergy/DE Code Exchange has included an item called PDFKIT, which essentially contains a set of DBL wrapper code that allows the open source Haru PDF Library to be used from DBL. The work done in PDFKIT was a great start and has been used successfully by several developers, but I don’t think anyone would disagree with me if I were to suggest that it’s not exactly the most intuitive software to use, and it’s not exactly what you would call well documented either, just like the underlying Haru library!

So, as time has permitted over the last few weeks, I have been working on what I hope is an improved solution. I certainly didn’t want to totally reinvent the wheel by starting from scratch (as I mentioned, PDFKIT was a great start), but I did want to take a slightly different approach that I thought would be useful to a wider range of developers, and I did want to make sure that complete documentation was included. What I came up with is called SynPSG.PDF, and it is available in Code Exchange now.

When you download and extract the zip file, you will find that it contains these main elements:

  • pdfdbl.dbl
    • The DBL code that wraps the Haru PDF library, taken directly from the latest version of PDFKIT.
  • Haru PDF Library DLLs
    • The same DLLs that are distributed with PDFKIT. Refer to the documentation for instructions on where to place these DLLs.
  • SynPSG.PDF.dbl
    • A source file containing the new API that I have created.
  • SynPSG.PDF.vpw
    • A Synergy/DE Workbench workspace that can be used to build the code, as well as build and run several sample programs that are also included (this is a Workbench 10.3.1 workspace and will not work with earlier versions of Workbench).
  • SynPSG.PDF.chm
    • A Windows help file containing documentation for the new API.

You don’t need to use the Workbench configuration that I have provided; if you prefer, you can simply include the pdfdbl.dbl and SynPSG.PDF.dbl files in the build of your subroutine library. But remember that both of these files contain OO code, so you will need to prototype that code with DBLPROTO.

As you will see when you refer to the documentation, most things in the API revolve around a class called PdfFile. This class lets you basically do four things:

  1. Create a PDF file.
  2. Save the PDF file to disk.
  3. View the PDF file by launching it in a PDF viewer application.
  4. Print the PDF file to a printer.

I’m not going to go into a huge amount of detail about creating PDF documents or using the API here because these topics are discussed in the documentation, but I will mention a couple of basic things.

PDF documents inherently use an X,Y coordinate system that is based on a unit called a device-independent pixel. These pixels are square and are 1/72 of an inch in each direction. The coordinate system used within the pages of a PDF document is rooted in the lower-left corner of the page, which is assigned the X,Y coordinate 0,0. The width and height of the page in pixels depend on the page type as well as the orientation. For example, a standard US Letter page in portrait orientation is 8.5 x 11 inches, so in device-independent pixels it has the dimensions 612 x 792.
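
To make that concrete, here is a small worked example of the coordinate arithmetic. Because the origin is the bottom-left corner, placing something “one inch down from the top” of a portrait US Letter page means subtracting from the page height:

    main
    record
        xPos        ,i4
        yPos        ,i4
    proc
        ;US Letter portrait: 8.5in x 72 = 612 pixels wide, 11in x 72 = 792 pixels high.
        ;The origin (0,0) is the BOTTOM-left corner, so a point one inch in from the
        ;left edge and one inch down from the TOP edge works out as:
        xPos = 72               ;1 inch from the left edge
        yPos = 792 - 72         ;1 inch below the top edge = 720
    endmain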

With most PDF APIs you work directly with this coordinate system, and you can do so with this API too, but doing so can require a lot of complex calculations and hence can be a slow process. Often, though, when we’re writing software it is convenient to work in simple “rows and columns” of characters, using a fixed-pitch font. The new API makes it very easy to do just that, meaning that results can be produced very quickly, and also meaning that existing report programs (which already work in terms of rows and columns) can be easily modified to produce PDF output.

Here is an example of a simple row/column-based report that took only a few minutes to create:
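
The code for such a report looks something along these lines. Note that this is just a sketch of the row-and-column approach; the method names used (SetFont, WriteAt) are placeholders rather than the documented API, so treat it as an illustration of the idea:

    import SynPSG.PDF

    main
    record
        pdf         ,@PdfFile
        row         ,i4
    proc
        pdf = new PdfFile()
        pdf.SetFont("Courier", 10)                      ;fixed-pitch font for row/column output
        pdf.WriteAt(1, 25, "CUSTOMER ACTIVITY REPORT")  ;row 1, column 25
        pdf.WriteAt(3, 1, "Account    Name                          Balance")
        for row from 5 thru 20
        begin
            pdf.WriteAt(row, 1, "... detail line for one customer record ...")
        end
        pdf.Save("ACTIVITY.PDF")
        pdf.View()
    endmain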


Of course there are times when you need to produce more complex output, and the new API lets you do that too. To give you an idea of what it is capable of, here’s a screenshot of a mock-up of a delivery ticket document that I created while working on a recent customer project:


As you can see, this second example is considerably more complex; it uses multiple fonts and font sizes, line drawing, box drawing, custom line and stroke colors, etc. And although not shown in these examples, there is of course support for including images as well.

The new API is currently available on Windows under traditional Synergy. It should be possible to make the code portable to other platforms in the near future, and .NET compatibility is definitely in the pipeline. The software requires the latest version of Synergy, which at the time of writing is V10.3.1b. You can download the code from here:

It is early days for this new API and I have many ideas for how it can be extended and enhanced. I am looking forward to working on it some more soon, and also to receiving any feedback or suggestions that you may have.

CodeGen 5.0.5 Released

By Steve Ives, Posted on August 28, 2015 at 12:59 pm

Just a quick note to let all of you CodeGen users out there know that a new version (CodeGen 5.0.5) has just been released. You can get more information about the changes and download the release from

If you would like to receive email or RSS notifications when new CodeGen versions are released, there are links on the above-mentioned page that allow you to set that up, and we encourage you to do so.

And you thought hiring good Synergy programmers was hard…

By William Mooney, Posted on July 31, 2015 at 2:05 pm

“Hiring good programmers is hard.” I can’t count the number of times I’ve heard this phrase during the past 30+ years I’ve been in this business. And, from my experience and research, I agree. A few customers have also told me that good Synergy programmers are harder to find than others, but over the years I’ve found that it doesn’t matter whether you’re looking for developers experienced in Synergy DBL, C#, Java, VB.NET, or any other language… hiring good programmers is just hard. The exception, of course, is the gaming industry, where a plethora of young, talented programmers are excited to spend countless hours writing games for almost no money. Sort of reminds me of the early programmers who wrote business application solutions back in the day!

So, how do you find a good Synergy developer? Well, for starters, don’t limit your pool to developers experienced in Synergy. Find a great programmer and make him/her a master in the language you use. Any good programmer can learn Synergy, or C#, or Java, etc. But not every programmer who knows Synergy or C# or Java is or will become a great programmer. Seek out developers who have current, modern-day developer skills such as OO, .NET, etc. If they don’t already know Synergy, they’ll pick it up quickly and will appreciate that it is a modern OO language that runs on virtually all platforms, including mobile, and is fully integrated with Visual Studio. Then, send your new developers to a Synergex class, have PSG come on-site to get them up to speed, and (of course) send them to the annual Synergy DevPartner conference.

Tip: Consider domain knowledge specific to your industry. You are much better off hiring a good developer who is knowledgeable in your particular vertical market and teaching him/her DBL than vice versa.

Also, open the door to hiring developers with programming experience (vs. just having a computer science degree). When you look back at the early years of our industry, there were very few universities offering programming degrees—most of the original developers of what are now world-class enterprise applications had no formal education in programming. These developers had raw talent and enthusiasm to solve problems and create solutions. (Some of you reading this blog are likely those original developers!) This too is how Synergex started. In fact, many of our top talent never received formal education in programming. That said, I’m not recommending that you seek developers without formal degrees, but I am encouraging you to focus on smart, eager developers whom you can train and educate to be part of your next generation of leaders. Here at Synergex we’ve developed and use a variety of third-party tests that can help vet sharp young talent—this talent has made a big impact on our development team. We would be happy to share the tools we use.

So you’ve advertised for a talented, trainable, language-agnostic developer, interviewed your candidates to confirm a good fit with your culture, vetted their analytical aptitude, and are convinced that your candidate will be a great addition to your team… What if the candidate turns the tables on you and asks, “Why would I want to program in Synergy DBL?” What do you say?

I recommend that you have this question answered in their minds long before they have the opportunity to ask it. Make sure your candidates all understand the value of Synergy DBL and the exciting opportunities they will have to work with these modern development tools. Let them know that the skills they will gain using Synergy DBL will provide a lifetime of employment opportunities anywhere in the world they want to live and work.

Congratulations on hiring your next great programmer!


Microsoft launched Visual Studio 2015 today

By Roger Andrews, Posted on July 20, 2015 at 4:49 pm

Today Microsoft announced that Visual Studio 2015 and .NET 4.6 are available for download. As a member of the Microsoft Visual Studio Industry Partner (VSIP) program, Synergex will soon be supporting this version of Visual Studio with a Developer Build, followed by a fully supported release that also supports Windows 10. Our new release includes support for the new Concord-based debugger and Light Bulb features.

We are excited about all of the new performance tools that Synergy developers can utilize in Visual Studio 2015.

Two of Synergex’s senior developers were quoted in today’s eWeek article announcing the release:


Why is That First WCF Operation SO Slow?

By Steve Ives, Posted on June 25, 2015 at 12:06 pm

If you have ever developed and worked with a WCF service, you may have noticed that the very first time you connect to a newly started instance of the service there can sometimes be a noticeable delay before the service responds. But invoking subsequent operations often seems almost instantaneous. Usually the delay is relatively short, perhaps even just a fraction of a second, but still noticeable. Well, earlier this week I encountered a WCF service that exhibited this behavior, but the delay for the first operation was almost three minutes! Something had to be done.

Some time later, after much debugging, web searching and more than a little head scratching, we realized that the “problem” that we were seeing was actually “by design” in WCF and was related to the generation of metadata for the service. It turns out that if “metadata exchange” is enabled for the service then WCF generates the metadata, regardless of whether anyone is currently requesting it or not, at the time that the first operation is requested by a client. Often the generation of the metadata takes almost no time at all, but as the size and complexity of a service grows (in terms of the number of operations exposed, the number of parameters, the number and nature of complex types exposed, etc.) the time taken to generate the metadata grows. In the case of this particular service there were over 800 individual operations defined, with lots and lots of complex types being exposed, and the service was still growing!

The only time you need metadata exchange enabled is when you need to access the WSDL for the service—in simple terms, whenever you need to do an “Add Service Reference” or “Update Service Reference”. The rest of the time, having it enabled just slows things down at runtime.

I can’t tell you exactly how to enable and disable metadata exchange with your service, because there are several different ways it can be configured, but it’s likely going to be one of these:

  1. A <serviceMetadata/> element used in the <serviceBehaviors> section of a Web.config or App.config file (sketched below).
  2. An <endpoint/> element that uses the IMetadataExchange contract defined in a <service/> section of a Web.config or App.config file.
  3. Code that does the equivalent of one of the two options above.
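
For the first of those options, the relevant piece of configuration looks something like the minimal Web.config sketch below; the values are illustrative, but the point is that the serviceMetadata behavior (its httpGetEnabled attribute in particular) controls whether the WSDL/metadata is published:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <!-- Set httpGetEnabled="true" only while you need to Add/Update a
                 Service Reference; leave it false (or omit the element) the
                 rest of the time, and always in production -->
            <serviceMetadata httpGetEnabled="false" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>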

So the lesson learned was to enable metadata exchange only when it is needed, for the purpose of creating or updating client proxy code; the result was an almost instantaneous response from the service once metadata exchange had been disabled. Of course it goes without saying that metadata exchange should NEVER be enabled on production services.

Old Dog … New Tricks … Done!

By Steve Ives, Posted on June 3, 2015 at 3:59 pm

The old adage tells us that you can’t teach an old dog new tricks. But after the last three days, I beg to differ! It’s been an interesting few days for sure; fun, challenging, rewarding and heated are all words that come to mind. But at this point, three days into a four-day engagement, I think that we may just have dispelled that old adage. For one, this “old dog” certainly feels like he has learned several new tricks.

So what was the gig? It was to visit a company that has an extensive application deployed on OpenVMS, and to help them explore possible ways to extend the reach of that application beyond the current OpenVMS platform. Not so hard, I hear you say; there are any number of ways of doing that. xfServerPlus immediately comes to mind, as do xfODBC and the SQL Connection API, and even things like the HTTP API that could be used to allow the OpenVMS application to do things like interacting with web services. All true, but there was one thing that was threatening to throw a “spanner (wrench) in the works”. Did I mention that the application in question was developed in COBOL? That’s right, not a line of DBL code anywhere in sight! Oh, and by the way, until about a week ago I’d never even seen a single line of COBOL code.

Now perhaps you understand why “challenging” was one of the words I mentioned earlier. But I’m up for a challenge, as long as I think I have a fighting chance of coming up with something cool that addresses a customer’s needs. And in this case I did. I didn’t yet know all of the details, but I figured the odds of coming up with something were pretty good.

Why all of this confidence? Well, partly because I’m really good at what I do (can’t believe I just said that), but seriously, it was mainly because of the fact that a lot of the really cool things that we developers just take for granted these days, like the ability to write Synergy .NET code and call it from C#, or write VB.NET code and call it from Synergy .NET, have their roots in innovations that were made 30+ years ago by a company named Digital Equipment Corporation (DEC).

You see, OpenVMS had this little thing called the Common Language Environment. In a nutshell, this meant that the operating system provided a core environment in which programming languages could interoperate. Any language that chose to play in that ballpark would be compatible with other such languages, and most languages on OpenVMS (including DIBOL and DBL) did just that. This meant that BASIC could call FORTRAN, FORTRAN could call C, C could call PASCAL and … well, you get the idea. And YES, it means that COBOL can call DBL and DBL can call COBOL. OK, now we’re talking!

So why is this such a big deal? Well, it turns out that Digital, later Compaq, and later still HP didn’t do such a great job of protecting their customers’ investments in their COBOL code. It’s been quite a while since there was a new release of COBOL on OpenVMS, so it’s been quite a while since OpenVMS COBOL developers had access to any new features. This means that there isn’t a way to call OpenVMS COBOL routines from .NET or Java, there isn’t a way for OpenVMS COBOL code to interact with SQL Server or Oracle, and there isn’t an HTTP API … so don’t even think about calling web services from COBOL code.

But wait a minute, COBOL can call DBL … and DBL can call COBOL … so YES, COBOL CAN do all of those things … via DBL! And that fact was essentially the basis for my visit to Toronto this week.

I’m not going to get into lots of details about exactly what we did. Suffice it to say that we were able to leverage two core Synergy/DE technologies in order to implement two main things:

  1. A generic mechanism allowing COBOL code executing on OpenVMS to interact with Windows “stuff” on the user’s desktop (the same desktop that their terminal emulator is running on).
  2. A generic mechanism allowing Windows “stuff” executing on the user’s desktop to interact with COBOL code back on the OpenVMS system.

The two core technologies have already been mentioned. Outbound from OpenVMS was achieved by COBOL calling a DBL routine that in turn used the Synergy HTTP API to communicate with a WCF REST web service that was hosted in a Windows application running in the user’s system tray. Inbound to OpenVMS was of course achieved with a combination of xfNetLink .NET and xfServerPlus.

So just who is the old dog? Well, as I mentioned earlier, I probably fall into that category at this point, as do several of the other developers it was my privilege to work with this week. But as I set out to write this article, I must admit that the main old dogs in my mind were OpenVMS and COBOL. Whatever the case, I think that all of the old dogs learned new tricks this week.

It’s been an action-packed three days, but I’m pretty pleased with what has been accomplished, and I think the customer is too. I have one more day on site tomorrow to wrap up the more mundane things like documentation (yawn) and code walkthroughs to ensure that everyone understands what was done and how all the pieces fit together. Then it’s back home on Friday before a well-deserved vacation next week, on a beach, with my wife.

So what did I learn this week?

  1. I really, really, REALLY don’t like COBOL!
  2. OpenVMS was WAY ahead of its time and offered LOTS of really cool features. Actually, I didn’t just learn this, I always knew it, but I wanted to recognize it in this list … and it’s MY BLOG so I can :-)
  3. Synergy/DE is every bit as cool as I always believed; and this week I proved it to a bunch of people that had never even heard of it before.
  4. Newfangled elevators are very confusing for old dogs!

UX Design: Elevator Controls

By Steve Ives, Posted on June 1, 2015 at 3:09 pm

Those of you who attended the recent DevPartner conference in Philadelphia will no doubt remember the excellent presentation on UX Design that was given by guest speaker, Billy Hollis. During his presentation Billy cited photographs of a couple of elevator control panels. He used one as an example of bad design, the other an example of good.

I won’t show the actual photos that Billy used (sorry, you had to be there for that!) but in a nutshell the layout of the buttons and other information (floor numbers, etc.) on the first panel was at best confusing. There was clear physical evidence that users had been confused by the panel and frequently had not understood how to operate the elevator!

The second example was a much better panel design. The designer had successfully used techniques such as visually grouping related things together in a way that made the correct operation of the elevator a much more obvious task … intuitive even.

Well, upon arriving at a customer’s office building in Toronto, Canada, earlier today, I encountered an elevator control panel that, for me at least, took the confusion to a whole new level.

I should make it clear that the elevator in question was one of a cluster of four in the lobby of a shared office building, and that I was arriving at the customer site at about the same time that everyone was arriving at work. The point is that the lobby was pretty busy at the time; it wasn’t as simple as just walking up and pressing an “I want to go up” button.

No problem, I thought; it may be two or three elevator cars before I get to make the final step of my journey up to the 4th floor. I’m a few minutes early and all is good.

Finally my turn came. I waited while a few other people stepped on, then took my place in the elevator car. Intuitively I spun around to determine whether one of my elevator buddies had already pressed the 4th floor button, and I was ready to press it myself if not. The panel shown here is what I encountered.

Now I like to think of myself as a reasonably bright guy, so I instantly figured it out; the buttons would be on the OTHER SIDE of the door. And I was correct … well … kind of. I glanced to the opposite side of the elevator door … and saw an identical panel on that side too!

Not wanting to appear totally inept I just waited quietly until the other people got off at their (somehow) chosen floors … and no, unfortunately nobody else was going to 4.

The doors swished closed and I was finally alone in the elevator. I don’t remember exactly what my out loud remark to myself was, but I believe it started something along the lines of “WHAT THE ….”. So, patiently I waited and sure enough after a little while the doors once again swished open and I was back where I started from in the lobby!

I’ll be honest with you, I was getting a little “pissed” at this point (excuse my language, but its true). But not wanting to appear like a total fool I stepped away as if I had intentionally returned to the lobby, and waited for the crowd to clear … all the time subtly (I thought) observing to see HOW THE HECK THESE FREAKING ELEVATORS WORKED!!! And then … I saw it … everything instantly became clear. The floor selector buttons were indeed on the other side of the elevator door … they were on the OUTSIDE!!!!

Yep … believe it or not, in this building you need to indicate which floor you want to go to BEFORE you step onto the elevator. After you have stepped on, it’s too late; way too late!

And further, having selected your intended destination on the small touch-screen display in the lobby, you are then instructed WHICH of the four elevators (conveniently labeled A, B, C and D) you should step onto in order to reach your desired floor!

Actually, this is a pretty clever system, but other than the fancy 6” touch-screen display there was absolutely nothing to indicate that anything was different here. A brilliant system, but totally unintuitive … and so very frustrating for first-time users. Which I guess was one of the points that Billy was making in the first place.

All Selective

By Richard Morris, Posted on May 29, 2015 at 8:39 am

During DevPartner 2015 a number of people ran through the “Utilizing the Repository” tutorial, which sets out to demonstrate how the metadata stored in the repository describing your Synergy database can be utilized when building a modern Windows Presentation Foundation (WPF) desktop application using Synergy and the Symphony Framework.

Using your repository, CodeGen and the associated Symphony Framework templates you can build, from the ground up, a complete WPF application, and this is exactly what you do during the tutorial.

Using the Model-View-ViewModel (MVVM) pattern, you code-generate the model elements as repository-based data objects that extend the base Symphony Framework DataObjectBase class – this provides field-level properties with validation and data binding. Then you code-generate the view – the UI element the user interacts with. The view comprises windows containing the individual edit controls, which in turn use code-generated styles. These styles define the visual attributes and data bindings of each field in the repository.

Great, you would think. But I’ve been asked a number of times – and again at the conference – about one default behaviour of a WPF application: the fact that edit controls, specifically text boxes, don’t auto-select their content when they receive focus. I also find it frustrating, but thus far I had been unable to think of a solution. “It’s a deal breaker,” according to Gayle, who’d just completed the tutorial. Well, considering Gayle is a rather fine chap, I guess it’s time for me to look at the problem again. I spoke with Jeff @ Synergex, who pointed me to a blog by Oliver Lohmann that addresses just this problem.

The solution is to register a behaviour against the TextBox control and handle the GotFocus event – and in the event handler force the selection of the data in the TextBox control. Simple!
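
In Synergy .NET terms the idea boils down to something like the sketch below. This is not the actual Symphony Framework implementation (which wires the behaviour up through the code-generated styles); the class and method names here are mine, and it simply shows the GotFocus/SelectAll trick:

    import System.Windows
    import System.Windows.Controls

    namespace Example.Behaviors

        public class SelectAllOnFocus

            ;Attach the handler to a TextBox; in practice this would be done by
            ;a reusable attached behaviour rather than by calling Attach directly.
            public static method Attach, void
                required in textBox, @TextBox
            proc
                addhandler(textBox.GotFocus, gotFocusHandler)
            endmethod

            private static method gotFocusHandler, void
                required in sender, @object
                required in e, @RoutedEventArgs
            proc
                ;Select the entire contents whenever the control receives focus
                ((@TextBox)sender).SelectAll()
            endmethod

        endclass

    endnamespace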

And simple it was – and it usually is when you are looking for that “complex” answer. I’ve not done much with behaviours so far, but I think that is about to change! The Symphony Framework has been updated (did that on the plane home) and I’ll be releasing that to NuGet very shortly. The Symphony Framework “style” template will be updated – it’s now released as part of CodeGen – to reflect the new capabilities, and normal “behaviour” will be resumed.

Learning REST Basics

By Steve Ives, Posted on May 23, 2015 at 1:48 pm

One of the sessions that I presented at the recent DevPartner Conference was on the subject of building RESTful web services using ServiceStack. The basic concepts of REST are often difficult to grasp when you’re first getting started, but while browsing this morning I came across a web site that I thought was a great REST resource; in particular, it included a video that did a really nice job of explaining the basic concepts of REST.

The site is and the video can be found at

Synergy .NET Platform Targeting Options

By Steve Ives, Posted on May 21, 2015 at 10:25 am

One of the many options available to developers each time they create a new project in Visual Studio is how to configure the platform targeting options in the project properties dialog’s Build panel. In the latest versions of Synergy there’s probably not much to worry about, because the default values probably do exactly what you want most of the time, but the defaults were not the same in some older versions of Synergy .NET, so it’s a good idea to understand what your options are and what the implications of choosing each option are.

In Synergy 10.3.1a we’re talking about two options; the Platform target drop-down and the Prefer 32-bit checkbox.

Project Build Options

The Platform target drop-down allows you to select from three different ways that the assembly created by your project (the .DLL or .EXE file) can be built; essentially, you are choosing which .NET CLR (and Framework) will be used to execute your code. The options are:

  • Any CPU (default) – The assembly can be executed by either the 64-bit or 32-bit CLR, with 64-bit preferred.
  • x86 – The assembly can ONLY be executed by the 32-bit CLR on an Intel x86 CPU.
  • x64 – The assembly can ONLY be executed by the 64-bit CLR on an Intel x64 CPU.

It is important to understand that we’re not talking about which Synergy runtime components will be used; we’re talking about which .NET CLR will be used, and the matching Synergy runtime components must be present on the system in order for the assemblies to be used.

The Prefer 32-bit checkbox (which was added in Synergy 10.3.1a and is only available when the platform target is set to Any CPU) provides the ability for you to determine that even though your assembly will support both 32-bit and 64-bit environments, you would prefer that it executes as 32-bit if a 32-bit environment is available.

The chart below summarizes which environment (32-bit or 64-bit) will be selected for all possible combinations of platform targeting settings and deployment platform.

Framework Selection Grid

A red n/a entry in the table indicates that an assembly would not be available for use in that particular environment.
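
If you ever want to confirm at runtime which environment your assembly actually ended up in, a couple of standard .NET properties make it easy to check; here is a quick Synergy .NET sketch:

    import System

    main
    proc
        ;Report whether the current process, and the OS, are 64-bit
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess.ToString())
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem.ToString())
    endmain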

So what’s the take-away from all of this? Well, it’s pretty simple: stick to the defaults unless you have a good reason not to. The current default is Any CPU, Prefer 32-bit, which means that on Intel x86 and x64 systems your apps will run as 32-bit. This in turn means that you only need to install the 32-bit version of Synergy on runtime-only systems, unless you also need to run services such as license server, xfServer, xfServerPlus or SQL OpenNET on the same system. Development systems should ALWAYS have both 32-bit and 64-bit Synergy installed.

DevPartner 2015 – WOW!

By Richard Morris, Posted on May 15, 2015 at 6:37 pm

That was the week that was the DevPartner 2015 conference in Philadelphia. OK, so I’m biased, but I really have to say this was one of the best conference weeks I’ve had the pleasure to be part of for many years. There were some really great sessions: the HBS customer demonstration rocked! They came to a conference a couple of years ago, did a tutorial on xfServerPlus, and with this new-found knowledge (and some PSG guidance) created a cool web bolt-on to their existing Synergy app.

We saw some fresh new faces from Synergex: Marty blasted through the Workbench and Visual Studio development environments we provide and showed some really great tools and techniques. Phil gave us a 101 introduction to many of the “must know” features and capabilities of Synergy SDBMS – and of course was able to address Jeff’s and my performance issues – you had to be there :). Roger demonstrated his wizardry to enlighten everyone as to the issues you need to consider when transferring your data within local and wide area networks – I was the bad router!

Bill Mooney set the whole tone of the conference with a great opening presentation showing just how committed Synergex are to empowering our customers with the best software development capabilities available.

My first day’s session followed and gave me the opportunity to demonstrate how you actually can bring all our great tools together to create true single-source, cross-platform applications that run on platforms as diverse as OpenVMS, UNIX and Microsoft Windows – and even onto a Sony watch running Google Wear!

Steve Ives went 3D holographic with videos from his recent trip to the Microsoft Build conference that showed just how amazing the Microsoft platform is becoming – and we aim to continue to be a first-class player in that arena.

So many of our products are reaching a level of maturity that blows the competition away. Gary Hoffman from TechAnalysts presented a session showing how to use CodeGen and Symphony in the real world and showed just what you can achieve today in Synergy.

Jeff Greene (Senior .NET engineer @ Synergex) and I presented a rather informal (read: written the night before) presentation showing the performance and analysis tools in Visual Studio 2015 that you can use to identify problem areas and memory leaks in your application. Within minutes, Brad from Automated System forwarded me an email he’d just sent to his team:

“At the Synergex conference just this morning, they just showed fantastic new diagnostics tools in Visual Studio 2015.  I just put the Team on the trail of potential memory issues with these new tools in a Virtual PC environment so we don’t alter our current developer stations. This could both reduce the memory footprint and improve performance.” – You can’t beat such instant feedback!

The tutorial time gives attendees the opportunity to play with the latest tools on a pre-configured virtual machine – plug in and code! And we continued the hands-on theme with Friday’s post-conference workshop, where we built the DevPartner 2015 App from the ground up!


Thanks to everyone for coming and making the conference such a great success. It’s our 30th conference next year so keep your eyes and ears open for dates and details – it will be a conference not to miss!
