
Synergex Blog


The Synergy DevPartner Conference: There’s no “one-way” about it

By William Mooney, Posted on April 23, 2014 at 11:23 am

With the 2014 Synergy DevPartner Conference right around the corner, I have been doing some reflecting on how our mission statement, “Advancing Applications. Partnering for Success,” applies to the conference. On the surface, a conference seems like a one-way conversation: the conference host presents information to attendees, who (hopefully) absorb it. However, over the past 28 years (!) of hosting the DevPartner Conference (formerly SPC), I have come to really appreciate the element of partnership involved in the DevPartner Conference—a partnership that transforms a few days of knowledge transfer into enduring business success.

So what does this collaboration look like? It starts with attendees feeling engaged and challenged throughout the three busy days of sessions and tutorials. Their enthusiasm about the tools we have to offer builds over the course of the week, inspiring them to pass the information they glean up the chain of command once they return to the office. The leadership team is receptive to the new ideas, which are incorporated into the company’s short and long-term strategic plans. From there, the information turns into action, and the ideas presented at the conference reappear in the form of a modernized and more powerful application based on proven functionality. Of course, this process could take months, or even years—but with the partnership in place, these goals CAN and WILL be achieved. We’ll do our part and give it our all during those three days of presentations and demos—but it is up to attendees and their managers to meet us halfway by ensuring that the lessons learned don’t get swallowed up in the daily grind, and that the investment in education leads to innovation and increased efficiency back at the office.

I was discussing this vision with a customer’s Senior Manager who is planning on sending a contingent of software developers to this year’s conference. She made my day when she described the system her company has adopted to maximize the value of the DevPartner Conference: after returning to the office, the team presents a condensed version of what they learned to the staff back at home. Studies have shown that explanation significantly facilitates learning, so this is a great way for attendees to reinforce the skills learned at the conference—while at the same time sharing the information with the rest of the team.

Whether or not you go with this model, you have a little over 6 weeks to prepare your own post-conference plan for success. Let us know how we can partner with you to make it happen!

See you in Birmingham or Chicago.

Cheers!


Performance problem upgrading to Server 2012

By Roger Andrews, Posted on February 28, 2014 at 12:06 pm

When one of our customers recently upgraded a file server from Windows Server 2008 to Server 2012, their customers complained of significantly increased end-of-day processing times—up to three times slower with Server 2012 than the previous system.

The system used Terminal Services to allow remote administration and the running of tasks such as day-end processing; there were no general interactive users. Running the same software on a Windows 8 client system resulted in better performance, all other things (disk, network, CPU, etc.) being equal. A brand-new Server 2012 R2 system with just the basic GUI role performed twice as fast as the Server 2008 system. However, as roles were added, the customer noticed that the RDS role caused the slowdown.

Since Server 2008 R2 was introduced, RDS (Remote Desktop Services, formerly known as Terminal Services) has included a feature called Fairshare CPU Scheduling. With this feature, if more than one user is logged on to a system, processing time for a given user is limited based on the number of sessions and their loads (see Remote Desktop Session Host on Microsoft TechNet). This feature is enabled by default and can cause performance problems when more than one user is logged on to the system. With Server 2012, two more fairshare options were added: Disk Fairshare and Network Fairshare (see What’s New in Remote Desktop Services in Windows Server 2012 on TechNet). These features are also enabled by default, and they can come into play even when only one user is logged on. These options proved to be the cause of the slowdown for our customer: they limited I/O for the single logged-on user (or scheduled task), and the day-end processing was heavily I/O bound. We were able to remove this bottleneck by either disabling the RDS role or turning off the fairshare options.

In summary, if a system is used for file sharing services only (no interactive users), use remote administrative mode and disable the RDS role. If the RDS role must be enabled, consider turning off Disk Fairshare if the server runs disk-intensive software (such as database or other I/O-intensive programs), and turn off Network Fairshare if the server hosts services (such as Microsoft SQL Server or xfServer) whose client access would otherwise be throttled. For information on turning off Disk Fairshare and Network Fairshare, see the Win32_TerminalServiceSetting class on Microsoft Developer Network (MSDN).

More articles related to this issue:

  • Resource Sharing in Windows Remote Desktop Services
  • Roles, Role Services, and Features


Using Synergy .NET in Multi-Threaded Applications

By Steve Ives, Posted on October 10, 2013 at 12:28 pm

There are special considerations that must be taken into account when implementing non thread-aware Synergy .NET code that will execute in a multi-threaded environment. As will be explained later, one such environment is code executing under ASP.NET / Internet Information Services (IIS).

The Microsoft .NET environment provides a mechanism for isolating multiple instances of an application from one another. This mechanism is called Application Domains, often referred to as AppDomains. Essentially, an AppDomain entirely isolates an instance of some piece of executing code from all other executing instances of that code (and of any other code), so that these instances cannot interfere with one another in any way. AppDomains also ensure that the failure of any code executing within an AppDomain cannot adversely affect any other executing code.

AppDomains are specifically useful in situations where there may be multiple instances of a piece of code executing within the context of a single process, for example where different execution threads are used to perform multiple concurrent streams of processing (perhaps on behalf of different users) all within a single host process. An example of such an environment is ASP.NET Web applications executing on an IIS Web Server.

Synergy .NET provides specific support for isolating non thread-aware code in AppDomains, and in some situations it can be critical that AppDomains are used in order to have your Synergy .NET code behave as expected in several key areas. Those key areas are channels, static data, common data and global data.

If any of the above items are used in a multi-threaded environment without the use of AppDomains, then they are SHARED between all instances of the code running in the multiple threads. By using an AppDomain, code executing within a given thread can isolate itself from code running in other threads, thus returning to the normal or expected behavior in Synergy environments.

If you are implementing Synergy .NET code that does not make use of multi-threading and will not execute in a multi-threaded application host, then you don’t need to worry about any of this.

If you are implementing multi-threaded Synergy .NET code, then you have the option of using AppDomains to isolate parts of your code from other instances if you choose or need to do so.

However, if you are implementing non thread-aware Synergy .NET code that will execute in an ASP.NET / IIS environment, then you are automatically in a multi-threaded environment, and it is critical that you use AppDomains to isolate instances of your application code from each other.

The basic problem is this: ASP.NET and IIS isolate different APPLICATIONS within their own AppDomains, but within an ASP.NET application, multiple (potentially many) user “sessions” all execute within the SAME AppDomain. This means that by default there is no isolation of executing code between multiple ASP.NET user sessions, and with Synergy .NET that in turn means that channels, and static, common, and global data, are shared between those user sessions. As an ASP.NET user session is often considered to correlate to a user process in a regular application, the problem becomes apparent. If one “user” opens a new channel and reads and locks a record, that same channel and record lock are shared with all other users. If one user places a certain value in a common field for use by a routine that is about to be called, that value could be changed by code running for a different user before the first user’s routine gets called.
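The shared-state hazard described above is not unique to Synergy .NET. As a cross-language sketch (Python rather than Synergy DBL, with names invented for the example), the following shows module-level state, playing the role of common/global data, being clobbered across threads, while thread-local storage behaves like data isolated in a per-session AppDomain:

```python
import threading

shared_value = None           # like COMMON/GLOBAL data: one copy per process
isolated = threading.local()  # like data isolated in a per-session AppDomain
results = {}
barrier = threading.Barrier(2)

def session(user, value):
    global shared_value
    shared_value = value       # every "session" overwrites the same copy
    isolated.value = value     # each thread gets its own private copy
    barrier.wait()             # wait until both sessions have written
    results[user] = (shared_value, isolated.value)

t1 = threading.Thread(target=session, args=("user1", "A"))
t2 = threading.Thread(target=session, args=("user2", "B"))
t1.start(); t2.start()
t1.join(); t2.join()

# Both sessions now see the same (last-written) shared value, but each
# kept its own thread-local value.
assert results["user1"][1] == "A" and results["user2"][1] == "B"
assert results["user1"][0] == results["user2"][0]
```

Without isolation, the last writer wins for every session; with per-session isolation, each session keeps its own copy. That is the same motivation for using AppDomains here.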

Clearly it would be very difficult, if not impossible, to build and execute reliable code in this kind of environment. Without support for AppDomains in a multi-threaded environment, a Synergy developer would need to:

  • Always use automatic channel number selection.
  • Always close any channels before returning from the routine that opened them (no persistent open files).
  • Never use static, common, or global data unless it contains application-wide information that does not change during execution.

While it is possible to write code which adheres to these rules, it would be at the least inconvenient to do so, and because of the way that Synergy code has typically been written in the past, existing code would likely require significant reworking.

The solution to this problem is to have each user’s code isolate itself from other users’ code by loading itself into an AppDomain, and this is relatively easy to do. Specifically, in the case of an ASP.NET Web application, code can be written to hook the Session_Start event that signals the beginning of a new user session and create a new AppDomain in which to execute, and to hook the Session_End event and clean up by unloading the AppDomain. There are other possible approaches that may be more appropriate in some cases, but the principle is the same: have the code isolate itself in an AppDomain before any non thread-aware Synergy .NET code executes.

By isolating non thread-aware Synergy .NET code in an AppDomain in a multi-threaded environment you have essentially the same operating environment that you would expect for any other Synergy code executing in a process, with one notable exception. That exception is that environment variables created with XCALL SETLOG are always applied at the PROCESS level. This means that Synergy .NET code executing in any multi-threaded environment should never rely on the use of XCALL SETLOG unless the value being set is applicable to code executing in all other threads. An example of this might be an environment variable that identifies the fixed path to a data file.
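The process-level behavior of environment variables can be illustrated with a small sketch (in Python rather than Synergy DBL; DATA_PATH is a made-up variable name): a variable set on one thread is immediately visible on every other thread in the process, just as with XCALL SETLOG.

```python
import os
import threading

seen = {}

def worker():
    # Runs on a separate thread, yet sees the process-wide value.
    seen["worker"] = os.environ.get("DATA_PATH")

os.environ["DATA_PATH"] = "/data/files"  # set on the main thread
t = threading.Thread(target=worker)
t.start()
t.join()

assert seen["worker"] == "/data/files"
```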

Synergex Professional Services Group is in the process of developing a sample ASP.NET Web application that will demonstrate how to use AppDomains to ensure that code executing in one ASP.NET session is isolated from other ASP.NET sessions. We will publish this code in Synergy/DE CodeExchange soon, and I will post another article on this blog once the code has been published.


Symphony Framework Basics: Control Styling

By , Posted on September 6, 2013 at 5:05 am

In my previous article (Symphony Framework Basics: Data Binding) I demonstrated how to perform simple data binding between your XAML UI controls and your Data Objects.  This article demonstrates how to build powerful styles to define and control your user interface and provide automated data binding to your Data Objects.

Before we look at styles, let’s recap how we do data binding.  Consider the following simple repository structure:

Record group_record
    GROUP_ID    ,A20   ; (1,20) group id
    DESCRIPTION ,A100  ; (21,120) description

When created as a Data Object, this record exposes two properties:

public property Group_id, a20
public property Description, a100

In the XAML code we can data bind the properties exposed by the Data Object to standard UI controls:

<TextBox Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter}}"/>
<TextBox Text="{Binding Path=Description, Converter={StaticResource alphaConverter}}"/>

There are a number of issues here, and not all of them are obvious.  Although we have performed the data binding, there is no code in the XAML to prevent the user typing more characters than the underlying data allows.  The Group_id property, for example, only allows up to twenty characters, so we need to add code to prevent more being entered.  In the repository we’ve defined the field to contain only uppercase characters, and again the XAML is not honouring this requirement.  When a field is in error, for example a required field that is blank, the underlying Data Object exposes this information, but we are not utilising it here.  Nor are we controlling whether the field is read-only, whether entry is disabled, and so on.  All these settings and more can be configured against the field in the Synergy Repository.

Using CodeGen and the correct Symphony templates we can generate styles that define exactly how we require field entry to be controlled.

Generating the style files is very simple.  The CodeGen command line is:

codegen -s GROUP -t Symphony_Style -n GroupMaint -ut ASSEMBLYNAME=GroupMaint -cw 16

One interesting item on the CodeGen command line is “-cw 16”.  This defines a standard width of 16 pixels per character, which is used when calculating the size of a control; the twenty-character Group_id field, for example, yields a control 320 pixels wide.

The generated style file contains individual styles for each field in the repository structure, as well as a style for the prompt.  Here is an example of a prompt style:

<Style x:Key="Group_Group_id_prompt" TargetType="{x:Type Label}">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type Label}">
        <Label
          Content="Group ID"
          IsEnabled="{Binding Path=Group_idIsEnabled}">
        </Label>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

And a field style:

<Style x:Key="Group_Group_id_style" TargetType="{x:Type symphonyControls:FieldControl}">
  <Setter Property="FocusVisualStyle" Value="{x:Null}"/>
  <Setter Property="Focusable" Value="False"></Setter>
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type symphonyControls:FieldControl}">
        <TextBox Name="ctlGroup_Group_id"
          Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter},
                 UpdateSourceTrigger=PropertyChanged,
                 ValidatesOnDataErrors=True}"
          Validation.ErrorTemplate="{StaticResource validationTemplate}"
          MaxLength="20"
          Width="320"
          CharacterCasing="Upper"
          IsEnabled="{Binding Path=Group_idIsEnabled}"
          IsReadOnly="{Binding Path=Group_idIsReadOnly}"
          VerticalAlignment="Center"
          HorizontalAlignment="Left"
          ToolTip="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors), Converter={StaticResource errorConveter}}">
          <TextBox.Style>
            <Style>
              <Style.Triggers>
                <DataTrigger Binding="{Binding Path=Group_idIsFocused}" Value="true">
                  <Setter Property="FocusManager.FocusedElement"
                          Value="{Binding ElementName=ctlGroup_Group_id}"></Setter>
                </DataTrigger>
                <DataTrigger Binding="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.HasError)}" Value="True">
                  <Setter Property="TextBox.Background">
                    <Setter.Value>
                      <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
                        <LinearGradientBrush.GradientStops>
                          <GradientStop Offset="0.2" Color="WhiteSmoke" />
                          <GradientStop Offset="3" Color="Red" />
                        </LinearGradientBrush.GradientStops>
                      </LinearGradientBrush>
                    </Setter.Value>
                  </Setter>
                </DataTrigger>
              </Style.Triggers>
            </Style>
          </TextBox.Style>
        </TextBox>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

This code may look a little verbose, but it enables a number of capabilities:

  • It data binds the underlying UI control to the Data Object property.
  • It controls features like field length, character casing, read-only status, etc.
  • The tooltip displays any error information when the field is in error.
  • The control background colour turns red when the field is in error.

Once you have created your styles and added them to your Visual Studio project, you can reference and use them in your UI design.  To reference the style:

<ResourceDictionary Source="pack:/GroupMaint;component/Resources/Group_style.CodeGen.xaml"/>

Each style is based on a control in the Symphony Framework called “FieldControl”, which can be found in the Symphony.Conductor.Controls namespace.  You must add a reference to this namespace in your XAML code:

xmlns:symphonyControls="clr-namespace:Symphony.Conductor.Controls;assembly=SymphonyConductor"

Now you can reference the FieldControl and apply the required style to it:

<symphonyControls:FieldControl DataContext="{Binding Path=MasterData}"
    Style="{StaticResource Group_Group_id_style}">
</symphonyControls:FieldControl>

And to add the prompt (label) style, use:

<Label Style="{StaticResource Group_Group_id_prompt}"
    DataContext="{Binding Path=MasterData}" />

Because the styles are linked to the same property in the same Data Object, when your code disables the input control, the prompt will be greyed out as well.

The code snippets here are just part of the overall solution.  To see the full details, you can watch a short video at http://youtu.be/FqWpMRrSb4w.  This article covers styling of the user interface.  The next article will demonstrate using all of the different Synergy field types and utilizing controls like date pickers, check boxes, etc.


Are you ready for the demise of Windows XP?

By Roger Andrews, Posted on August 28, 2013 at 10:28 am

As Microsoft has stated and many other companies have echoed (see links below), the end of life for Windows XP is April 2014. Yep, in just 7 months, Microsoft, anti-virus vendors, and many software vendors (including Synergex) will no longer provide support or security patches for IE and XP. And the end of life for Windows Server 2003 will follow soon after.

Why is this so important? And, if what you’re using isn’t broken, why fix it?

Let’s consider, for example, a doctor’s, dentist’s, or optician’s office running Windows XP and almost certainly Internet-connected – in fact, probably using an Internet-based application. All it takes is an infected web site, a Google search gone astray, or a mistyped URL, and the PC is infected – INCLUDING all of the office’s confidential medical data. Plus, most offices allow their workers to browse the Internet at some point in the day – to catch up on emails and IM, conduct searches, surf eBay, etc. If the office is running XP after April 2014, it is almost certain that it will be open to infection by a rootkit or other malicious software, because malware authors will take advantage of every vulnerability Microsoft no longer fixes after the end of life. Add the fact that the antivirus vendors will also reduce or eliminate support, and you have a mass bot-like infection waiting to happen. Once a system gets a rootkit, it’s nigh on impossible to remove without wiping the system clean. To further complicate things, it usually takes a boot-time scan from a different device to detect many of these infections.

Further, while Windows XP had an impressive 13-year run, it is far less secure than later operating systems, and the hardware it runs on in many cases is also at the end of its life.

If you or your customers are running Windows XP or Server 2003, it’s time to upgrade to a more modern, more secure operating system like Windows 7. At least you can rest assured that Microsoft’s monthly Patch Tuesday will provide protection with security fixes in conjunction with your anti-virus vendors to protect sensitive information and keep the business running.

https://community.mcafee.com/community/security/blog/2013/06/07/xp-end-of-life–08-april-2014


http://blogs.windows.com/windows/b/springboard/archive/2013/04/08/365-days-remaining-until-xp-end-of-support-the-countdown-begins.aspx

http://blogs.technet.com/b/mspfe/archive/2013/04/29/windows-server-2003-rapidly-approaches-end-of-life-watch-out-for-performance-bottlenecks.aspx

http://www.microsoft.com/en-us/windows/endofsupport.aspx


2013 DevPartner Conference Tutorials Now Available On-Line

By Steve Ives, Posted on June 30, 2013 at 10:24 am

We and many of our customers have just returned home from another great conference, during which we introduced another batch of Synergy/DE developer tutorials. These tutorials have now been added to those from previous years and made available on-line. The tutorials can be downloaded via a small client application. If you already have the tutorials client installed, you will see the new content when you start the application. If not, you can download and install the tutorials client from the following URL:

http://tutorials.synergex.com/Download.aspx


We’re Ready for the 2013 DevPartner Conference … Are You?

By Steve Ives, Posted on June 5, 2013 at 9:36 pm


What you’re looking at is fifty terabytes of external USB 3.0 hard drives (for the oldies amongst us, that’s the equivalent of 10,000 RL01s), and we’ll be giving them away during the DevPartner conferences in Bristol (UK) and Providence (RI) in the next three weeks.

Of course, it’s not really about the hardware; anyone can get that! It’s really about what’s ON the hardware. Each of these disks contains a Windows 8 virtual machine that includes the latest version of Synergy/DE (10.1.1a, just released today) and Microsoft Visual Studio 2012 (Update 2).

But it’s not really about the products that are installed on the virtual machines either. It’s really about what you can learn from these “freebies”, and how you can use what you have learned to the benefit of your employer.

During this year’s DevPartner conferences, the Synergex Professional Services Group will introduce seventeen all-new hands-on tutorials that will provide you with a quick start to all of the latest features of Synergy/DE. In addition, we’ll be including updated versions of three of the most popular tutorials from previous conferences.

It’s not too late! If you haven’t already signed up for the 2013 DevPartner Conference, all you have to do is visit this link:

http://conference.synergex.com/register.aspx

But talk to your boss before you visit the link, because if your company is already a member of the DevPartner program you might find that your conference registration is free!

We are all looking forward to seeing you again for another great conference.


Windows 8: If I can do it, so can you

By William Mooney, Posted on February 13, 2013 at 9:10 am

Although I’ve always considered myself to be an early adopter, I must admit I’ve been a bit skeptical about upgrading to Windows 8 for a number of reasons: its completely new look and feel, all the negative propaganda surrounding it, its missing “Start” button, our internal struggles with the new icon requirements, other people’s horror stories… just to name a few. This past weekend I was forced to face my fears head on when I offered to help my father purchase a laptop. We found ourselves at one of the big box stores, where I quickly realized that any laptop (or desktop, for that matter) we purchased would have Windows 8 installed (and I would have to pay a downgrade fee to get to my tried and true Windows 7).  When I expressed my doubts to the salesperson, he countered with surprising enthusiasm. He raved about Windows 8 and was so reassuring that I soon felt comfortable enough to make the leap, instead of going online to purchase a Windows 7 system.  It certainly got me thinking that I’ve been paying attention to only the negative stuff, and not the positive. Even more importantly—just like 64-bit systems—I know that our customers will soon be dealing with end-users just like me.

At home a few hours later with my dad’s new Windows 8 laptop, buyer’s remorse and the dreaded Windows 8 user experience were in full force. Things that were so familiar and intuitive were gone. Armed with my trusty Windows 7 laptop and my BFF Google at my side, I slowly and painfully learned how to turn on Windows Defender, get the Internet Explorer 10 address bar to reappear, get to the traditional Windows desktop, etc., etc.

Unsure about how I would be able to support my dad on Windows 8 when I didn’t have a clue myself, I decided the next day that the sooner I upgraded my own system and got on with it, the sooner I would be able to help my dad—and hopefully our customers as well.

So my next step was to figure out what on my system was—and more importantly, was not—supported. I stumbled upon a great little tool, the Windows 8 Upgrade Assistant. You basically just run this tool to determine which programs on your computer are or are not compatible with Windows 8. I highly recommend it. When you run it, you’ll notice that Synergy/DE 10.1 is listed as compatible with Windows 8, which brings me to the real point of this post.

You probably have users/customers who will ultimately upgrade to Windows 8 (or the latest version of whatever platform you’re on), either because they drink the Kool-Aid or, frankly, because they have no other choice. You’ll have some who will be gung-ho about going to the latest version and oblivious of any reason not to. Others, like me, will be skeptical about upgrading, but it’s in their nature to go for it anyway. And you’ll have others who will just happen to buy a new system and will assume all of their software is supported. No matter what the reason, you’ll want to be prepared when your users ultimately upgrade, so make sure your applications can support these customers when they inevitably ask. First step: make sure to get your Synergy/DE version current by upgrading to version 10.1 right away. Then, do what I’m doing: vow to learn something new about your new version/platform every day. In other words, embrace the change!


HTTP API Enhancements in DBL 10.1

By Steve Ives, Posted on January 14, 2013 at 11:24 pm

In addition to introducing several totally new features, DBL 10.1 also includes enhancements to the client portion of the HTTP API. These enhancements make the API significantly easier to use, and they also make it possible to achieve things that were not previously possible.

Since the HTTP API was introduced in DBL 7.5, the client part of the API has consisted of two routines: HTTP_CLIENT_GET and HTTP_CLIENT_POST. As suggested by their names, these routines allow you to issue GET and POST requests to an HTTP server. A GET request is a simple request in which a URI is sent to the server and a response (which may include data) comes back. A POST request is slightly different in that, in addition to the URI, additional data may also be sent to the server in the body of the HTTP request.

When dealing with an HTTP server it isn’t always possible to pre-determine the amount of data to be sent to the server, and it’s certainly not possible to know how much data will come back from the server for any given request. So in order to implement the HTTP API it was necessary to have a mechanism to deal with variable length data of any size, and at that time the only solution was to use dynamic memory.

Using dynamic memory worked fine: any data to be sent to the HTTP server as part of a POST request was placed into dynamic memory and the memory handle passed to the API, and any data returned from a GET or POST request was placed into dynamic memory by the API and the handle returned to the application. Dealing with variable-length strings using dynamic memory isn’t particularly hard, but while only a single line of code is required to perform an HTTP GET or POST, several more lines were typically required to marshal data into and out of memory handles.

When the System.String class was introduced in DBL 9.1, so was the opportunity to simplify the use of the HTTP API, and that became a reality in DBL 10.1.

In order to maintain compatibility with existing code the HTTP_CLIENT_GET and HTTP_CLIENT_POST routines remain unchanged, but they are joined by two new siblings named HTTP_GET and HTTP_POST. These routines are similar to the original routines, essentially performing the same task, but they are easier to use because they use string objects instead of dynamic memory. And because the string class has a length property it is no longer necessary to pass separate parameters to indicate the length of the data being sent, or to determine the length of the data that was received. String objects are also used when passing and receiving HTTP headers.

So the new HTTP_GET and HTTP_POST routines make the HTTP API easier to use, but there is a second part to this story, so read on.

One of the primary use cases for the HTTP API is to implement code that interacts with Web Services, and in recent years a new flavor of Web Services called REST services (REST stands for Representational State Transfer) has become popular. With traditional Web Services, all requests were typically sent to the server via either an HTTP GET or POST request, but with REST services two additional HTTP methods are typically used: the HTTP PUT and DELETE methods.

Many of you will be familiar with the term “CRUD”, which stands for “Create, Read, Update and Delete”. Of course, these are four operations that commonly occur in software applications: the code that we write often creates, reads, updates, or deletes something. When designing traditional Web Services, we would often indicate the type of operation via a parameter to a method, or perhaps even implement a separate method for each of these operations. With REST-based web services, however, the type of operation (create, read, update, or delete) is indicated by the type of HTTP request used (PUT, GET, POST, or DELETE).

To enable DBL developers to use the HTTP API to interact with REST services an extension to the HTTP API was required, and DBL 10.1 delivers that enhancement in the form of another two new routines capable of performing HTTP PUT and DELETE requests. As you can probably guess the names of these two new routines are HTTP_PUT and HTTP_DELETE. And of course, in order to make these new routines easy to use, they also use string parameters where variable length data is being passed or received.
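To make the verb-to-operation mapping concrete, here is a sketch in Python (using the standard library, not the Synergy HTTP API) that stands up a tiny in-process HTTP server and exercises the PUT/GET/POST/DELETE mapping described above; the /item/1 resource path is invented for the example:

```python
# CRUD over HTTP verbs, following the article's mapping:
# PUT = Create, GET = Read, POST = Update, DELETE = Delete.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

store = {}  # a toy resource store keyed by request path

class CrudHandler(BaseHTTPRequestHandler):
    def _body(self):
        n = int(self.headers.get("Content-Length", 0))
        return self.rfile.read(n).decode()
    def _reply(self, code, text=""):
        data = text.encode()
        self.send_response(code)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)
    def do_PUT(self):      # Create
        store[self.path] = self._body(); self._reply(201)
    def do_GET(self):      # Read
        self._reply(200, store.get(self.path, ""))
    def do_POST(self):     # Update
        store[self.path] = self._body(); self._reply(200)
    def do_DELETE(self):   # Delete
        store.pop(self.path, None); self._reply(204)
    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CrudHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

def request(method, path, data=None):
    req = urllib.request.Request(base + path,
                                 data=data.encode() if data else None,
                                 method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

request("PUT", "/item/1", "created")             # Create
assert request("GET", "/item/1") == "created"    # Read
request("POST", "/item/1", "updated")            # Update
assert request("GET", "/item/1") == "updated"
request("DELETE", "/item/1")                     # Delete
assert request("GET", "/item/1") == ""
server.shutdown()
```

The same sequence of requests is what DBL code would issue through the four routines; only the client plumbing differs.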

You can find much more information about the HTTP API in the DBL Language Reference Manual, which of course you can also find on-line at http://docs.synergyde.com. In fact, if you’re feeling really adventurous you could try Googling something like “Synergy DBL HTTP_PUT”.


Unit Testing with Synergy .NET

By Steve Ives, Posted on at 11:02 pm

One of the “sexy” buzz words, or more accurately “buzz phrases”, being bandied around with increased frequency is “unit testing”. Put simply, unit testing is the ability to implement specific tests of small “units” of an application (often down at the individual method level) and then automate those tests in a predictably repeatable way. The theory goes that if you are able to automate the testing of all of the individual building blocks of your application, ensuring that each of those components behaves as expected under various circumstances (testing what happens when you use those components as intended, and also when you use them in ways they are not supposed to be used), then you stand a much better chance of the application as a whole behaving as expected.

There are several popular unit testing frameworks available and in common use today, many of which integrate directly with common development tools such as Microsoft Visual Studio. In fact, some versions of Visual Studio have an excellent unit testing framework built in: it’s called the Microsoft Unit Test Framework for Managed Code, and it is included in the Visual Studio Premium and Ultimate editions. I am delighted to be able to tell you that in Synergy .NET version 10.1 we have added support for unit testing Synergy applications with that framework.

I’ve always been of the opinion that unit testing is a good idea, but it was never really something that I had ever set out to actually do. But that all changed in December, when I found that I had a few spare days on my hands. I decided to give it a try.

As many of you know I develop the CodeGen tool that is used by my team, as well as by an increasing number of customers. I decided to set about writing some unit tests for some areas of the code generator.

I was surprised by how easy it was to do, and by how quickly I was able to start to see some tangible results from the relatively minimal effort; I probably spent around two days developing around 700 individual unit tests for various parts of the CodeGen environment.

Now bear in mind that when I started this effort I wasn’t aware of any bugs. I wasn’t naive enough to think that my “baby” was bug free, but I was pretty sure there weren’t many bugs in the code; I pretty much thought that everything was “hunky dory”. Boy, was I in for a surprise!

By developing these SIMPLE tests … call this routine, pass these parameters, expect to get this result type of stuff … I was able to identify (and fix) over 20 bugs! Now, to be fair, most of these bugs were in pretty remote areas of the code, in places that perhaps rarely get executed. After all, there are lots of people using CodeGen every day … but a bug is a bug … the app would have fallen over for someone, somewhere, sometime, eventually. We all have those kinds of bugs … right?
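To illustrate how such simple tests surface latent bugs in rarely-executed paths, here is a contrived Python example (both routines are invented for illustration): a paging helper that passes the everyday case but fails a boundary test.

```python
def last_page_buggy(total_items, page_size):
    # Looks right for 25 items / 10 per page -> 3 pages,
    # but gives 3 for exactly 20 items, where 2 is correct.
    return total_items // page_size + 1

def last_page_fixed(total_items, page_size):
    # Ceiling division handles exact multiples; max() covers the
    # empty case so there is always at least one page.
    return max(1, -(-total_items // page_size))

# The everyday case passes either way...
print(last_page_buggy(25, 10), last_page_fixed(25, 10))  # 3 3

# ...but a trivial boundary test exposes the bug.
print(last_page_buggy(20, 10), last_page_fixed(20, 10))  # 3 2
```

The buggy version might run in production for years before anyone pages through an exact multiple of the page size; a one-line unit test catches it immediately.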

Anyway, suffice it to say that I’m now a unit testing convert. So much so, in fact, that the next time I get to develop a new application, I’m pretty sure the first code I write after the specs are agreed will be the unit tests … BEFORE the actual application code is written!

Unit testing is a pretty big subject, and I’m really just scratching the surface at this point, so I’m not going to go into more detail just yet. For now I’m just throwing this information out there as a little “teaser” … I’ll be talking more about unit testing with Synergy .NET at the DevPartner conferences a little later in the year, and I’ll certainly write some more in-depth articles on the subject for the blog as well.


What would you say to a prospect who questions why your app is written in Synergy?

By William Mooney, Posted on September 20, 2012 at 5:38 pm

Recently the technology director for one of our top customers forwarded to me a copy of a lengthy email he had sent to the employees at his company, just raving about the new features available in the latest version of Synergy/DE. Apparently his sales director had responded to the email with great enthusiasm, but requested a condensed version of the email that he could forward out to decision makers and prospects. So my contact, the development manager, asked me if I could repeat something I had said during one of the evening events at the DevPartner Conference in May—something about how to respond to a prospect who questions why your app is written in Synergy. He thought what I had said represented a condensed version of his email, and was something that his sales director would be able to use. At first, I tried to remember exactly what it was that I said (bear in mind, I probably had a pint of Guinness under my belt at the time) but then quickly decided that there was an easy answer to this question—Guinness or no Guinness.

So, I provided the response below.

If asked, “What would you say to a prospect who questions why your app is written in Synergy?”, I would say…

Application X [the customer’s application] is developed with Synergy DBL, which is one of the most advanced languages in existence today for developing enterprise business applications. While Synergy/DE is a modern OO development suite that rivals any popular tool set today, what separates it from the pack is its portability. When we first developed Application X 30 years ago, we could never possibly have imagined that our customers would need to run on Windows ten years ago, or that there would be a .NET environment as we know it today. Because we use Synergy/DE, which over the years has consistently added support for the platforms we’ve needed to get to, and which currently compiles and runs on OpenVMS, all flavors of UNIX, Linux, Windows, .NET, handheld devices, and the Cloud, we at Company X can focus on functionality. There is no question there will be new user platforms down the road, but because Application X is based on Synergy/DE, we will be in a position to leverage our current business logic without the need for rewrites, no matter what shape a new platform happens to take. For “future proofing” an application, there’s no better place to be.

Cheers!



Welcome Back, DBL

By William Mooney, Posted on September 12, 2012 at 12:17 pm

OK, we were wrong… and many of you were right! Back before the turn of the century, we went through some re-branding efforts for our development tools—and renaming “Synergy DBL” to “Synergy Language” was one of the biggies. Ever since then, we’ve been trying to correct both customers and employees whenever they reference “DBL”. The primary reasons for making the change were 1) to move away from something associated with a “3GL,” and 2) to create a simpler, more descriptive name. The challenge here was twofold: first, going from “DBL” to “Language” really wasn’t that exciting. And second, no matter what we changed the name to, you and our employees would always and forever call it DBL, because that’s the name that’s embedded in decades of code. DBL is the name of our compiler, after all, and the extension of our source files. So, as our portable, advanced, object-oriented Synergy products have stood the test of time and soared to even greater heights while other development tools have come and gone (4GLs, RAD/rapid application development, etc.), we are now returning to, and embracing, the name that was there at the beginning—and always has been there—welcome back, Synergy DBL!


Live from Bell Harbor

By Roger Andrews, Posted at 11:43 am

Today I’m excited to be blogging from the TV studio at the Bell Harbor Convention Center in Seattle for the live Visual Studio 2012 launch.

Since the Build conference last September, Synergex has been working closely with the Microsoft development teams to ensure that Synergy/DE works seamlessly with all of the exciting new Microsoft technologies being released this fall: Visual Studio 2012, Windows 8, and Synergy for Windows Store applications on both ARM and Intel processors. Our team has made several visits up to Redmond to work directly with Microsoft engineers on Visual Studio, Windows 8, and Synergy/DE.

You can download Synergy/DE 10.0.3 today to start using the latest Visual Studio 2012 features, including the new async and await functionality demonstrated by Microsoft at the Visual Studio launch event.
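The async and await keywords demonstrated at the launch are, of course, .NET language features; purely as an illustration of the same concept, here is a small Python asyncio sketch (all names are invented) showing how awaiting lets several simulated I/O requests run concurrently instead of blocking one after another:

```python
import asyncio

async def fetch_record(record_id):
    # Simulate a non-blocking I/O call (e.g., a database or web request).
    await asyncio.sleep(0.01)
    return {"id": record_id, "status": "ok"}

async def main():
    # While each "request" is in flight, the await hands control back
    # to the event loop; gather() runs all three lookups concurrently.
    results = await asyncio.gather(*(fetch_record(i) for i in (1, 2, 3)))
    return [r["id"] for r in results]

print(asyncio.run(main()))  # [1, 2, 3]
```

The three simulated requests complete in roughly the time of one, which is the productivity win async/await brings to UI and server code alike.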

I’m also incredibly pleased to talk about our new KitaroDB NoSQL database for Windows Store applications, which we are releasing today. Built on our solid, high-performance Synergy DBMS product, KitaroDB is the first on-disk NoSQL database in the Windows 8 sandbox, working with x86, x64, and ARM processors.

We have a Netflix sample application that uses KitaroDB, and in the next few weeks we will be launching a great new Windows Store application that takes advantage of KitaroDB for its local persistent storage.

See www.kitarodb.com for more details.



Symphony Framework and CodeGen Helping Customers Migrate to a WPF UI

By Steve Ives, Posted on August 1, 2012 at 4:20 pm

Richard Morris and I have just returned from a seven-day consulting engagement with a customer, and I wanted to share some information about what we achieved during the visit. The customer we visited is one of our larger ISVs and has an extensive Synergy application, much of which is implemented using the Synergy/DE UI Toolkit. As with many applications, life started out on cell-based systems many years ago, but the application has been deployed exclusively on Windows systems for many years now.

The customer in question had listened to us talking about Symphony Framework and CodeGen at the Chicago DevPartner conference, and was interested in how they might leverage them to accelerate the process of updating the look and feel of their applications by replacing their existing UI Toolkit user interface with a Windows Presentation Foundation (WPF) user interface. Needless to say, we were eager to help, because we believe we have a great story to tell, and some great tools to share!

Most in the industry would agree that WPF represents the current “state of the art” when it comes to implementing user interfaces for Windows desktop applications. We have had our eyes on WPF for some time now and have been learning about its capabilities. We have also been encouraging customers to consider WPF as a great way of updating the UI of their existing applications, or implementing the UI of new ones. And thanks to new features in Synergy/DE 9.3 and 9.5, there are a couple of great ways of doing just that.

For existing applications, the Synergy/DE .NET Assembly API can be used to embed WPF directly into existing applications. One of the main benefits of doing so is that the application can be enhanced screen by screen, without the need for extensive rewrites and long development cycles. For new development, Synergy .NET can be used to build all-new applications with a WPF user interface. There is also a realistic migration path between the two: you can choose to start off by enhancing screens in an existing application via the .NET Assembly API today, and then ultimately migrate the entire application to a native Synergy .NET solution. All very “doable”.

Before I get into the specifics of our tools and what was achieved, there is one more thing that I should mention. Just as most industry pundits would agree that WPF is the way to go for Windows desktop applications, most would also tell you that there is a specific WAY that WPF applications should be implemented; it’s called the Model-View-ViewModel design pattern, often abbreviated as MVVM.

A design pattern describes a methodology for implementing a software solution to a certain problem. The MVVM design pattern sets out a way of designing software such that there are clear lines of demarcation between code that deals with different parts of an application. Specifically, it prescribes how the “Model” (code which implements an application’s data definitions and business logic) should be separated from the “View” (code which implements the application’s user interface), and how these two entities should be joined by the “ViewModel”. We’ve talked quite extensively about MVVM in the past, and there are lots of great resources available online, so I don’t intend to go into more detail here. Suffice it to say that adhering to the MVVM design pattern is strongly recommended when implementing WPF applications.
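As a language-neutral sketch of that separation (this is illustrative Python with invented class names, not Symphony Framework or WPF code), the three roles might look like this:

```python
class CustomerModel:
    """Model: data and business rules, with no knowledge of any UI."""
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

class CustomerViewModel:
    """ViewModel: exposes model state in view-ready form and raises
    change notifications that the view can bind to."""
    def __init__(self, model):
        self._model = model
        self._listeners = []

    def on_property_changed(self, callback):
        self._listeners.append(callback)

    @property
    def display_name(self):
        # View-oriented formatting lives here, not in the model.
        return self._model.name.upper()

    def apply_payment(self, amount):
        self._model.balance -= amount
        for notify in self._listeners:
            notify("balance")

# The "view" here is just a callback; in WPF it would be XAML bindings.
vm = CustomerViewModel(CustomerModel("Acme Ltd", 100.0))
vm.on_property_changed(lambda prop: print("changed:", prop))
vm.apply_payment(25.0)
print(vm.display_name)  # ACME LTD
```

The key point is direction of knowledge: the model knows nothing about the view, the view binds only to the view model, and the view model mediates between the two.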

I mentioned earlier that we have been focusing on WPF for some time now, and also on MVVM. But as well as just learning about the technologies and patterns, Richard Morris has been “beavering away” at his home office in England pondering the question “how can we help our customers to easily use WPF and MVVM in the context of their existing Synergy applications?”. Once he’d finished pondering the question, he then started coding the answer … and the Symphony Framework was born.

So just what is the Symphony Framework and how can it help you? Well in a nutshell, Symphony Framework is an MVVM Framework (library) which can be leveraged by Synergy developers to significantly simplify the process of implementing WPF user interfaces in their Synergy applications, while at the same time adhering to the best practices prescribed by the MVVM design pattern. Symphony Framework can be used to incrementally add WPF UI to existing applications in conjunction with the .NET Assembly API, and it can be used to implement all-new Synergy .NET applications with a WPF UI.

Some areas of Symphony Framework are designed to specifically address the somewhat unique challenges associated with migrating UI Toolkit applications, but the framework is in no way limited to working only with UI Toolkit applications. To cut a long story short, if you have an existing Synergy application and want to “spiff it up” with a fabulous new WPF UI, then Symphony Framework might just become your new best friend!

I’m not going to go into a huge amount of detail about HOW this all takes place, but I do want to briefly describe some of the common steps. For the purpose of this illustration I’ll ask you to imagine an existing UI Toolkit application (but remember, it doesn’t have to be Toolkit), and imagine that I want to do a gradual screen-by-screen migration to a WPF UI, initially driven by the existing application (via the .NET Assembly API). What might the steps be? Well, for each screen (or window) we might:

  • Create a “Model” class that describes and exposes the underlying data to be worked with. Model classes inherit a lot of functionality from base classes in the Symphony Framework.
  • Create a “View” using WPF.
  • Create a “ViewModel” that exposes the information in the model to the view, and provides the functionality needed to service the requirements of the view. ViewModel classes also inherit a lot of functionality (in some cases all of the required functionality) from base classes in the Symphony framework.

The code for all of the steps described above would be implemented in Synergy .NET and would be compiled into a .NET assembly.

  • Use the .NET Assembly API’s “GENNET” tool to create Traditional Synergy “wrapper” classes that allow these various components to be accessed and manipulated from the existing Traditional Synergy application.
  • Create a “Manager” class (we’re still trying to figure out what to call this bit!) which contains the bulk of the code required to instantiate and drive the underlying .NET code.
  • Modify the existing application to present the new WPF UI instead of the existing UI, primarily by accessing functionality exposed by the “Manager” class.
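To give a feel for the shape of that last step, here is a hypothetical sketch (illustrative Python with invented names, not the actual Symphony Framework API) of a “Manager” facade: the single object the existing application talks to, which hides the model/view/viewmodel wiring behind a couple of simple calls:

```python
class LookupManager:
    """Hypothetical 'Manager': the one object the legacy application
    touches; internally it would wire up the model, view model, and
    WPF view."""
    def __init__(self, data_source):
        self._model = data_source   # Model: the rows available to search
        self._selected = None

    def show_lookup(self, search_text):
        # In the real pattern this would create the ViewModel, show the
        # WPF lookup window, and block until the user picks an item.
        # Here we just simulate the selection of the first match.
        matches = [row for row in self._model
                   if search_text.lower() in row.lower()]
        self._selected = matches[0] if matches else None
        return self._selected is not None

    def selected_value(self):
        # The legacy code reads the user's choice back via the manager.
        return self._selected

# The legacy caller's involvement is reduced to just these calls:
mgr = LookupManager(["Acme Ltd", "Apex Inc", "Zenith Co"])
if mgr.show_lookup("ape"):
    print(mgr.selected_value())  # Apex Inc
```

Because the facade exposes only "show the lookup" and "give me the result", the edits required in the existing application stay small, which is exactly the point made below.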

You might be tempted to see this last bullet point and think “there it is, modify our existing code, that’s the hard and time-consuming part”! But don’t let this thought put you off; believe it or not, the changes that typically need to be made to the existing code are relatively small and painless. This is due in no small part to the things that the Symphony Framework is doing for you!

During our visit with our customer we initially worked on what it would take to replace existing “lookup” routines with new WPF implementations. In a UI Toolkit application, a lookup routine is often accessed via a “drill method” associated with an input field, and typically uses a combination of input processing to allow the user to define search criteria, and list processing to present matching results. When the user picks an item, the associated value is returned into the input field. We managed to get this process down to a fine art, and this is where CodeGen comes in.

We were able to create CodeGen templates that allowed us to generate most of the code required to “switch out” a UI Toolkit lookup for a WPF lookup. We were able to code-generate 100% of the Model class, 100% of the View class, 100% of the ViewModel class, and 100% of the “Manager” class. All that remained was to modify the existing application to utilize the new code instead of the existing UI Toolkit UI. Figuring out how to do the first lookup probably took on the order of half a day, but with the process and CodeGen templates in place, the next four lookups took around 20 minutes each to implement. We left it at that, because we were confident that we had the process down at that point.

Then we moved on to other areas, attacking a “maintenance” type program. The process is remarkably similar, and actually not that much more complex, again because much of the base functionality required is inherited from the Symphony Framework. By the end of our engagement we pretty much had that process down as well, and again much of the required new code was being code generated, leaving only relatively minor changes to be made to the existing application.

Of course, not all aspects of an application are as simple to deal with as the scenarios I just described. Some parts of an application and its UI get pretty complex, and it isn’t always possible to code generate all of the required components, or for the Symphony Framework to provide all of the required functionality in the form of base classes. Programmers still have a role to play, and thank goodness for that! But I do believe that the tools that Richard and I have developed can play a significant role in projects of this type, and it’s not just theory: we just proved it!

Actually, that’s probably not a totally fair statement for me to make, as several other customers have already used the Symphony Framework to great effect, just as many customers already use CodeGen to generate many different types of code to address various application needs. But Richard and I don’t often get to work together on a project, and this is perhaps the first time that we have really tried to use both of these tools together and push them HARD to see what could be achieved. And I for one am confident that everyone involved, including our customer of course, was pretty impressed with the results.

By the way, if your goal is to build an all-new WPF application directly in Synergy .NET, while retaining and re-using large portions of your existing code, then the steps aren’t that different from those described above. The Models, Views, ViewModels and “Manager” classes would be essentially the same, but would be driven by a Synergy .NET WPF application rather than by a DBR application via the .NET Assembly API. We actually proved this recently while preparing a proof-of-concept demo for another customer. Having been provided with the code for a small sample application, and some sample data, we migrated the UI of the application to WPF using the steps described above. Once the application was working with a new UI, and armed with the classes that we had just created, we were able to re-purpose those classes without change in a 100% Synergy .NET WPF application. In all, the process took about four hours … but probably could have been completed faster had we not been sitting at a bar at the time! This really is pretty cool stuff!

Before I finish, I want to stress one more time that the Symphony Framework and CodeGen tools are not just about UI Toolkit applications on Windows. Symphony Framework helps you migrate to an ultra-modern WPF UI on Windows, but the starting point could just as easily be an application that runs on a cell-based platform today. And CodeGen can be, and is being, used on all systems supported by Synergy.


Release Notifications for CodeGen and Symphony Framework

By Steve Ives, Posted on July 31, 2012 at 5:19 pm

At the DevPartner conference I told people that in order to receive notifications for new releases of the open source CodeGen and Symphony Framework projects they should “Follow” the projects on CodePlex.

It turns out that “following” a project doesn’t send release notifications. To get release notifications, you must go to the “Downloads” page for each project and subscribe. If you are interested in either of these projects, I recommend that you do just that, as we’ve made some great enhancements to both recently, and there are more great things in the pipeline.

The downloads page for CodeGen is at http://codegen.codeplex.com/releases and the downloads page for Symphony Framework is at http://symphonyframework.codeplex.com/releases.

