
Synergex Blog


The changing world of agile product releases

By Roger Andrews, Posted on December 10, 2014 at 10:16 am


The world of enterprise software has changed for everyone with the introduction of device-first applications. Devices here include PCs, laptops, tablets, and smartphones. The days of operating system and developer tools releases every two years, or even annually, are gone. Competition among the Android, iOS, Windows, and Linux environments, each supported by developer tools, means that OS and tools vendors are adopting an agile approach to release cadences, with monthly updates that are more than just hotfixes and security patches. The agile vendor release process is designed to get product updates and improvements out faster, knowing that if something breaks, it can be fixed the following month, especially with developer tools.

For example, when Apple releases a new phone or tablet, it updates its development stack. This is usually a forced update, and application developers always want to support the new hardware and the current iOS release, which is automatically available to all devices. This has the knock-on effect that all layered tools (for example, the Xamarin tools) also have to update to support the new Apple release, usually on the day it ships. Then companies like Synergex have to update their tools, which are layered on top of those stacks.

The newly announced Windows 10 will have fast and slow cadences for enterprises to choose between, but Synergex will need to test against the fastest cadence to ensure compatibility. Visual Studio updates ship as monthly CTPs, with quarterly release-track updates. For non-Windows development, Xamarin provides bi-weekly updates for its whole iOS and Android stack.

So what does this mean for the Synergy customer base? Some Synergy customers do not update their Synergy versions regularly. If you are one of these customers and you move into the world of devices, you will also need to move into this agile development mindset. Just as you are forced to accept monthly .NET Framework, Java, and security updates, you will also have to accept regular Synergy updates.

With Synergy/DE 10.3.1, Synergex has taken a new approach to help customers keep up with the agile world we live in. Our Visual Studio .NET product set no longer includes the traditional Synergy runtime packages. This allows us to ship hotfixes and continual product updates matching those of the products on which we layer as frequently as bi-weekly, while keeping the traditional runtime and tools at a more stable level. We realize that, due to its nature, this agile process has the potential to introduce bugs, and sometimes there will be a code-breaking change that requires a quick fix, but we believe it’s necessary to align with other vendors in this regard. The Synergy device runtimes for iOS and Android are also NuGet packages, allowing us to update them independently of the core Synergy product as new features and support are required. Customers choose whether to take the latest runtime packages on a per-project basis. Finally, with 10.3.1, the Synergy .NET development tools can generate code that’s compatible with earlier versions (10.1.1, in the case of 10.3.1), so customers can take advantage of the latest tool and code generation improvements without having to update every customer’s Synergy/DE installation.

In conclusion, the new device-first world is changing the way we develop and ship software, and all who participate in this world will need to change with it. Synergex has been making changes to meet this challenge, and we can help you meet that challenge while still providing stability to your end users.

For more information about Synergy/DE 10.3.1, visit the Synergex web site.


New Code Exchange Items

By Steve Ives, Posted on September 23, 2014 at 7:59 pm


Just a quick post to let you know about a couple of new items that will be added to the Code Exchange in the next day or so. I’m working with one of our customers in Montreal, Canada this week. We’re in the process of moving their extensive suite of Synergy applications from a cluster of OpenVMS Integrity servers to a Windows environment. We migrated most of the code during a previous visit, and now we’re working through some of the more challenging issues, such as how to replace background processing runs and other work that gets scheduled on OpenVMS batch queues or run as detached jobs. Most of these will ultimately be implemented as either scheduled tasks or Windows services, so we started to develop some tools to help us achieve that, and we thought we’d share 🙂

The first is a new submission named ScheduledTask. It’s essentially a class that can be used to submit Traditional Synergy applications (dbr’s) for execution as Windows Scheduled Tasks. These tasks can be scheduled then kicked off manually as required, or scheduled to run every few minutes, hours, days, months, etc. You also get to control the user account that is used to schedule the tasks, as well as run the tasks. The zip file also contains two sample programs, one that can be run as a scheduled task, and another that creates a scheduled task to run the program.

The second submission is called Clipboard, and you can probably guess it contains a simple class that gives you the ability to programmatically interact with the Windows Clipboard; you can write code to put text onto the clipboard, or retrieve text from the clipboard. Again the submission includes a small sample program to get you up and running.

Hopefully someone else will find a use for some of this code.


Breaking News … a New Life for OpenVMS?

By Steve Ives, Posted on July 31, 2014 at 4:15 pm


It seems like there is new hope for those organizations still running OpenVMS, after HP recently announced a partnership with a company called VMS Software Inc. (or VSI). There is now talk of adding support for Intel’s Itanium “Poulson” chips by early 2015, as well as the upcoming “Kittson” chip. There is talk of new versions of OpenVMS, and even mention of a possible port of OpenVMS to the x86 platform.

More information here:

http://www.computerworld.com/s/article/9250087/HP_gives_OpenVMS_new_life


CodeGen Training Videos

By Steve Ives, Posted on April 28, 2014 at 7:36 pm


I finally got around to something that I have been meaning to do for a while, creating some short training videos for CodeGen. Just five videos right now, but I have a growing list of subjects for future videos.

You can view the videos on the Synergex Channel on YouTube.

Please subscribe to the YouTube channel to receive notifications when new videos are added.


The Synergy DevPartner Conference: There’s no “one-way” about it

By William Mooney, Posted on April 23, 2014 at 11:23 am


With the 2014 Synergy DevPartner Conference right around the corner, I have been doing some reflecting on how our mission statement, “Advancing Applications. Partnering for Success,” applies to the conference. On the surface, a conference seems like a one-way conversation: the conference host presents information to attendees, who (hopefully) absorb it. However, over the past 28 years (!) of hosting the DevPartner Conference (formerly SPC), I have come to really appreciate the element of partnership involved in the DevPartner Conference—a partnership that transforms a few days of knowledge transfer into enduring business success.

So what does this collaboration look like? It starts with attendees feeling engaged and challenged throughout the three busy days of sessions and tutorials. Their enthusiasm about the tools we have to offer builds over the course of the week, inspiring them to pass the information they glean up the chain of command once they return to the office. The leadership team is receptive to the new ideas, which are incorporated into the company’s short and long-term strategic plans. From there, the information turns into action, and the ideas presented at the conference reappear in the form of a modernized and more powerful application based on proven functionality. Of course, this process could take months, or even years—but with the partnership in place, these goals CAN and WILL be achieved. We’ll do our part and give it our all during those three days of presentations and demos—but it is up to attendees and their managers to meet us halfway by ensuring that the lessons learned don’t get swallowed up in the daily grind, and that the investment in education leads to innovation and increased efficiency back at the office.

I was discussing this vision with a customer’s Senior Manager who is planning on sending a contingent of software developers to this year’s conference. She made my day when she described the system her company has adopted to maximize the value of the DevPartner Conference: after returning to the office, the team presents a condensed version of what they learned to the staff back at home. Studies have shown that explanation significantly facilitates learning, so this is a great way for attendees to reinforce the skills learned at the conference—while at the same time sharing the information with the rest of the team.

Whether or not you go with this model, you have a little over 6 weeks to prepare your own post-conference plan for success. Let us know how we can partner with you to make it happen!

See you in Birmingham or Chicago.

Cheers!


Performance problem upgrading to Server 2012

By Roger Andrews, Posted on February 28, 2014 at 12:06 pm


When one of our customers recently upgraded a file server from Windows Server 2008 to Server 2012, their customers complained of significantly increased end-of-day processing times—up to three times slower with Server 2012 than the previous system.

The system used terminal services to allow remote administration and the running of tasks such as day-end processing; there were no general interactive users. Running the same software on a Windows 8 client system resulted in better performance, all other things (disk, network, CPU, etc.) being equal. A brand-new Server 2012 R2 system with just the basic GUI role performed twice as fast as the Server 2008 system. However, as roles were added, the customer noticed that the RDS role caused the slowdown.

Since Server 2008 R2 was introduced, RDS (Remote Desktop Services, formerly known as Terminal Services) has included a feature called Fairshare CPU Scheduling. With this feature, if more than one user is logged on to a system, processing time is limited for a given user based on the number of sessions and their loads (see Remote Desktop Session Host on Microsoft TechNet). This feature is enabled by default and can cause performance problems if more than one user is logged on to the system. With Server 2012, two more “fairshare” options were added: Disk Fairshare and Network Fairshare (see What’s New in Remote Desktop Services in Windows Server 2012 on TechNet). These features are enabled by default and can come into play even when only one user is logged on. These options proved to be the cause of the slowdown for our customer: they limited I/O for the single logged-on user (or scheduled task), even though the day-end processing was heavily I/O bound. We were able to remove this bottleneck by either disabling the RDS role or turning off the fairshare options.

In summary, if a system is used for file sharing services only (no interactive users), use remote administrative mode and disable the RDS role. If the RDS role must be enabled, consider turning off Disk Fairshare if the server runs disk-intensive software (such as database or I/O-intensive programs), and turn off Network Fairshare if the server has services (such as Microsoft SQL Server or xfServer) to prevent client access from being throttled. For information on turning off Disk Fairshare and Network Fairshare, see Win32 TerminalServiceSetting Class on Microsoft Developer Network (MSDN).
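One way to start is by reviewing the current settings through the WMI class mentioned above. The PowerShell below is only a starting point for inspection – the fairshare-related property names differ between Windows Server versions, so verify them against the Win32_TerminalServiceSetting documentation before changing anything:

```powershell
# Connect to the RDS configuration object and list its properties,
# including the fairshare-related settings (names vary by OS version).
$ts = Get-WmiObject -Namespace "root\CIMV2\TerminalServices" `
                    -Class Win32_TerminalServiceSetting
$ts | Format-List *
```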

More articles related to this issue:

  • Resource Sharing in Windows Remote Desktop Services
  • Roles, Role Services, and Features


Using Synergy .NET in Multi-Threaded Applications

By Steve Ives, Posted on October 10, 2013 at 12:28 pm


There are special considerations that must be taken into account when implementing non thread-aware Synergy .NET code that will execute in a multi-threaded environment. As will be explained later, one such environment is ASP.NET running under Internet Information Services (IIS).

The Microsoft .NET environment provides a mechanism, called Application Domains (often referred to as AppDomains), for isolating multiple instances of an application from one another. Essentially, an AppDomain entirely isolates an instance of a piece of executing code from all other executing instances of that code (and of any other code), so that these instances cannot interfere with one another in any way. AppDomains also ensure that the failure of code executing within one AppDomain cannot adversely affect any other executing code.

AppDomains are especially useful in situations where multiple instances of a piece of code execute within the context of a single process – for example, where different execution threads perform multiple concurrent streams of processing (perhaps on behalf of different users), all within a single host process. An example of such an environment is an ASP.NET Web application executing on an IIS Web server.

Synergy .NET provides specific support for isolating non thread-aware code in AppDomains, and in some situations it can be critical that AppDomains are used in order to have your Synergy .NET code behave as expected in several key areas. Those key areas are channels, static data, common data and global data.

If any of the above items are used in a multi-threaded environment without the use of AppDomains, then they are SHARED between all instances of the code running in the multiple threads. By using an AppDomain, code executing within a given thread can isolate itself from code running in other threads, thus returning to the normal or expected behavior in Synergy environments.

If you are implementing Synergy .NET code that does not make use of multi-threading and will not execute in a multi-threaded application host, then you don’t need to worry about any of this.

If you are implementing multi-threaded Synergy .NET code, then you have the option of using AppDomains to isolate parts of your code from other instances if you choose or need to do so.

However, if you are implementing non thread-aware Synergy .NET code that will execute in an ASP.NET / IIS environment, then you are automatically in a multi-threaded environment, and it is critical that you use AppDomains to isolate instances of your application code from each other.

The basic problem is this: ASP.NET and IIS isolate different APPLICATIONS within their own AppDomains, but within an ASP.NET application, multiple (potentially lots of) user “sessions” all execute within the SAME AppDomain. This means that by default there is no isolation of executing code between multiple ASP.NET user sessions, and with Synergy .NET that in turn means that channels and static, common, and global data are shared between those multiple user sessions. As an ASP.NET user session is often considered to correlate to a user process in a regular application, the problem becomes apparent. If one “user” opens a new channel and reads and locks a record, that same channel and record lock are shared with all other users. If one user places a certain value in a common field for use by a routine that is about to be called, that value could be changed by code running for a different user before the first user’s routine gets called.

Clearly it would be very difficult, if not impossible, to build and execute reliable code in this kind of environment. Without support for AppDomains in a multi-threaded environment, a Synergy developer would need to:

  • Always use automatic channel number selection.
  • Always close any channels before returning from the routine that opened them (no persistent open files).
  • Not use static, common, or global data unless that data contains application-wide information that does not change during execution.

While it is possible to write code that adheres to these rules, it would be inconvenient at best, and because of the way Synergy code has typically been written in the past, existing code would likely require significant reworking.

The solution to this problem is to have each user’s code isolate itself from other users’ code by loading itself into an AppDomain, and this is relatively easy to do. Specifically, in the case of an ASP.NET Web application, code can be written to hook the Session_Start event that signals the beginning of a new user session and create a new AppDomain in which to execute, and to hook the Session_End event and clean up by deleting the AppDomain. There are other possible approaches that may be more appropriate in some situations, but the principle is basically the same: have the code isolate itself in an AppDomain before any non-thread aware Synergy .NET code executes.
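In outline, the lifecycle half of that pattern might look like the sketch below in Synergy .NET. The class and method names here are illustrative only – this is not the forthcoming Synergex sample, just a hedged picture of the Session_Start / Session_End hooks described above:

```
import System

namespace WebIsolation

    ;; Sketch of per-session AppDomain management for an ASP.NET
    ;; application hosting Synergy .NET code. Call CreateSessionDomain
    ;; from Session_Start (storing the result in session state) and
    ;; UnloadSessionDomain from Session_End.
    public class SessionIsolation

        public static method CreateSessionDomain, @AppDomain
            sessionId, string
        proc
            ;; Each session gets its own AppDomain, so Synergy channels
            ;; and static/common/global data are not shared across sessions.
            mreturn AppDomain.CreateDomain("Session_" + sessionId)
        endmethod

        public static method UnloadSessionDomain, void
            domain, @AppDomain
        proc
            AppDomain.Unload(domain)
        endmethod

    endclass

endnamespace
```

The harder part – marshalling execution of the non thread-aware code into the session’s domain – is what a complete sample has to address; the snippet above only shows creating and tearing down the domains.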

By isolating non thread-aware Synergy .NET code in an AppDomain in a multi-threaded environment you have essentially the same operating environment that you would expect for any other Synergy code executing in a process, with one notable exception. That exception is that environment variables created with XCALL SETLOG are always applied at the PROCESS level. This means that Synergy .NET code executing in any multi-threaded environment should never rely on the use of XCALL SETLOG unless the value being set is applicable to code executing in all other threads. An example of this might be an environment variable that identifies the fixed path to a data file.

Synergex Professional Services Group is in the process of developing code for a sample ASP.NET Web application that will demonstrate how to use AppDomains to ensure that code executing in one ASP.NET session is isolated from other ASP.NET sessions. We will publish this code in the Synergy/DE CodeExchange soon. I will post another article on this blog once the code has been published.


Symphony Framework Basics: Control Styling

By , Posted on September 6, 2013 at 5:05 am


In my previous article (Symphony Framework Basics: Data Binding) I demonstrated how to perform simple data binding between your XAML UI controls and your Data Objects.  This article demonstrates how to build powerful styles to define and control your user interface and provide automated data binding to your Data Objects.

Before we look at styles, let’s recap how we do data binding.  Consider the following simple repository structure;

Record group_record
    GROUP_ID    ,A20   ; (1,20) group id
    DESCRIPTION ,A100  ; (21,120) description

When created as a Data Object, this structure yields two properties;

public property Group_id, a20
public property Description, a100

In the XAML code we can data bind the properties exposed by the Data Object to standard UI controls;

<TextBox Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter}}"/>
<TextBox Text="{Binding Path=Description, Converter={StaticResource alphaConverter}}"/>

There are a number of issues here, and not all of them are obvious.  Although we have performed the data binding, there is no code in the XAML to prevent the user from typing more characters than the underlying data allows.  The Group_id property, for example, only allows up to twenty characters, so we need to add code to prevent more being entered.  In the repository we’ve defined the field to contain only uppercase characters, and again the XAML is not honouring this requirement.  When a field is in error, for example a required field that is blank, the underlying Data Object exposes this information, but we are not utilising it here.  The same applies to controlling whether the field is read-only, whether entry is disabled, and so on.  All these settings and more can be configured against the field in the Synergy Repository.

Using CodeGen and the correct Symphony templates we can generate styles that define exactly how we require field entry to be controlled.

Generating the style files is very simple.  The syntax to execute CodeGen with is;

codegen -s GROUP -t Symphony_Style -n GroupMaint -ut ASSEMBLYNAME=GroupMaint -cw 16

One interesting item on the CodeGen command line is the “-cw 16”.  This simply defines the standard width as 16 pixels for each character and is used when defining the size of a control.

The generated style file contains individual styles for each field in the repository structure, as well as a style for the prompt.  Here is an example of a prompt style;

<Style x:Key="Group_Group_id_prompt" TargetType="{x:Type Label}">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type Label}">
        <Label Content="Group ID"
               IsEnabled="{Binding Path=Group_idIsEnabled}">
        </Label>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

And a field style;

<Style x:Key="Group_Group_id_style" TargetType="{x:Type symphonyControls:FieldControl}">
  <Setter Property="FocusVisualStyle" Value="{x:Null}"/>
  <Setter Property="Focusable" Value="False"></Setter>
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="{x:Type symphonyControls:FieldControl}">
        <TextBox Name="ctlGroup_Group_id"
                 Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter},
                        UpdateSourceTrigger=PropertyChanged,
                        ValidatesOnDataErrors=True}"
                 Validation.ErrorTemplate="{StaticResource validationTemplate}"
                 MaxLength="20"
                 Width="320"
                 CharacterCasing="Upper"
                 IsEnabled="{Binding Path=Group_idIsEnabled}"
                 IsReadOnly="{Binding Path=Group_idIsReadOnly}"
                 VerticalAlignment="Center"
                 HorizontalAlignment="Left"
                 ToolTip="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors), Converter={StaticResource errorConveter}}">
          <TextBox.Style>
            <Style>
              <Style.Triggers>
                <DataTrigger Binding="{Binding Path=Group_idIsFocused}" Value="true">
                  <Setter Property="FocusManager.FocusedElement"
                          Value="{Binding ElementName=ctlGroup_Group_id}"></Setter>
                </DataTrigger>
                <DataTrigger Binding="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.HasError)}" Value="True">
                  <Setter Property="TextBox.Background">
                    <Setter.Value>
                      <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
                        <LinearGradientBrush.GradientStops>
                          <GradientStop Offset="0.2" Color="WhiteSmoke" />
                          <GradientStop Offset="3" Color="Red" />
                        </LinearGradientBrush.GradientStops>
                      </LinearGradientBrush>
                    </Setter.Value>
                  </Setter>
                </DataTrigger>
              </Style.Triggers>
            </Style>
          </TextBox.Style>
        </TextBox>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

This code may look a little verbose, but it enables a number of capabilities, including;

  • Data binding of the underlying UI control to the Data Object property.
  • Control of features like field length, character casing, and read-only status.
  • Use of the tooltip to display error information when the field is in error.
  • A red control background when the field is in error.

Once you have created your styles and added them to your Visual Studio project you can then reference and use them in your UI design.  To reference the style;

<ResourceDictionary Source="pack:/GroupMaint;component/Resources/Group_style.CodeGen.xaml"/>

Each style is based on a control in the Symphony Framework called “FieldControl” which can be found in the Symphony.Conductor.Controls namespace.  You must add a reference to this namespace in your XAML code;

xmlns:symphonyControls="clr-namespace:Symphony.Conductor.Controls;assembly=SymphonyConductor"

Now you can reference the FieldControl and apply the required style to it;

<symphonyControls:FieldControl DataContext="{Binding Path=MasterData}"
                               Style="{StaticResource Group_Group_id_style}">
</symphonyControls:FieldControl>

And to add the prompt (label) style, use;

<Label Style="{StaticResource Group_Group_id_prompt}"
       DataContext="{Binding Path=MasterData}" />

Because the prompt and field styles are linked to the same property in the same Data Object, when your code disables the input control the prompt will be greyed out as well.

The code snippets here are just part of the overall solution.  To see the full details you can watch a short video at http://youtu.be/FqWpMRrSb4w.  This article covers styling of the user interface.  The next article will demonstrate using all of the different Synergy field types and utilising controls like date pickers, check boxes, etc.


Are you ready for the demise of Windows XP?

By Roger Andrews, Posted on August 28, 2013 at 10:28 am


As Microsoft has stated and many other companies have echoed (see links below), the end of life for Windows XP is April 2014. Yep, in just 7 months, Microsoft, anti-virus vendors, and many software vendors (including Synergex) will no longer provide support or security patches for IE and XP. And the end of life for Windows Server 2003 will follow soon after.

Why is this so important? And, if what you’re using isn’t broken, why fix it?

Let’s consider, for example, a doctor’s, dentist’s, or optician’s office running Windows XP and almost certainly Internet-connected – in fact, probably using an Internet-based application. All it takes is an infected web site, a Google search gone astray, or a mistyped URL, and the PC is infected – INCLUDING all of the office’s confidential medical data. Plus, most offices allow their workers to browse the Internet at some point in the day – to catch up on emails and IM, conduct searches, surf eBay, etc. If the office is running XP after April 2014, it is almost certain to be open to infection by a rootkit or other malicious software, because malware authors will take advantage of every vulnerability Microsoft no longer fixes after the end of life. Add the fact that the antivirus vendors will also reduce or eliminate support, and you have a mass bot-like infection waiting to happen. Once a system gets a rootkit, it’s nigh on impossible to remove without wiping the system clean. To further complicate things, it usually takes a boot-time scan from a different device to detect many of these infections.

Further, while Windows XP had an impressive 13-year run, it is far less secure than later operating systems, and the hardware it runs on in many cases is also at the end of its life.

If you or your customers are running Windows XP or Server 2003, it’s time to upgrade to a more modern, more secure operating system like Windows 7. At least you can rest assured that Microsoft’s monthly Patch Tuesday will provide protection with security fixes in conjunction with your anti-virus vendors to protect sensitive information and keep the business running.

https://community.mcafee.com/community/security/blog/2013/06/07/xp-end-of-life–08-april-2014

http://blogs.windows.com/windows/b/springboard/archive/2013/04/08/365-days-remaining-until-xp-end-of-support-the-countdown-begins.aspx

http://blogs.technet.com/b/mspfe/archive/2013/04/29/windows-server-2003-rapidly-approaches-end-of-life-watch-out-for-performance-bottlenecks.aspx

http://www.microsoft.com/en-us/windows/endofsupport.aspx


2013 DevPartner Conference Tutorials Now Available On-Line

By Steve Ives, Posted on June 30, 2013 at 10:24 am


We and many of our customers have just returned home from another great conference, during which we introduced another batch of Synergy/DE related developer tutorials. These tutorials have now been added to those from previous years and made available on-line. The tutorials can be downloaded via a small client application. If you already have the tutorials client installed, you will see the new content when you start the application. If not, you can download and install the tutorials client from the following URL:

http://tutorials.synergex.com/Download.aspx


We’re Ready for the 2013 DevPartner Conference … Are You?

By Steve Ives, Posted on June 5, 2013 at 9:36 pm


What you’re looking at is fifty terabytes of external USB 3.0 hard drives (for the oldies amongst us, that’s the equivalent of 10,000 RL01s), and we’ll be giving them away during the DevPartner conferences in Bristol (UK) and Providence (RI) in the next three weeks.

Of course it’s not really about the hardware; anyone can get that! It’s really about what’s ON the hardware. Each of these disks contains a Windows 8 virtual machine that includes the latest version of Synergy/DE (10.1.1a, released just today) and Microsoft Visual Studio 2012 (Update 2).

But it’s not really about the products that are installed on the virtual machines either. It’s really about what you can learn from these “freebies”, and how you can use what you have learned to the benefit of your employer.

During this year’s DevPartner conferences, the Synergex Professional Services Group will introduce seventeen all-new hands-on tutorials that will provide you with a quick start to all of the latest features of Synergy/DE. In addition, we’ll be including updated versions of three of the most popular tutorials from previous conferences.

It’s not too late! If you haven’t already signed up for the 2013 DevPartner Conference, all you have to do is visit this link:

http://conference.synergex.com/register.aspx

But talk to your boss before you visit the link, because if your company is already a member of the DevPartner program you might find that your conference registration is free!

We are all looking forward to seeing you again for another great conference.


Windows 8: If I can do it, so can you

By William Mooney, Posted on February 13, 2013 at 9:10 am


Although I’ve always considered myself to be an early adopter, I must admit I’ve been a bit skeptical about upgrading to Windows 8 for a number of reasons: its completely new look and feel, all the negative propaganda surrounding it, its missing “Start” button, our internal struggles with the new icon requirements, other people’s horror stories… just to name a few. This past weekend I was forced to face my fears head on when I offered to help my father purchase a laptop. We found ourselves at one of the big box stores, where I quickly realized that any laptop (or desktop, for that matter) we purchased would have Windows 8 installed (and I would have to pay a downgrade fee to get to my tried and true Windows 7).  When I expressed my doubts to the salesperson, he countered with surprising enthusiasm. He raved about Windows 8 and was so reassuring that I soon felt comfortable enough to make the leap, instead of going online to purchase a Windows 7 system.  It certainly got me thinking that I’ve been paying attention to only the negative stuff, and not the positive. Even more importantly—just like 64-bit systems—I know that our customers will soon be dealing with end-users just like me.

At home a few hours later with my dad’s new Windows 8 laptop, buyer’s remorse and the dreaded Windows 8 user experience were in full effect. Things that were so familiar and intuitive were gone. Armed with my trusty Windows 7 laptop and my BFF Google at my side, I slowly and painfully learned how to turn on Windows Defender, get the Internet Explorer 10 address bar to reappear, get to the traditional Windows desktop, etc., etc.

Unsure about how I would be able to support my dad on Windows 8 when I didn’t have a clue myself, I decided the next day that the sooner I upgraded my own system and got on with it, the sooner I would be able to help my dad—and hopefully our customers as well.

So my next step was to figure out what on my system was—and more importantly, was not—supported. I stumbled upon a great little tool, the Windows 8 Upgrade Assistant. You basically just run this tool to determine which programs on your computer are or are not compatible with Windows 8. I highly recommend it. When you run it, you’ll notice that Synergy/DE 10.1 is listed as compatible with Windows 8, which brings me to the real point of this post.

You probably have users/customers who will ultimately upgrade to Windows 8 (or the latest version of whatever platform you’re on), either because they drink the Kool-Aid or, frankly, because they have no other choice. You’ll have some who will be gung-ho about going to the latest version and oblivious to any reason not to. Others, like me, will be skeptical about upgrading, but it’s in their nature to go for it anyway. And you’ll have others who will just happen to buy a new system and will assume all of their software is supported. No matter what the reason, you’ll want to be prepared when your users ultimately upgrade, so make sure your applications can support these customers when they inevitably ask. First step: get your Synergy/DE version current by upgrading to version 10.1 right away. Then, do what I’m doing: vow to learn something new about your new version/platform every day. In other words, embrace the change!


HTTP API Enhancements in DBL 10.1

By Steve Ives, Posted on January 14, 2013 at 11:24 pm


In addition to introducing several completely new features, DBL 10.1 also includes enhancements to the client portion of the HTTP API. These enhancements make the API significantly easier to use and also make it possible to achieve things that were not previously possible.

Since the HTTP API was introduced in DBL 7.5, the client part of the API has consisted of two routines: HTTP_CLIENT_GET and HTTP_CLIENT_POST. As their names suggest, these routines allow you to issue GET and POST requests to an HTTP server. A GET request is a simple request in which a URI is sent to the server and a response (which may include data) comes back. A POST request is slightly different in that, in addition to the URI, additional data may also be sent to the server in the body of the HTTP request.
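The distinction shows up on the wire: a GET request ends after its headers, while a POST carries data in a body. As a quick generic illustration (written in Python rather than DBL; the helper function and host name are invented for the example and are not part of the Synergy HTTP API), the two request types might be built like this:

```python
# Minimal sketch of the wire-level difference between GET and POST.
# "example.com" and this helper are illustrative only; they are not
# part of the Synergy HTTP API.
def build_request(method, uri, body=None):
    """Build the text of a bare-bones HTTP/1.1 request."""
    lines = [f"{method} {uri} HTTP/1.1", "Host: example.com"]
    if body is not None:
        # A POST (or PUT) carries data in the request body.
        lines.append(f"Content-Length: {len(body)}")
        return "\r\n".join(lines) + "\r\n\r\n" + body
    # A GET ends after the headers; there is no body.
    return "\r\n".join(lines) + "\r\n\r\n"

get_request = build_request("GET", "/orders")
post_request = build_request("POST", "/orders", "qty=5&item=widget")
```

Real HTTP clients (including the Synergy API) handle all of this for you; the sketch just shows what travels over the connection.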

When dealing with an HTTP server it isn’t always possible to predetermine the amount of data to be sent, and it’s certainly not possible to know how much data will come back for any given request. So in order to implement the HTTP API it was necessary to have a mechanism for handling variable-length data of any size, and at that time the only solution was to use dynamic memory.

Using dynamic memory worked fine: any data to be sent to the HTTP server as part of a POST request was placed into dynamic memory and the memory handle passed to the API, and any data returned from a GET or POST request was placed into dynamic memory by the API and the handle returned to the application. Dealing with variable-length strings via dynamic memory isn’t particularly hard, but while only a single line of code is required to perform an HTTP GET or POST, several additional lines were typically needed to marshal data into and out of memory handles.

When the System.String class was introduced in DBL 9.1, so was the opportunity to simplify the use of the HTTP API, and that became a reality in DBL 10.1.

In order to maintain compatibility with existing code, the HTTP_CLIENT_GET and HTTP_CLIENT_POST routines remain unchanged, but they are joined by two new siblings named HTTP_GET and HTTP_POST. These routines essentially perform the same tasks as the originals, but they are easier to use because they accept and return string objects instead of dynamic memory handles. And because the String class has a Length property, it is no longer necessary to pass separate parameters indicating the length of the data being sent, or to determine the length of the data received. String objects are also used when passing and receiving HTTP headers.

So the new HTTP_GET and HTTP_POST routines make the HTTP API easier to use, but there is a second part to this story, so read on.

One of the primary use cases for the HTTP API is implementing code that interacts with Web Services, and in recent years a new flavor of Web Services called REST services (REST stands for Representational State Transfer) has become popular. With traditional Web Services, all requests were typically sent to the server via either an HTTP GET or POST request, but REST services typically use two additional HTTP methods: PUT and DELETE.

Many of you will be familiar with the term “CRUD,” which stands for “Create, Read, Update, and Delete.” These are, of course, four operations that commonly occur in software applications; the code we write often creates, reads, updates, or deletes something. When designing traditional Web Services we would often indicate the type of operation via a parameter to a method, or perhaps even implement a separate method for each operation. With REST-based web services, however, the type of operation is indicated by the HTTP method used: commonly POST for create, GET for read, PUT for update, and DELETE for delete.
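In other words, the operation travels in the HTTP method itself rather than in a method name or parameter. Here is a hypothetical sketch of that mapping (shown in Python for illustration; the names are invented, and the POST-for-create convention is an assumption, since some REST APIs use PUT for create at a known URI instead):

```python
# Common (though not universal) mapping of CRUD operations to REST
# HTTP methods. Some APIs use PUT for create at a known URI instead.
CRUD_TO_HTTP = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",
    "delete": "DELETE",
}

def rest_request_line(operation, uri):
    """Return the request line a REST client would send for a CRUD operation."""
    return f"{CRUD_TO_HTTP[operation]} {uri} HTTP/1.1"

line = rest_request_line("delete", "/customers/42")  # "DELETE /customers/42 HTTP/1.1"
```

The point is simply that a REST client needs all four verbs, which is why GET and POST alone were no longer enough.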

To enable DBL developers to use the HTTP API to interact with REST services, an extension to the API was required, and DBL 10.1 delivers it in the form of two more new routines capable of performing HTTP PUT and DELETE requests. As you can probably guess, their names are HTTP_PUT and HTTP_DELETE. And of course, to make these new routines easy to use, they also use string parameters wherever variable-length data is passed or received.

You can find much more information about the HTTP API in the DBL Language Reference Manual, which you can also find online at http://docs.synergyde.com. In fact, if you’re feeling really adventurous, you could try Googling something like “Synergy DBL HTTP_PUT”.


Unit Testing with Synergy .NET

By Steve Ives, Posted at 11:02 pm


One of the “sexy” buzz words (or more accurately, buzz phrases) being bandied around with increasing frequency is “unit testing”. Put simply, unit testing is the ability to implement specific tests of small “units” of an application (often down at the individual method level) and then automate those tests in a predictably repeatable way. The theory goes that if you can automate the testing of all of the individual building blocks of your application, ensuring that each component behaves as expected both when you use it as intended and when you use it in ways it was never supposed to be used, then you stand a much better chance of the application as a whole behaving as expected.
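The shape of a unit test is much the same in any language or framework. As a minimal generic sketch (shown in Python’s built-in unittest module rather than the Microsoft framework discussed below; the parse_version routine is invented purely for illustration), a test exercises a small unit both as intended and with input it isn’t supposed to receive:

```python
import unittest

# A tiny "unit" under test: the kind of small building block that
# unit tests exercise in isolation. Invented for illustration.
def parse_version(text):
    """Parse a 'major.minor' version string into a tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTests(unittest.TestCase):
    def test_expected_input(self):
        # The component behaving as expected...
        self.assertEqual(parse_version("10.1"), (10, 1))

    def test_unexpected_input(self):
        # ...and what happens when it is used in a way it shouldn't be.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

# Run the tests programmatically (a test runner or IDE would
# normally do this for you).
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ParseVersionTests))
```

Each test is tiny and fast, which is exactly what makes it practical to run hundreds of them on every build.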

There are several popular unit testing frameworks available and in common use today, many of which integrate directly with common development tools such as Microsoft Visual Studio. In fact, some versions of Visual Studio have an excellent unit testing framework built in: it’s called the Microsoft Unit Test Framework for Managed Code, and it is included in the Visual Studio Premium and Ultimate editions. I am delighted to be able to tell you that in Synergy .NET version 10.1 we have added support for unit testing Synergy applications with that framework.

I’ve always been of the opinion that unit testing is a good idea, but it was never really something that I had ever set out to actually do. But that all changed in December, when I found that I had a few spare days on my hands. I decided to give it a try.

As many of you know I develop the CodeGen tool that is used by my team, as well as by an increasing number of customers. I decided to set about writing some unit tests for some areas of the code generator.

I was surprised by how easy it was to do, and by how quickly I was able to see tangible results from relatively minimal effort; I probably spent around two days developing around 700 individual unit tests for various parts of the CodeGen environment.

Now bear in mind that when I started this effort I wasn’t aware of any bugs. I wasn’t naive enough to think that my “baby” was bug free, but I was pretty sure there weren’t many bugs in the code; I pretty much thought that everything was “hunky dory”. Boy, was I in for a surprise!

By developing these SIMPLE tests … call this routine, pass these parameters, expect this result type of stuff … I was able to identify (and fix) over 20 bugs! To be fair, most of these bugs were in pretty remote areas of the code, in places that perhaps rarely get executed. After all, there are lots of people using CodeGen every day … but a bug is a bug … the app would have fallen over for someone, somewhere, sometime, eventually. We all have those kinds of bugs … right?

Anyway, suffice it to say that I’m now a unit testing convert. So much so, in fact, that the next time I get to develop a new application, I’m pretty sure the first code I write after the specs are agreed will be the unit tests … BEFORE the actual application code!

Unit testing is a pretty big subject, and I’m really just scratching the surface at this point, so I’m not going to go into more detail just yet. For now I’m just throwing this information out there as a little “teaser” … I’ll be talking more about unit testing with Synergy .NET at the DevPartner conferences a little later in the year, and I’ll certainly write some more in-depth articles on the subject for the blog as well.


What would you say to a prospect who questions why your app is written in Synergy?

By William Mooney, Posted on September 20, 2012 at 5:38 pm


Recently the technology director for one of our top customers forwarded to me a copy of a lengthy email he had sent to the employees at his company, just raving about the new features available in the latest version of Synergy/DE. Apparently his sales director had responded to the email with great enthusiasm, but requested a condensed version that he could forward to decision makers and prospects. So my contact asked me if I could repeat something I had said during one of the evening events at the DevPartner Conference in May—something about how to respond to a prospect who questions why your app is written in Synergy. He thought what I had said represented a condensed version of his email, and was something his sales director would be able to use. At first, I tried to remember exactly what it was that I said (bear in mind, I probably had a pint of Guinness under my belt at the time), but then quickly decided that there was an easy answer to this question—Guinness or no Guinness.

So, I provided the response below.

If asked, “What would you say to a prospect who questions why your app is written in Synergy?”, I would say…

Application X [the customer’s application] is developed with Synergy DBL, which is one of the most advanced languages in existence today for developing enterprise business applications. While Synergy/DE is a modern OO development suite that rivals any popular tool set today, what separates it from the pack is its portability. When we first developed Application X 30 years ago, we could never possibly have imagined that our customers would need to run on Windows (as they did 10 years ago), or that there would be a .NET environment as we know it today. Because we use Synergy/DE, which over the years has consistently added support for the platforms we’ve needed to reach, and which currently compiles and runs on OpenVMS, all flavors of UNIX, Linux, Windows, .NET, handheld devices, and the Cloud, we at Company X can focus on functionality. There is no question that there will be new user platforms down the road, but because Application X is based on Synergy/DE, we will be in a position to leverage our current business logic without rewrites, no matter what shape a new platform happens to take. For “future proofing” an application, there’s no better place to be.

Cheers!

 

