
Synergex Blog


Performance problem upgrading to Server 2012

By Roger Andrews, Posted on February 28, 2014 at 12:06 pm

When one of our customers recently upgraded a file server from Windows Server 2008 to Server 2012, their customers complained of significantly increased end-of-day processing times—up to three times slower with Server 2012 than the previous system.

The system used terminal services to allow remote administration and the running of tasks such as day-end processing; there were no general interactive users. Running the same software on a Windows 8 client system resulted in better performance, all other things being equal (disk, network, CPU, etc.), and a brand new Server 2012 R2 system with just the basic GUI role performed twice as fast as the Server 2008 system. However, as roles were added, the customer noticed that the RDS role caused the slowdown.

Since Server 2008 R2 was introduced, RDS (Remote Desktop Services, formerly known as Terminal Services) has included a feature called Fairshare CPU Scheduling. With this feature, if more than one user is logged on to a system, processing time is limited for a given user based on the number of sessions and their loads (see Remote Desktop Session Host on Microsoft TechNet). This feature is enabled by default and can cause performance problems if more than one user is logged on to the system. With Server 2012, two more “fairshare” options were added: Disk Fairshare and Network Fairshare (see What’s New in Remote Desktop Services in Windows Server 2012 on TechNet). These features are enabled by default and can come into play even when only one user is logged on. These options proved to be the cause of the slowdown for our customer: they limited I/O for the single logged-on user (or scheduled task), and the day-end processing was heavily I/O bound. We were able to remove this bottleneck by either disabling the RDS role or turning off the fairshare options.

In summary, if a system is used for file sharing services only (no interactive users), use remote administrative mode and disable the RDS role. If the RDS role must be enabled, consider turning off Disk Fairshare if the server runs disk-intensive software (such as database or I/O-intensive programs), and turn off Network Fairshare if the server has services (such as Microsoft SQL Server or xfServer) to prevent client access from being throttled. For information on turning off Disk Fairshare and Network Fairshare, see Win32 TerminalServiceSetting Class on Microsoft Developer Network (MSDN).

More articles related to this issue:

  • Resource Sharing in Windows Remote Desktop Services
  • Roles, Role Services, and Features


Synergy and XAML in Harmony

By Richard Morris, Posted on October 17, 2013 at 2:23 am

The Symphony Framework is a set of assemblies that aid the development of Synergy .NET applications.  Together with CodeGen they provide a powerful, flexible environment within Microsoft Visual Studio for building great-looking Windows applications utilising your existing Synergy data and logic.

Windows Presentation Foundation (WPF) is the tool of choice.  The user interface is written in XAML and data bound to your Synergy data exposed through Symphony Framework Data Objects.  It all sounds simple, and it is.  The biggest problem with XAML is that the data binding is performed at runtime, using “magic” strings.  They are “magic” because at runtime they miraculously find the data source you named in the binding “string” and all is well.  However, if you mistype the string then the bindings don’t work, but your application continues to execute – no errors reported, but no data.

To alleviate this problem the Symphony Framework uses CodeGen to build Synergy Repository-based Styles.  Styles allow you to define the “style” of a control – for example the type of input control, its size, field entry limitations, drop-down list bindings, error information, etc.  Once defined, you can reference these styles in your XAML UI code and utilise them.

In Visual Studio 2010 this worked nicely: referencing your styles worked and you could apply styles to the input controls, but there was no editor assistance – you needed to remember the style names for things to build and run correctly.

And then along came Visual Studio 2012, and things really didn’t work at all.  The XAML referencing syntax was changed and broken in a big way.  Some references worked in the XAML designer but failed at runtime, and others did the reverse – meaning you couldn’t design your UI at all.  Big problems!

But now Visual Studio and the Symphony Framework are back in harmony.  The upcoming release of Visual Studio 2013 is a must for anyone doing any .NET development – it’s a triumph over VS2012!  As well as the XAML referencing being fixed, there are some cool new features that really make Symphony Framework development even easier.  One such feature is that the XAML editor is now aware of all your referenced styles, making the selection of the right style easier than learning your scales.

[XAML editor screenshots]

And using the right style ensures you have the right binding: hey-presto no more binding issues!

Synergy 10.1.1b is going to be released to coincide with the release of VS2013, so mark your diary and ensure you upgrade to the latest versions of both.

If you would like to see examples of Styles in action you can check out my video at http://youtu.be/FqWpMRrSb4w.


Using Synergy .NET in Multi-Threaded Applications

By Steve Ives, Posted on October 10, 2013 at 12:28 pm

There are special considerations that must be taken into account when implementing non-thread-aware Synergy .NET code that will execute in a multi-threaded environment. As will be explained later, one such environment is code executing under ASP.NET / Internet Information Server (IIS).

The Microsoft .NET environment provides a mechanism to allow the isolation of multiple instances of an application from one another. This mechanism is called Application Domains, often referred to as AppDomains. Essentially an AppDomain entirely isolates an instance of some piece of executing code from all other executing instances of that code, and any other code, so that these instances cannot interfere with one another in any way. AppDomains also ensure that the failure of any code executing within an AppDomain cannot adversely affect any other executing code.

AppDomains are specifically useful in situations where there may be multiple instances of a piece of code executing within the context of a single process, for example where different execution threads are used to perform multiple concurrent streams of processing (perhaps on behalf of different users) all within a single host process. An example of such an environment is ASP.NET Web applications executing on an IIS Web Server.

Synergy .NET provides specific support for isolating non thread-aware code in AppDomains, and in some situations it can be critical that AppDomains are used in order to have your Synergy .NET code behave as expected in several key areas. Those key areas are channels, static data, common data and global data.

If any of the above items are used in a multi-threaded environment without the use of AppDomains, then they are SHARED between all instances of the code running in the multiple threads. By using an AppDomain, code executing within a given thread can isolate itself from code running in other threads, thus returning to the normal or expected behavior in Synergy environments.

If you are implementing Synergy .NET code that does not make use of multi-threading, and will not execute in a multi-threaded application host, then you don’t need to worry about any of this.

If you are implementing multi-threaded Synergy .NET code, then you have the option of using AppDomains to isolate parts of your code from other instances if you choose or need to do so.

However, if you are implementing non-thread-aware Synergy .NET code that will execute in an ASP.NET / IIS environment, then you are automatically in a multi-threaded environment and it is critical that you use AppDomains to isolate instances of your application code from each other.

The basic problem is this: ASP.NET and IIS isolate different APPLICATIONS within their own AppDomains, but within an ASP.NET application, multiple (potentially lots of) user “sessions” all execute within the SAME AppDomain. This means that by default there is no isolation of executing code between multiple ASP.NET user sessions, and with Synergy .NET that in turn means that channels, and static, common and global data, are shared between those multiple user sessions. As an ASP.NET user session is often considered to correlate to a user process in a regular application, the problem becomes apparent. If one “user” opens a new channel and reads and locks a record, that same channel and record lock are shared with all other users. If one user places a certain value in a common field for use by a routine that is going to be called, that value could be changed by code running for a different user before the first user’s routine gets called.

Clearly it would be very difficult, if not impossible, to build and execute reliable code in this kind of environment. Without support for AppDomains in a multi-threaded environment a Synergy developer would need to:

  • Always use automatic channel number selection.
  • Always close any channels before returning from the routine that opened the channel (no persistent open files).
  • Not use static, common or global data unless the nature of that data is that it contains application-wide information that does not change during execution.

While it is possible to write code which adheres to these rules, it would be at the least inconvenient to do so, and because of the way that Synergy code has typically been written in the past, existing code would likely require significant reworking.

The solution to this problem is to have each “user’s” code isolate itself from other “users’” code by loading itself into an AppDomain, and this is relatively easy to do. Specifically, in the case of an ASP.NET Web application, code can be written to hook the Session_Start event that signals the beginning of a new user session and create a new AppDomain in which to execute, and to hook the Session_End event and execute code to clean up by deleting the AppDomain. There are other possible approaches that may be more appropriate, but the principle is basically the same: have the code isolate itself in an AppDomain before any non-thread-aware Synergy .NET code executes.
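To make the shape of that concrete, here is a minimal, hypothetical sketch of a Global.asax code-behind in Synergy .NET. The class name and the session key are assumptions for illustration only, and the sketch just shows the create/unload plumbing; the forthcoming CodeExchange sample will show the complete pattern, including how application code is actually executed inside the per-session domain.

import System
import System.Web

namespace SampleWebApp

    public class Global extends System.Web.HttpApplication

        ;; Create a private AppDomain when a new ASP.NET session starts and
        ;; remember it in session state (the key name is illustrative only)
        protected method Session_Start, void
            sender, @object
            e, @EventArgs
        proc
            data sessionDomain, @AppDomain
            sessionDomain = AppDomain.CreateDomain(Session.SessionID)
            Session.Add("SessionAppDomain", sessionDomain)
        endmethod

        ;; Unload the AppDomain again when the session ends
        protected method Session_End, void
            sender, @object
            e, @EventArgs
        proc
            data sessionDomain, @AppDomain
            sessionDomain = (@AppDomain)Session["SessionAppDomain"]
            if (sessionDomain != ^null)
                AppDomain.Unload(sessionDomain)
        endmethod

    endclass

endnamespace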

By isolating non thread-aware Synergy .NET code in an AppDomain in a multi-threaded environment you have essentially the same operating environment that you would expect for any other Synergy code executing in a process, with one notable exception. That exception is that environment variables created with XCALL SETLOG are always applied at the PROCESS level. This means that Synergy .NET code executing in any multi-threaded environment should never rely on the use of XCALL SETLOG unless the value being set is applicable to code executing in all other threads. An example of this might be an environment variable that identifies the fixed path to a data file.

Synergex Professional Services Group is in the process of developing code for a sample ASP.NET Web Application that will demonstrate how to use AppDomains to ensure that code executing in one ASP.NET Session is isolated from other ASP.NET Sessions. We will publish this code in the Synergy/DE CodeExchange soon. I will post another article on this BLOG once the code has been published.


Symphony Framework Basics: Control Styling

By Richard Morris, Posted on September 6, 2013 at 5:05 am

In my previous article (Symphony Framework Basics: Data Binding) I demonstrated how to perform simple data binding between your XAML UI controls and your Data Objects.  This article demonstrates how to build powerful styles to define and control your user interface and provide automated data binding to your Data Objects.

Before we look at styles, let’s recap how we do data binding.  Consider the following simple repository structure;

Record group_record
    GROUP_ID    ,A20   ; (1,20) group id
    DESCRIPTION ,A100  ; (21,120) description

When created as a Data Object this creates two properties;

public property Group_id, a20
public property Description, a100

In the XAML code we can data bind the properties exposed by the Data Object to standard UI controls;

<TextBox Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter}}"/>
<TextBox Text="{Binding Path=Description, Converter={StaticResource alphaConverter}}"/>

There are a number of issues here, and not all of them are obvious.  Although we have performed the data binding, there is no code in the XAML to prevent the user typing more characters than the underlying data allows.  The Group_id property, for example, only allows up to twenty characters, so we need to add code to prevent more being entered.  In the repository we’ve defined the field to only contain uppercase characters, and again the XAML is not honouring this requirement.  When a field is in error, for example a required field that is blank, the underlying Data Object exposes this information, but we are not utilising it here.  The same applies to controlling whether the field is read-only, whether entry is disabled, and so on.  All these settings and more can be configured against the field in the Synergy Repository.

Using CodeGen and the correct Symphony templates we can generate styles that define exactly how we require field entry to be controlled.

Generating the style files is very simple.  The syntax to execute CodeGen with is;

codegen -s GROUP -t Symphony_Style -n GroupMaint -ut ASSEMBLYNAME=GroupMaint -cw 16

One interesting item on the CodeGen command line is the “-cw 16”.  This simply defines the standard width as 16 pixels for each character and is used when defining the size of a control.

The generated style file contains individual styles for each field in the repository structure, as well as a style for the prompt.  Here is an example of a prompt style;

<Style x:Key="Group_Group_id_prompt" TargetType="{x:Type Label}">
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="{x:Type Label}">
                <Label
                    Content="Group ID"
                    IsEnabled="{Binding Path=Group_idIsEnabled}">
                </Label>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

And a field style;

<Style x:Key="Group_Group_id_style" TargetType="{x:Type symphonyControls:FieldControl}">
    <Setter Property="FocusVisualStyle" Value="{x:Null}"/>
    <Setter Property="Focusable" Value="False"></Setter>
    <Setter Property="Template">
        <Setter.Value>
            <ControlTemplate TargetType="{x:Type symphonyControls:FieldControl}">
                <TextBox Name="ctlGroup_Group_id"
                         Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter},
                                UpdateSourceTrigger=PropertyChanged,
                                ValidatesOnDataErrors=True}"
                         Validation.ErrorTemplate="{StaticResource validationTemplate}"
                         MaxLength="20"
                         Width="320"
                         CharacterCasing="Upper"
                         IsEnabled="{Binding Path=Group_idIsEnabled}"
                         IsReadOnly="{Binding Path=Group_idIsReadOnly}"
                         VerticalAlignment="Center"
                         HorizontalAlignment="Left"
                         ToolTip="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors), Converter={StaticResource errorConveter}}">
                    <TextBox.Style>
                        <Style>
                            <Style.Triggers>
                                <DataTrigger Binding="{Binding Path=Group_idIsFocused}" Value="true">
                                    <Setter Property="FocusManager.FocusedElement"
                                            Value="{Binding ElementName=ctlGroup_Group_id}"></Setter>
                                </DataTrigger>
                                <DataTrigger Binding="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.HasError)}" Value="True">
                                    <Setter Property="TextBox.Background">
                                        <Setter.Value>
                                            <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
                                                <LinearGradientBrush.GradientStops>
                                                    <GradientStop Offset="0.2" Color="WhiteSmoke" />
                                                    <GradientStop Offset="3" Color="Red" />
                                                </LinearGradientBrush.GradientStops>
                                            </LinearGradientBrush>
                                        </Setter.Value>
                                    </Setter>
                                </DataTrigger>
                            </Style.Triggers>
                        </Style>
                    </TextBox.Style>
                </TextBox>
            </ControlTemplate>
        </Setter.Value>
    </Setter>
</Style>

This code may look a little verbose but it enables a number of capabilities, including:

  • It data binds the underlying UI control to the Data Object property.
  • It controls features like field length, character casing, read-only state, etc.
  • It uses the tooltip to display any error information if the field is in error.
  • It makes the control background colour red if the field is in error.

Once you have created your styles and added them to your Visual Studio project you can then reference and use them in your UI design.  To reference the style:

<ResourceDictionary Source="pack://application:,,,/GroupMaint;component/Resources/Group_style.CodeGen.xaml"/>

Each style is based on a control in the Symphony Framework called “FieldControl”, which can be found in the Symphony.Conductor.Controls namespace.  You must add a reference to this namespace in your XAML code:

xmlns:symphonyControls="clr-namespace:Symphony.Conductor.Controls;assembly=SymphonyConductor"

Now you can reference the FieldControl and apply the required style to it:

<symphonyControls:FieldControl DataContext="{Binding Path=MasterData}"
                               Style="{StaticResource Group_Group_id_style}">
</symphonyControls:FieldControl>

And to add the prompt, or label, style use:

<Label Style="{StaticResource Group_Group_id_prompt}"
       DataContext="{Binding Path=MasterData}" />

Because the styles are linked to the same property in the same Data Object, when your code disables the input control the prompt will be greyed out as well.

The code snippets here are just part of the overall solution.  To see the full details you can watch a short video at http://youtu.be/FqWpMRrSb4w.  This article covers styling of the user interface.  The next article will demonstrate using all of the different Synergy field types and utilizing controls like date pickers, check boxes, etc.

 


Are you ready for the demise of Windows XP?

By Roger Andrews, Posted on August 28, 2013 at 10:28 am

As Microsoft has stated and many other companies have echoed (see links below), the end of life for Windows XP is April 2014. Yep, in just 7 months, Microsoft, anti-virus vendors, and many software vendors (including Synergex) will no longer provide support or security patches for IE and XP. And the end of life for Windows Server 2003 will follow soon after.

Why is this so important? And, if what you’re using isn’t broken, why fix it?

Let’s consider, for example, a doctor’s, dentist’s, or optician’s office running Windows XP and almost certainly Internet-connected – in fact, probably using an Internet-based application. All it takes is an infected web site, a Google search gone astray, or a mistyped URL, and the PC is infected – INCLUDING all of the office’s confidential medical data. Plus, most offices allow their workers to browse the Internet at some point in the day – to catch up on e-mails and IM, conduct searches, surf eBay, etc. If the office is running XP after 2014, it is almost certain that it will be open to infection by a rootkit or other malicious software, because the malware authors will take advantage of every vulnerability not fixed by Microsoft due to the end of life. Add the fact that the antivirus vendors will also reduce or eliminate support, and you have a mass Bot-like infection waiting to happen. Once a system gets a rootkit, it’s nigh on impossible to remove it without a wipe clean. To further complicate things, it usually takes a boot-time scan from a different device to detect many of these infections.

Further, while Windows XP had an impressive 13-year run, it is far less secure than later operating systems, and the hardware it runs on in many cases is also at the end of its life.

If you or your customers are running Windows XP or Server 2003, it’s time to upgrade to a more modern, more secure operating system like Windows 7. At least you can rest assured that Microsoft’s monthly Patch Tuesday will provide protection with security fixes in conjunction with your anti-virus vendors to protect sensitive information and keep the business running.

https://community.mcafee.com/community/security/blog/2013/06/07/xp-end-of-life–08-april-2014

 http://blogs.windows.com/windows/b/springboard/archive/2013/04/08/365-days-remaining-until-xp-end-of-support-the-countdown-begins.aspx

http://blogs.technet.com/b/mspfe/archive/2013/04/29/windows-server-2003-rapidly-approaches-end-of-life-watch-out-for-performance-bottlenecks.aspx

http://www.microsoft.com/en-us/windows/endofsupport.aspx

 

 


Symphony Framework Basics: Simple Data Binding.

By Richard Morris, Posted at 3:06 am

In my previous article (Symphony Framework Basics: Data Objects) I introduced you to the Symphony Data Object.  These Data Objects are at the root of a Symphony Framework development.  This article demonstrates how to data bind the exposed field properties on your Data Object to user interface controls.

Before we deep dive into data binding let’s take a minute to understand how we are going to craft our new user interface (UI).  For a Windows Presentation Foundation (WPF) desktop application we will be building our UI in XAML.  As defined by Microsoft:

“XAML is a declarative mark-up language. As applied to the .NET Framework programming model, XAML simplifies creating a UI for a .NET Framework application. You can create visible UI elements in the declarative XAML mark-up, and then separate the UI definition from the run-time logic.”

To enable the separation of logic we utilize a concept called data binding.  Data binding provides the ability to bind a UI control to a dynamic piece of data.  This data binding can only occur when you expose your individual data fields as properties. For each field in your repository structure the Data Object exposes a read-write property of the same name, starting with an uppercase letter – case is VeRy iMpOrTaNt in WPF!

Consider the following simple repository structure;

Record group_record
    GROUP_ID    ,A20   ; (1,20) group id
    DESCRIPTION ,A100  ; (21,120) description

When created as a Data Object this creates two properties;

public property Group_id, a20
public property Description, a100

The first thing to note here is that the property types are Synergy types.  The underlying data is not transformed by the code generation process or the Data Object.  The actual data type being exposed here is a Synergex.SynergyDE.AlphaDesc.

Our UI is going to be written in XAML.  Now, XAML understands all standard .NET types like String, Integer, etc., but it does not know about a Synergex.SynergyDE.AlphaDesc type, so when we data bind to this property we need to utilize a converter to convert from the base, or exposed, type to a type that can be understood by XAML.

This sounds a little complicated but it’s not, and it actually gives you complete control over how your UI controls do data binding.  Let’s look at an example.  Consider a field called “credit_exceeded” which in your Synergy program is defined as an “a1”.  The exposed Data Object property would be called “Credit_exceeded”, exposed as an a1, and the allowable values for the field are “Y” for yes and “N” for no.  Ideally “Credit_exceeded” would be exposed as a boolean, but by doing so you would have to change any existing Synergy code that wanted to use this property.  You would also have to define logic in the Data Object to perform the transformation between the Y/N values and True/False.  By using a converter when you perform the data binding you can choose how you wish to see the data.  A modern UI would most likely bind the Credit_exceeded property to the IsChecked property of a UI CheckBox control.  But some users may wish to continue to see and enter “Y” or “N”, and so you can data bind to a standard UI TextBox.  Alternatively you could expose a UI DropDown control with “Yes” and “No” options, bound to the values “Y” and “N”.  Allowing the UI designer to utilize converters in the XAML while still exposing the raw Synergy data is a very powerful capability.
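As an illustration of what such a converter might look like on the Synergy side, here is a minimal, hypothetical sketch of an IValueConverter that maps the “Y”/“N” values to a boolean suitable for a CheckBox binding. The class name and namespace are assumptions, and the Symphony Framework already ships its own converters, so treat this purely as an example of the mechanism.

import System
import System.Globalization
import System.Windows.Data

namespace ExampleApp.Converters

    ;; Hypothetical converter: "Y"/"N" alpha data in the Data Object <-> boolean in the UI
    public class YesNoConverter implements IValueConverter

        public method Convert, @object
            value, @object
            targetType, @Type
            parameter, @object
            culture, @CultureInfo
        proc
            data isYes, boolean
            isYes = (value != ^null) && value.ToString().Trim().Equals("Y")
            mreturn (object)isYes
        endmethod

        public method ConvertBack, @object
            value, @object
            targetType, @Type
            parameter, @object
            culture, @CultureInfo
        proc
            data result, string
            if ((boolean)value) then
                result = "Y"
            else
                result = "N"
            mreturn result
        endmethod

    endclass

endnamespace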

So, back to our XAML design.  Our Data Object was built from the repository structure and contains the two properties;

public property Group_id, a20
public property Description, a100

We need to be able to convert from Synergex.SynergyDE.AlphaDesc to String so that we can data bind to simple UI TextBox controls.  The Symphony Framework provides a number of default converters, which can be found in the Symphony.Conductor.Converters namespace.  In your XAML code you need to reference the converters:

<ResourceDictionary Source="pack://application:,,,/SymphonyConductor;component/Resources/Converters.xaml"/>

Once we have referenced the Symphony Framework converters we can use them to convert from our base Synergy type to any standard .NET type.

<TextBox Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter}}"/>
<TextBox Text="{Binding Path=Description, Converter={StaticResource alphaConverter}}"/>

And you have data bound the Data Object properties to the UI controls.  If the data within the Data Object changes, the UI is notified and updated.  As data is changed through the UI, the underlying Data Object is updated.

The code snippets here are just part of the overall solution.  To see the full details you can watch a short video at http://youtu.be/A0eMoLt_8iE.  This article covers the basic data binding capabilities of the Symphony Framework.  In my next article we shall demonstrate how to style your UI by generating repository-structure-based styles and utilizing them in your UI XAML code.


Symphony Framework Basics: Data Objects.

By Richard Morris, Posted on August 22, 2013 at 6:12 am

The purpose of the Symphony Framework is to enable you to expose existing Synergy data and logic in a way that can be utilised in a Windows Presentation Foundation (WPF) desktop application.  This is achieved using Symphony Data Objects.  These Data Objects are at the root of a Symphony Framework development.

Creating a Symphony Data Object is very simple, but before we talk about how, let’s look at why.  Synergy data is rather different to the data you find in other applications/databases.  All Synergy data is an alpha.  When I talk about Synergy data I mean the information we store and move around the application.  I’m not referring to individual elements of data that you can define using the “data” statement – these are different and do allow you to create a data entity that is not an alpha.  So, that’s cleared up what “data” in Synergy is….  It’s all just an alpha.  These alpha data areas are what we manage in our SDBMS data files.  Just try reading from your file into an integer: %DBL-E-ALPHAXP, Alpha expression expected!  The compiler is very clever and allows you to re-map this alpha data to anything you desire, with a few exceptions, and this is done by declaring “record” areas.  Your record area can overlay this alpha data in any way you require, and can even overlay the overlays.  This overlaying is simply remapping how we want to expose or access the individual bytes of the alpha stream of data.  It’s actually quite cool and very powerful.  Now you have your alpha data – but in your Synergy program you are accessing portions of the data as different types like decimal, integer, alpha, etc., and your programs rely on this ability.  So where do we store these overlay definitions of our alpha data?  In your Synergy Repository of course!
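By way of a small illustration (the field names here are invented, not from a real repository), an overlay record simply gives the same bytes a second layout, so the program can work with typed fields while the whole record is still just one alpha:

record customer                 ; the "typed" view of the data
    cust_id     ,d6             ; (1,6)   customer number
    cust_name   ,a30            ; (7,36)  name
    balance     ,d8.2           ; (37,44) current balance
record ,x                       ; overlay: the same 44 characters viewed as one alpha
    cust_alpha  ,a44

Reading and writing cust_alpha moves the whole record around, while the rest of the program continues to treat cust_id and balance as numerics.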

Now back to our Symphony Data Objects.  As I mentioned, creating them is a breeze.  You simply use CodeGen, the right Symphony Framework template, and your Synergy Repository structure.  The syntax is very easy:

codegen -s GROUP -t Symphony_Data -n PartMaint -prefix m

And you have your structure based Symphony Data Object.

Assuming you have referenced the required Symphony Framework assemblies in your Synergy.NET Visual Studio project your Data Object code will just build.

Understanding the Data Object is really quite simple as well.  There are a number of aspects to be aware of….

  • There is a property called SynergyRecord.  This is a property that exposes the underlying “alpha” data area.  You can use this property to populate the Data Object, for example, when you have read a record from a file.
  • Each field in your repository structure is exposed as a read-write property which allows access to the individual fields.
    • You can data-bind to these properties from your WPF user interface (XAML) code.  More about this in my next article!
    • You can also access this “field” data from your existing Synergy code.
  • Additional powerful capabilities of your Data Object include:
    • IsDataValid is a boolean value indicating if the data within the Data Object passes the basic validation rules you have defined in your repository structure.  For example, a “required” field must have a value in it, otherwise the Data Object is considered IsDataValid == false.
    • The object conforms to the property notification sub-system of the Model-View-ViewModel design pattern for WPF development.
    • Individual fields in your repository structure additionally expose IsEnabled, IsReadOnly and IsFocused properties that help you control the user interface.

You can watch a short video at http://youtu.be/DkoVIEmr3NY that walks you through the steps for creating Symphony Data Objects and building them into your Synergy .NET Visual Studio project.  In the next article I’ll demonstrate how easy it is to data bind UI controls to your Data Object properties.


2013 DevPartner Conference Tutorials Now Available On-Line

By Steve Ives, Posted on June 30, 2013 at 10:24 am

We and many of our customers have just returned home from another great conference, during which we introduced another batch of Synergy/DE-related developer tutorials. These tutorials have now been added to those from previous years and made available on-line. The tutorials can be downloaded via a small client application. If you already have the tutorials client installed, then you will see the new content when you start the application. If not, you can download and install the tutorials client from the following URL:

http://tutorials.synergex.com/Download.aspx

 


We’re Ready for the 2013 DevPartner Conference … Are You?

By Steve Ives, Posted on June 5, 2013 at 9:36 pm

[Photo: a stack of external USB hard drives]

What you’re looking at is fifty terabytes of external USB 3.0 hard drives (for the oldies amongst us, that’s the equivalent of roughly 10 million RL01s), and we’ll be giving them away during the DevPartner conferences in Bristol (UK) and Providence (RI) in the next three weeks.

Of course it’s not really about the hardware – anyone can get that! It’s really about what’s ON the hardware. Each of these disks contains a Windows 8 virtual machine that includes the latest version of Synergy/DE (10.1.1a, just released today) and Microsoft Visual Studio 2012 (Update 2).

But it’s not really about the products that are installed on the virtual machines either. It’s really about what you can learn from these “freebies”, and how you can use what you have learned to the benefit of your employer.

During this year’s DevPartner conferences the Synergex Professional Services Group will introduce seventeen all-new hands-on tutorials that will provide you with a quick start to all of the latest features of Synergy/DE. And in addition we’ll be including updated versions of three of the most popular tutorials from previous conferences.

It’s not too late! If you haven’t already signed up for the 2013 DevPartner Conference then all you have to do is visit this link:

http://conference.synergex.com/registration.aspx

But talk to your boss before you visit the link, because if your company is already a member of the DevPartner program you might find that your conference registration is free!

We are all looking forward to seeing you again for another great conference.


Paradise by the dashboard light.

By Richard Morris, Posted on March 7, 2013 at 11:08 pm

For many years we have had the ability to expose Synergy data and logic through xfServerPlus which can be consumed, processed and displayed by cool Windows programs.  But for me there has always been the “single thread” bottleneck.  I’ve wanted my Windows applications to respond when the user drags the mouse, clicks a button, sorts a grid, at the time the event happens, and not when the program has caught up doing its background actions.  Calling a remote method generally “hangs” the program until that operation has completed and this is because it’s being performed on the UI thread.  And while the UI thread is busy calling into your xfServerPlus libraries and waiting for a response it blocks any user interaction so your application becomes unresponsive.

The answer is to utilize threads.  Threads are separate processing channels that can run simultaneously.  Your application always has at least one thread – the UI thread.  But if you have a language that allows multithreading, then utilising it to perform background tasks on other threads frees your UI thread to continue being responsive to the user – its job really!
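As a rough sketch of that idea in Synergy .NET (the method names here are assumptions for illustration, not the actual dashboard code, and they would live inside the dashboard’s ViewModel class), kicking the selection off onto its own thread looks something like this:

import System.Threading

;; Start the xfServer selection on a background thread so the UI thread
;; stays free to handle mouse clicks, sorting, and so on
private method startSelection, void
proc
    data worker, @Thread
    worker = new Thread(new ThreadStart(loadOrderData))
    worker.IsBackground = true      ;; don't keep the process alive just for this thread
    worker.Start()
endmethod

;; Runs on the background thread: reads records via xfServer and adds them to
;; the bound collection, which notifies the UI through property-change events
private method loadOrderData, void
proc
    ;; ... selection / READS loop goes here ...
endmethod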

Welcome to Synergy .NET!  Utilising the Symphony Framework, I’ve written a fully functional interactive dashboard, totally in Synergy .NET:

[Screenshot: the interactive dashboard]

The user can fully define the record selection criteria and then issue the “Select” of the data.  This selection is performed on a background thread.  This means as the results are being returned from the remote Synergy DBMS file via xfServer the list is populated and the user can begin navigating the items within it.  While the list is loading, the user can select an item even if the list load is not complete.  At this point a number of background threads become active – each one loading data from different files via xfServer.  The various lists are populated “in real time” and the application remains responsive – all the lists can be “loading” at the same time.

If the user clicks another item in the list the executing threads are instructed to “stop” and then the selection process restarts based on the newly selected master record.

Using the Symphony Framework, and building my Synergy Repository-based Data Objects with CodeGen, I was able to quickly build the core components used in the dashboard, including the thread-based selection code.

All this functionality is available in the latest V10.1 release of Synergy. To find out more details and to see multithreaded processing in action book your place at the upcoming DevPartner Conference at http://conference.synergex.com.


Windows 8: If I can do it, so can you

By William Mooney, Posted on February 13, 2013 at 9:10 am

Although I’ve always considered myself to be an early adopter, I must admit I’ve been a bit skeptical about upgrading to Windows 8 for a number of reasons: its completely new look and feel, all the negative propaganda surrounding it, its missing “Start” button, our internal struggles with the new icon requirements, other people’s horror stories… just to name a few. This past weekend I was forced to face my fears head on when I offered to help my father purchase a laptop. We found ourselves at one of the big box stores, where I quickly realized that any laptop (or desktop, for that matter) we purchased would have Windows 8 installed (and I would have to pay a downgrade fee to get to my tried and true Windows 7).  When I expressed my doubts to the salesperson, he countered with surprising enthusiasm. He raved about Windows 8 and was so reassuring that I soon felt comfortable enough to make the leap, instead of going online to purchase a Windows 7 system.  It certainly got me thinking that I’ve been paying attention to only the negative stuff, and not the positive. Even more importantly—just like 64-bit systems—I know that our customers will soon be dealing with end-users just like me.

At home a few hours later with my dad’s new Windows 8 laptop, buyer’s remorse and the dreaded Windows 8 user experience was full on. Things that were so familiar and intuitive were gone. Armed with my trusty Windows 7 laptop and my BFF Google at my side, I slowly and painfully learned how to turn on Windows Defender, get the Windows Explorer 10 address bar to reappear, get to traditional Windows, etc., etc.

Unsure about how I would be able to support my dad on Windows 8 when I didn’t have a clue myself, I decided the next day that the sooner I upgraded my own system and got on with it, the sooner I would be able to help my dad—and hopefully our customers as well.

So my next step was to figure out what on my system was—and more importantly, was not—supported. I stumbled upon a great little tool, the Windows 8 Upgrade Assistant. You basically just run this tool to determine which programs on your computer are or are not compatible with Windows 8. I highly recommend it. When you run it, you’ll notice that Synergy/DE 10.1 is listed as compatible with Windows 8, which brings me to the real point of this post.

You probably have users/customers who will ultimately upgrade to Windows 8 (or the latest version of whatever platform you’re on), either because they drink the Kool-Aid or, frankly, because they have no other choice. You’ll have some who will be gung-ho about going to the latest version and oblivious of any reason not to. Others, like me, will be skeptical about upgrading, but it’s in their nature to go for it anyway. And you’ll have others who will just happen to buy a new system and will assume all of their software is supported. No matter what the reason, you’ll want to be prepared when your users ultimately upgrade, so make sure your applications can support these customers when they inevitably ask. First step: make sure to get your Synergy/DE version current by upgrading to version 10.1 right away. Then, do what I’m doing: vow to learn something new about your new version/platform every day. In other words, embrace the change!


HTTP API Enhancements in DBL 10.1

By Steve Ives, Posted on January 14, 2013 at 11:24 pm

In addition to introducing several totally new features, DBL 10.1 also includes enhancements to the client portion of the HTTP API. These enhancements make the API significantly easier to use, and also make it possible to achieve things that were not previously possible.

Since the HTTP API was introduced in DBL 7.5, the client part of the API has consisted of two routines: HTTP_CLIENT_GET and HTTP_CLIENT_POST. As suggested by their names, these routines allow you to issue GET and POST requests to an HTTP server. A GET request is a simple request to a server in which a URI is sent to the server and a response (which may include data) comes back. A POST request is slightly different in that, in addition to the URI, additional data may also be sent to the server in the body of the HTTP request.

When dealing with an HTTP server it isn’t always possible to pre-determine the amount of data to be sent to the server, and it’s certainly not possible to know how much data will come back from the server for any given request. So in order to implement the HTTP API it was necessary to have a mechanism to deal with variable length data of any size, and at that time the only solution was to use dynamic memory.

Using dynamic memory worked fine: any data to be sent to the HTTP server as part of a POST request was placed into dynamic memory and the memory handle passed to the API, and any data returned from a GET or POST request was placed into dynamic memory by the API and the handle returned to the application. Dealing with variable length strings using dynamic memory isn’t particularly hard, but the fact of the matter is that while only a single line of code is required to perform an HTTP GET or POST, typically several lines of code were required in order to marshal data into and out of memory handles.

When the System.String class was introduced in DBL 9.1, so was the opportunity to simplify the use of the HTTP API, and that became a reality in DBL 10.1.

In order to maintain compatibility with existing code the HTTP_CLIENT_GET and HTTP_CLIENT_POST routines remain unchanged, but they are joined by two new siblings named HTTP_GET and HTTP_POST. These routines are similar to the original routines, essentially performing the same task, but they are easier to use because they use string objects instead of dynamic memory. And because the string class has a length property it is no longer necessary to pass separate parameters to indicate the length of the data being sent, or to determine the length of the data that was received. String objects are also used when passing and receiving HTTP headers.
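Here is a minimal sketch of the string-based flavour, shown as a Synergy .NET console program. The URL is made up, and the parameter order shown (URL, timeout in seconds, response, error text) is illustrative only, so check the exact signature, the return value, and the optional header arguments in the Language Reference before relying on it.

import System

main
    record
        response    ,string
        errtxt      ,string
        sts         ,i4
proc
    ;; a zero return is assumed to indicate success here -- see the Language Reference
    sts = %http_get("http://example.com/api/customers/1", 30, response, errtxt)
    if (sts == 0) then
        Console.WriteLine(response)
    else
        Console.WriteLine(errtxt)
endmain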

So the new HTTP_GET and HTTP_POST routines make the HTTP API easier to use, but there is a second part to this story, so read on.

One of the primary use cases for the HTTP API is to implement code that interacts with Web Services, and in recent years a new flavor of Web Services called REST Services (REST stands for Representational State Transfer) has become popular. With traditional Web Services all requests were typically sent to the server via either an HTTP GET or POST request, but with REST Services two additional HTTP methods are typically used: the HTTP PUT and DELETE methods.

Many of you will be familiar with the term “CRUD”, which stands for “Create, Read, Update and Delete”. Of course these are four operations that commonly occur in software applications. The code that we write often creates, reads, updates or deletes something. When designing traditional Web Services we would often indicate the type of operation via a parameter to a method, or perhaps even implement a separate method for each of these operations. With REST-based web services, however, the type of operation (create, read, update or delete) is indicated by the type of HTTP request used (PUT, GET, POST or DELETE).

To enable DBL developers to use the HTTP API to interact with REST services an extension to the HTTP API was required, and DBL 10.1 delivers that enhancement in the form of another two new routines capable of performing HTTP PUT and DELETE requests. As you can probably guess the names of these two new routines are HTTP_PUT and HTTP_DELETE. And of course, in order to make these new routines easy to use, they also use string parameters where variable length data is being passed or received.
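A correspondingly hedged sketch of the new routines follows; again, the URL and JSON body are invented, and the parameter lists shown are illustrative, so confirm the exact signatures in the Language Reference.

import System

main
    record
        request     ,string
        response    ,string
        errtxt      ,string
        sts         ,i4
proc
    ;; update (PUT) a resource on a hypothetical REST service ...
    request = '{"id":1,"creditLimit":5000}'
    sts = %http_put("http://example.com/api/customers/1", 30, request, response, errtxt)

    ;; ... and then delete it
    if (sts == 0)
        sts = %http_delete("http://example.com/api/customers/1", 30, response, errtxt)

    if (sts != 0)
        Console.WriteLine(errtxt)
endmain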

You can find much more information about the HTTP API in the DBL Language Reference Manual, which of course you can also find on-line at http://docs.synergyde.com. In fact, if you’re feeling really adventurous you could try Googling something like “Synergy DBL HTTP_PUT”.


Unit Testing with Synergy .NET

By Steve Ives, Posted at 11:02 pm

One of the “sexy” buzz words, or more accurately “buzz phrases”, that is being bandied around with increased frequency is “unit testing”. Put simply, unit testing is the ability to implement specific tests of small “units” of an application (often down at the individual method level) and then automate those tests in a predictably repeatable way. The theory goes that if you are able to automate the testing of all of the individual building blocks of your application – ensuring that each of those components behaves as expected under various circumstances, testing what happens when you use those components as expected, and also when you use them in ways that they are not supposed to be used – then you stand a much better chance of the application as a whole behaving as expected.

There are several popular unit testing frameworks available and in common use today, many of which integrate directly with common development tools such as Microsoft Visual Studio. In fact some versions of Visual Studio have an excellent unit testing framework built in; it’s called the Microsoft Unit Test Framework for Managed Code and it is included in the Visual Studio Premium and Ultimate editions. I am delighted to be able to tell you that in Synergy .NET version 10.1 we have added support for unit testing Synergy applications with that framework.
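To give a feel for what this looks like, here is a small, self-contained Synergy .NET test class using that framework. The {TestClass} and {TestMethod} attributes and the Assert class come from Microsoft.VisualStudio.TestTools.UnitTesting; the class name and the tests themselves are deliberately trivial examples rather than anything from CodeGen.

import Microsoft.VisualStudio.TestTools.UnitTesting

namespace ExampleTests

    {TestClass}
    public class StringHandlingTests

        {TestMethod}
        public method UppercasingWorks, void
        proc
            data expected, string, "CUSTOMER_NAME"
            data actual,   string, "customer_name"
            Assert.AreEqual(expected, actual.ToUpper())
        endmethod

        {TestMethod}
        public method TrimRemovesSurroundingSpaces, void
        proc
            data padded, string, "  GROUP  "
            Assert.IsTrue(padded.Trim().Equals("GROUP"))
        endmethod

    endclass

endnamespace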

I’ve always been of the opinion that unit testing is a good idea, but it was never really something that I had ever set out to actually do. But that all changed in December, when I found that I had a few spare days on my hands. I decided to give it a try.

As many of you know I develop the CodeGen tool that is used by my team, as well as by an increasing number of customers. I decided to set about writing some unit tests for some areas of the code generator.

I was surprised by how easy it was to do, and by how quickly I was able to start to see some tangible results from the relatively minimal effort; I probably spent around two days developing around 700 individual unit tests for various parts of the CodeGen environment.

Now bear in mind that when I started this effort I wasn’t aware of any bugs. I wasn’t naive enough to think that my “baby” was bug free, but I was pretty sure there weren’t many bugs in the code; I pretty much thought that everything was “hunky dory”. Boy was I in for a surprise!

By developing these SIMPLE tests … call this routine, pass these parameters, expect to get this result type of stuff … I was able to identify (and fix) over 20 bugs! Now to be fair most of these bugs were in pretty remote areas of the code, in places that perhaps rarely get executed. After all there are lots of people using CodeGen every day … but a bug is a bug … the app would have fallen over for someone, somewhere, sometime, eventually. We all have those kind of bugs … right?

Anyway, suffice it to say that I’m now a unit testing convert. So much so in fact that I think that the next time I get to develop a new application I’m pretty sure that the first code that I’ll write after the specs are agreed will be the unit tests … BEFORE the actual application code is written!

Unit testing is a pretty big subject, and I’m really just scratching the surface at this point, so I’m not going to go into more detail just yet. So for now I’m just throwing this information out there as a little “teaser” … I’ll be talking more about unit testing with Synergy .NET at the DevPartner conferences a little later in the year, and I’ll certainly write some more in-depth articles on the subject for the BLOG also.


The changing face of data.

By Richard Morris, Posted at 3:02 am

With the eagerly anticipated release of Synergy/DE version 10 we now have the ability to track and manage changes to Synergy DBMS data files.  This new feature, Change Tracking, gives you the ability to track and monitor records within a file that have been modified, added or deleted.

Using the snapshot facility you can start a new snapshot within a file and begin your application’s normal processing.  Any file updates are recorded against the current snapshot.  There can only ever be one active snapshot within a file, so as soon as you start a new snapshot, all previous ones become an historical record of the changes made to the file.  In code you can access any existing snapshot within a file to trace the activity using the new Select class.

A recent development for a customer required code changes to a number of programs.  All the programs together defined a process, and it was imperative that my changes didn’t break that process.  I wanted to be able to test the changes to the data after running each program and not just when I’d run all the programs for the process.  With Change Tracking this was a very easy task.  I simply created a new snapshot across all my files before running each program.  Using the Symphony Framework and CodeGen I created a simple program that displayed the records within a particular snapshot, so I could easily see what changes the program had made.  If a program didn’t do what I had expected (I intentionally coded in a bug just to check this!) I could simply roll back the current changes, fix the program and continue.  I no longer have to restore the data and start my testing from the beginning.

You can see just how easy it is to utilise Change Tracking by watching a short video I’ve uploaded to YouTube.  Enjoy!

There are many applications for Change Tracking, and at this year’s DevPartner conference I’ll be exploring this new feature in detail – including having attendees help me demonstrate many of the capabilities!


What would you say to a prospect who questions why your app is written in Synergy?

By William Mooney, Posted on September 20, 2012 at 5:38 pm

Recently the technology director for one of our top customers forwarded to me a copy of a lengthy email he had sent to the employees at his company, just raving about the new features available in the latest version of Synergy/DE. Apparently his sales director had responded to the email with great enthusiasm, but requested a condensed version of the email that he could forward out to decision makers and prospects. So my contact, the development manager, asked me if I could repeat something I had said during one of the evening events at the DevPartner Conference in May—something about how to respond to a prospect who questions why your app is written in Synergy. He thought what I had said represented a condensed version of his email, and was something that his sales director would be able to use. At first, I tried to remember exactly what it was that I said (bear in mind, I probably had a pint of Guinness under my belt at the time) but then quickly decided that there was an easy answer to this question—Guinness or no Guinness.

So, I provided the response below.

If asked, “What would you say to a prospect who questions why your app is written in Synergy?”, I would say…

Application X [the customer’s application] is developed with Synergy DBL, which is one of the most advanced languages in existence today for developing enterprise business applications. While Synergy/DE is a modern OO development suite that rivals any popular tool set today, what separates it from the pack is its portability. When we first developed Application X 30 years ago, we could never have possibly imagined that our customers would need to run on Windows ten years ago or that there would be a .NET environment as we know it today. Because we use Synergy/DE, which over the years has consistently added support for the platforms we’ve needed to get to, and which currently compiles and runs on OpenVMS, all flavors of UNIX, Linux, Windows, .NET, handheld devices, and the Cloud, we at Company X can focus on functionality. There is no question there will be new user platforms down the road, but because Application X is based on Synergy/DE, we will be in a position to leverage our current business logic without the need for rewrites, no matter what shape a new platform happens to take. For “future proofing” an application, there’s no better place to be.

Cheers!

 

