
Synergex Blog

Performance troubleshooting

By Phil Bratt, Posted on May 14, 2018 at 10:22 am

In the 1984 movie classic Ghostbusters, we are introduced to Bill Murray’s character Dr. Peter Venkman, a professor of paranormal studies, testing subjects for the gift of clairvoyance—the ability to gain information about an object, person, location, or event through extrasensory perception. While it is clear Dr. Venkman does not take such things very seriously, we can see the advantage of such an ability, particularly for developers.

Stop me if you’ve heard this before: “We upgraded X and now Y is happening.” In this case, Y is usually associated with a negative behavior like slow performance, a slew of errors, or a bad user experience. Events like these may induce weariness, nausea, dry mouth, and other various side effects that are usually listed in American pharmaceutical commercials. In summary, they’re unpleasant. If only there were some means to predict these events and avoid them in the future…

Unfortunately, most developers are not gifted with the power of clairvoyance to anticipate problems with upgrades before they happen. But maybe instead of a crystal ball, there are tools that can help us avoid upgrade failure. Let’s look at some things that can help anticipate and/or prevent such issues from happening.

A relatively recent addition to the Synergy documentation is the Configuring for performance and resiliency topic in the Installation and Configuration Guide. This topic discusses things that one should take into consideration when running Synergy on any system, and it’s based on years of experience from the Synergex Development staff and the results of their testing. If you haven’t read this section yet, I highly recommend doing so. If you’ve already read it, I recommend a quick refresher the next time you’re looking at major system or software changes.

In Support, we often see issues when developers virtualize systems that run Synergy or when data is migrated and then accessed across a network rather than being stored locally. Both scenarios are discussed in this topic. And as part of Synergy’s web-based documentation set, it’s updated regularly with the latest information. Make sure you take a look before any major upgrade in case there are changes.

Other useful tools for avoiding problems are the Synergex Blog, the Synergy migration guides, KnowledgeBase articles like Guideline for debugging performance issues, and the release notes that come with each version of Synergy. Remember that even if you are just changing the operating system and/or hardware and not your version of Synergy, you should re-review these materials. Some of the considerations they outline may now be relevant, even if they didn’t affect you previously. Also, when testing, remember to account for load and for processes that run over an extended period. We commonly see pitfalls when developers neglect these two factors in testing.

Taking Action

Now let’s say that despite your excellent planning, you do see a performance issue. What can you do? Here are some steps I’ve found helpful that might get overlooked. Most have to do with simply eliminating factors that affect performance.

  • Establish a baseline

If you’re going to diagnose a problem in performance, the first thing to do is isolate code or create a piece of code that demonstrates the problem. Make this your baseline test for all of the various configurations you’re going to test. This will make your tests consistent as well as eliminate code changes as a factor.

  • Use a metric

Establish a program you’re going to use to measure the difference in performance. If you’re using a traditional Synergy program as your baseline, you can use the Synergy DBL Profiler, which will count CPU time for you. Just make sure you pick the same metric for your testing—CPU time is not the same as real time. This step will enable you to get measurable results to test what is actually making a difference.
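The distinction between the two metrics is easy to demonstrate in any language. Here is a quick, language-neutral illustration using Python’s timers (the Synergy DBL Profiler reports CPU time in the same spirit as `process_time` below):

```python
import time

# A task that mostly waits (like I/O) costs real time but little CPU time.
wall_start = time.perf_counter()   # wall-clock ("real") time
cpu_start = time.process_time()    # CPU time consumed by this process

time.sleep(0.5)                           # waiting: real time only
total = sum(i * i for i in range(10**5))  # computing: both metrics

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# wall_elapsed includes the 0.5s sleep; cpu_elapsed does not. Comparing a
# CPU-time figure from one run against a real-time figure from another
# would make the results meaningless.
print(f"real: {wall_elapsed:.2f}s  cpu: {cpu_elapsed:.2f}s")
```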

  • One by one, test all the things

I’ve found that the easiest way to plan and visualize testing is to make a tree. Each layer is one aspect you’re testing, branching further with each additional aspect. For example, I had a situation where a production system was moved to a new Synergy version, a new operating system, and new hardware, and the OS was virtualized, all in one step. We picked one thing to change (the virtualization of the OS) and tested it.

  • Virtualized
  • Non-Virtualized


By doing this, we established that virtualization was a factor, because a virtualized environment was slower than a non-virtualized one. We then compared those to the old and new Windows versions, but continued with virtualized and non-virtualized environments using the same virtualization software.

  • Windows 8
      • Virtualized
      • Non-Virtualized
  • Windows 10
      • Virtualized
      • Non-Virtualized

(Two of these four configurations were already covered by the previous table.)


On average, this produced the same result. (The test was I/O-bound, so we averaged 10–20 runs, depending on how volatile the results were.) Next, we compared the Synergy 10 runtime with the Synergy 9 runtime.

  • Windows 8
      • Virtualized: Syn 9, Syn 10
      • Non-Virtualized: Syn 9, Syn 10
  • Windows 10
      • Virtualized: Syn 9, Syn 10
      • Non-Virtualized: Syn 9, Syn 10

(Four of these eight configurations were already covered by the previous table.)


The tree continued growing until all of the factors were considered and tested.
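Scripting the repeated measurements at each node of the tree keeps the comparisons honest. A minimal sketch (the run counts and timings here are illustrative, not the actual test data):

```python
import statistics

def summarize(timings):
    """Reduce a list of per-run timings (in seconds) to a mean and spread,
    so volatile I/O-bound results can be compared fairly."""
    return {
        "runs": len(timings),
        "mean": statistics.mean(timings),
        "stdev": statistics.stdev(timings) if len(timings) > 1 else 0.0,
    }

# Hypothetical timings for two configurations of the same baseline test.
results = {
    "virtualized": [12.1, 11.8, 12.6, 12.3, 11.9],
    "non-virtualized": [8.2, 8.4, 8.1, 8.3, 8.2],
}

for name, runs in results.items():
    s = summarize(runs)
    print(f"{name}: mean {s['mean']:.2f}s over {s['runs']} runs "
          f"(stdev {s['stdev']:.2f})")
```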

Closing Thoughts

It can be tedious to test one change at a time, but without that kind of granularity, you can’t establish which change affected performance and by how much. In the example I mentioned above, we established that virtualizing the hardware was causing a problem because of the way the virtual machine software emulated separate cores. We never would have come to such a conclusion without carefully eliminating the many different changes one at a time.

After you’re able to establish exactly which changes caused the performance issue(s) and by how much, you can work on a fix or provide a solid case to whichever software support representative you need to contact to get a fix.

You might know most of this already. You might even know of some methods, tips, etc., for performance issues that I didn’t discuss. Maybe you are clairvoyant and you already knew the contents of this post before I did. Either way, I hope you find this information helpful when you look at performance in the future, in both preventative measures and problem diagnosis.

Big Code

By Richard Morris, Posted on April 13, 2018 at 7:13 am

If you have large (and I mean LARGE) blocks of code in single source files – and by large I mean 20k lines plus – then you may be hitting the compiler’s “SEGBIG” error: “Segment too big”. This error occurs because a code segment is simply too big for the compiler to handle, and it is usually the result of many man-years of development on a single routine that has just grown over time.

If you encounter SEGBIG issues, as a customer I recently worked with did, this quick blog post will give you some practical ideas for managing the issue and modifying the code to allow for future development and expansion, without having to rewrite the world.

First off, it’s not the physical number of lines of code in the source file that’s the issue; it’s the lines of code and data definitions within each routine block: subroutine or function. Developers have encountered this problem for many years, and the resolution has traditionally been to chop out a section of code, make it into a subroutine or function, and somehow pass all the appropriate data to it – usually via large numbers of arguments or common/global data blocks.

The “today” way is not too dissimilar but is a little more refined: turn the code block into a class. The first major advantage is class-based data. This removes the need to create subroutines or functions that accept large numbers of arguments, or to create large common or global data blocks. As an example:

subroutine BigRoutine

.include ''

    record localData
        localRoutineData      ,a10
    endrecord

proc
    call doSomeLogic
    call doOtherLogic
    xreturn

doSomeLogic,
    ; ...
    return

doOtherLogic,
    ; ...
    return

end

Obviously this code will not give us a SEGBIG issue, but it’s an example of the structure of the code. The routine has a common data include and private data, and in the routine body we make multiple local label calls. When too much data and too many lines of code have been added, we will encounter a SEGBIG error.

So to address this, in the same source file, we can create a class with class-level data (the routine-level data) and methods for the local call labels. For example:

namespace CompanyName

    public class BigRoutineClass

        private record localData
            localRoutineData      ,a10
        endrecord

        public method Execute, void
        proc
            doSomeLogic()
            doOtherLogic()
            mreturn
        endmethod

        method doSomeLogic, void
            .include ''
        proc
            ; ...
            mreturn
        endmethod

        method doOtherLogic, void
            .include ''
        proc
            ; ...
            mreturn
        endmethod

    endclass

endnamespace

In this code, the Execute method becomes the entry point. All the existing code that made the label calls is moved into this method, and the calls are changed to method invocations;



Then we can change the existing BigRoutine code;

subroutine BigRoutine

    record
        routineInstance       ,@CompanyName.BigRoutineClass
    endrecord

proc
    routineInstance = new BigRoutineClass()
    routineInstance.Execute()
    xreturn
end

Although the code changes I’ve described here sound monumental, if you use Visual Studio to develop your Traditional Synergy code, the process is actually quite simple. Once you have created the scaffolding routine and defined the base class with class-level data (which really is a case of cutting and pasting the data definition code), there are a few simple regex commands that will basically do the work for us.

To change all the call references to class method invocations you can use:

Find: ([\t ]+)(call )([\w\d]+)

Replace: $1$3()


To change the actual labels into class methods, simply use the following regex:

Find: ^([\t ]+)([A-Za-z0-9_]+),

Replace: $1endmethod\n$1method $2, void\n$1proc


And to change the return statements to method returns, use:

Find: \breturn

Replace: mreturn
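Outside Visual Studio, the same three passes can be scripted with any regex engine. A sketch using Python’s `re` module, applied to a made-up code fragment (note that Python’s replacement syntax uses `\1` where the Visual Studio dialog uses `$1`):

```python
import re

# A made-up fragment: one label call, one label definition, two returns.
src = ("\tcall doSomeLogic\n"
       "\treturn\n"
       "\tdoSomeLogic,\n"
       "\t; logic here\n"
       "\treturn\n")

# Pass 1: "call label" becomes a method invocation "label()".
src = re.sub(r"([\t ]+)(call )([\w\d]+)", r"\1\3()", src)

# Pass 2: a label definition "label," becomes an endmethod/method/proc block.
src = re.sub(r"^([\t ]+)([A-Za-z0-9_]+),",
             r"\1endmethod\n\1method \2, void\n\1proc",
             src, flags=re.M)

# Pass 3: plain "return" statements become "mreturn" ("\b" ensures that
# words like "xreturn" are left alone).
src = re.sub(r"\breturn", "mreturn", src)

print(src)
```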


These simple steps will allow you to take your large code routines and turn them into manageable classes that can be extended as required.

If you have any questions or would like assistance in addressing your SEGBIG issues, please let me know.

Synergy/DE Documentation Reimagined

By Matt Linder, Posted on April 9, 2018 at 11:01 am

If you’re a Porsche enthusiast, you probably know about the Nürburgring record set a few months back by a 911 GT2 RS, the latest iteration of the Porsche 911 (see the Wikipedia article). Like many, I find it interesting that the best* production sports car in the world isn’t a new design, but the result of continuous improvement and development since its introduction over 50 years ago. One company, Singer Vehicle Design, takes old 911s and resurrects them as carbon fiber kinetic sculptures in the aesthetic of the older 911s, but with performance that matches some of the fastest new 911s from the factory. They describe these as “Reimagined,” and you can see a video about a Singer 911 or visit the Singer website for more information.

Here in the Documentation department at Synergex, we’ve been doing some reimagining of our own. The Synergy/DE documentation has been continually improved over the years, but since 10.3.3c, we’ve published the Synergy/DE documentation with a new format, a new look, and a new set of underlying technologies and practices. The goal: documentation that is quickly updated to reflect product changes and user input, that is increasingly optimized for online viewing, and that is increasingly integrated with other Synergy/DE content. (And soon it will be better integrated with Visual Studio as well; see a recent Ideas post for details.) You can access the doc just about anywhere (even without internet access), it offers better viewing on a range of screen sizes, and it’s poised for further improvements. If you haven’t seen our new “reimagined” doc, check it out at

Here are some of the highlights:

  • A better, more responsive UI for an improved experience when viewing documentation from a desktop system, a laptop, or a tablet.
  • Technologies that facilitate more frequent updates and allow us to increasingly optimize content for online use.
  • Improved navigation features. The Contents tab and search field adapt to small screens, the UI includes buttons that enable you to browse through topics, and a Synergy Errors tab makes it easy to locate error documentation.
  • Quick access to URLs for subheadings. To get the URL for a topic or subheading (so you can share it or save it for later), just right-click the chain-link icon next to a heading and copy the link address.
  • The ability to print the current topic (without printing the Contents tab, search field, etc.) and to remove highlighting that’s added to topics when you use the search feature.
  • Local (offline) access. If you’re going somewhere where internet access is limited, download and install the Local Synergy/DE Product Documentation, which is available on the Windows downloads page for 10.3.3.

See the Quick Tips YouTube video for a brief visual tour of the documentation, and let us know what you think. In the footer for every documentation topic, there is a “Comment on this page” link you can use to send us your input. We look forward to hearing from you!

*Just an opinion!

A Winning Formula

By Richard Morris, Posted on February 15, 2018 at 3:28 am

For a recent project I’ve worked with a customer who wished to provide their users with an engaging desktop application for managing product formulations. They had a Synergy UI Toolkit version, as well as elements of the required application in a third-party system, but neither met the needs of the users. After a review and discussions about their requirements, we agreed on a Synergy .NET Windows Presentation Foundation application using Infragistics tooling for the user experience.

The basic requirements of the application were to allow the creation and maintenance of formulations. A formulation contains the components required to make a finished product; for this customer, the final product is an aerosol.

The basic interface is built using Infragistics controls to handle navigation (the Ribbon menu control), listing and selection of data (the powerful DataGrid), hierarchical representation of the formulation components (the TreeView), and management of finished product details (the Property Grid);

Of course, using the Infragistics DockManager allows the user to drag and reposition all the available windows to their liking.

There are powerful searching facilities in the form of QBE (Query By Example) controls. These allow the user to provide snippets of information, and the application will query the Synergy DBMS database using Symphony Harmony and the Synergex.SynergyDE.Select class;

The top line of the QBE controls allows the user to enter data in the columns they wish to search on, so they select only the data they require rather than filtering through a list of thousands of formulations.

Because the application is written in Synergy, the existing printing capabilities from the original UI Toolkit application have been retained without change;

The whole application is written in Synergy .NET and utilises the Symphony Framework for controlling the data access and presentation.  If you would like more details, or would like to know how you can build modern applications with Synergy .NET please drop me an email.

CodeGen 5.2.3 Released

By Steve Ives, Posted on December 1, 2017 at 11:18 pm

We are pleased to announce the release of a new version of CodeGen. The new release includes new features, addresses some issues found with previous releases, and also paves the way for once again being able to use CodeGen on non-Windows platforms through experimental support for the .NET Core environment.

As always, you can download the latest version of CodeGen from here.

CodeGen Release Notes

  • We added a new experimental utility to the distribution. The Code Converter utility can be used to automate bulk searches within, and bulk edits to, an application’s code. This utility is in a usable form but is still a work in progress and is likely to undergo substantial changes as it evolves.
  • We added two new utility routines (IsDate.dbl and IsTime.dbl) that are referenced by some of the supplied sample template files.
  • We corrected a regression that was introduced in the previous release which caused the field loop expansion token <FIELD_SQL_ALTNAME> not to default to using the actual field name if no alternate name was present.
  • We performed an extensive code review and cleanup, updating the code in several areas to take advantage of new features available in the Synergy compiler, and also improving efficiency.
  • We fixed an issue that was causing the CreateFile utility’s -r (replace file) option to fail: an existing file would not be replaced even if the -r option was specified.
  • We fixed an issue in the CreateFile utility that would result in an unhandled exception in the event that invalid key information was passed to XCALL ISAMC.
  • We made some minor code changes to allow CodeGen to be built in a .NET Core environment and we hope to be able to leverage .NET Core to once again support the use of CodeGen on non-Windows systems (starting with Linux) in the near future.
  • This version of CodeGen was built with Synergy/DE 10.3.3d and requires a minimum Synergy version of 10.1.1 to operate.

Symphony Framework Components

  • We no longer ship the Symphony Framework sample templates with CodeGen. You can obtain the latest Symphony Framework templates from the Symphony Framework web site (
  • There were no Symphony Orchestrator changes in this release.
  • There were no Symphony Framework CodeGen Extensions changes in this release.

CodeGen 5.2.2 Released

By Steve Ives, Posted on October 25, 2017 at 3:00 pm

We are delighted to announce the availability of a new CodeGen release that includes the following enhancements and changes:

  • We added a new field loop expansion token <FIELD_FORMATSTRING> which can be used to access a field’s format string value.
  • We added a new command-line option -utpp which instructs CodeGen to treat user-defined tokens as pre-processor tokens. This means that user-defined tokens are expanded much earlier during the initial tokenization phase, which in turn means that other expansion tokens may be embedded within the values of user-defined tokens.
  • We removed the RpsBrowser utility from the distribution; it was an experimental project that was never completed.

This version of CodeGen was built with Synergy/DE 10.3.3d and requires a minimum Synergy version of 10.1.1 in order to operate.

Developing in Visual Studio

By Steve Ives, Posted on June 30, 2017 at 7:56 pm

Most Synergy developers would love to use the very latest and greatest development tools to develop and maintain their Synergy applications, but how do you get started? At the recent DevPartner conference in Atlanta product manager Marty Lewis not only discussed the concepts of how to get started, but actually demonstrated the entire process with a real Synergy application. Check out his presentation entitled Developing Synergy Code in Visual Studio:

By the way, this video is just one of many from the 2017 DevPartner Conference.

CodeGen 5.1.9 Released

By Steve Ives, Posted on May 12, 2017 at 8:45 am

I am pleased to announce that we have just released a new version of CodeGen (5.1.9) that contains some new features that were requested by customers. The changes in this release are as follows:

  • We added two new structure expansion tokens, <FILE_ODBC_NAME> and <FILE_RPS_NAME>, which expand to the repository ODBC table name or structure name of the first file definition that is assigned to the structure being processed.
  • We made a slight change to the way that the multiple structures command line option (-ms) is processed, allowing it to be used when only one repository structure is specified. This allows for templates that use the <STRUCTURE_LOOP> construct to be used when only one structure is being processed.
  • We also fixed an issue that was causing the <FIELD_SPEC> token to produce incorrect values for auto-sequence and auto-timestamp fields. Previously the value 8 would be inserted, now the correct value i8 is inserted.

This version of CodeGen was built with Synergy/DE 10.3.3c and requires a minimum Synergy runtime version of 10.1.1. You can download the new version directly from the CodeGen Github Repository.

CodeGen 5.1.7 Released

By Steve Ives, Posted on February 7, 2017 at 10:25 am

We are pleased to announce that Professional Services has just released CodeGen 5.1.7. The main feature of the release is the addition of experimental support for generating code for the MySQL and PostgreSQL relational databases. Developers can use a new command line option -database to specify their database of choice. This causes the SQL-compatible data types that are injected by the field loop expansion token <FIELD_SQLTYPE> to be customized based on the chosen database. The default database continues to be Microsoft SQL Server.

Before we consider support for these new databases to be final we would appreciate any feedback from developers working with MySQL or PostgreSQL to confirm whether we have chosen appropriate data type mappings. Additional information can be found in the CodeGen documentation.


CodeGen 5.1.4 Released

By Steve Ives, Posted on July 29, 2016 at 10:42 am

We are pleased to announce that we have just released CodeGen V5.1.4. The main change in this version is an alteration to the way that CodeGen maps Synergy time fields, i.e. TM4 (HHMM) and TM6 (HHMMSS) fields, to corresponding SQL data types via the <FIELD_SQLTYPE> field loop expansion token. Previously these fields would be mapped to DECIMAL(4) and DECIMAL(6) fields, resulting in time fields being exposed as simple integer values in an underlying database. With this change it is now possible to correctly export time data to relational databases.
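To see why the old mapping was a problem: a TM6 value is just a decimal integer, so a time such as 13:45:12 landed in the database as the number 134512. A hypothetical sketch of the HHMMSS-to-time conversion (the real mapping happens inside CodeGen’s SQL type generation, not in user code):

```python
def tm6_to_time(value):
    """Convert a Synergy TM6 (HHMMSS) decimal value to an ISO 8601
    time string, as a SQL TIME column would represent it."""
    hh, rest = divmod(value, 10000)
    mm, ss = divmod(rest, 100)
    if not (hh < 24 and mm < 60 and ss < 60):
        raise ValueError(f"not a valid HHMMSS value: {value}")
    return f"{hh:02d}:{mm:02d}:{ss:02d}"

def tm4_to_time(value):
    """Convert a TM4 (HHMM) value by treating it as HHMMSS with SS=0."""
    return tm6_to_time(value * 100)

print(tm6_to_time(134512))  # 13:45:12
print(tm4_to_time(907))     # 09:07:00
```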

We also made a small change to the CodeGen installation so that the changes that it makes to the system PATH environment variable occur immediately after the installation completes, meaning that it is no longer necessary to reboot the system after installing CodeGen on a system for the first time.

This version of CodeGen is built with Synergy/DE 10.3.3a and requires a minimum Synergy runtime version of 10.1.1.

Merging Data and Forms with the PSG PDF API

By Steve Ives, Posted on July 6, 2016 at 9:53 pm

When I introduced the PSG PDF API during the recent DevPartner Conference in Washington DC I received several questions about whether it was possible to define the layout of something like a standard form using one PDF file, and then simply merge in data in order to create another PDF file. I also received some suggestions about how this might be done, and I am pleased to report that one of those suggestions panned out into a workable solution, at least on the Windows platform.

The solution involves the use of a third-party product named PDFtk Pro. The bad news is that this one isn’t open source and neither is it free. But the good news is it only costs US$ 3.99, which I figured wouldn’t be a problem if you need the functionality that it provides.

Once you have PDFtk Pro installed and in your PATH you can then call the new SetBackgroundFile method on your PdfFile object, specifying the name of the existing PDF file to use as the page background for the pages in the PDF file that you are currently creating. All that actually happens is when you subsequently save your PDF file, by calling one of the Print, Preview or Save methods, the code executes a PDFtk Pro command that merges your PDF file with the background file that you specified earlier. Here’s an example of what the code looks like:

;;Create an instance of the PdfFile class
pdf = new PdfFile()

;;Name the other PDF file that defines page background content
if (!pdf.SetBackgroundFile("FORMS:DeliveryTicketForm.pdf",errorMessage))
    throw new Exception(errorMessage)

;;Other code to define the content of the PDF file

;;Show the results
pdf.Preview()
There are several possible benefits of using this approach, not least of which is the potential for a significant reduction in processing overhead when creating complex forms. Another tangible benefit will be the ability to create background forms and other documents using any Windows application that can create print output; Microsoft Word or Excel for example. Remember that in Windows 10 Microsoft has included the “Print to PDF” option, so now any Windows application that can create print output can be used to create PDF background documents.

I have re-worked the existing Delivery Ticket example that is distributed with the PDF API so that it first creates a “form” in one PDF file, then creates a second PDF file containing an actual delivery ticket with data, using the form created earlier as a page background.

I have just checked the code changes into the GitHub repository so this new feature is available for use right away, and I am looking forward to receiving any feedback that you may have. I will of course continue to research possible ways of doing this on the other platforms (Unix, Linux and OpenVMS) but for now at least we have a solution for the Windows platform that most of us are using.

Symphony takes a REST

By Richard Morris, Posted on at 3:08 am

The Symphony Harmony namespace allows access to data and logic through an SQL-like syntax. For example, you can select records from a file using a query such as “SELECT ID, DESCRIPTION FROM PART WHERE QUANTITY > 12 ORDER BY QUANTITY”. All matching records are returned from the query in the form of Symphony Data Objects. The data can be local or accessed via Synergy xfServer. The Symphony Bridge utility allows you to expose your queryable database via a standard Windows Communication Foundation (WCF) web service. So far so good.

Steve Ives and I recently had the opportunity to spend a week working together in the UK to “bang” heads together. Steve has always been an exponent of providing RESTful services to access logic and data which can be consumed by just about anything. So we set about using CodeGen to build a standard Restful service that will utilize the Symphony Framework to enable dynamic access to data and ultimately logic.

We soon had the basic service up and running. Our first implementation handled the standard GET verb – and returned all the records in the file. No filtering, no selection, just all the records returned as a JSON collection. This is the standard API;


Now remember that Symphony Harmony allows you to filter the data you are requesting, so we next implemented the ability to provide the “where” clause to the query. So for example;


And using ARC (Advanced Rest Client which is a Google Chrome plug-in) we can test and query the service;


And we get back just the selected customer details – all those customers where CUSTST has a value of CA.

As well as being able to filter the data, we can also limit the results returned by Harmony to just the fields we need; this has the benefit of reducing the data being brought across the wire. But how can our REST server build the required data objects to include just the fields we select? By doing runtime code generation! Within our code-generated data objects we added the ability to dynamically build the response data object to include only those fields requested by the client. The calling syntax, as provided by the API, is;


And again using ARC to test our server we can issue a command like;


This is requesting all records from CUSMAS where the CUSNM2 field contains the word “LAWN” and limiting the response data object to just three fields. The response JSON looks like;


Two perfectly formed data objects, limited to the fields in the selection list. If your Symphony Harmony connection to your data is via xfServer, then only the selected fields will have been loaded from the file, again improving performance.
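The field-limiting idea itself is straightforward to picture. A toy sketch in Python (the record content and field names here are invented for illustration; in the real service, the generated Symphony data objects do this work):

```python
def project(record, fields):
    """Build a response object containing only the requested fields,
    reducing the payload that has to cross the wire."""
    return {name: record[name] for name in fields if name in record}

# An invented customer record and a three-field selection list.
cusmas = {
    "CUSNAM": "LAWNS R US",
    "CUSNM2": "LAWN CARE AND SUPPLY",
    "CUSST": "CA",
    "CUSZIP": "95747",
}
print(project(cusmas, ["CUSNAM", "CUSNM2", "CUSST"]))
```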

We also added the ability to limit the amount of data returned by adding a “maxrows” option;


We have already added the ability for the Symphony Harmony namespace to perform inserts, updates, and deletes using standard SQL syntax, and we’ll be adding these capabilities to the appropriate REST verbs: POST, PUT, and DELETE. Watch this blog feed for more information.

CodeGen 5.1.3 Released

By Steve Ives, Posted on June 30, 2016 at 1:11 pm

Tomorrow morning I’m heading back home to California having spent the last two weeks in the United Kingdom. The second week was totally chill time; I spent time with family and caught up with some old friends. But the first week was all about work; I spent a few days working with Richard Morris (it’s been WAY too long since that happened) and I can tell you that we worked on some pretty cool stuff. I’m not going to tell you what that is right now, but it’s something that many of you may be able to leverage in the not too distant future, and you’ll be able to read all about it in the coming weeks. For now I wanted to let you know that we found that we needed to add some new features to CodeGen to achieve what we were trying to do, so I am happy to announce that CodeGen 5.1.3 is now available for download.

PSG PDF API Moves to GitHub

By Steve Ives, Posted on May 28, 2016 at 11:05 am

This is just a brief update on the current status of the PDF API that I have mentioned previously on this forum. During the recent DevPartner Conference in Washington DC I received some really great feedback from several developers already using the API, and some very positive reactions from several others who hope to start working with it in the near future.

During my conference presentation about the API I mentioned that I was considering making the code a little easier to access by moving it out of the Code Exchange and on to GitHub. Well it turns out that was a popular idea too, so I am pleased to announce that I have done just that; you can now obtain the code from its new home at And if any of you DBL developers out there want to get involved in improving and extending the API, we will be happy to consider any pull requests that you send to us.

STOP! Validation Alert

By Richard Morris, Posted on April 12, 2016 at 12:00 pm

It’s the tried and trusted way to get the user’s attention. At the slightest hint of an issue with their data entry, you put up a big dialog box complete with warning icons and meaningful information.


We’ve all done it, and many programs we write today may still do it. When writing code using the Synergy UI Toolkit, it’s common practice to write a change method to perform field-level validation. When the user has changed the data and left the field – either by tabbing to the next one or clicking the “save my soul” button – the change method executes and validates the entry. It is here where we stop the user in their tracks. How dare they give us invalid data – don’t they know what they should have entered? It’s an ever so regimented approach. The user must acknowledge their mistake by politely pressing the “OK” button before we allow them to continue. Users are usually not OK with this interruption to their daily schedule, so there must be a nicer way to say “hey there, this data is not quite as I need it, fancy taking a look before we try to commit it to the database and get firm with you and tell you how bad you are doing at data entry?”

When migrating to a new Windows Presentation Foundation UI, we can do things a little differently and guide the user through the process of entering the correct data needed to complete a form or window. We will still use the same change method validation logic, however, as there is no reason to change what we know works.

When using the Symphony Framework, you create Symphony Data Objects – classes that represent your repository-based data structures. The fields within your structures are exposed as properties that the UI data-binds to. These data objects are a little bit cleverer than just a collection of properties: based on the attributes in the repository, a data object knows which fields have change methods associated with them. Because of this, the data object can raise an event that we can listen for – an event that says “this field needs validation by means of this named change method”. Here is a snippet of code registering the event handler:


The event handler simply raises the required “change method” event back to the host DBL program;


Back in the DBL program we can now listen for the change method events. Here is the event handler being registered:


Remember, we are now back in the host DBL code so we can now dispatch to the actual change methods registered against the field. This is a code snippet and not the complete event handler code:


We are calling into the original change method and passing through the required structure data and method data. Inside the change method we have code that validates the entry, and then, as this snippet shows, we can perform the error reporting:


If the code is running as a UI Toolkit program, the normal message box dialog is used to display the message. However, when running with the new WPF UI, the code records the required error information against the field. No message boxes are displayed. The user will see:


The edit control background is coloured to indicate an issue with the data, and the tooltip gives full details of the problem. When the user has entered valid data, the field reverts to the standard renditions:


We shall be exploring the ability to handle field change method processing during the DevPartner 2016 pre-conference workshop as we all migrate an existing UI Toolkit program to a modern WPF user interface.

