
Synergex Blog


When It Comes to Legacy Systems, It’s Hip to Be Square

By William Mooney, Posted on September 16, 2021 at 11:34 am


I talk to a lot of CEOs, CIOs, and CTOs who feel pressured to replace legacy (aka proven) applications that have served their multibillion-dollar organizations faithfully for 30 or 40 years or more, in a quest for “modernization.” My advice, more now than ever, is never touch a hot stove, and please don’t burn your house down.

While CEOs tend to enjoy longer tenures, I’ve seen a great many CIOs and CTOs come and go because they got burned by an ill-thought-out move toward IT modernization and because, in some cases, they figuratively burned down the house, bringing line-of-business applications to their knees and significantly impacting operations—leaving it to their successors to beat a hasty retreat back to the rock-solid legacy systems that some see as old-fashioned.

Some say it’s a build versus buy question. I say it’s a built versus buy question: those legacy applications are already built and running. Yet I’ve encountered C-level executives who just two months after joining a company have made hasty—turned disastrous—decisions to throw out the old in pursuit of the new. But they’re trying to solve a problem that doesn’t really exist. Well, kind of. Yes, there are known issues and problems. However, companies typically don’t invest in solving these known issues and problems and instead kick the can down the road. Then the new CxO comes in and suddenly gets the green light to make the investment, and money that never existed before is now abundant.

Migrate your code to a new platform? Definitely. For years companies have been moving their legacy systems from OpenVMS, IBM AS/400 and AIX, Sun boxes, and other systems onto Windows or Linux, while retaining the original code of their legacy applications.

Apply a modern user interface? Go for it! With browser-based, Windows-desktop, and mobile front ends, you can change the UI at any time, without endangering the line-of-business legacy code that has kept your business prospering for decades, and you can continue to do so for decades to come.

Replicate your legacy data to a modern database? Absolutely. While preserving your existing data and logic, you also make your data available for ETL processing into a data warehouse or whatever other BI environment you choose.

Use modern development tools? Of course. Armed with the right tools, which a good developer can learn quickly, you can take advantage of the myriad developer productivity, code quality, and other features that are inherently present in modern IDEs like Microsoft Visual Studio. Your legacy application being written in DBL, COBOL, BASIC, or the like does not prevent you from using modern tools and development techniques.

It’s Hip to Be Square

When it comes to retaining your bullet-proof legacy systems that have supported your business for decades, I encourage you to have a new appreciation for the steadfastness of DBL, COBOL, and other legacy languages. As the musician Huey Lewis sings, “It’s hip to be square.” When it comes to stability, DBL, COBOL, and the like rock—as in rock solid.

In celebration of COBOL’s 60th birthday in 2019, Mike Madden posted an appreciation titled “Happy Birthday Dear COBOL,” which presented these figures:

• COBOL supports 90% of Fortune 500 business systems daily

• 70% of all critical business logic is written in COBOL

• COBOL connects 500 million mobile phone users daily

• 95% of ATM transactions pass through COBOL code

• 80% of all point-of-sale transactions rely on COBOL

• There are more COBOL transactions executed daily than Google and YouTube searches

• 1.5 million new lines of COBOL code are written every day

• 2 million people work in COBOL

Yet within the industry, there’s a tendency to judge a book by its cover, to see a green screen and cringe, and to risk one’s career by saying, “We’re going to rewrite that application to bring it into the modern era.”

Upon such statements, careers have faltered and business continuity has foundered.

Too often lost in a rush toward the new is the simple fact that you can apply a modern user interface to your legacy application’s code base, usually in a fraction of the time and at a fraction of the cost of replacing the application with something else.

One of the main techniques for enabling this type of enhancement is to create a “services layer” between your legacy code and your new UI. Usually this involves building RESTful web services APIs that expose your application’s business logic and data through modern, flexible, open, and secure protocols.
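
As a minimal sketch of the idea, here is what such a wrapper might look like in C#. Everything in it is hypothetical: the route, the Order record, and the OrderLogic.GetOrder call stand in for your own legacy routines and whatever interop layer exposes them.

     // Minimal sketch: a thin REST endpoint over existing business logic.
     // OrderLogic.GetOrder is a hypothetical stand-in for a legacy routine.
     using Microsoft.AspNetCore.Mvc;

     public record Order(int OrderId, string Status);

     public static class OrderLogic
     {
         // Stand-in for the proven legacy code (e.g., called via interop)
         public static Order? GetOrder(int orderId) =>
             orderId > 0 ? new Order(orderId, "Shipped") : null;
     }

     [ApiController]
     [Route("api/orders")]
     public class OrdersController : ControllerBase
     {
         [HttpGet("{orderId}")]
         public IActionResult GetOrder(int orderId)
         {
             // The services layer only translates between HTTP/JSON and the
             // legacy logic; the business rules themselves stay untouched.
             var order = OrderLogic.GetOrder(orderId);
             if (order is null)
                 return NotFound();
             return Ok(order);
         }
     }

The point of the pattern is that the web layer stays thin and replaceable, while the logic underneath keeps doing what it has done reliably for years.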

Once you have this services layer in place, it can be used by any new UI that you choose to build, using whatever tools you decide to use. But be extremely cautious about trying to rewrite the back-end code that has been reliably supporting your business operations, perhaps for decades.

Question: Do you think you can rewrite 35 years of customized applications in three years? Quick answer: No. I’m sure your business applications have been steadily upgraded and customized over the years, involving a vast number of accumulated developer hours. The nuances in that code should be considered your secret sauce. Could it be duplicated in C# or some other language? Yes—but not within a timeframe, cost, or risk profile that most organizations would want to take on.

Gap Analysis: 2 or 3 Years

In the classic American western Butch Cassidy and the Sundance Kid, the two outlaws have been tracked down by a posse. They’re at the edge of a cliff with nowhere to go except a very rough river far below. As Butch tries to talk the Sundance Kid into jumping off the cliff to the river below, Sundance finally confesses: “I can’t swim!” Butch laughs and says, “Are you crazy? The fall will probably kill you.”

That scene comes to mind when considering the essential task of performing a thorough gap analysis before trying to replicate the old legacy code that you think needs to be replaced. Just the gap analysis can take two to three years. That’s the jump. But attempting to create a new application without a gap analysis—using a needs-based approach—is equivalent to the other fate: facing certain death.

When I speak of the “secret sauce” incorporated into your legacy applications over the decades, I’m not talking about something basic like general ledger, accounts payable, or accounts receivable. I’m referring to the more complex pieces, like order entry and inventory management, that involve intricate orchestrations, workflows, and in-depth business knowledge.

For example, we worked with a company that has long provided DIBOL-based software for tracking grain in storage. That might sound simple enough, but the workflows are complex. Looking at just a subset of the tracked elements: upon arrival, the grain is measured for weight, of course, but also for moisture content, which feeds an algorithm to determine yield and value of the grain. Next, the grain is mixed with other grain in silos where it may sit for months or years, and it is tracked as it transfers from one silo to another for shipment. All along the way, there are elements and dependencies worked into the code that could take some years to identify through gap analysis, even before any actual coding starts.

We’ve seen supposed three-year modernization projects drag on for more than a decade, with still no end in sight. And there was no actual need to replace the application: other options were available that would have addressed the requirements more quickly and at a significantly lower cost. The new CxO just felt compelled to replace the application with something “modern.”

But Who Does Legacy?

A common fear is “But I can’t hire old-school developers!”

Any good developer that you hire out of a computer science program, given appropriate time and support, will learn both your environment and the language that you use. Of course, developing in a modern IDE can only help to accelerate that process. Good programmers adapt to a new language quickly.

The real question should be “Who has the business knowledge?”

What generally takes much more time for a new hire to learn is the specifics of your industry, and that learning challenge applies to all developers. This “domain knowledge” about your industry is what your legacy application has wrapped up in its code: decades of carefully acquired and crafted business logic. It’s not hard to find someone who codes. But try to find someone who knows grain elevators inside and out. Or who knows your international shipping regulations, practices, and requirements. Or your consumer-packaged goods business. Or life and casualty insurance. Those legacy systems are vast storehouses of carefully acquired wisdom, set into code as business logic.

You can put an attractive front end (and please do) on anything, through the use of web services and other modern technologies. But it could take years, perhaps decades, to capture and recast the real-life business logic sitting in that legacy code. So the next time someone suggests a multi-year modernization project, just smile and remind them: Sometimes it’s hip to be square.


Announcing Synergy 12.0

By Steve Ives, Posted on September 10, 2021 at 2:48 pm


For the last several months the Synergy development team has been building out the features and functionality for the next version of Synergy, including the not-insignificant task of adding support for Microsoft .NET (in addition to .NET Framework) throughout the product. We are now ready to start sharing some of that work with you.

Some of you may have noticed something slightly unusual in the title of this announcement: the major revision number is even! The complete version number of the release is 12.0.1.3272, which is also somewhat different from previous releases. We have implemented a new release strategy involving two types of releases: Long-Term Support (LTS) Releases and Feature Releases. You can find more information about that on the Synergex website.

Synergy 12.0 is a feature release, providing early access to some of the features and enhancements that will be part of the Synergy 12.1 LTS release later this year.

IMPORTANT: In this document and in other places, the term .NET refers to Microsoft .NET 5 and higher, while the term .NET Framework refers to the original .NET Framework product through its final 4.8 version. The term .NET Core refers to the interim product which culminated with a final 3.1 release.

Primary Focus of this Release

As previously announced, the primary focus of our current development is to add support for Microsoft .NET, and in some cases .NET Core, in all the places in Synergy where .NET Framework is currently supported. Some of these changes are being shipped for the first time in this release, while others will follow in subsequent 12.0 feature releases. Not all of the included changes are related to .NET; there are changes throughout the product set, as detailed below.

New Features and Enhancements

This section presents information about new features and enhancements included in this release:

.NET Support for Synergy APIs and Utilities

Several Synergy libraries and tools are now available as NuGet packages and provide support for .NET:

Library                     NuGet Package Name
Gennet Utility              Synergex.SynergyDE.gennet
Repository API Library      Synergex.SynergyDE.ddlib
UI Toolkit Library          Synergex.SynergyDE.tklib
XML API Library             Synergex.SynergyDE.synxml
xfNetLink .NET Library      Synergex.SynergyDE.xfnlnet

xfNetLink .NET

In addition to the Windows installer package, the xfNetLink .NET library is also now distributed as a NuGet package. The NuGet version of the library is built with .NET Standard 2.0 and so is usable from .NET Framework 4.7.2 and higher, .NET Core 3.1, and .NET 5 and higher environments.

The gencs utility now generates a C# project file that you can use with Visual Studio, MSBuild, or the dotnet build utility to build a client assembly. Note also that gencs no longer creates a strong-name key file by default; if you rely on this feature, you must explicitly specify a key file via the gencs -s option.

In .NET Framework applications, configuration settings were usually set in configuration files, but this mechanism is not supported with .NET Core or .NET applications. To address this, we have added several new environment variables that can be used to specify configuration settings in all environments. Refer to the release notes and documentation for a list of the new environment variables.

The COM+ pooling mechanism supported with .NET Framework is not available for .NET Core or .NET applications. As an alternative, we provide a pooling mechanism that can be implemented within your application, similar to the way that pooling worked with xfNetLink Java. For additional information refer to the Microsoft documentation for ObjectPool and use the BlockingPooledObjectPolicy class that we provide in an assembly within the NuGet package.
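
As a rough C# sketch of the pattern (MyXfClient is a made-up stand-in for a pooled client class, and the generic DefaultPooledObjectPolicy is used here purely for illustration; in a real xfNetLink application you would plug in the BlockingPooledObjectPolicy from the NuGet package as described in the documentation):

     // Rough sketch of object pooling with Microsoft.Extensions.ObjectPool.
     // MyXfClient is a hypothetical stand-in for a pooled xfNetLink client;
     // substitute Synergex's BlockingPooledObjectPolicy per the documentation.
     using Microsoft.Extensions.ObjectPool;

     public class MyXfClient
     {
         // Hypothetical wrapper around an xfNetLink .NET connection
     }

     public static class ClientPool
     {
         private static readonly ObjectPool<MyXfClient> Pool =
             new DefaultObjectPoolProvider()
                 .Create(new DefaultPooledObjectPolicy<MyXfClient>());

         public static void DoWork()
         {
             var client = Pool.Get();      // borrow a client from the pool
             try
             {
                 // ... make calls on the pooled client ...
             }
             finally
             {
                 Pool.Return(client);      // always return it when done
             }
         }
     }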

Synergy .NET assembly API

We added support for using the Synergy .NET assembly API in .NET Core and .NET environments, in addition to the current .NET Framework support.

The gennet40 utility continues to produce code for .NET Framework, and in addition, a new gennet utility is available from NuGet as a .NET global tool (Synergex.SynergyDE.gennet). This utility produces code for the version of .NET Core or .NET in which it is running.

Synergy Runtime Enhancements

We have added platform support for Windows 11 and Windows Server 2022 and new identifiers to allow for conditional compilation based on these new platforms.

The Synergy windowing API is now available in .NET environments.

On Windows, the data encryption routines (DATA_ENCRYPT, DATA_DECRYPT, and DATA_SALTIV) now use the Windows “Cryptography API: Next Generation” (CNG) instead of OpenSSL.

Related to our version numbering scheme changes, XCALL VERSN and %VERSN now have an optional second parameter that returns the Synergy build number.

On Linux, Synergy no longer has a dependency on the libtinfo.so.5 library.

Synergy DBMS Utilities

Continuing our recent theme of improving performance when accessing ISAM files and when undertaking routine file maintenance or recovery operations, we have made further enhancements to the isutl utility that can improve performance in some cases, particularly when processing very large files. Refer to the release notes for additional information.

License Manager

Linux systems can now optionally be configured to forward licensing requests to a Synergy license server on a Windows system, using TCP/IP communication. This is only supported when the license server hosts subscription licenses, and note that the Linux systems will share the same pool of license keys available to Windows clients. Among the several benefits is a simpler Synergy deployment on Linux in virtualized or containerized environments, including cloud-based scenarios.

If you wish to implement IP-based licensing with existing Linux systems, you will need to work with Synergex Customer Service, as they will need to consolidate any existing Linux licenses into Windows licenses, and you will have to reset and reconfigure licensing on your Linux systems.

We have improved license server operation on UNIX and Linux systems to simplify running the license server under a non-root user account. Specifically, if file permissions prevent the license file from being written to the /usr/lib directory, it will now be written to /var/synergy instead. Additionally, if the license file is in /var/synergy, the location of the license server log file will default to the same location.

Quality Improvements

As usual, in addition to the new features and enhancements detailed here, Synergy 12.0 includes quality improvements throughout the product set. Again, refer to the release notes for complete information.

Documentation and Downloads

Documentation for feature releases will be available online only at https://www.synergex.com/docs. This website will always default to displaying the documentation for the current LTS release, but until Synergy 12.1 is released, the current default is the 11.1 documentation. To view the updated documentation for the 12.0 release, be sure to select “12.0 (feature release)” from the version dropdown.

In addition, the release notes that document everything that changed in this release can be viewed in the product downloads area of the Synergex Resource Center. Similar to the documentation, the downloads area defaults to displaying the latest LTS release, and currently defaults to displaying the 11.1 downloads. Click on the prominent red “Feature Release” button to view the downloads and release notes for the feature release.

What About SDI?

Synergy DBL Integration for Visual Studio now follows a separate development timeline and release cycle, and releases may not coincide with SDE. The SDI team has been hard at work, and you can expect an SDI release that supports Visual Studio 2019 and the upcoming Visual Studio 2022 very soon.


A New Release Strategy for Synergy

By Steve Ives, Posted on August 10, 2021 at 4:23 pm


For some time now, we have been contemplating changing the way we release Synergy products. This will not be a surprise to many of you; we’ve been “testing the waters” during conversations with many of you for well over a year now, and I presented information about the likely changes in my Product Update presentation at the DevPartner conference back in May. What has changed since then is that our plans have been formalized, the changes are definitely happening, and we are drawing close to what will be the first release under the new scheme. With that in mind, it’s probably a good time to tell you all about the changes that you can expect.

Before I talk about the actual changes, let me first explain what we are trying to achieve by making the changes.

Known Release Cadence

Previously, Synergy products have not had a pre-set release cadence; we released a new version of the product when we felt that significant new functionality or a change in technology justified doing so. As a result, the time between releases could vary from as little as a few months for minor releases (e.g., 9.3 in December 2009 and 9.5 in November 2010) to several years between major releases (10.1 in December 2012 and 11.1 in September 2019).

While this probably works fine for most of our “end-user” customers, it can be challenging for ISV customers that release their products on pre-determined schedules. A known release cadence will make it easier for them to plan their releases, specifically when to adopt new Synergy versions, because they will know when to expect a new Synergy release in advance.

Known Support Window

Similar to the release cadence, the support period for the current Synergy version has previously also been undefined. Our commitment has been to support the current and previous major or minor releases (currently 10.3.3 and 11.1.1), and any earlier versions are technically unsupported. By the way, in this context, “supported” refers to the versions of the product for which we will issue patches if a customer finds a serious problem, not to the ability to call the Developer Support team for assistance. Our support engineers will always try to help supported customers, regardless of product version.

Improved Stability

As our development practices became more “agile” in recent years, so did the way we released software. We got to the point where every release we published (even “patch letter” releases) typically included new or updated functionality in addition to quality enhancements. While this is great for developers who want access to the “latest and greatest” features, the strategy sometimes wasn’t great from the point of view of product stability. It’s a fact of life in software development that new or altered code means an opportunity for new bugs, but it’s frustrating if that happens in the context of a patch letter release that is supposed to fix bugs, not introduce new ones.

We have already partially addressed this issue, as I discussed in my Improving Our Internal Development Process blog post last September, but the changes we are about to make will enhance product stability even further.

Introducing Long-Term Support (LTS) Releases

We are switching to what is referred to as a Long-Term Support (LTS) release strategy. LTS releases provide customers with a pre-defined schedule of Synergy releases that they can plan against, as well as the opportunity to deploy applications in an inherently more stable environment.

Each LTS release will contain a pre-determined set of new and enhanced features that will not change post-release. Further, the features introduced in an LTS release will have been previously made available and thoroughly tested in earlier “feature releases” (more on this later).

LTS releases provide an inherently more stable environment because of the absence of newly introduced or altered code. Although an LTS release may have subsequent updates, those updates will only contain quality improvements or security enhancements, not new functionality.

We plan to make a new LTS release available every two years, the first being before the end of this year. Synergy has many close ties with Microsoft .NET, and we will be broadly aligning our release schedule with that of .NET, which also uses an LTS strategy with a two-year cadence.

LTS releases will be labeled with a major version number, an odd major revision number, and an incrementing build number. The first LTS release will be 12.1.n (n being the build number). After that will be the 12.3 LTS release in late 2023, the 12.5 LTS release in late 2025, and so on. Build numbers will reset with each LTS release, and there will be no patch letter releases under the new scheme.

Each LTS release will be supported for a minimum of four years, or one year after the next LTS release ships, whichever is longer. So if you adopt the 12.1 LTS release when it ships in late 2021, you know it will be supported until at least late 2025.

Feature Releases

Of course, some developers will take advantage of LTS releases for the improved stability they offer, but others will want to get their hands on the latest and greatest features that have just been developed, and that is where feature releases come in.

Feature releases are interim releases that provide early access to new features and enhancements that will eventually become part of the next LTS release. They may include partial (but usable) implementations of new features. They will also address any quality and security issues identified within the feature release branch.

Feature releases will occur on a more frequent cadence than LTS releases, and multiple feature releases may ship each year. You can think of feature releases as being similar to the way that we have released our software in recent years.

Feature releases will typically be made available for all supported platforms, but if the changes in any release do not apply to or are not usable on a particular platform, no release will be made for that platform.

We will fully test feature releases to the best of our ability, but developers should bear in mind that quality may be lower in areas where new code has been added or significant changes have been made. Quality will solidify as developers exercise the new or updated features and as our automated test suites are extended.

The support period for feature releases will be much shorter than for LTS releases, ending three months after the next feature or LTS release.

Feature releases will be labeled with a major version number, an even major revision number, and an incrementing build number. This may seem a little strange, but remember that feature releases build up to the next LTS release. The feature releases to support the upcoming 12.1 LTS release will be labeled as 12.0.n. We’re currently in the process of putting together the first such feature release, which you may see towards the end of this month or in September.

This first release may not seem too unusual because we will be going from 11.1 to 12.0 and then to 12.1 later in the year. But shortly after the 12.1 LTS release, the first 12.2 feature release will occur (leading up to the 12.3 LTS release in 2023). For the next two years, you will see releases in both the 12.1 and 12.2 release branches, not necessarily at the same time! For example, there may be a 12.2 feature release in March, followed by a 12.1 LTS release in April!

Remember that you can take advantage of runtime version targeting, which allows you to ship your products to customers (external or internal) using an earlier and stable version of the Synergy runtime (the LTS runtime) while developers use the latest feature release to build the product, enabling them to take full advantage of all of the ever-improving productivity enhancements in our Visual Studio IDE tools.

If you have any questions about anything that I have presented here, please do not hesitate to contact me via email at steve.ives@synergex.com. And if you are interested in the changes that will be included in the upcoming 12.0 feature release, watch this space—I will post about that soon.


Virtual Reality: How We Successfully Pulled Off Our First Remote Conference

By Liz Wilson and Heather Sula, Posted on July 27, 2021 at 11:20 am


March 2020. We were three months away from the Synergy DevPartner Conference, an in-person learning event we hold for our customers every 18 months. Then the pandemic hit and upended everything. We had no idea that within a year, we’d have to completely reconceptualize and adapt our entire conference blueprint for a virtual audience while providing the same educational value and keeping the communal spirit of our in-person gatherings.

In the end, thanks to a boatload of planning and persistence, we were able to present 16 content-packed virtual sessions to a record number of attendees, and the event went off with barely a hitch. (You can watch most of the conference sessions here.)

Conference moderator Haley Hart and subject matter expert Marty Lewis lead a Q&A after a session.

Here’s how we did it.

Planning

Planning any large event is a significant logistical undertaking, requiring coordination and cooperation between multiple departments. The more you plan ahead, the less you have to panic at conference time. We won’t bore you with every detail of our particular planning process, but here are some foundational steps we took to make sure all our bases were covered.

What we did:

  • Held weekly conference check-ins. Communication is key for making sure nothing falls through the cracks.
  • Created standard branding. The conference had its own “look” that was used to design the website, PowerPoint templates, webinar rooms, session landing pages, etc., to ensure a cohesive aesthetic that distinguished it from our usual content.
  • Got organized. Whether it was shared spreadsheets, Kanban boards, etc., we made sure the key players had insight into the necessary tasks and timelines, so no one was in the dark.
  • Allowed time to work on “nice-to-haves” (informal networking sessions, giveaways, etc.) in addition to the necessary components of the conference. Sometimes it’s the little things that make an event special.

Additional Takeaway: Hold a pre-mortem. A pre-mortem is a thought experiment that encourages people to think about what could go wrong. A few members of the conference planning committee got together in April and did some brainstorming about specific issues that could arise during the conference. For example, we determined who would step in as moderator if either of our two main MCs called in sick. Or, if the Q&A portion of a session went on for longer than expected, we came up with a solution: unanswered questions would be collected and answered during the wrap-up session. Thankfully, no one wound up getting sick, but it was comforting to know we had a plan in case that did happen.

Execution

While planning is nine-tenths of the game, you still have to execute. The following are some things to keep in mind so everything goes smoothly when the time comes.

What we did:

  • Created a task force that communicated frequently to handle registrant issues. Because we had additional people on hand to field customer support, the organizers were free to focus on making the conference happen.
  • Kept an eye on statistics. We sent analytics to our sales team after every session so they could reach out to customers accordingly.

Additional Takeaway: Talking to a camera instead of an audience is…weird. Pre-recording most of the sessions was helpful for a handful of reasons: it allowed presenters to take a substantial breather between slides (grab a cup of coffee, get the demo ready without rushing), and we had the opportunity to add some creative flourishes in post-production. Still, most presenters found the camera to be a poor substitute for a live audience. In the future, we’ll consider inviting additional staff to recordings to act as a stand-in audience.

Session recording in our makeshift conference studio at Synergex HQ.

Flexibility

Ultimately, you have to be nimble, ready to adapt to changing circumstances, and take on any challenges that pop up. Luckily, you can prepare for flexibility too.

What we did:

  • Backups, backups, backups. For pre-recorded sessions, the main plan was to upload the files into the webinar platform and hit play. But we always had a backup plan in case something went wrong (private YouTube video versions of the sessions we could link to, etc.).

Additional Takeaway: You don’t need to use a one-size-fits-all formula for conference sessions. We had to work around several factors when recording the 16 sessions, including each presenter’s geographic location and level of comfort in a live vs. pre-recorded context. Rather than make everyone do everything the same way, we gave presenters flexibility in terms of how they wanted to structure and lead their sessions, and we wound up with a nice variety because of it.

We look forward to seeing you at the 2022 conference!

Check out the conference sessions and learn how to do the following:

  • Improve development productivity and practices through adopting more efficient development methodologies.
  • Enhance years of Synergy data and code with new technologies, enabling connectivity through RESTful web services and APIs.
  • Keep up to date with Synergy SSL and operating system security patches. (Security and disaster recovery are important for compliance!)
  • Use traditional Synergy in Visual Studio to gain a huge productivity boost, lower the barrier to continuous code integration, and improve processes and software quality.

Watch sessions here


Announcing Synergy/DE 11.1.1h

By Steve Ives, Posted on July 19, 2021 at 9:49 am


Synergex is pleased to announce the immediate availability of Synergy/DE 11.1.1h on all platforms. This is a quality release that includes a wide range of improvements across the entire Synergy product line, and we strongly encourage all Synergy developers to review the release notes for detailed information about everything that changed.

If you are one of the many developers using Synergy DBL Integration for Visual Studio (SDI), please note that this will be the final release that supports Visual Studio 2017. From the next release, SDI will support Visual Studio 2019 as well as the upcoming Visual Studio 2022.

Because we continually enhance the isutl utility for improved performance and better recoverability in system crash scenarios, we recommend that all customers using Synergy DBMS download the latest version of the Synergy DBMS Utilities package, regardless of the version of Synergy they are currently using.

And finally, we want to point out an important bug fix that we applied to xfServer in this release: “On Windows and UNIX/Linux, a pre-version 11.1.1 client connecting to an 11.1.1 through 11.1.1g encrypted xfServer caused the connection to hang on certain functions, such as ISSTS, CLEAR, and STORE. This has been fixed in this release.”

Synergy/DE 11.1.1h is available for download in the Resource Center now.


Elevate Your Endpoints

By Liz Wilson, Posted on July 14, 2021 at 12:55 pm


If you’ve checked out our GitHub documentation, attended an office hours session, or watched a web services–related video on our YouTube channel, you may know that OData is a critical layer of the tech stack that makes up our open-source Harmony Core solution.1 There are several reasons for this: OData is standards-based, it supports query validation (so you can choose the data available to users in a given context), and the learning curve is minimal. Developers can look at a sample OData request and immediately get a sense of what is being asked for, and our implementation of OData emits JSON, a standards-based data format that other programming languages can parse with ease.  

OData is easy to work with, but it’s important to know how to make the most of Harmony Core’s API functionality, beyond just the basics. Here are some tips for maximizing the readability and performance of the APIs that you will be generating via Harmony Core’s OData services. 

Use Meaningful Names 

Harmony Core OData services rely partially on data structures and files from Repository. Because of this, where appropriate, it’s a good idea to give your data structures meaningful names, as these will be turned into URLs. For example, “CustomerNumber” would be better than “CSTNBR.”


Don’t Skimp on Data 

It might seem odd to buck a “less is more” approach to exposing your data, but in the world of Harmony Core, the better setup will likely be one in which most of your data is initially made available to developers, and the specificity of who gets what is determined when these developers create URLs to extract exactly the information they need.
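
For example, a request along these lines (the host, port, and property names here are hypothetical; your service will differ) asks for just two properties of the customer collection:

     GET https://localhost:8086/odata/Customers?$select=CustomerNumber,CustomerName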

To do this, you can configure your Harmony Core environment to enable the entity collection endpoints feature. This will generate a new GET method in each of your controller classes, exposing its respective collection (e.g., all customers, all products, etc.). You can access these GET methods through an HTTP GET request without parameters.

Plan Ahead 

The more planning you do in terms of the data you’d like to query for, the more specific you can make your endpoints. The more specific the endpoint is, the faster your query will be, as asking for a smaller amount of data takes less time than requesting a large collection.  

If you have access to an entity’s primary key, you can adjust your environment configuration to enable single entity endpoints. This will allow you to whittle down your results with a URL that incorporates the item’s unique key, thereby reducing load time. To illustrate, our sample Harmony Core service provides testable queries for all customer data, as well as for a single customer.  

When I tracked how long it took to load each page, I noticed that the “all customers” collection took 114 milliseconds, while the single customer query took about half that time—52 milliseconds. 
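
In URL terms, the difference between the two queries is simply the presence of the primary key (the paths here are hypothetical):

     GET https://localhost:8086/odata/Customers          (all customers: 114 ms)
     GET https://localhost:8086/odata/Customers(1234)    (one customer by key: 52 ms)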


If you’re able to map things out in advance and narrow your data needs further, you can specify an entity’s discrete properties. For example, rather than retrieving the entire data set for an individual customer, you can query for the customer’s phone number, name, or website. To do this, check out the instructions for enabling individual property endpoints on GitHub.   

Get Familiar with Third-Party Tools 

Our documentation references two useful API-related platforms: Postman and Swagger. Postman was created as a client for testing API requests and responses and has grown to include functionality for building APIs, creating reports, and generating documentation automatically. You can find more information on Postman in Harmony Core tutorial two on GitHub.

While documentation generation was a recent addition to Postman’s suite of API tools, Swagger has focused on API visualization from the very beginning. In a Harmony Core environment, you will have automatic access to Swagger-generated documentation for your endpoints, but you may have a use case for one of the other tools listed on the Swagger website.

There are plenty of additional tips scattered throughout the Harmony Core tutorials on GitHub, so we encourage you to check them out! If you have questions on getting started with or furthering your use of web services, be sure to talk to your account executive or join us for a session of Harmony Core Office Hours. We have more best practices to share from previous Harmony Core implementations.


1 Under the hood, Entity Framework Core translates OData queries into Synergy Select class operations.


CodeGen 5.7.3 Released

By Steve Ives, Posted on July 9, 2021 at 6:47 pm

Steve Ives

There’s a new version of CodeGen out there, so if you’re using it, here’s what’s changed in this version:

  • We added a new special structure expression token <STRUCTURE_HAS_FIELD_name> that allows you to detect the presence or absence of a named field within the structure (see the sketch after this list).
  • We improved the logic used when processing unique key loops and unique alternate key loops and made a slight change to the way that the unique alternate key loops operate. Previously the loop would not compare alternate key segments with the primary key, but now it does. So the loop now processes any alternate keys that do not have identical key segments to the primary key or any previous alternate keys.
  • We fixed a problem with the implementation of the -pa command line option that would cause CodeGen to fail when used with an input file containing multiple structures.
  • We fixed a problem with the implementation of the -tweaks option which could result in a null reference exception in some rare circumstances.
  • We reviewed and updated all sample templates that ship with CodeGen to ensure they use the latest capabilities, such as complex expressions, which reduce the template complexity that previously resulted from nested expressions.
  • This version of CodeGen was built with Synergy/DE 11.1.1g and requires a minimum of version 10.1.1 to operate.
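
As a purely hypothetical sketch (the field name CUSTNAME is made up, and the exact expression syntax should be checked against the CodeGen documentation), the new token might be used in a template like this:

     <IF STRUCTURE_HAS_FIELD_CUSTNAME>
     ;// Emitted only when the structure contains a field named CUSTNAME
     </IF STRUCTURE_HAS_FIELD_CUSTNAME>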

As always, this new release is backward compatible with earlier releases, so we recommend that everyone download the new version as soon as possible.


Another RESTful web service story

By Cindy Limburg, Posted on May 27, 2021 at 12:26 pm

Cindy Limburg

I hope you had a chance to attend some of the recent Synergy DevPartner Conference virtual sessions. There were many opportunities to learn about the latest Synergex technologies and how customers are taking advantage of them. One of the conference themes was the importance of creating RESTful web services with your legacy applications, and the sessions included some examples of customers doing this with our Harmony Core open-source product. We also just published a new customer success story, which describes how our customer RCC used Harmony Core to create a RESTful web service and provide its customer Legacy Vacation Resorts (LVR) with a new web portal.

Used by vacation property owners, the RCC Resort Management Solution offers features such as a central reservations system, contract management, and sales analysis. LVR offers travel experiences across Florida, Colorado, New Jersey, and Nevada. When COVID hit, LVR wanted to make their resorts safer and more comfortable for their guests. They decided to move their check-in process to the web so guests could minimize the time required to interact with LVR’s front desk agents.

RCC and LVR already had a web access solution, but it used Synergex’s proprietary xfServerPlus product, and they both wanted a more standards-based solution. This was a great opportunity for RCC to modernize their solution and provide a foundation for meeting future demands for access.

RCC started with the data/logic access routines they already had for xfServerPlus, expanded them to meet the needs of the new web service and web UI, and then added new OData controllers and a Harmony Core feature called “adapters” to expose the data and logic as OData resources. Code for the adapters and controllers was generated using Synergex’s CodeGen tool and RCC’s Synergy repository.

RCC and LVR were able to meet the project goals, and both are excited to move forward with their new web services solution. See the full success story for details (and some beautiful resort photos).

Mike Amundsen, the Synergy DevPartner Conference’s keynote speaker, said that when you’re getting started with web services and trying to figure out scope and boundaries, you should look for the smallest problem you have, fix that and learn from it, and then move on to the next smallest problem. In the “Leveraging Web Services for UI” conference session, customer Forward Computers had a similar message: start with a well-defined function. (They also advised making sure it’s not too simple, as you’ll want to evaluate performance.) The new customer success story describes how RCC and LVR got started with LVR’s self-check-in function. What function will you start with?

To learn more about web services and Harmony Core, refer to our web services training videos and our documentation.


Updated Visual Studio Development Tools Update

By Steve Ives, Posted on March 29, 2021 at 10:44 am


Synergex is pleased to announce the immediate availability of a new release of the Synergy DBL Integration for Visual Studio, Version 11.1.1g-3045.

As always, this latest release contains improvements across the board, but in particular, the focus was placed on these specific areas:

  • Improving the development experience in both the editing and debugging of .NET Core code.
  • Improving the experience of working with Repository projects in various scenarios.

Some of the improvements in this release were actually in the Synergy .NET Compiler, so in addition to updating the SDI installation, developers working on .NET Core projects (including any Harmony Core projects) should also upgrade the Synergex.SynergyDE.Build NuGet package used in those projects to version 11.1.1070.3045.

We encourage everyone undertaking any type of Synergy development in Visual Studio to upgrade to this new release as soon as possible. And remember, if you are not ready to upgrade the runtime versions on your production systems yet, you can use runtime version targeting to give you access to the latest-and-greatest development tools while continuing to support older runtime installations.

Head on over to the Resource Center Downloads page to download the new release now.



CodeGen 5.6.9 Released

By Steve Ives, Posted on March 11, 2021 at 8:10 pm


We are pleased to announce another release of CodeGen, once again including some significant advances in the technology. We recommend that all developers using CodeGen, and especially developers working with Harmony Core, upgrade to this latest version as soon as possible. This new version includes these changes and enhancements:


Announcing Synergy/DE 11.1.1g

By Steve Ives, Posted on February 26, 2021 at 3:33 pm


Synergex is pleased to announce the immediate availability of Synergy/DE 11.1.1g, a quality release that includes a wide range of improvements across the entire Synergy product line. We strongly encourage all Synergy developers to review the release notes for detailed information about everything that changed.

Besides a significant focus on quality, we have also made several feature enhancements in our Visual Studio integration tools, enhancing the developers’ experience and improving their productivity.

First off, we have introduced collapsible-region support for many DBL statements and other language constructs. For example, some of the now collapsible statements include BEGIN-END blocks, USING-ENDUSING, CASE-ENDCASE, IF-THEN-ELSE statements, and more. This feature was specifically requested and voted on by developers in the Synergy Ideas Forum.

We also added support for activating the go-to-definition feature via mouse clicks, in addition to using the existing keyboard shortcut-based mechanism. The default behavior is activated via Ctrl + Left-Click but is customizable in the Tools/Options dialog under Text Editor settings.

Another area that we focused on is improving the accuracy of those “red squiggles” that show up in your code when something is wrong, and occasionally when something is not! Inaccurate red squigglies should occur less frequently now, although we know that we still have some additional work to do in this area.

To give developers more options when they need to get in quickly and look at something specific, especially in solutions with large numbers of projects, we also implemented support for Visual Studio 2019’s “Filtered Solution” feature that allows you to check a “Do not load projects” option in the Project Open dialog. Visual Studio opens the solution very quickly when you do this, but all projects are in an unloaded state. The developer can then select the projects they wish to work on in Solution Explorer and use the “Reload Project” context menu to load them. The context menu then includes options to allow you to load either direct dependencies or all dependencies of the projects you loaded, meaning that you can quickly get to a buildable scenario without having all projects loaded. Solution Explorer also has options to show or hide unloaded projects.

Having filtered your solution the way you want it, you can then use the “Save as Solution Filter” context menu to save the state of your solution for the next time you need it that way. The file is saved as a .slnf file and can be reopened the same way you open the solution.

We have overhauled our project build system, reduced memory usage, and improved performance for Visual Studio and command-line builds. At the same time, we have improved and standardized the way our build tools interact with MSBuild, allowing us to adopt new features more quickly.

And armed with our new MSBuild capabilities, we added support for a new feature known as /graphBuild, making pre-build analysis of inter-project dependencies work much more effectively. In turn, MSBuild can now more effectively perform parallel builds of multiple projects simultaneously, in some situations resulting in improvements in overall build time.

We are confident that most developers should experience improvements in overall build times across the board, particularly for traditional Synergy projects. And in some cases, with the right combination of projects and resources, those improvements could be significant.

For example, suppose you have a small number of base libraries used by a large number of higher-level projects. In that case, once those base libraries have been built, there is a good chance that the higher-level projects can build in parallel, with the overall process completing more quickly. The more CPU and memory resources that are available, the faster things can proceed. In some environments, such as on dedicated build servers running CI/CD pipelines, we have seen improvements of up to 30-40% in overall build time compared to the previously released version. But improvements of that order do require access to considerable resources; most improvements will be more modest.

If you’re already using Visual Studio to develop your Synergy code, we encourage you to upgrade to this new version as soon as possible; remember, you can always use runtime version targeting if you’re not ready to upgrade your production systems. And don’t let the fact that you only develop traditional Synergy code or deploy to Unix, Linux, or OpenVMS deter you; Visual Studio can make a great development environment for those scenarios too! Talk to your Synergex account representative for more information.


9 Windows Tips and Tricks You Should Know

By Jerry Fawcett, Posted on February 23, 2021 at 1:45 pm


Over many years of being immersed in Microsoft Windows, I’ve come across a few tidbits I think are worthy of sharing. Some of these you may know about, but I hope you find something new here.

1. Opening a command prompt in File Explorer

So, you’re in File Explorer and you want to open a command prompt in the current location. The easiest way is to simply type “cmd” (without the quotes) in the address bar, and there you go: a command window opens to that location. You can also run other commands the same way. Go ahead: type “notepad” into the address bar of File Explorer for yourself. Conversely, you can use a command prompt to open File Explorer in the current location. To do so, issue either of the following commands in the command prompt: “start.” or “explorer .”.

2. Auto inserting the date and time in Notepad

I use Notepad all the time for taking quick notes. Try pressing F5 when you’re in Notepad. This will enter the time and date, which is useful for marking when an entry was made.

3. Changing configuration settings with MSConfig

Many already know about this utility, but it’s worth mentioning for those who don’t. MSConfig.exe is a built-in Windows tool for controlling such things as the manner in which the next boot will occur (normal, diagnostic, or selective) and which services will run, and it provides shortcuts to a list of other built-in tools. It’s very powerful, so use it with care. You can find more information here.

4. Understanding system errors

A quick way (yes, quicker than Googling) to look up a Windows system error is to use the command “net helpmsg errornumber”. For example, if you want to know what system error 5 means, enter this at a command prompt:

     C:\>net helpmsg 5

     Access is denied.

5. Activating speech recognition

Tired of typing? No problem: Windows has a built-in solution that allows you to talk to your computer and give your hands a break. To enable this tool, start by pressing Windows key + H. This will prompt you with a link to Settings to enable speech recognition. Toggle it on, and the next time a text field is in focus, you can turn listening mode on with Windows key + H. When you start talking, your computer will start typing for you. You’ll find handy documentation on how to use this feature at Windows Speech Recognition commands.

6. Organizing windows

Say you want to re-arrange the windows on your screen. Pressing the Windows key and the right or left arrow will snap a window to either the right or left side of the screen. But what if you have four windows and you want one in each corner of the screen? Well, that’s easy: just grab a window, drag it to the corner of the screen until you see the outline of the window, and then release it. Repeat this for each corner.

7. Accessing control settings with Windows Master Control Panel

There is something called Windows Master Control Panel, aka the God Mode folder. (Don’t blame me; I didn’t name it.) It’s a folder with shortcuts to many different Windows administrative/management settings and Control Panel tools in a single location. To create it, one must be logged in as Administrator (of course), and then simply create a new folder and give it the name 

     SomeName.{ED7BA470-8E54-465E-825C-99712043E01C}

For example,

     GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

8. Deleting files

It is well known that when a file is deleted it is not truly deleted, but the space the file consumed is now marked as available. However, if you want to make sure a deleted file is actually deleted and not recoverable (by freely available tools), then the space the file consumed needs to be overwritten. There are plenty of third-party tools that will do this, but why bother getting one when Windows has a built-in tool? To overwrite free space in a particular directory or even a drive so the deleted files are not recoverable, use the cipher command. For example, to overwrite free space in the folder C:\Temp, issue the command

     cipher /w:c:\Temp

To overwrite all the free space on the D: drive, issue the command

     cipher /w:d

And of course, a tool named “cipher” can also encrypt files or directories.

9. Checking battery status

Want to know the state of your laptop’s battery? No problem. Use the powercfg command to learn everything about your battery’s current state. Note: Powercfg must be run in an Administrator command window. While powercfg has many options, the two I find most informative are /batteryreport and /energy. When powercfg is run with /batteryreport, it will create an HTML file named battery-report.html with pretty much all there is to know about your battery’s usage and history statistics.  When run with /energy, powercfg will create an HTML file named energy-report.html with a “Power Efficiency Diagnostics Report” that contains all kinds of information on the battery’s current status. For more information about powercfg, visit Powercfg command-line options.

This is only a drop in the (bit) bucket of Windows tips and tricks, so please share the ones you’ve discovered.


When is a page not a page? And when should it act like one?

By Liz Wilson, Posted on February 12, 2021 at 11:30 am

Liz Wilson

The Synergy developer interested in RESTful web services may have noticed a few intriguing things about this liminal era between Web 2.0 and Web 3.0. For starters, a page refresh is not guaranteed each time you navigate to some new corner of a website or app. When it comes to the “traditional” concerns of HTML, JavaScript, and CSS—despite what The Offspring may claim—you no longer have to keep ‘em separated. The list of ways that frameworks like Vue, React, and Angular have changed both user and developer experience is extensive, and taken together, all these changes allow for unique and dynamic websites and applications.

That said, while innovation has the potential to take us further from the traditional experience of clicking a link and waiting for a new page to load, there are a few characteristics of those webpages from the early 2000s that the front-end folks would be remiss to innovate away from. Here are just a few!

1. Specific, Descriptive Title Elements

The HTML title element, demarcated by the <title> tag, is meant to provide a succinct description of the document that the browser has just rendered. If you are one of those people who has anywhere from 5 to 50 browser tabs open at any given moment, you probably rely heavily on page titles, as they appear in the tab next to the site’s custom icon.

A specific and descriptive page title is not just beneficial for compulsive tab openers like me, however. Unique titles with more, rather than less, information can improve the search engine optimization of your site. Additionally, page titles are often the first component that screen reader users will refer to in establishing where they are in a site. So adding a company name next to “Home” or “About Us” is a helpful enhancement, as is including information about changes to the page’s state.

For example:

<title>Error – password invalid – Acme Corp: Login</title>

The title element lives with other metadata in the “head” of the document, so it’s not displayed on the page and is therefore relatively easy to neglect. And when you’re working with tools like Angular and React to create single-page applications, you’ll find yourself with one HTML file in your project folder, which means just one <head> section where a title element would obviously be placed. Fortunately, most of these frameworks and libraries have developed tools for generating dynamic page titles.

ASP.NET Core Razor Pages

To ensure that your page titles are being updated dynamically, use the ViewData attribute either in the page’s model (as demonstrated in the documentation) or its .cshtml file. In either case, make sure that if you use Layout, the title element is reading from the ViewData dictionary. If you follow the standard instructions for creating a Razor Pages web application, this is the code you will see in the _Layout.cshtml file:
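
     <!-- YourAppName is a placeholder for whatever you called the project -->
     <title>@ViewData["Title"] - YourAppName</title>

Each page then sets ViewData["Title"] in its own page or page model, and the layout stitches the full title together.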

React

For a similar effect in your React app, install the React Helmet node package. Once that’s done, it’s a simple matter of importing Helmet and adding the document head information within <Helmet> tags within components as needed.

Angular

Similar to React, Angular projects contain a single index.html file, and that’s it for documents with the traditional head/body structure. Angular’s data binding can’t access anything outside of the body tag, so in order to display different titles as the user navigates around the app, you need to use the Title service, which is a very simple class consisting of two methods: getTitle() and setTitle(). The Angular documentation provides clear examples of how to incorporate this class into your application.

2. Semantic/Logical Headings

Whether your site is built with HTML and “vanilla” JavaScript or the client-side framework your cousin’s cousin created last week, at the end of the day, a document object model will be created by the web browser from HTML—whether that HTML was manually written by you or not. The DOM is a cross-platform interface that represents HTML as a tree structure by organizing all the markup’s elements, attributes, and text nodes into objects and creating a logical hierarchy from these objects. The technology was developed in the early days of JavaScript to enable client-side interactivity, and nowadays, the DOM also serves as the foundation for the browser’s accessibility tree, which is a critical layer that allows screen readers and other assistive technology to make sense of the contents of a website. Objects within an accessibility tree contain information ranging from what the specific element is (e.g., heading, input, etc.) to whether the element has a state (e.g., checked or unchecked, collapsed or expanded).

With that in mind, while current design trends indicate that the amount of text on each page remains in decline, it’s still a good idea to create a logical hierarchy of information and use different heading elements accordingly. This will not only benefit assistive technology users, but also the large swaths of us who have been conditioned to look for a big ol’ heading in a prominent position on the page. So even if the page is not technically a page, make sure the main heading is contained in a heading tag (probably an <h1> or <h2>) and the information that is less important is organized under the subordinate heading levels (<h2>, <h3>, <h4>…).

3. Logical Focus Order

In the same way that you may need to apply more thought with new web architectures, it’s very likely that you will have to expend some amount of effort to manage keyboard focus to replicate the way that focus operates in regular old HTML documents. In a traditional website (barring questionable use of CSS), a keyboard user can tab through each page in such a way that mimics the visual flow of information: left to right, top to bottom. In a single-page application where new HTML documents are not actually being loaded in the browser, there is nothing that would necessarily prompt keyboard focus to jump to an element at the top of a new page, because there is no new page. The Angular website (built, naturally, with Angular) provides us with an example of this. If you tab to the footer section from Home and select About, rather than hopping up to something intuitive, like the links at the top of that “page,” your next tab will take you to the next item in the list of footer links.

There are other potential issues relating to mutable content, including focus effectively disappearing, or users getting shot back up to the top of a section if a button is replaced by some other UI component.

Again, today’s popular libraries and frameworks propose techniques for providing a good experience for keyboard users:

To conclude, there is no reason for the not-pages that make up your single-page application to look like they were built in 2006. However, a case can be made for ensuring that certain aspects of the early web experience are not thrown away with the bathwater.1


1 The bathwater consists of jQuery, Flash, and frames.


CodeGen 5.6.6 Released

By Steve Ives, Posted on February 5, 2021 at 3:05 pm

Steve Ives

A quick post to announce the availability of CodeGen 5.6.6.  This is a quality release that addresses an issue that could occur when generating code from templates containing certain complex expressions. In particular, several Harmony Core templates contain such complex expressions, so we recommend that all Harmony Core developers upgrade to this new version of CodeGen at their earliest convenience. As always, documentation for this new release can be found at https://codegen.synergex.com.

