In the 1984 movie classic Ghostbusters, we are introduced to Bill Murray’s character Dr. Peter Venkman, a professor of paranormal studies, testing subjects for the gift of clairvoyance—the ability to gain information about an object, person, location, or event through extrasensory perception. While it is clear Dr. Venkman does not take such things very seriously, we can see the advantage of such an ability, particularly for developers.
Stop me if you’ve heard this before: “We upgraded X and now Y is happening.” In this case, Y is usually associated with a negative behavior like slow performance, a slew of errors, or a bad user experience. Events like these may induce weariness, nausea, dry mouth, and the various other side effects usually listed in American pharmaceutical commercials. In summary, they’re unpleasant. If only there were some means to predict these events and avoid them in the future…
Unfortunately, most developers are not gifted with the power of clairvoyance to anticipate problems with upgrades before they happen. But maybe instead of a crystal ball, there are tools that can help us avoid upgrade failure. Let’s look at some things that can help you anticipate and prevent such issues.
A relatively recent addition to the Synergy documentation is the Configuring for performance and resiliency topic in the Installation Configuration Guide. This topic discusses things that one should take into consideration when running Synergy on any system, and it’s based on years of experience from the Synergex Development staff and the results of their testing. If you haven’t read this section yet, I highly recommend doing so. If you’ve already read it, I recommend a quick refresher the next time you’re looking at major system or software changes.
In Support, we often see issues when developers virtualize systems that run Synergy or when data is migrated and then accessed across a network rather than being stored locally. Both scenarios are discussed in this topic. And as part of Synergy’s web-based documentation set, it’s updated regularly with the latest information. Make sure you take a look before any major upgrade in case there are changes.
Other useful tools for avoiding problems are the Synergex Blog, the Synergy migration guides, KnowledgeBase articles like Guideline for debugging performance issues, and the release notes that come with each version of Synergy. Remember that even if you are just changing the operating system and/or hardware and not your version of Synergy, you should re-review these materials. Some of the considerations they outline may now be relevant, even if they didn’t affect you previously. Also, when testing, remember to account for load and for processes that run over an extended period. We commonly see pitfalls when developers neglect these two factors in testing.
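Neither of those factors requires fancy tooling. Purely as an illustration (Python rather than Synergy code, with a made-up placeholder command standing in for however you actually launch your own baseline test), adding concurrency and elapsed time to a test can be as simple as this:

```python
# Minimal sketch of the two commonly forgotten testing factors: concurrent load
# (several copies of the baseline at once) and elapsed time (keep it running for
# a while, not just one quick pass). BASELINE_CMD is a made-up placeholder;
# substitute however you actually launch your baseline test.
import subprocess
import time

BASELINE_CMD = ["dbr", "baseline_test"]   # hypothetical command; substitute your own
CONCURRENT_COPIES = 8                     # simulate several users/processes at once
TEST_DURATION_SECONDS = 30 * 60           # run for half an hour, not a single pass

deadline = time.monotonic() + TEST_DURATION_SECONDS
while time.monotonic() < deadline:
    procs = [subprocess.Popen(BASELINE_CMD) for _ in range(CONCURRENT_COPIES)]
    for proc in procs:
        proc.wait()
```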
Now let’s say that despite your excellent planning, you do see a performance issue. What can you do? Here are some steps I’ve found helpful that might get overlooked. Most have to do with simply eliminating factors that affect performance.
If you’re going to diagnose a performance problem, the first thing to do is isolate code, or create a piece of code, that demonstrates the problem. Make this your baseline test for every configuration you’re going to try. This keeps your tests consistent and eliminates code changes as a factor.
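In the case I’ll describe below, the workload was I/O processing, so, purely as an illustration (Python rather than Synergy DBL, with an invented file-I/O loop standing in for the real isolated code), a baseline reproducer is just a small, fixed piece of work you can run unchanged on every configuration:

```python
# Hypothetical stand-in for an isolated baseline test. In practice this would be
# the smallest piece of Synergy code that reproduces the slowdown; a fixed
# file-I/O loop is used here purely as a placeholder workload, so every run
# performs identical work.
import os
import tempfile

RECORD = b"x" * 512        # fixed record size
RECORD_COUNT = 200_000     # fixed volume, so runs are comparable across configurations

def baseline():
    """Write and then read back a fixed number of records, then clean up."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as out:
            for _ in range(RECORD_COUNT):
                out.write(RECORD)
        with open(path, "rb") as infile:
            while infile.read(len(RECORD)):
                pass
    finally:
        os.remove(path)

if __name__ == "__main__":
    baseline()
```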
Next, establish the program you’re going to use to measure the difference in performance. If you’re using a traditional Synergy program as your baseline, you can use the Synergy DBL Profiler, which will count CPU time for you. Just make sure you pick the same metric for all of your testing; CPU time is not the same as real time. This step gives you measurable results, so you can tell what is actually making a difference.
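To illustrate why the metric matters (again in Python, not the Synergy DBL Profiler, and with a trivial placeholder where your isolated baseline would go), here is the same run captured both as CPU time and as real time:

```python
# Illustrative only: the same run measured two ways. process_time() counts CPU
# time used by this process; perf_counter() counts elapsed (real) time. For
# I/O-heavy work the two can differ wildly, so pick one metric and use it for
# every configuration you compare.
import time

def baseline():
    # trivial placeholder; substitute the isolated reproducer you built above
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

def measure(run):
    """Return (cpu_seconds, real_seconds) for a single execution of run()."""
    cpu0, real0 = time.process_time(), time.perf_counter()
    run()
    return time.process_time() - cpu0, time.perf_counter() - real0

if __name__ == "__main__":
    cpu, real = measure(baseline)
    print(f"CPU time:  {cpu:.3f} s")
    print(f"Real time: {real:.3f} s")
```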
I’ve found that the easiest way to plan and visualize testing is to make a tree. Each layer of the tree is one aspect you’re changing, and the tree branches further with each additional aspect. For example, I had a situation where a production machine was migrated to a new Synergy version, a new operating system, and new hardware, and the OS was virtualized, all in one move. We picked one thing to change (the virtualization of the OS) and tested it.
| Virtualized        | Non-Virtualized    |
By doing this, we established that virtualization was a factor, because a virtualized environment was slower than a non-virtualized one. We then added the Windows version as the next layer, comparing the old and new Windows versions while continuing to test virtualized and non-virtualized environments with the same virtualization software.
| Windows 8                               | Windows 10                              |
| Virtualized        | Non-Virtualized    | Virtualized        | Non-Virtualized    |
|                    |                    | In previous table  | In previous table  |
On average, this produced the same result. (It was I/O processing, so we averaged 10 to 20 runs, depending on how volatile the results were.) Next, we compared the Synergy 10 runtime with the Synergy 9 runtime.
| Windows 8                                                   | Windows 10                                                  |
| Virtualized                 | Non-Virtualized               | Virtualized                 | Non-Virtualized               |
| Syn 9        | Syn 10       | Syn 9        | Syn 10        | Syn 9        | Syn 10       | Syn 9        | Syn 10        |
|              | In previous  |              | In previous   |              | In previous  |              | In previous   |
The tree continued growing until all of the factors were considered and tested.
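In reality, each leaf of a tree like that is usually a separate machine or VM image, so you can’t drive the whole thing from one script, but the bookkeeping is worth sketching. Here’s a purely illustrative example (Python, with invented branch labels that mirror the tables above and a stand-in run_baseline() hook) of enumerating every leaf and averaging the 10 to 20 runs per leaf:

```python
# Illustrative only: enumerate every leaf of the test tree and average repeated
# runs of the same baseline. The branch labels mirror the example above, and
# run_baseline() is a stand-in for however you run and time the real baseline
# test in a given configuration.
import itertools
import statistics
import time

OS_VERSIONS = ["Windows 8", "Windows 10"]
ENVIRONMENTS = ["Virtualized", "Non-Virtualized"]
RUNTIMES = ["Synergy 9", "Synergy 10"]
RUNS_PER_LEAF = 15          # 10 to 20 runs smooths out volatile I/O timings

def run_baseline(os_name, environment, runtime):
    """Stand-in: run the baseline in the given configuration, return seconds."""
    start = time.perf_counter()
    # ... run the real baseline test for (os_name, environment, runtime) here ...
    return time.perf_counter() - start

for leaf in itertools.product(OS_VERSIONS, ENVIRONMENTS, RUNTIMES):
    times = [run_baseline(*leaf) for _ in range(RUNS_PER_LEAF)]
    print(f"{' / '.join(leaf):45}  "
          f"mean={statistics.mean(times):.4f}s  stdev={statistics.stdev(times):.4f}s")
```

Recording the mean and the spread for every leaf makes it much easier to see which branch of the tree actually moved the numbers.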
It can be tedious to test one change at a time, but without that kind of granularity, you can’t establish which change affected performance and by how much. In the example I mentioned above, we established that virtualizing the hardware was causing a problem because of the way the virtual machine software emulated separate cores. We never would have come to such a conclusion without carefully eliminating the many different changes one at a time.
After you’re able to establish exactly which changes caused the performance issue(s) and by how much, you can work on a fix or provide a solid case to whichever software support representative you need to contact to get a fix.
You might know most of this already. You might even know of some methods, tips, etc., for performance issues that I didn’t discuss. Maybe you are clairvoyant and you already knew the contents of this post before I did. Either way, I hope you find this information helpful when you look at performance in the future, in both preventative measures and problem diagnosis.