
Synergex Blog

New select classes are a home run

By synergexadmin, Posted on September 23, 2009 at 4:18 pm


Early last week, I was given a copy of the beta build of Synergy/DE 9.3.  My task was to do some testing of one of the exciting new features it includes: the Select classes.

Now, testing isn’t always fun, and it can be frustrating trying to figure out if a bug is really a bug, or just a problem born of having no clue what I’m doing.  This time, however, any minor problems I encountered were completely overshadowed by the sheer awesomeness of the new classes.

The Select classes provide a SQL-like syntax for communicating with Synergy databases, and it’s amazing just how simple they are to use.  Once I had a basic understanding of how they worked, I was able to compress a simple READS loop – complete with “filters” and error checking – into a single line.

Consider the following code, which loops through active customer records and prints out the customer number, name and last sales date of anyone with no sales for more than a year:

loop,   reads(ch_cusmas, cusmas) [err = eof]    ; read next customer record
        if (cusmas.status .ne. 'A')             ; if customer is not Active, ignore it
            goto loop
        if (cusmas.last_sale.year < lastYear)
            call printLine
        goto loop
eof,    etc...

The basic syntax and usage of the Select Class is:

    foreach myRecord in @Select(@From[, @Where][, @OrderBy])

And so, using the Select classes, I condensed everything into:

customers = new From(ch_cusmas, cusmas)
noNewSales = new Select(customers, (Where)status.eq.'A' .and. last_sale.year < lastYear)
foreach cusmas in noNewSales
    call printLine

(I actually condensed the first three lines into just one foreach statement, but the result is a line of code that doesn’t fit nicely into a blog entry, and therefore becomes more difficult to read.)

The syntax is neat, but it’s not the best part; the really cool stuff is happening under the hood.  The actual database I/O layer is now handling all of the “filter” logic, and it’s doing it faster than a regular READS loop can.  In fact, during my tests, a filtered return of around 18,375 records showed a performance benefit that ranged from 11 to 21 percent.  Now, that’s a small data set and we’re only talking about milliseconds, but it demonstrates a performance boost nevertheless – and that’s for a local application running against a local database.  The savings over a network connection to a remote database (i.e., xfServer) are likely to be enormous, as the I/O layer on the server now does the filtering rather than returning the data to the client to handle.

Other features include the OrderBy class, which (as expected) sorts the returned data in either ascending or descending order based on the key being read.  The classes also provide for a sparse record population, in which only the fields needed by the application are actually returned.  There are even methods available to get at each individual record returned in the set, write back to the file, etc.
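As a rough mental model (in Python rather than Synergy DBL, with invented record data and deliberately simplified semantics), the From/Select/Where/OrderBy composition behaves like a small pipeline in which filtering and sorting logically belong to the data source rather than to the application loop:

```python
# Illustrative Python analogy of the Select pattern; the field names
# (status, last_sale_year) and the sample data are invented for this sketch.

def from_source(records):
    # "From": yields raw records from the underlying file/channel
    for rec in records:
        yield rec

def select(source, where=None, order_by=None):
    # "Select": applies the optional Where filter and OrderBy sort
    rows = (r for r in source if where is None or where(r))
    if order_by is not None:
        rows = iter(sorted(rows, key=order_by))
    return rows

customers = [
    {"cust": 1, "status": "A", "last_sale_year": 2007},
    {"cust": 2, "status": "I", "last_sale_year": 2006},
    {"cust": 3, "status": "A", "last_sale_year": 2009},
]

# Active customers with no sales since 2008, mirroring the noNewSales example
no_new_sales = select(
    from_source(customers),
    where=lambda r: r["status"] == "A" and r["last_sale_year"] < 2008,
)
for rec in no_new_sales:
    print(rec["cust"])  # prints 1
```

The payoff described in the post comes from the fact that in the real Select classes the filter predicate is evaluated inside the I/O layer (or on the xfServer side), so non-matching records never cross the boundary to the application at all.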

The fact that an update to Synergy/DE 9.3 is all that’s required is impressive as well.  There’s no need to perform any database conversions, or add additional services or products; the Select classes work right out of the box.

The Select classes represent a significant addition to the language, and I can imagine a time in the not-too-distant future when they become the primary Synergy database access mechanism.  My hat’s off to the Synergex development team; it appears that they’ve hit this one out of the park.

The message is the same; it’s just the words that have changed

By William Mooney, Posted on September 2, 2009 at 4:24 pm


I dropped my daughter off at her first day of high school this week and got caught up in a “get back to business/summer’s over” mentality–get to the office, sharpen my pencils, and focus on what’s really important.

First order of business: Blogging. It’s been weighing on my mind that I haven’t posted in a while, and as I mentioned previously, I’ve been anxious to talk about our new tagline, Advancing Applications. Partnering for Success.

Our tagline for years was “Take Part in Creating Success.” These five words were plastered on everything we sent out. The concept was to convey to our customers, employees, and vendors that we are only successful when our customers are successful. I polled some Synergex new hires for candid comments, however, and learned that they found the tagline confusing – i.e., who was taking part in whose success? We slowly backed away from the mantra, to the point that we got rid of it altogether.

This troubled me. Taglines articulate a company’s vision and empower people – employees, customers, vendors – to make decisions in line with the company’s overall objectives. (Think FedEx when they were just an overnight service – Surely “When it absolutely, positively has to be there overnight” empowered their employees to make that happen, at whatever cost, without having to ask permission first.) So, we recruited some Mad Men types (in our case Mad Women) to help us find a way to convey the original message without the confusion. We white-boarded ideas around customers, products…everything. We finally chose “Advancing Applications. Partnering for Success.”

That’s it. Our application of the Law of the Little Shovel. Our existence is based on these simple words: “Advancing Applications,” because that’s what we’ve been doing since the early days of helping customers migrate from one platform to another, right up to today’s customers who want to implement new Windows user interfaces and Web front ends, or integrate data with Oracle and SQL Server, handheld apps, and a lot more. And “Partnering for Success,” because we recognize that we are partners with our customers.

Our role extends much further than providing products and services to help customers advance applications. It means coming up with ideas and ways to help them succeed – from designing new logos for them, to sending out mailings to help them get new business, to training their support departments on new technology – simply, partnering with them however we can to ensure their continued success.

So, armed with my sharp pencils and little shovel, empowered by the vision present and clear, I am eager to help my partners advance their applications to ensure success. Please call on me to let me know how we can help you.

I’m baaack…

By Don Fillion, Posted on August 27, 2009 at 4:28 pm



A few of you may remember me from my days with various software companies, where for many years I developed vertical market software. We may have rubbed elbows at an SPC, or met more recently while I was a Synergex PSG Consultant. After a really quick year or so in that role, I have now moved into a managerial position with Synergex, so, this is now Don Fillion, Director of Professional Services, kicking off PSG’s contribution to the blogosphere, the Synergy/DE PSG Blog!

As you are probably aware, we have some pretty fine consultants in the Professional Services Group: masters at working with customers to apply the technology Synergex develops. This blog is really their forum, a place for them to expound on their thoughts concerning software application development in the land of Synergy/DE (and beyond…!)—and hopefully pass along some insight gained during their various engagements. But it’s your place too, as we hope posts will engender some lively discussion.

So, welcome! If you have ideas or suggestions about future posts, or you would just like to discuss the latest technology trends and how they impact you, please feel free to email me. I look forward to working with you!

Don Fillion

PS… As I was researching weblogs, I came across some pretty cool sites. One of the best was LIFEHACKER – tips and downloads for getting things done. It’s kind of a toolbox and discussion forum for modern (techie) life. Check it out!

Preventing performance issues related to antivirus software

By Roger Andrews, Posted on August 10, 2009 at 7:19 pm


We get quite a number of support calls about performance or system-down issues related to installed security suites, mostly involving antivirus software. In most cases the culprit ends up being incorrect setup of the antivirus software.

Let’s first consider what antivirus software has to do and how it ships by default.

In today’s cat-and-mouse game, the security software vendors are trying to keep up with all of the malware generators that pop up daily. A typical antivirus signature file contains over 80 MB of compressed signatures, and the major players like Trend Micro, McAfee, Symantec, VIPRE, and Kaspersky provide multiple signature updates daily. The problem then is deciding what to scan and when to scan—you obviously don’t want to miss an infected file that’s downloaded between updates to the scan databases, but you also don’t want to bog down your system unnecessarily. By default, most security products scan all files once daily and use real-time scanning to scan infectable files on both read and write. Some even default to continuously scanning all files. Though each vendor has different terminology for “scan on read” and “scan on write” (in fact, some confuse read as write and write as read), “scan on read” effectively means scan every time a file is opened, and “scan on write” effectively means scan only when a file opened for write is closed. Some vendors even have a flag to scan all files on close. And some products, like VIPRE, don’t have any concept of scanning on write only.

Now that we know how these products handle file access, let’s consider some scenarios on live systems.

Scenario 1 – When “scan all files” is set

In this scenario, every file may be scanned for a virus on open and close, regardless of writability. Consider scanning a .vhd file for a virtual image, or a Synergy DBMS file, every time a user opens or closes it. (Both file types are usually opened for write.) The same would apply to every file accessed in your SQL Server and Oracle databases, and to all of your Synergy .dbr and .elb files. The implications for your system performance are obvious.

Scenario 2 – Scan only infectable files

In this scenario, infectable files may be scanned on open and close. By default in most vendors’ products, this includes Synergy .ism files as well as .vhd files. This scenario also has a significant impact on your system performance, due to the overhead of scanning large files.

Scenario 3 – Scan only infectable files on Write

In this case, .exe and .dll files are scanned only when updated, but a .vhd or a Synergy .ism file would still be scanned on close, because such files are usually opened for write. This technique might be good for a general-purpose file server of Word documents, for example, but not for a data server.
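The three scenarios can be reduced to a simple decision rule. Here is a purely illustrative Python sketch (the policy names and the extension list are invented for this example, not taken from any vendor's product) showing which file events trigger a scan under each configuration:

```python
# Hypothetical model of the three scan policies described above.
# "Infectable" list shown after excluding database/VHD extensions,
# as recommended later in this post.
INFECTABLE = {".exe", ".dll", ".doc"}

def should_scan(event, ext, policy):
    """event is 'open' (read) or 'close' (after write); returns True if a scan fires."""
    if policy == "all_files":              # Scenario 1: every open and close
        return True
    if policy == "infectable":             # Scenario 2: infectable files, read and write
        return ext in INFECTABLE
    if policy == "infectable_on_write":    # Scenario 3: infectable files, write only
        return ext in INFECTABLE and event == "close"
    raise ValueError(policy)

# A Synergy .ism data file opened thousands of times a day:
print(should_scan("open", ".ism", "all_files"))            # True  -> heavy per-open cost
print(should_scan("open", ".ism", "infectable_on_write"))  # False -> no per-open cost
```

The point of the sketch: only the third policy, combined with excluding database extensions from the infectable list, avoids paying a scan on every open of a large, constantly accessed data file.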

As you can see, without some degree of tuning, virus scanning products can have disastrous effects on system performance.  (You can use the Sysinternals Process Monitor to see the overhead your virus scanning tool is causing.)

For obvious reasons, scanning of files takes place at a high priority in the kernel mode of the operating system. This usually impacts both system time and user processing time. Additionally, many vendors now use the Vista filter manager, and I previously blogged about the performance penalties of such hooking on Vista and Server 2008. Luckily, the overhead is significantly reduced in Server 2008 R2 and Windows 7.

In our recent internal use of Microsoft’s SharePoint server, we were seeing dramatic performance problems when installing and uninstalling software, and even when the IIS SharePoint services (which are .NET-based) were loading and jitting. Once we correctly disabled the “scan on open for read” options, performance improved significantly. We also tried the VIPRE product, and this improved performance even further – though for a very specific reason. VIPRE, as stated previously, scans all files on open and close, and gains its performance edge because it recognizes signed, read-only EXE/DLL files and caches them so that a re-scan is not required if they have not changed. This is what gives it a seemingly large performance gain. However, once you throw in files that are not signed, its scanning requires significantly more resources, because you can’t disable the “scan on read” functionality (scan on read is what catches products such as Diskeeper moving files around). Additionally, VIPRE also scans (but does not report issues with) otherwise excluded files, so the overhead is pretty much permanent for unversioned files like Synergy DBMS files.

The key is, after you have a clean full-file scan of a system, set scan on write only, scan only infectable files, and make sure that the file extensions of your databases and VHD files are set to no scan. And, because VIPRE cannot be limited to scanning on write only, we do not recommend it for use with Synergy/DE installations.

(Of course I’m providing this information for information purposes only, and it is up to each company to set its security policies.)

SPC Boston comes to a close

By William Mooney, Posted on May 28, 2009 at 6:55 pm


We just wrapped up the SPC in Boston and it went great. Customers seemed to really like the ChronoTrack demo app (and all that sample code to take home!) and the Code Phantom’s challenge was answered by nearly all of the attendees – in fact, many stayed late just to make sure they’d completed it. (We kept them energized with pizza and beer, of course.)  I received lots of positive comments from customers about the conference and Synergy/DE in general. One customer mentioned that he had written us off 10 years ago but is absolutely amazed at how far we have come and what can be done with Synergy/DE. Another customer at one of the lunches referenced how much he had learned about what is possible with Synergy/DE that he hadn’t known about because he’d been “heads-down” for so long with his current project. Overall, customers were really pleased about learning what they can do right now with their existing Synergy/DE-based applications to make them more powerful.

We were also joined by Bigbah and his team (Manny, Jodah, and Mark), who seemed to really enjoy the conference – at least according to their Facebook pages!

It was a really informative conference and a great time was had by all. We’re looking forward to seeing our friends “across the pond” in a couple of weeks.

Protecting the Spread of Security Infections in Places You Might Not Think About

By Roger Andrews, Posted on May 6, 2009 at 10:20 pm


Several weeks ago we had a new Ikon color printer installed. It has a separate Kodak PC running the printer drivers and color matching software. I noticed that it was Internet connected and that software updates were not being applied.

When we contacted the manufacturer, we were told the PC was an embedded XP device and did not need XP SP3 or the security patches. We immediately disabled the Internet connection (embedded XP devices are susceptible to viruses too)—but that’s not really good enough. To date the manufacturer still has not authorized XP SP3 or the regular monthly security patches, yet all printed documents go through this machine, and users can go to the console and copy documents from a USB drive or internal network locations. Once infected with a virus or worm — or even a botnet — we’re SOL, because the manufacturer of the device doesn’t support installing antivirus software, and any such changes would require an engineer to reload the system from scratch.

The problems are not just with Microsoft. Adobe has had to patch its Flash Player and Reader already this year, and another Reader patch is due. How many of us keep the Adobe Reader and Flash players up to date?

Why is this such a big issue? Well, the problem is that these embedded XP systems can get infected. One example is the Conficker worm. In most cases Conficker is benign until it is woken up by its creators. Users don’t even know they have it, may not even have Internet access (or may not know that they do), and/or may have been infected internally. The only way to detect these kinds of issues, other than with a virus scanner, is to look for the network traffic generated when the malware “phones home.” I think an article from the San Jose Mercury News illustrates the problem well. Even if you have a patch available to avoid infecting a machine, what if every patch and/or daily antivirus update required a 90-day approval process?

My recommendation is that you get with the manufacturers of all embedded XP devices that are connected to your network and get the regular updates and XP SP3, and ensure that Internet Explorer is disabled in such a way that the machine’s users cannot re-enable it. And also be sure to keep your Adobe Reader, Flash players, and similar products up-to-date.

Customers going to great lengths to attend the SPC

By William Mooney, Posted on April 23, 2009 at 9:04 pm


Not surprisingly, the upcoming SPC (Success Partner Conference) has been a prevalent topic in my recent conversations and customer visits.

Today I was speaking to a customer in the Midwest who told me the Boston SPC overlaps his company’s conference, so he’s trying to make it to the London SPC instead.  Another customer is paying out of his own pocket to get to the conference because his company has limited travel this year.  And another customer is using his own money *and* vacation time to get to the conference for the very same reason.

It really struck me how much our customers value our conference—both for the benefits it provides to their companies and to them professionally—and reassured me about all of the man-hours we have put into preparing the content for the conference.  Our Professional Services Group has been working since last June on this year’s sessions and demo application, making sure they cover all of the recent enhancements to the Synergy/DE product line.  When I hear the lengths customers go to in order to be a part of this knowledge transfer, and I see the resulting impact on their applications and their businesses, I know our investment has paid off.

I’m looking forward to a great conference – see you in Boston or London!

My initiation into the blogosphere: SPC 2009

By William Mooney, Posted on April 2, 2009 at 7:40 pm


OK, time to jump into the blog scene. It’s either that or start “tweeting”—and I’m just not there yet. I was asked to start a blog, so here goes…

The biggest hurdle I’ve faced in starting a blog is where to start. There is so much to talk about! Most of the things I expect to blog about are recurring themes from conversations I have with customers—it will be great to document and share these. Other posts will cover random topics that I feel would be of interest to you. So my first post will be a hybrid of the two, with the subject being our upcoming SPC (Synergex Success Partner Conference). Some of you may remember that the original name of the SPC was DC for “Developer Conference.” Today, still, the conference primarily targets developers, but the overall theme is, as it always has been, “Partnering with our customers to help them succeed.” (On that note, stay tuned for a future blog about our new tagline: “Advancing Applications. Partnering for Success.”) While most of our customer contact is with the actual developers of Synergy/DE-based software, Synergy/DE products also impact those in an organization who are not developers. It is for that reason that we strive to partner with several different types of players in an organization to help the company overall make the best use of our products.

To that end, we have expanded our communications this year to target those different players specifically. Our Marketing team has developed four characters: Jodah Veloper, Mark Etting, Manny Jurr, and Bigbah Smann. Each character is an exaggerated representation of his role’s interests within an organization and how they may interact with those of another role. So far, we have had some good success with this expansion of communication and are having a lot of fun with the characters. (Look for them on Facebook!)

The message is that no matter what role you play in your organization, the SPC will benefit you – by providing a firsthand look at how easily you can advance your applications with today’s Synergy/DE; by helping you hone your development skills; and/or by showing you the new features your development team should be taking advantage of.

  • Presidents, CEOs, VPs, General Managers—basically those who are responsible for your P/L (AKA Bigbah Smann): Your Synergy/DE-based application(s) are among your company’s most important assets. I recommend you attend at least the first day of the conference so you can get a firsthand view of all of the functionality that can be immediately attained to make your applications more powerful, and for ISVs, more marketable. I’m confident you’ll be very surprised. In fact, I’ll even comp the first day of the conference for any CEO/CIO/CTO/GM who accompanies a developer to the SPC.
  • Those who are responsible for the sales and marketing of your Synergy/DE based applications (AKA Mark Etting): Like the person (above) who is responsible for the bottom line, you can gain significant benefits by attending the conference. It’s a great opportunity to see what your application is capable of, and what other Synergy/DE customers have done to make their applications more marketable.
  • And of course the people responsible for the development of your applications (AKA Jodah Veloper and Manny Jurr): I recommend you attend all three days of the conference – this will enable you to take away the skills and knowledge required to quickly and easily advance your applications.

So, whatever role you play in your organization, I look forward to catching up with you at the conference, or meeting you if we have not yet had the opportunity.

OK, that first blog was relatively painless! I look forward to blogging again soon.

The Vista performance saga – final chapter

By Roger Andrews, Posted on March 13, 2009 at 8:52 pm


In January we finally determined why file I/O on Vista and Server 2008 disks is slower than on Windows 2003. In a previous blog post I stated that

“the performance problem on disks that have been hooked by applications using the new Vista/Server 2008 filter manager infrastructure can cause CPU overheads of at least 40% on all I/O operations, including cached I/O and locks, reducing throughput.”

So what applications use the new filter manager? Well, UAC on system disks uses it via the luafv.sys file system redirector, and many current antivirus applications use the filter manager on all the disks where they are set to perform real-time scanning.

In Vista the initial cost of registering any application to use the filter manager on a volume is high, and it rises even higher for every operation type hooked. The UAC file system redirector ensures that writes to Windows-protected directories like \windows\system32 and \Program Files are redirected to the user’s local path, which the user does have access to. If you use Yahoo Messenger on a Vista system, you will see it exercises this path, because it always assumes it can write to Program Files. The reason the luafv.sys file system redirector hooks every file I/O operation on the system disk is that it tries to cache these redirected operations, to avoid ever creating and writing the temporary redirected file to disk; however, this causes the performance issue on Vista unless file system redirection is turned off by disabling the service (which may cause applications like Yahoo Messenger to fail unless UAC is also turned off).
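The redirection itself is conceptually simple. This Python sketch is purely illustrative (the paths and logic are invented for the example; the real driver does this at the kernel level, per file handle): a non-elevated write aimed at a protected directory is silently re-rooted under the user's own store.

```python
# Illustrative sketch of UAC file virtualization; not actual driver logic.
PROTECTED = ("C:\\Windows\\System32", "C:\\Program Files")
VIRTUAL_STORE = "C:\\Users\\alice\\AppData\\Local\\VirtualStore"  # hypothetical user

def redirect_write(path):
    """Return the path a non-elevated write actually lands on."""
    for prefix in PROTECTED:
        if path.startswith(prefix):
            # strip the drive root and re-root under the user's VirtualStore
            return VIRTUAL_STORE + "\\" + path[3:]
    return path  # unprotected locations are written in place

print(redirect_write("C:\\Program Files\\Yahoo\\settings.ini"))
# -> C:\Users\alice\AppData\Local\VirtualStore\Program Files\Yahoo\settings.ini
```

This is why an application like Yahoo Messenger appears to "work" when writing to Program Files without elevation, and why every file operation on the system disk has to pass through the hook to check whether redirection applies.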

I had turned luafv.sys off on my Vista system; however, performance traces in Intel’s VTune performance analyzer showed that I was still getting performance degradation due to the filter manager when running our test suites. It turns out that the latest Trend Micro antivirus engine follows Microsoft’s best practices and uses the new filter manager on all disks – so the previous workaround of using a non-system disk did not work on my machine.

In my dialogue with Microsoft, they indicated that they did not expect the data drives of an internal file server to always need an antivirus scan (by this I don’t mean a file server in the Word-document sense, but rather a dedicated database server that has no Internet access), so the overheads related to the virus scanner would not apply to non-system disks – and that even if a virus scanner was installed, it would only be set to scan the system disk in real-time mode.

The good news is that Windows 7/Server 2008 R2 have significantly improved this situation. Though there is some overhead for the initial attach to the filter manager, additional attaches cause much less overhead, and the overall figure is far better than on Vista. Microsoft will continue to look at this area during the Server 2008 R2 release cycle because of the impact it has when virus scanners use the filter manager and are set to real-time scan all disks on a system.

Microsoft’s ADO.NET Entity Framework

By Roger Andrews, Posted on January 29, 2009 at 4:36 pm


Over the years, Microsoft has provided many different ways to access data–ODBC, DAO, ADO, and ADO.NET (with data sets and data readers). The next data access technology is the Entity Framework with the 3.5 SP1 version of ADO.NET. Synergex has provided access to all of these technologies through the baseline ADO.NET 2.0 with its xfODBC driver. Synergex has developed its own ADO.NET 3.5 provider with the extended capabilities needed to interoperate with the Entity Framework and the Entity designers in Visual Studio 2008 SP1.

Microsoft views the Entity Framework as the future of all of its data access technologies – and products like SQL Server, Office, and the Visual Studio designers are all either upgraded or being upgraded to require access to databases via the Entity Framework.

Here is how Microsoft describes the ADO.NET Entity Framework:

“Database development with the .NET Framework has not changed a lot since its first release. Many of us usually start by designing our database tables and their relationships and then creating classes in our application to emulate them as closely as possible in a set of Business Classes or (false) "Entity" Classes, and then working with them in our ADO.NET code. However, this process has always been an approximation and has involved a lot of groundwork.

This is where the ADO.NET Entity Framework comes in; it allows you to deal with the (true) entities represented in the database in your application code by abstracting the groundwork and maintenance code work away from you. A very crude description of the ADO.NET Entity Framework would be that it allows you to deal with database concepts in your code.”

The ADO.NET Entity Framework is designed to enable developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:

  • Applications can work in terms of a more application-centric conceptual model, including types with inheritance, complex members, and relationships.
  • Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
  • Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
  • Developers can work with a consistent application object model that can be mapped to various storage schemas, possibly implemented in different database management systems.
  • Multiple conceptual models can be mapped to a single storage schema.
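To make the mapping idea in the list above concrete, here is a minimal Python analogy (this is not Entity Framework code; the entity class, table name, and column names are invented for illustration). Application code works against a conceptual model, and a separate mapping translates it to the storage schema, so the schema can change without touching application code:

```python
# Illustrative analogy of conceptual-model-to-storage-schema mapping.

class Customer:
    """Conceptual entity used by application code."""
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

# Mapping layer: conceptual property -> storage column. If the table is
# renamed or a column changes, only this dictionary changes; application
# code that builds and uses Customer objects is untouched.
CUSTOMER_MAPPING = {"table": "CUSMAS", "customer_id": "CUS_ID", "name": "CUS_NAME"}

def to_storage_row(entity, mapping):
    """Translate a conceptual entity into a storage-schema row."""
    return {mapping[attr]: getattr(entity, attr)
            for attr in ("customer_id", "name")}

row = to_storage_row(Customer(42, "Acme"), CUSTOMER_MAPPING)
print(row)  # {'CUS_ID': 42, 'CUS_NAME': 'Acme'}
```

The Entity Framework generalizes this idea considerably (inheritance, relationships, multiple models over one schema), but the core benefit is the same: the dependency on the storage schema is isolated in the mapping, not scattered through the application.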

If you are interested in beta testing our new Entity Framework capabilities, please contact Synergy/DE Developer Support.

For more information and a tutorial on the Entity Framework, see Microsoft’s ADO.NET Entity Framework documentation.

Upcoming “experimental feature” will help you detect use of uninitialized memory

By Roger Andrews, Posted on December 10, 2008 at 7:05 pm


We are continually reviewing customer applications to assist with support/development issues, and in doing so we often come up with ideas to help customers debug problems they may encounter. We use a product from Compuware called DevPartner Studio to help us track down “C” variable access problems in the Synergy components that sometimes cause instability in the runtime. I like to run customer applications with a special runtime that is built with DevPartner, which allows us to check boundary conditions while running “real” customer applications. DevPartner enables us to check for use of memory that has already been freed (dangling pointers) and for access to memory before we have written to it (a common cause of symptoms that move around depending on memory layout and time of day).

One recent application we saw was accessing uninitialized memory before writing to it. As we tracked this down, we realized the customer was using stack records and %MEM_PROC memory that had never been written to. In certain cases this would cause random results, and in this particular case it was causing the customer’s application to fail when run under the DevPartner tool, because the memory now held a consistent but unexpected value.

We decided, as a test, to add some support in Synergy/DE to see if the Synergy runtime could also detect this use of uninitialized memory with minimal overhead when running in debug mode. It turns out that we can do similar checking for assignment statements and “if” tests, and we can differentiate between stack memory and %MEM_PROC memory. Using this functionality also enables a developer to break in the debugger after the statement that uses this random memory.
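The general technique behind this kind of detection is the classic debug-fill approach: fill newly allocated (or stack) memory with a recognizable sentinel and flag any read that happens before a write. The Python sketch below is hypothetical, intended only to show the idea; it is not how the Synergy runtime is actually implemented.

```python
# Illustrative sketch of sentinel-based uninitialized-memory detection.

_UNINIT = object()   # unique sentinel standing in for a debug fill pattern

class DebugMemory:
    def __init__(self, size):
        self.cells = [_UNINIT] * size   # freshly allocated, never written

    def write(self, index, value):
        self.cells[index] = value

    def read(self, index):
        if self.cells[index] is _UNINIT:
            # a debugger could break here, right after the offending statement
            raise RuntimeError(f"read of uninitialized memory at offset {index}")
        return self.cells[index]

mem = DebugMemory(4)
mem.write(0, 123)
print(mem.read(0))      # 123
# mem.read(1) would raise: read of uninitialized memory at offset 1
```

Debug C runtimes do something similar with byte fill patterns (so an uninitialized value is consistent and recognizable rather than random), which is also why the customer's bug only surfaced reliably under the DevPartner tool.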

We are considering adding this new debugging functionality to a future release of Synergy/DE. However, so that we can get this useful tool into your hands sooner, we are planning to include it as an “experimental feature” in an upcoming patch.

“Experimental features” are features that are under evaluation. They are for early adopters to use and provide us with feedback on. They will be supported, but they may be modified or even removed in subsequent releases.

So look for this new experimental debugging feature in an upcoming patch and consider trying it out. Like the recent feature we added to detect mismatched global data-section sizes (which can cause runtime crashes), this feature to detect uninitialized memory continues our aim to add debug-time detection of coding errors to assist you in producing more reliable applications.

Get your ducks in a row for the Next Generation

By William Mooney, Posted on November 11, 2008 at 10:14 pm


At the risk of dating myself, I recall several years ago seeing some of our customers (who were there when I started at Synergex) entering retirement and sailing off into the sunset. I got a bit nostalgic, as many of these people really took me under their wings and showed me the ropes when I first entered this field. At the same time, I was excited to begin new relationships with their successors — the Next Generation — who would be working with me as they carried on the legacy of their predecessors. What I didn’t consider then, but have since witnessed time and time again, is how important it is to prepare one’s business applications for the Next Generation. Passing the torch involves more than handing down a title and a business plan. It means getting all your ducks neatly in a row so the next person is sure to make the RIGHT decisions to best support and sustain the business.

The decision makers at our customer sites come in all shapes and sizes: some are executives, some are application users, and some are developers. For most of our existence, we have focused on the developer. After all, it is the developers we are most in contact with, and most, if not all, of our customers’ original owners and executives were developers. And we’ve been very successful at addressing their needs and providing them an exceptional array of development tools to get the job done. Our integrated Workbench, OO language, Java/.NET Integration, and SQL access to SQL Server, Oracle, and MySQL are meeting and exceeding their requirements. And many of the developer decision makers, including those in the Next Generation, have done some amazing things to advance their Synergy/DE-based applications to meet current look-and-feel demands while maximizing their very rich and proven business logic.

It is another group of Next Generation decision makers that can wreak major havoc on a business. It is the new executives who decide to replace everything because the existing application isn’t pretty enough. They expect a Windows or GUI-based system, and they are willing to pay for it. It doesn’t matter that the existing application is the most robust and appropriate solution to run their company, and that their employees are highly productive because they know the application inside and out. Nope, if it’s not <insert whatever they had at the last place or whatever they think is the latest thing>, it must go. Because some of these Next Generation decision makers don’t know about the history of the application, the years of customizations, the value of the rich and proven business logic, they decide to throw it out and start with some name-brand, high-end system that costs lots of money, requires new resources (and often makes the current ones obsolete), takes forever to implement and customize (and never achieves the functionality of the original application anyway), and in the end demolishes their business processes — just because the application looks good. We’ve seen companies fold after spending millions down this road. Don’t get me wrong, I like a good-looking application too. But functionality is king, followed by look and feel, not the other way around. Fortunately, with Synergy/DE you can have both.

So what do you do to avoid this fate? Simple. Get your application ready for the Next Generation. It’s much easier to add a new front end to proven business logic than the other way around. You wouldn’t consider tearing down your house if you didn’t like its curb appeal, would you? Get your application current, make it look modern, give it all the look and feel that new Next Generation executives might demand, before they have the chance to come in, take one look, and throw it (and all of your intellectual capital) out because the application doesn’t look like they think it should. Don’t get caught off-guard with an outdated application: Advance it to meet the needs of the Next Generation!

Live from Microsoft PDC: A sneak peek at Windows 7, plus our 64-bit ActiveX list support

By Roger Andrews, Posted on October 29, 2008 at 9:58 pm


This comes to you from the Microsoft PDC in Los Angeles, where I am among over 10,000 attendees. The PDC is Microsoft’s futures conference where they preview some of the technology coming out over the next couple of years.

Microsoft has demonstrated real UI improvements in Windows 7, improvements that made almost every attendee cheer. For example, Windows 7 improves UAC so that you are no longer limited to a simple "On" or "Off" choice. And the new iPhone-like touch support is certainly cool; it looks like within five years almost every laptop and LCD monitor will include touch support. The great thing about touch is that no UI Toolkit changes are required in your Synergy/DE Windows applications, because touch translates to normal mouse movements and clicks.

Microsoft has also set a goal to make Windows 7 run faster, boot faster, and require less memory than Vista, targeting the new ultra-mobile 10" laptops that have flash drives and 1GB of memory. This goes hand in hand with new features in the .NET Framework that reduce memory requirements and provide improved interoperability with lower overhead. At Synergex we will be testing Synergy/DE with Windows 7 in the near future, to ensure everything works as well on Windows 7 as it currently does on Vista and Server 2008. Windows 7 also shares the same set of core files as Server 2008 R2, so any performance improvements in Windows 7 will also benefit the server platform.

I also want to let you know that we have recently completed our 64-bit ActiveX list implementation, and it will be released in our upcoming 9.1.5a version. This means that 64-bit UI Toolkit applications are now possible on 64-bit native operating systems with the same features as their 32-bit counterparts (provided that any third-party controls you use are also available in 64-bit versions). This enables you to take full advantage of the extra memory and scalability available with Server 2008 x64 Edition. (Microsoft has already announced that Server 2008 R2 will be 64-bit only, making Server 2008 the last 32-bit server O/S.)

The Vista Performance Saga Continues

By Roger Andrews, Posted on August 8, 2008 at 5:39 pm



I thought it about time I posted an update regarding my Vista post on the 16th of April. In that post I recommended holding off on Server 2008 deployments until more data was available.

So let’s state the real problem.

“All file operations (read, write, file-position, etc.) are 40% slower on a Vista and Server 2008 system disk than they are on XP or Server 2003 system disks.”

These operations are slowed down even when they are serviced from the O/S cache subsystem. The cause of the 40% overhead is the registration of a driver with the file system filter framework newly introduced in Vista, even if the driver itself performs no work and just returns. Registration can be for a particular device, not just a disk drive. In one case, the UAC file system virtualization driver, UAFV.SYS, registers itself with the filter manager framework to provide the protected-file virtualization feature new in Vista. As a result of the filter manager subsystem's overhead, all read/write/seek operations to the C: drive become slower, regardless of whether file virtualization is actually involved. Turning off this UAFV.SYS driver restores system disk performance.

How can you see this in action? Use the Sysinternals procmon utility to watch all the I/O operations occurring on your C: drive; every one of those operations is slowed down on a Vista or Server 2008 system disk. This accounts for some of the CPU bottleneck when your laptop starts, for slower virus scans on Vista system disks, and so on.

As nearly all laptops, most small business servers, and the majority of current desktops have a single system disk, this problem impacts all current Vista and Small Business Server 2008 users to some degree. The problem is exacerbated when other utility and anti-virus software takes advantage of the new Vista filter manager framework, at which point performance on non-system disks is impacted as well.

One workaround, of course, is to read and write sequential data in much larger blocks. We changed Synergy/DE to use 4K buffers for sequential output in our recent 9.1.5 release; however, because sequential input also permits random reads, we can't apply the same technique on input without slowing down performance. Random ISAM reads can't use larger blocks without damaging performance at the disk level, so they still incur the CPU overhead. Most of the I/O patterns I see with procmon also don't meet the bar for larger I/Os, so the real issue is to get the problem fixed in the O/S.
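The payoff from batching sequential output into larger buffers is simply fewer trips through the (now slower) I/O path. Here is a minimal Python sketch of the idea (not Synergy DBL; the CountingSink and write_records names are illustrative): it writes 1,000 small records either one write call per record, or accumulated into roughly 4K blocks, and counts the underlying write calls.

```python
import io

class CountingSink:
    """In-memory sink that counts how many write() calls it receives.
    Each write() here stands in for one trip through the OS I/O stack."""
    def __init__(self):
        self.buf = io.BytesIO()
        self.writes = 0
    def write(self, data):
        self.writes += 1
        self.buf.write(data)

def write_records(records, sink, block_size=None):
    """Write records one-by-one, or batched into block_size-byte buffers."""
    if block_size is None:
        for rec in records:
            sink.write(rec)               # one call per record
        return
    buf = bytearray()
    for rec in records:
        buf += rec
        if len(buf) >= block_size:
            sink.write(bytes(buf))        # flush a full block
            buf.clear()
    if buf:
        sink.write(bytes(buf))            # flush the remainder

records = [b"x" * 80] * 1000              # 1,000 80-byte "records"

direct = CountingSink()
write_records(records, direct)            # 1000 write calls
batched = CountingSink()
write_records(records, batched, 4096)     # ~20 write calls for the same data
```

The same bytes reach the sink either way; only the number of calls through the I/O layer changes, which is exactly where the per-operation filter-driver overhead is paid.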

If you disable UAC (which we don't recommend) and you have never virtualized a file (for example, if you do this at system installation), you can use the registry editor to make the uafv.sys service visible and then disable it. Doing so also means you can't re-enable UAC until the service is re-enabled. Alternatively, you can place all your data files (including those referenced by your temp and DTKTMP logicals) on a non-system drive, which avoids most of the impact of this problem.

We are currently working with Microsoft to get this fixed in the next Service Pack, and we will keep you informed of our results.

As a side note, we also noticed that any scheduled task runs slower on Vista and Server 2008. Typically customers use these to generate reports and run day-end processing overnight. These tasks now run at a low priority class. You would expect an idle system to run them at almost the same speed regardless of priority class (after all, the idea is that low-priority items use available resources when nothing higher priority is running), but it appears the programs no longer use available resources as they did on prior versions. Microsoft considers this by design, which is hard to believe. We have introduced a new API in 9.1.5 that lets you re-set the priority class of your scheduled tasks so they retain the performance characteristics of prior operating system versions.
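For readers unfamiliar with priority classes, the general mechanism can be sketched with POSIX "niceness" in Python (the Synergy 9.1.5 API itself is not shown here, and on Windows the analogous call is the Win32 SetPriorityClass function). A higher niceness means a lower scheduling priority; a process may lower its own priority freely, which is essentially what Vista's Task Scheduler does to scheduled jobs.

```python
import os

# POSIX niceness: 0 is normal, 19 is the lowest priority. An unprivileged
# process can raise its own niceness (lowering its priority), but lowering
# it back down again normally requires elevated rights -- which is why a
# task demoted by the scheduler needs an explicit API to restore itself.
before = os.nice(0)    # an increment of 0 just reports the current value
after = os.nice(5)     # demote ourselves: run at lower priority
```

The asymmetry in the comment is the crux of the Vista issue: once a scheduled task starts in a low priority class, it can't simply promote itself back to normal without help, hence the new API.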

Red Alert! DNS Flaw Revealed

By Roger Andrews, Posted on July 31, 2008 at 4:24 pm


Due to the recent online disclosure of technical details to exploit a widespread DNS vulnerability, security researchers are warning users to patch vulnerable systems immediately.

All Linux- and Windows-based DNS servers require a patch, and most routers urgently need one as well.


The domain name system (DNS) translates domain names into numeric IP addresses and vice versa. The DNS flaw, if exploited, allows what is known as DNS cache poisoning: remapping domain names to different, potentially malicious servers.

US-CERT on Monday warned: "Technical details regarding this vulnerability have been posted to public Web sites. Attackers could use these details to construct exploit code. Users are encouraged to patch vulnerable systems immediately."

"This is a very serious situation, and can possibly lead to widespread and targeted attacks which hijack sensitive information by redirecting legitimate traffic to fraudulent Web sites, due to incorrect (fraudulent) information being injected into the vulnerable caching nameserver(s)," Trend Micro security researcher Paul Ferguson said in a blog post.

Read the full article:

For additional information about this type of attack and for details about how to resolve it, visit
