
Synergex Blog

Preventing performance issues related to antivirus software

By Roger Andrews, Posted on August 10, 2009 at 7:19 pm

We get quite a number of support calls about performance or system-down issues related to installed security suites, mostly involving antivirus software. In most cases the culprit turns out to be incorrect setup of the antivirus software.

Let’s first consider what antivirus software has to do and how it ships by default.

In today’s cat-and-mouse game, the security software vendors are trying to keep up with all of the malware generators that pop up daily. A typical antivirus signature file contains over 80 MB of compressed signatures, and the major players like Trend Micro, McAfee, Symantec, VIPRE, and Kaspersky provide multiple signature updates daily. The problem then is deciding what to scan and when to scan—you obviously don’t want to miss an infected file that’s downloaded between updates to the scan databases, but you also don’t want to bog down your system unnecessarily. By default, most security products scan all files once daily and use real-time scanning to scan infectable files on both read and write. Some even default to continuously scanning all files. Though each vendor has different terminology for “scan on read” and “scan on write” (in fact, some confuse read as write and write as read), “scan on read” effectively means scan every time a file is opened, and “scan on write” effectively means scan only when a file opened for write is closed. Some vendors even have a flag to scan all files on close. And some products, like VIPRE, don’t have any concept of scanning on write only.
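The distinction between the two policies can be sketched in a few lines of Python. This is purely a toy model (no vendor implements it this way, and `scan()` is a stand-in for a real signature engine), but it makes the cost difference concrete: under scan-on-read, every open pays; under scan-on-write only, just files that changed do.

```python
SCAN_LOG = []

def scan(path):
    """Stand-in for a real signature engine; just records what was scanned."""
    SCAN_LOG.append(path)

class ScannedFile:
    """Wraps open() to mimic the two policies described above."""
    def __init__(self, path, mode="r", scan_on_read=False, scan_on_write=True):
        if scan_on_read:
            scan(path)  # "scan on read": every open pays the scan cost
        self._scan_on_close = scan_on_write and ("w" in mode or "a" in mode)
        self._path = path
        self._file = open(path, mode)

    def __enter__(self):
        return self._file

    def __exit__(self, *exc):
        self._file.close()
        if self._scan_on_close:
            scan(self._path)  # "scan on write": only files that changed
        return False

import os, tempfile
fd, path = tempfile.mkstemp()
os.close(fd)
with ScannedFile(path, "w") as f:   # scan-on-write policy: one scan, on close
    f.write("data")
with ScannedFile(path) as f:        # read-only open: no scan under this policy
    f.read()
os.remove(path)
print(len(SCAN_LOG))  # → 1
```

A product that scans on read would have logged two scans here; one that scans on write only logs just the one.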

Now that we know how these products handle file access, let’s consider some scenarios on live systems.

Scenario 1 – When “scan all files” is set

In this scenario, every file may be scanned for a virus on open and close, regardless of writeability. Consider scanning a .vhd file for a virtual image, or a Synergy DBMS file, every time a user opens or closes it. (Both file types are usually opened for write.) The same would apply to every file accessed in your SQL Server and Oracle databases, and to all of your Synergy .dbr and .elb files. The implications for your system performance are obvious.

Scenario 2 – Scan only infectable files

In this scenario, infectable files may be scanned on open and close. By default in most vendors’ products, this includes Synergy .ism files as well as .vhd files. This scenario also has a significant impact on system performance due to the overhead of scanning large files.

Scenario 3 – Scan only infectable files on Write

In this case, .exe and .dll files are scanned only when updated, but a .vhd or a Synergy .ism file would also be scanned on close because such files are usually opened for write. This technique might be fine for a general-purpose file server of Word documents, for example, but not for a data server.

As you can see, without some degree of tuning, virus scanning products can have disastrous effects on system performance.  (You can use the Sysinternals Process Monitor to see the overhead your virus scanning tool is causing.)
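Process Monitor shows you which operations are being intercepted; to put a rough number on the cost, you can also time file opens yourself. The sketch below (illustrative only; the file path is generated, not anything from this post) averages the cost of an open/read/close cycle. Run it with real-time scanning on and off, or against an excluded vs. non-excluded extension, and the difference is the per-open scanning overhead.

```python
import os
import tempfile
import time

def time_open_cycles(path, cycles=100):
    """Average the cost of one open/read/close cycle on a file.

    Where the antivirus product scans on every open, each cycle pays
    the scan cost; with scan-on-write only, it does not.
    """
    start = time.perf_counter()
    for _ in range(cycles):
        with open(path, "rb") as f:
            f.read(4096)
    return (time.perf_counter() - start) / cycles

# Throwaway 1 MB test file.
fd, path = tempfile.mkstemp(suffix=".dat")
os.write(fd, os.urandom(1024 * 1024))
os.close(fd)

avg = time_open_cycles(path)
print(f"average open/read/close: {avg * 1e6:.1f} microseconds")
os.remove(path)
```

Comparing the same measurement across extensions (.dat vs. .ism, for example) also confirms whether your exclusion list is actually being honored.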

For obvious reasons, scanning of files takes place at a high priority in the kernel mode of the operating system. This usually impacts both system time and user processing time. Additionally, many vendors now use the Vista filter manager, and I previously blogged about the performance penalties of such hooking on Vista and Server 2008. Luckily the overhead is significantly reduced in Server 2008 R2 and Windows 7.

In our recent internal use of Microsoft’s SharePoint server, we were seeing dramatic performance problems when installing and uninstalling software, and even when the IIS SharePoint services (which are .NET-based) were loading and jitting. By correctly disabling the “scan on open for read” options, performance improved significantly. We also tried the VIPRE product, and this improved performance even further – however, for a very specific reason. VIPRE, as stated previously, scans all files on open and close, and gains its performance edge because it recognizes signed, read-only EXE/DLL files and caches them if they have not changed, so a re-scan is not required. This is what gives it a seemingly large performance gain. However, once you throw in files that are not signed, its scan requires significantly more resources, because you can’t disable the “scan on read” functionality (which it needs in order to catch files moved around by products such as Diskeeper). Additionally, VIPRE also scans (but does not report issues with) otherwise excluded files, so the overhead is pretty much permanent for unversioned files like Synergy DBMS files.

The key is, after you have a clean full-file scan on a system, set scan on write only, scan infectable files only, and make sure that the file extensions of your databases and VHD files are set to no scan. And, because VIPRE cannot be limited to scanning on write only, we do not recommend it for use with Synergy/DE installations.

(Of course I’m providing this information for information purposes only, and it is up to each company to set its security policies.)

SPC Boston comes to a close

By William Mooney, Posted on May 28, 2009 at 6:55 pm

We just wrapped up the SPC in Boston and it went great. Customers seemed to really like the ChronoTrack demo app (and all that sample code to take home!) and the Code Phantom’s challenge was answered by nearly all of the attendees – in fact, many stayed late just to make sure they’d completed it. (We kept them energized with pizza and beer, of course.)  I received lots of positive comments from customers about the conference and Synergy/DE in general. One customer mentioned that he had written us off 10 years ago but is absolutely amazed at how far we have come and what can be done with Synergy/DE. Another customer at one of the lunches referenced how much he had learned about what is possible with Synergy/DE that he hadn’t known about because he’d been “heads-down” for so long with his current project. Overall, customers were really pleased about learning what they can do right now with their existing Synergy/DE-based applications to make them more powerful.

We were also joined by Bigbah and his team (Manny, Jodah, and Mark), who seemed to really enjoy the conference – at least according to their Facebook pages!

It was a really informative conference and a great time was had by all. We’re looking forward to seeing our friends “across the pond” in a couple of weeks.

Protecting the Spread of Security Infections in Places You Might Not Think About

By Roger Andrews, Posted on May 6, 2009 at 10:20 pm

Several weeks ago we had a new Ikon color printer installed. It has a separate Kodak PC running the printer drivers and color matching software. I noticed that it was Internet connected and that software updates were not being applied.

When we contacted the manufacturer, we were told the PC was an embedded XP device and did not need XP SP3 or the security patches. We immediately disabled the Internet connection (embedded XP devices are susceptible to viruses too)—but that’s not really good enough. To date the manufacturer still has not authorized XP SP3 or the regular monthly security patches, yet all printed documents go through this machine and users can go to the console and copy documents from a USB drive or internal network locations. Once infected with a virus or worm — or even a botnet — we’re SOL, because the manufacturer of the device doesn’t support installing anti-virus software, and any such changes would require an engineer to reload the system from scratch.

The problems are not just with Microsoft. Adobe has had to patch its Flash Player and Reader already this year, and another Reader patch is due. How many of us keep the Adobe Reader and Flash players up to date?

Why is this such a big issue? Well, the problem is that these embedded XP systems can get infected. One example is the Conficker worm. In most cases Conficker is benign until it is woken up by its creators. Users don’t even know they have it, may not even have Internet access (or may not know that they do), and/or may have been infected internally. The only way to detect these kinds of issues other than with a virus scanner is to look at network traffic going back to “phone home.” I think an article from the San Jose Mercury News illustrates the problem well. Even if you have a patch available to avoid infecting a machine, what if every patch and/or daily antivirus update required a 90-day approval process?

My recommendation is that you get with the manufacturers of all embedded XP devices that are connected to your network and get the regular updates and XP SP3, and ensure that Internet Explorer is disabled in such a way that the machine’s users cannot re-enable it. And also be sure to keep your Adobe Reader, Flash players, and similar products up-to-date.

Customers going to great lengths to attend the SPC

By William Mooney, Posted on April 23, 2009 at 9:04 pm

Not surprisingly, the upcoming SPC (Success Partner Conference) has been prevalent in my recent conversations and customer visits.

Today I was speaking to a customer in the Midwest who told me the Boston SPC overlaps his company’s conference, so he’s trying to make it to the London SPC instead. Another customer is paying out of his own pocket to get to the conference because his company has limited travel this year. And another customer is using his own money *and* vacation time to get to the conference for the very same reason.

It really struck me how much our customers value our conference—both for the benefits it provides to their companies, and to them professionally— and reassured me about all of the man-hours we have put into preparing all of the content for the conference.  Our Professional Services Group has been working since last June on this year’s sessions and demo application, making sure they cover all of the recent enhancements to the Synergy/DE product line.  When I hear the lengths customers go to be a part of this knowledge transfer, and I see the resulting impact on their applications and their businesses, I know our investment has paid off.

I’m looking forward to a great conference – see you in Boston or London!

My initiation into the blogosphere: SPC 2009

By William Mooney, Posted on April 2, 2009 at 7:40 pm

OK, time to jump into the blog scene. It’s either that or start “tweeting”—and I’m just not there yet. I was asked to start a blog, so here goes…

The biggest hurdle I’ve faced re. starting a blog is Where To Start. There is so much to talk about! Most of the things I expect to blog about are recurring themes from conversations I have with customers—it will be great to document and share these. Other blogs will cover random topics that I feel would be of interest to you. So my first blog will be a hybrid of the two, with the subject being our upcoming SPC (Synergex Success Partner Conference). Some of you may remember that the original name of the SPC was DC for “Developer Conference.” Today, still, the conference primarily targets developers, but the overall theme is, as it always has been, “Partnering with our customers to help them succeed.” (On that note, stay tuned for a future blog about our new tagline: “Advancing Applications. Partnering for Success.”) While most of our customer contact is with the actual developers of Synergy/DE-based software, Synergy/DE products also impact those in an organization who are not developers. It is for that reason that we strive to partner with several different types of players in an organization to help the company overall make the best use of our products.

To that end, we have expanded our communications this year to target those different players specifically. Our Marketing team has developed four characters: Jodah Veloper, Mark Etting, Manny Jurr, and Bigbah Smann. Each character is an exaggerated representation of his role’s interests within an organization and how they may interact with those of other roles. So far, we have had good success with this expansion of communication and are having a lot of fun with the characters. (Look for them on Facebook!)

The message is that no matter what role you play in your organization, the SPC will benefit you – by providing a firsthand look at how easily you can advance your applications with today’s Synergy/DE; by helping you hone your development skills; and/or by showing you the new features your development team should be taking advantage of.

  • Presidents, CEOs, VPs, General Managers—basically those who are responsible for your P/L (AKA Bigbah Smann): Your Synergy/DE-based application(s) are among your company’s most important assets. I recommend you attend at least the first day of the conference so you can get a firsthand view of all of the functionality that can be immediately attained to make your applications more powerful, and for ISVs, more marketable. I’m confident you’ll be very surprised. In fact, I’ll even comp the first day of the conference for any CEO/CIO/CTO/GM who accompanies a developer to the SPC.
  • Those who are responsible for the sales and marketing of your Synergy/DE based applications (AKA Mark Etting): Like the person (above) who is responsible for the bottom line, you can gain significant benefits by attending the conference. It’s a great opportunity to see what your application is capable of, and what other Synergy/DE customers have done to make their applications more marketable.
  • And of course the people responsible for the development of your applications (AKA Jodah Veloper and Manny Jurr): I recommend you attend all three days of the conference – this will enable you to take away the skills and knowledge required to quickly and easily advance your applications.

So, whatever role you play in your organization, I look forward to catching up with you at the conference, or meeting you if we have not yet had the opportunity.

OK, that first blog was relatively painless! I look forward to blogging again soon.

The Vista performance saga – final chapter

By Roger Andrews, Posted on March 13, 2009 at 8:52 pm

In January we finally determined why file I/O on Vista and Server 2008 disks is slower than on Windows 2003. In a previous blog post I stated that

“The performance problem on disks that have been hooked by applications that use the new Vista/Server2008 filter manager infrastructure – can cause CPU overheads of at least 40% on all I/O operations including cached I/O and locks reducing throughput.”

So what applications use the new filter manager? Well, UAC on system disks uses the filter manager through the UAFV.sys file system re-director, and many current antivirus applications use the filter manager on all the disks where they are set to perform real-time scanning.

In Vista, the initial hit to register any application with the filter manager on a volume is high, and it rises even higher for every operation type hooked. The UAC file system re-director ensures that writes to Windows-protected directories like \windows\system32 and \program files are re-directed to the user’s local path, which the user does have access to. If you use Yahoo Messenger on a Vista system, you will see it has this problem because it always assumes it can write to program files. The reason the UAFV.sys file system re-director hooks every file I/O operation on the system disk is that it tries to cache the re-directed operations to avoid ever creating and writing the temporary re-directed file to disk; however, this causes the performance issue on Vista unless file system redirection is turned off by disabling the service (which may cause applications like Yahoo Messenger to fail unless UAC is also turned off).

I had turned UAFV.sys off on my Vista system—however, performance traces in Intel’s VTune performance analyzer showed that I was still getting performance degradation due to the filter manager when running our test suites. It turns out that the latest Trend Micro antivirus engine follows Microsoft’s best practices and uses the new filter manager on all disks—so the previous work-around of using a non-system disk did not work on my machine.

In my dialogue with Microsoft, they indicated that they did not expect the data drives of an internal file server to always need to have an antivirus scan (by this I don’t mean a file server in the Word document sense, rather a dedicated database server that has no internet access), so the overheads related to the virus scanner would not apply to non system disks – and even if a virus scanner was installed that it would only be set to scan the system disk in real-time mode.

The good news is that Windows 7/Server 2008 R2 have significantly improved this situation. Though there is some overhead for the initial attach to the filter manager, additional attaches cause much less overhead, and the overall figure is far better than Vista’s. Microsoft will continue to look at this area during the release cycle of Server 2008 R2 because of the impact it has when virus scanners use the filter manager and are set to real-time scan all disks on a system.

Microsoft’s ADO.NET Entity Framework

By Roger Andrews, Posted on January 29, 2009 at 4:36 pm

Over the years, Microsoft has provided many different ways to access data–ODBC, DAO, ADO, and ADO.NET (with data sets and data readers). The next data access technology is the Entity Framework with the 3.5 SP1 version of ADO.NET. Synergex has provided access to all of these technologies through the baseline ADO.NET 2.0 with its xfODBC driver. Synergex has developed its own ADO.NET 3.5 provider with the extended capabilities needed to interoperate with the Entity Framework and the Entity designers in Visual Studio 2008 SP1.

Microsoft views the Entity Framework as the future of all of its data access technologies – and products like SQL Server, Office, and the Visual Studio designers are all either upgraded or being upgraded to require access to databases via the Entity Framework.

Here is how Microsoft describes the ADO.NET Entity Framework:

“Database development with the .NET framework has not changed a lot since its first release. Many of us usually start by designing our database tables and their relationships and then creating classes in our application to emulate them as closely as possible in a set of Business Classes or (false) "Entity" Classes, and then working with them in our ADO.NET code. However, this process has always been an approximation and has involved a lot of groundwork.

This is where the ADO.NET Entity Framework comes in; it allows you to deal with the (true) entities represented in the database in your application code by abstracting the groundwork and maintenance code work away from you. A very crude description of the ADO.NET Entity Framework would be that it allows you to deal with database concepts in your code.”

The ADO.NET Entity Framework is designed to enable developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:

  • Applications can work in terms of a more application-centric conceptual model, including types with inheritance, complex members, and relationships.
  • Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
  • Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
  • Developers can work with a consistent application object model that can be mapped to various storage schemas, possibly implemented in different database management systems.
  • Multiple conceptual models can be mapped to a single storage schema.
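The Entity Framework itself is a .NET technology, but the core idea—application code works against conceptual entities while schema knowledge is confined to a mapping layer—can be sketched in any language. Below is a deliberately tiny hand-rolled illustration in Python over SQLite (the `Customer` entity and `CustomerMapper` names are hypothetical, invented for this sketch, and this is a toy mapper, not how EF is implemented). Change the table layout and only the mapper changes, not the calling code.

```python
import sqlite3

class Customer:
    """Conceptual entity: plain application object, no SQL knowledge."""
    def __init__(self, name, customer_id=None):
        self.id = customer_id
        self.name = name

class CustomerMapper:
    """All knowledge of the relational storage schema lives here."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS customers "
                     "(id INTEGER PRIMARY KEY, name TEXT)")

    def insert(self, customer):
        cur = self.conn.execute(
            "INSERT INTO customers (name) VALUES (?)", (customer.name,))
        customer.id = cur.lastrowid
        return customer

    def find(self, customer_id):
        row = self.conn.execute(
            "SELECT id, name FROM customers WHERE id = ?",
            (customer_id,)).fetchone()
        return Customer(row[1], row[0]) if row else None

conn = sqlite3.connect(":memory:")
mapper = CustomerMapper(conn)
alice = mapper.insert(Customer("Alice"))
print(mapper.find(alice.id).name)  # prints "Alice"
```

The Entity Framework generates and maintains this mapping layer for you (with inheritance, relationships, and multiple storage mappings), which is exactly the groundwork the Microsoft description above says is abstracted away.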

If you are interested in beta testing our new Entity Framework capabilities, please contact Synergy/DE Developer Support.

For more information and a tutorial of the Entity Framework, see Microsoft’s MSDN documentation.

Upcoming “experimental feature” will help you detect use of uninitialized memory

By Roger Andrews, Posted on December 10, 2008 at 7:05 pm

We are continually reviewing customer applications to assist with support/development issues, and in doing so often come up with ideas to help customers facilitate debugging problems they may encounter. We use a product from Compuware called DevPartner Studio to help us track down “C” variable access problems in the Synergy components that sometimes cause instability in the runtime. I like to run customer applications with a special runtime that is built with DevPartner, which allows us to check boundary conditions while running “real” customer applications. DevPartner enables us to check use of memory already freed (called dangling pointers) and access to memory before we have written to it (a common cause of symptoms that move around depending on memory and time of day).

One recent application we saw was accessing uninitialized memory before writing to it. As we tracked this down, we realized the customer was using stack records and %MEM_PROC memory that had never been written to. In certain cases this would cause random results, and in this particular case, it was causing the customer’s application to fail when run under the DevPartner tool because the memory was now a consistent but unexpected value.

We decided as a test to add some support in Synergy/DE to see if the Synergy runtime could also detect this use of uninitialized memory with a minimal overhead when running in debug. It turns out that we can do similar checking for assignment statements and “if” tests, and we can differentiate between stack memory and MEM_PROC memory. Using this functionality also enables a developer to break in the debugger after the statement that uses this random memory.
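The idea behind the check can be illustrated with a toy memory model (illustrative only; the actual Synergy runtime feature works on stack and MEM_PROC memory at the statement level): track which slots have been written, and flag any read that happens before the first write instead of silently returning garbage.

```python
class CheckedMemory:
    """Toy model of uninitialized-memory detection."""
    def __init__(self, size):
        self._data = [0] * size          # backing storage
        self._written = [False] * size   # shadow map: has each slot been set?

    def write(self, index, value):
        self._data[index] = value
        self._written[index] = True

    def read(self, index):
        if not self._written[index]:
            # Instead of handing back random contents, stop at the
            # offending statement -- this is where a debugger would break.
            raise RuntimeError(f"read of uninitialized slot {index}")
        return self._data[index]

mem = CheckedMemory(8)
mem.write(0, 42)
print(mem.read(0))        # prints 42
try:
    mem.read(1)           # never written: detected, not silently random
except RuntimeError as e:
    print(e)              # prints "read of uninitialized slot 1"
```

The shadow map is why the overhead can be kept small: it is one extra flag test per read, paid only when running in debug.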

We are considering adding this new debugging functionality to a future release of Synergy/DE. However, so that we can get this useful tool into your hands sooner, we are planning to include it as an “experimental feature” in an upcoming patch.

“Experimental features” are features that are under evaluation. They are for early adopters to use and provide us with feedback on. They will be supported, but they may be modified or even removed in subsequent releases.

So look for this new experimental debugging feature in an upcoming patch and consider trying it out. Like the recent feature we added to detect mismatched global data-section sizes (which can cause runtime crashes), this feature to detect uninitialized memory continues our aim to add debug-time detection of coding errors to assist you in producing more reliable applications.

Get your ducks in a row for the Next Generation

By William Mooney, Posted on November 11, 2008 at 10:14 pm

At the risk of dating myself, I recall several years ago seeing some of our customers (who were there when I started at Synergex) entering retirement and sailing off into the sunset. I got a bit nostalgic, as many of these people really took me under their wings and showed me the ropes when I first entered this field. At the same time, I was excited to begin new relationships with their successors — the Next Generation — who would be working with me as they carried on the legacy of their predecessors. What I didn’t consider then, but have since witnessed time and time again, is how important it is to prepare one’s business applications for the Next Generation. Passing the torch involves more than handing down a title and a business plan. It means getting all your ducks neatly in a row so the next person is sure to make the RIGHT decisions to best support and sustain the business.

The decision makers at our customer sites come in all shapes and sizes: some are executives, some are application users, and some are developers. For most of our existence, we have focused on the developer. After all, it is the developers we are most in contact with, and most, if not all, of our customers’ original owners and executives were developers. And we’ve been very successful at addressing their needs and providing them an exceptional array of development tools to get the job done. Our integrated Workbench, OO language, Java/.NET Integration, and SQL access to SQL Server, Oracle, and MySQL are meeting and exceeding their requirements. And many of the developer decision makers, including those in the Next Generation, have done some amazing things to advance their Synergy/DE-based applications to meet current look-and-feel demands while maximizing their very rich and proven business logic.

It is another group of Next Generation decision makers that can wreak major havoc on a business. It is the new executives who decide to replace everything because the existing application isn’t pretty enough. They expect a Windows or GUI-based system, and they are willing to pay for it. It doesn’t matter that the existing application is the most robust and appropriate solution to run their company, and that their employees are highly productive because they know the application inside and out. Nope, if it’s not <insert whatever they had at the last place or whatever they think is the latest thing>, it must go. Because some of these Next Generation decision makers don’t know about the history of the application, the years of customizations, the value of the rich and proven business logic, they decide to throw it out and start with some name-brand, high-end system that costs lots of money, requires new resources (and often makes the current ones obsolete), takes forever to implement and customize (and never achieves the functionality of the original application anyway), and in the end demolishes their business processes — just because the application looks good. We’ve seen companies fold after spending millions down this road. Don’t get me wrong, I like a good-looking application too. But functionality is king, followed by look and feel, not the other way around. Fortunately, with Synergy/DE you can have both.

So what do you do to avoid this fate? Simple. Get your application ready for the Next Generation. It’s much easier to add a new front end to proven business logic than the other way around. You wouldn’t consider tearing down your house if you didn’t like its curb appeal, would you? Get your application current, make it look modern, give it all the look and feel that new Next Generation executives might demand, before they have the chance to come in, take one look, and throw it (and all of your intellectual capital) out because the application doesn’t look like they think it should. Don’t get caught off-guard with an outdated application: Advance it to meet the needs of the Next Generation!

Live from Microsoft PDC: A sneak peek at Windows 7, plus our 64-bit ActiveX list support

By Roger Andrews, Posted on October 29, 2008 at 9:58 pm

This comes to you from the Microsoft PDC in Los Angeles, where I am among over 10,000 attendees. The PDC is Microsoft’s futures conference where they preview some of the technology coming out over the next couple of years.

Microsoft has demonstrated real UI improvements in Windows 7—improvements that made almost every attendee cheer. For example, Windows 7 includes UAC improvements so you are no longer limited to simply “On” or “Off”. And the new iPhone-like touch support is certainly cool. It looks like within 5 years almost every laptop and LCD monitor will include touch support. The great thing with touch is that no UI Toolkit changes are required to your Synergy/DE Windows applications, because touch translates to normal mouse movements and clicks.

Microsoft has also set a goal to make Windows 7 run faster, boot faster, and require less memory than Vista, targeting the new ultra-mobile 10" laptops that have flash drives and 1 GB of memory. This goes hand in hand with new features in the .NET framework that reduce memory requirements and provide improved interoperability with lower overheads. At Synergex we will be testing Synergy/DE with Windows 7 in the near future—to ensure everything works as well in Windows 7 as it currently does in Vista and Server 2008. Windows 7 also contains the same set of files as Server 2008 R2, so any performance improvements in Windows 7 will also benefit the server platform.

I also want to let you know that we have recently completed our 64-bit ActiveX list implementation, and it will be released in our upcoming 9.1.5a version. This means that 64-bit UI Toolkit applications are now possible on 64-bit native operating systems with the same features as their 32-bit counterparts (provided the ActiveX controls you use are also available in 64-bit versions). This enables you to take full advantage of the extra memory and scalability available with Server 2008 x64 Edition. (Microsoft has already announced that Server 2008 R2 will be 64-bit only, making Server 2008 its last 32-bit server O/S.)

The Vista Performance Saga Continues

By Roger Andrews, Posted on August 8, 2008 at 5:39 pm


I thought it was about time I posted an update regarding my Vista post of April 16. In that post I recommended holding off on Server 2008 deployments until more data was available.

So let’s state the real problem.

“All file operations (read, write, file-position, etc.) are 40% slower on a Vista and Server 2008 system disk than they are on XP or Server 2003 system disks.”

These operations are slowed down even when they are serviced from the O/S cache subsystem. The reason for the 40% overhead is the registration of a driver with the file system filter framework newly introduced in Vista, even if the driver itself performs no work and just returns. Registration can be for a particular device, not just a disk drive. In one case, the UAC file system virtualization driver, UAFV.SYS, registers itself with the filter manager framework to perform the protected-file virtualization feature new in Vista. As a result of the filter manager subsystem overhead, all read/write/seek operations to the C: drive become slower, regardless of any file virtualization activity. Turning off this UAFV.SYS driver restores system disk performance.

How can you see this for yourself? You can use the Sysinternals procmon utility to see all the I/O operations occurring on your C: drive—every one of those operations is slowed down on a Vista or Server 2008 system disk. This accounts for some of the CPU bottleneck when your laptop starts, for slower virus scans on Vista system disks, and so on.

As nearly all laptops, most small business servers, and the majority of current desktops have a single system disk, this problem impacts all current Vista and Small Business Server 2008 users to some degree. The problem is exacerbated when other utility and antivirus software takes advantage of the new Vista filter manager framework, because then performance on non-system disks is impacted as well.

The solution, of course, is to read/write sequential data in much larger blocks. We changed Synergy/DE to use 4 KB buffers for sequential output in our recent 9.1.5 release; however, the semantics of sequential input, which must allow for random reading, preclude us from doing the same on input without slowing down performance. Random ISAM reads can’t use larger blocks without damaging performance at the disk level—so they incur the CPU overhead. Most of the I/O patterns I see with procmon also don’t meet the bar for larger I/Os, so the real issue is to get the problem fixed in the O/S.
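You can demonstrate why larger sequential buffers help with a quick experiment. The sketch below (illustrative only; this measures Python’s own buffered I/O, not the Synergy runtime) writes the same data through a 512-byte and a 4 KB user-space buffer. The smaller buffer flushes to the O/S far more often, and every one of those extra operations pays the per-operation overhead discussed above.

```python
import os
import tempfile
import time

def timed_write(path, buffering, total=2 * 1024 * 1024, chunk=64):
    """Write `total` bytes in 64-byte chunks through a given buffer size.

    A small buffer flushes to the O/S many more times, multiplying the
    per-operation cost; a larger buffer coalesces the same work into
    far fewer system calls.
    """
    data = b"x" * chunk
    start = time.perf_counter()
    with open(path, "wb", buffering=buffering) as f:
        for _ in range(total // chunk):
            f.write(data)
    return time.perf_counter() - start

fd, path = tempfile.mkstemp()
os.close(fd)
t_small = timed_write(path, buffering=512)
t_4k = timed_write(path, buffering=4096)  # comparable to the 4 KB buffers in 9.1.5
print(f"512-byte buffer: {t_small:.3f}s   4 KB buffer: {t_4k:.3f}s")
os.remove(path)
```

On a disk affected by the filter manager overhead, the gap between the two figures widens, since each flush is what gets taxed.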

If you disable UAC (which we don’t recommend) and you have never virtualized a file (for example, you do this at system installation), you can use the registry editor to make the UAFV.sys service visible and then disable it. Doing so also means you can’t re-enable UAC until the service is re-enabled. Alternatively, you can ensure all your data files (this also means your temp and DTKTMP logicals) are placed on a non-system drive—then you won’t see most of the impact of this problem.

We are currently working with Microsoft to provide a fix for this in the next Service Pack and will keep you informed of our results.

As a side note, we also noticed that scheduled tasks run slower on Vista and Server 2008. Customers typically use these to generate reports and run day-end processing overnight. These tasks now run at a low priority class. You would expect an idle system to run them at almost the same speed regardless of priority class (after all, the idea is that low-priority items use available resources when no higher-priority items are running), but it appears that the programs no longer use available resources as they did on prior versions. Microsoft considers this by design, which is hard to believe. We have introduced a new API in 9.1.5 that lets you reset the priority class of your scheduled tasks so they retain the performance characteristics of prior operating system versions.
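The Synergy API itself isn't shown here, but the underlying mechanism can be sketched. The following is a hypothetical Python illustration, not the actual 9.1.5 API: on Windows it calls the Win32 SetPriorityClass function to restore the normal priority class, and on POSIX systems (which have no direct equivalent) it merely reports the current niceness:

```python
import os
import sys

NORMAL_PRIORITY_CLASS = 0x00000020  # Win32 priority-class constant

def restore_normal_priority():
    """Reset the current process to the normal priority class (Windows),
    or report the current niceness on POSIX. Hypothetical illustration,
    not the actual Synergy/DE 9.1.5 API."""
    if sys.platform == "win32":
        import ctypes
        kernel32 = ctypes.windll.kernel32
        handle = kernel32.GetCurrentProcess()
        return bool(kernel32.SetPriorityClass(handle, NORMAL_PRIORITY_CLASS))
    # os.nice(0) adds 0 to the niceness, i.e. it just returns the
    # current value without changing anything.
    return os.nice(0) <= 19

ok = restore_normal_priority()
print(ok)
```

A batch job launched by the task scheduler would call something like this at startup to escape the low priority class it inherited.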

Red Alert! DNS Flaw Revealed

By Roger Andrews, Posted on July 31, 2008 at 4:24 pm

Due to the recent online disclosure of technical details to exploit a widespread DNS vulnerability, security researchers are warning users to patch vulnerable systems immediately.

All Linux- and Windows-based DNS servers require a patch, and most routers urgently need one as well.


The domain name system (DNS) translates domain names into numeric IP addresses and vice versa. The flaw, if exploited, allows what is known as DNS cache poisoning: remapping domain names to different, potentially malicious servers.
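As a simple illustration of the forward lookup a resolver performs, here is a Python sketch ("localhost" is used so it works without network access). This mapping is exactly what a poisoned cache corrupts by answering with an attacker-controlled address instead:

```python
import socket

def resolve_ipv4(name):
    # Ask the system resolver for the IPv4 addresses behind a hostname,
    # the same lookup a caching nameserver answers (and that DNS cache
    # poisoning corrupts).
    infos = socket.getaddrinfo(name, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})

ips = resolve_ipv4("localhost")
print(ips)
```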

US-CERT on Monday warned: "Technical details regarding this vulnerability have been posted to public Web sites. Attackers could use these details to construct exploit code. Users are encouraged to patch vulnerable systems immediately."

"This is a very serious situation, and can possibly lead to widespread and targeted attacks which hijack sensitive information by redirecting legitimate traffic to fraudulent Web sites, due to incorrect (fraudulent) information being injected into the vulnerable caching nameserver(s)," Trend Micro security researcher Paul Ferguson said in a blog post.

Read the full article:

For additional information about this type of attack and for details about how to resolve it, visit

The XP era is over – what does that mean to you?

By Roger Andrews, Posted on July 7, 2008 at 9:18 pm

As Windows XP is no longer available as of June 30th, I’d like to talk about your options regarding Synergy/DE support for Windows Vista.

While Microsoft may have pulled the plug on Windows XP as of June 30, it continues to offer the Home version for ultra-low-end PCs that can't run Vista. However, if you go to Dell or HP, you won't be able to select XP for a new system. Manufacturers can continue to sell XP while "stocks last," but in today's rapidly evolving marketplace, who would stock XP just in case someone might buy it one day? Further, volume license customers can't purchase XP licenses anymore; the only way for a business customer to get it is to buy Vista Enterprise and downgrade to XP.

So, where does that leave Synergy/DE customers sitting on the fence and using versions of Synergy/DE prior to 9? Well, as of July 1, the supported route is to upgrade to version 9. Any new machines your customers/users buy will be running Vista, which means you need version 9 for that user (if you want to deploy a supported version). We just shipped our latest version in the 9 series, version 9.1.5, which we recommend using.

So what do you do if you want to use Vista and Server 2008 but your installed base is using 8.3.1# and you don't want to upgrade them all at once? We have customers who have accomplished this successfully by continuing to build their .dbr and .elb files with 8.3 and then running those 8.3-built files under Synergy/DE 9. In the rare documented cases where version 9 finds an issue not present in 8.3 (e.g., the new detection of duplicate global data sections of differing sizes), the issue can be fixed back in the 8.3 code base, producing a .dbr that runs perfectly on both 8.3 and 9. Use this same technique if you require a hotfix for a problem in 8.3; Synergex's policy is to provide Synergy/DE 9 for deployments of the fix rather than an 8.3 patch.

Now you may ask, what about development? We still recommend you use the latest version 9 tools to build and develop your applications (so you can take advantage of improved error detection and increased developer productivity), but you can rebuild the tested .dbr files under 8.3 for mass deployment.

Given that the XP era has ended, I recommend that all ISVs test their current pre-9 applications under Synergy/DE 9.1.5 so they can be assured of continued customer satisfaction when the inevitable Vista machine is encountered. I also recommend that all new customer installations be version 9 throughout, or at least adopt the built-under-8.3, deployed-under-9 model described above.

Don’t forget support for your non-Synergy/DE products

By Roger Andrews, Posted on May 6, 2008 at 6:07 pm

In my last post, I talked about some issues with Windows Server 2008 and Vista SP1 that caused me to recommend not upgrading to them yet. These issues represent just one example where an operating system problem might hinder performance for our customers.

In another example, we recently had a customer report that it was taking our SQL OpenNet server 20 times longer to retrieve records from a SCO OpenServer 6 or UnixWare system than from SCO OpenServer 5.0.6. We tracked this down to a bug in the SCO implementation of the Nagle algorithm on the TCP/IP stack. We produced a simple C program that was sent to SCO and a fix is pending.
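For context, the Nagle algorithm delays small TCP segments so they can be coalesced, and a faulty implementation can stall request/response traffic like OpenNet's. A common application-level workaround (a generic Python sketch, not a description of SCO's pending fix or of what OpenNet does) is to disable Nagle with the TCP_NODELAY socket option:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Nagle is normally on (TCP_NODELAY == 0): small writes are held back
# briefly so they can be coalesced into fewer segments.
before = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

# Turn Nagle off so each small write is sent immediately; useful for
# latency-sensitive request/response protocols.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
after = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

s.close()
print(before, after)
```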

While we were able to assist the customer in the above situation, this isn’t always the case. We try hard to reproduce operating system and other layered product problems with our support team even when Synergy/DE is not at fault, but we unfortunately can’t support every OS and product in the field. There is an increasing need for our ISVs and end customers to maintain software support contracts with the vendors they work with to solve problems.

In many cases the problems we come across are third-party interaction issues (like virus scanners) and OS configuration issues that are beyond the scope of Synergy/DE support. A prime example is the use of operating system virtualization, where Synergy/DE is supported on the target OS and the virtualization software acts as a hardware layer underneath it. As we have found out, Microsoft will not entertain logging a call if the problem is not reproducible in a non-virtual environment. Just as a server's device drivers require a maintenance contract with the hardware supplier, the use of virtualization software requires an equivalent (effectively hardware-level) support contract with the virtualization supplier.

So I recommend you evaluate the level of support you may need for your non-Synergy/DE products and then obtain the appropriate support contracts.

Moving to Windows Server 2008, Windows Vista SP1

By Roger Andrews, Posted on April 16, 2008 at 8:50 pm

It’s been some time since I posted. We’ve been busy with our 9.1.3 release, which has some great new .NET interop features and more flexible xfNetLink .NET client capabilities. I’ve also had some international travel.

Windows Server 2008 and Windows Vista SP1 have now shipped (9.1.3 was tested with both). They use an identical code base and identical DLL and kernel versions, which is great for maintainability.

Unfortunately, it appears that these operating systems are 40% slower at file write operations than Server 2003, which means that writing a log file, or performing update/insert/delete operations on Synergy databases, is 40% slower as well. The degradation is only visible with large files, because the average commercial application does small blocks of random I/O. One of our customers provided a Synergy test program and a C# .NET test program that showed significant differences in time taken. We looked into the differences and recoded the C# program to use the same WriteFile() Win32 API that the Synergy runtime uses, and the C# program then showed the same degradation as the Synergy runtime. The issue has been logged with Microsoft support for resolution.

Why does this matter? Well, several things are affected on a large file server:

  1. Throughput as the number of users increases
  2. Time taken to write large log files
  3. Time taken to create work files and rebuild Synergy DBMS databases
  4. Time taken to sort files

At this time I would recommend not moving to the new servers until Microsoft has had time to fix the performance degradation.

Now you might ask why the initial C# program ran faster. Synergy/DE has never buffered files opened in "O:S" mode, because on VMS we can't (each record is a separate RMS record), and on UNIX and prior Windows operating systems buffering offered minimal if any performance gain; that was the job of the operating system. It turns out the newer Windows operating systems have significant per-operation overhead, so we will look into some buffering in a future release for both the runtime and the compiler. (The linker, librarian, and isutl all perform large-block I/O.)
