
Synergex Blog


My initiation into the blogosphere: SPC 2009

By William Mooney, Posted on April 2, 2009 at 7:40 pm

OK, time to jump into the blog scene. It’s either that or start “tweeting”—and I’m just not there yet. I was asked to start a blog, so here goes…

The biggest hurdle I’ve faced in starting a blog is where to start. There is so much to talk about! Most of the things I expect to blog about are recurring themes from conversations I have with customers—it will be great to document and share these. Other blogs will cover random topics that I feel would be of interest to you. So my first blog will be a hybrid of the two, with the subject being our upcoming SPC (Synergex Success Partner Conference). Some of you may remember that the original name of the SPC was DC, for “Developer Conference.” Today the conference still primarily targets developers, but the overall theme is, as it always has been, “Partnering with our customers to help them succeed.” (On that note, stay tuned for a future blog about our new tagline: “Advancing Applications. Partnering for Success.”) While most of our customer contact is with the actual developers of Synergy/DE-based software, Synergy/DE products also impact those in an organization who are not developers. It is for that reason that we strive to partner with several different types of players in an organization to help the company overall make the best use of our products.

To that end, we have expanded our communications this year to target those different players specifically. Our Marketing team has developed four characters: Jodah Veloper, Mark Etting, Manny Jurr, and Bigbah Smann. Each character is an exaggerated representation of his role’s interests within an organization and of how those interests may interact with another role’s. So far, we have had some good success with this expansion of communication and are having a lot of fun with the characters. (Look for them on Facebook!)

The message is that no matter what role you play in your organization, the SPC will benefit you – by providing a firsthand look at how easily you can advance your applications with today’s Synergy/DE; by helping you hone your development skills; and/or by showing you the new features your development team should be taking advantage of.

  • Presidents, CEOs, VPs, General Managers—basically those who are responsible for your P/L (AKA Bigbah Smann): Your Synergy/DE-based application(s) are among your company’s most important assets. I recommend you attend at least the first day of the conference so you can get a firsthand view of all of the functionality that can be immediately attained to make your applications more powerful, and for ISVs, more marketable. I’m confident you’ll be very surprised. In fact, I’ll even comp the first day of the conference for any CEO/CIO/CTO/GM who accompanies a developer to the SPC.
  • Those who are responsible for the sales and marketing of your Synergy/DE-based applications (AKA Mark Etting): Like the person (above) who is responsible for the bottom line, you can gain significant benefits by attending the conference. It’s a great opportunity to see what your application is capable of, and what other Synergy/DE customers have done to make their applications more marketable.
  • And of course the people responsible for the development of your applications (AKA Jodah Veloper and Manny Jurr): I recommend you attend all three days of the conference – this will enable you to take away the skills and knowledge required to quickly and easily advance your applications.

So, whatever role you play in your organization, I look forward to catching up with you at the conference, or meeting you if we have not yet had the opportunity.

OK, that first blog was relatively painless! I look forward to blogging again soon.


The Vista performance saga – final chapter

By Roger Andrews, Posted on March 13, 2009 at 8:52 pm

In January we finally determined why file I/O on Vista and Server 2008 disks is slower than on Windows 2003. In a previous blog post I stated that

“The performance problem on disks that have been hooked by applications that use the new Vista/Server 2008 filter manager infrastructure can cause CPU overheads of at least 40% on all I/O operations, including cached I/O and locks, reducing throughput.”

So what applications use the new filter manager? Well, UAC on system disks uses it via the uafv.sys file system re-director, and many current antivirus applications use the filter manager on all the disks where they are set to perform real-time scanning.

In Vista the initial hit for registering any application to use the filter manager on a volume is high, and it rises even higher for every operation type hooked. The UAC file system re-director ensures that writes to Windows-protected directories like windows\system32 and Program Files are re-directed to the user’s local path, which the user does have access to. If you use Yahoo Messenger on a Vista system, you will see it has this problem because it always assumes it can write to Program Files. The reason the uafv.sys file system redirector hooks every file I/O operation on the system disk is that it tries to cache these re-directed operations to avoid ever creating and writing the temporary re-directed file to disk. However, this is what now causes the performance issue on Vista, unless file system redirection is turned off by disabling the service (which may cause applications like Yahoo Messenger to fail unless UAC is also turned off).

I had turned uafv.sys off on my Vista system; however, performance traces in Intel’s VTune performance analyzer showed that I was still getting performance degradation due to the filter manager when running our test suites. It turns out that the latest Trend Micro antivirus engine follows Microsoft’s best practices and uses the new filter manager on all disks, so the previous work-around of using a non-system disk did not work on my machine.

In my dialogue with Microsoft, they indicated that they did not expect the data drives of an internal file server to always need an antivirus scan (by this I don’t mean a file server in the Word-document sense, but rather a dedicated database server that has no internet access). Their expectation was that the overheads related to the virus scanner would not apply to non-system disks, and that even if a virus scanner was installed, it would only be set to scan the system disk in real-time mode.

The good news is that Windows 7/Server 2008 R2 have significantly improved this situation. Though there is still some overhead for the initial attach to the filter manager, additional attaches cause much less overhead, and the overall figure is far better than on Vista. Microsoft will continue to look at this area during the Server 2008 R2 release cycle because of the impact it has when virus scanners are using the filter manager and are set to real-time scan all disks on a system.


Microsoft’s ADO.NET Entity Framework

By Roger Andrews, Posted on January 29, 2009 at 4:36 pm

Over the years, Microsoft has provided many different ways to access data: ODBC, DAO, ADO, and ADO.NET (with data sets and data readers). The next data access technology is the Entity Framework with the 3.5 SP1 version of ADO.NET. Synergex has provided access to all of these technologies through the baseline ADO.NET 2.0 with its xfODBC driver. Synergex has developed its own ADO.NET 3.5 provider with the extended capabilities needed to interoperate with the Entity Framework and the Entity designers in Visual Studio 2008 SP1.
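
Because the provider plugs into the standard ADO.NET 2.0 factory model, existing factory-based code can reach Synergy data without hard-coding a provider type. Here is a minimal sketch of that pattern, using the stock ODBC bridge provider to reach an xfODBC data source; the DSN, credentials, and table name are hypothetical, not Synergex sample code.

    using System;
    using System.Data.Common;

    class FactorySketch
    {
        static void Main()
        {
            // Look up a registered provider by its invariant name;
            // "System.Data.Odbc" is the built-in ODBC bridge.
            DbProviderFactory factory =
                DbProviderFactories.GetFactory("System.Data.Odbc");

            using (DbConnection conn = factory.CreateConnection())
            {
                // Hypothetical xfODBC DSN and credentials.
                conn.ConnectionString = "DSN=synergy_demo;UID=user;PWD=secret";
                conn.Open();

                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT cust_name FROM customers"; // illustrative
                    using (DbDataReader reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }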

Microsoft views the Entity Framework as the future of all of its data access technologies – and products like SQL Server, Office, and the Visual Studio designers are all either upgraded or being upgraded to require access to databases via the Entity Framework.

Here is how Microsoft describes the ADO.NET Entity Framework:

“Database development with the .NET framework has not changed a lot since its first release. Many of us usually start by designing our database tables and their relationships and then creating classes in our application to emulate them as closely as possible in a set of Business Classes or (false) "Entity" Classes, and then working with them in our ADO.NET code. However, this process has always been an approximation and has involved a lot of groundwork.

This is where the ADO.NET Entity Framework comes in; it allows you to deal with the (true) entities represented in the database in your application code by abstracting the groundwork and maintenance code work away from you. A very crude description of the ADO.NET Entity Framework would be that it allows you to deal with database concepts in your code.”

The ADO.NET Entity Framework is designed to enable developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:

  • Applications can work in terms of a more application-centric conceptual model, including types with inheritance, complex members, and relationships.
  • Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
  • Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
  • Developers can work with a consistent application object model that can be mapped to various storage schemas, possibly implemented in different database management systems.
  • Multiple conceptual models can be mapped to a single storage schema.
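
To make the conceptual-model idea concrete, here is a minimal LINQ to Entities sketch. The context and entity names (NorthwindEntities, Customers) are hypothetical stand-ins for whatever the Visual Studio 2008 SP1 Entity designer generates from your .edmx model, so this won’t compile without such a model; it is not Synergex sample code.

    using System;
    using System.Linq;

    class EntityFrameworkSketch
    {
        static void Main()
        {
            // NorthwindEntities is the designer-generated ObjectContext
            // subclass (hypothetical name).
            using (var context = new NorthwindEntities())
            {
                // The query is expressed against conceptual entities, not
                // tables; the ADO.NET provider translates it to the storage
                // schema at run time.
                var usCustomers = from c in context.Customers
                                  where c.Country == "USA"
                                  orderby c.CompanyName
                                  select c;

                foreach (var customer in usCustomers)
                    Console.WriteLine(customer.CompanyName);
            }
        }
    }

Because the code binds only to the conceptual model, remapping that model to a different storage schema (one of the benefits listed above) requires no change to the query.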

If you are interested in beta testing our new Entity Framework capabilities, please contact Synergy/DE Developer Support.

For more information and a tutorial on the Entity Framework, see these links:

http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx

http://www.codeguru.com/csharp/csharp/cs_linq/article.php/c15489/#more


Upcoming “experimental feature” will help you detect use of uninitialized memory

By Roger Andrews, Posted on December 10, 2008 at 7:05 pm

We are continually reviewing customer applications to assist with support/development issues, and in doing so we often come up with ideas to help customers debug problems they may encounter. We use a product from Compuware called DevPartner Studio to help us track down “C” variable access problems in the Synergy components that sometimes cause instability in the runtime. I like to run customer applications with a special runtime that is built with DevPartner, which allows us to check boundary conditions while running “real” customer applications. DevPartner enables us to check for use of memory that has already been freed (dangling pointers) and for access to memory before we have written to it (a common cause of symptoms that move around depending on memory and time of day).

One recent application we saw was accessing uninitialized memory before writing to it. As we tracked this down, we realized the customer was using stack records and %MEM_PROC memory that had never been written to. In certain cases this would cause random results, and in this particular case, it was causing the customer’s application to fail when run under the DevPartner tool because the memory was now a consistent but unexpected value.
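
To illustrate the class of bug involved, here is a C# analogue. The runtime feature discussed below targets Synergy DBL stack records and MEM_PROC memory, but Marshal.AllocHGlobal similarly returns memory that is not zero-initialized, so reading it before writing yields whatever happens to be there:

    using System;
    using System.Runtime.InteropServices;

    class UninitializedReadSketch
    {
        static void Main()
        {
            // AllocHGlobal, like %MEM_PROC, does not zero the memory it returns.
            IntPtr buffer = Marshal.AllocHGlobal(4);
            try
            {
                // BUG: reading before any write. The result is random garbage,
                // so behavior can differ from run to run, or change under a
                // debug tool that fills memory with a consistent pattern.
                int value = Marshal.ReadInt32(buffer);
                Console.WriteLine("Read uninitialized value: " + value);
            }
            finally
            {
                Marshal.FreeHGlobal(buffer);
            }
        }
    }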

We decided as a test to add some support in Synergy/DE to see if the Synergy runtime could also detect this use of uninitialized memory with a minimal overhead when running in debug. It turns out that we can do similar checking for assignment statements and “if” tests, and we can differentiate between stack memory and MEM_PROC memory. Using this functionality also enables a developer to break in the debugger after the statement that uses this random memory.

We are considering adding this new debugging functionality to a future release of Synergy/DE. However, so that we can get this useful tool into your hands sooner, we are planning to include it as an “experimental feature” in an upcoming patch.

“Experimental features” are features that are under evaluation. They are for early adopters to use and provide us with feedback on. They will be supported, but they may be modified or even removed in subsequent releases.

So look for this new experimental debugging feature in an upcoming patch and consider trying it out. Like the recent feature we added to detect mismatched global data-section sizes (which can cause runtime crashes), this feature to detect uninitialized memory continues our aim to add debug-time detection of coding errors to assist you in producing more reliable applications.


Get your ducks in a row for the Next Generation

By William Mooney, Posted on November 11, 2008 at 10:14 pm

At the risk of dating myself, I recall several years ago seeing some of our customers (who were there when I started at Synergex) entering retirement and sailing off into the sunset. I got a bit nostalgic, as many of these people really took me under their wings and showed me the ropes when I first entered this field. At the same time, I was excited to begin new relationships with their successors — the Next Generation — who would be working with me as they carried on the legacy of their predecessors. What I didn’t consider then, but have since witnessed time and time again, is how important it is to prepare one’s business applications for the Next Generation. Passing the torch involves more than handing down a title and a business plan. It means getting all your ducks neatly in a row so the next person is sure to make the RIGHT decisions to best support and sustain the business.

The decision makers at our customer sites come in all shapes and sizes: some are executives, some are application users, and some are developers. For most of our existence, we have focused on the developer. After all, it is the developers we are most in contact with, and most, if not all, of our customers’ original owners and executives were developers. And we’ve been very successful at addressing their needs and providing them an exceptional array of development tools to get the job done. Our integrated Workbench, OO language, Java/.NET Integration, and SQL access to SQL Server, Oracle, and MySQL are meeting and exceeding their requirements. And many of the developer decision makers, including those in the Next Generation, have done some amazing things to advance their Synergy/DE-based applications to meet current look-and-feel demands while maximizing their very rich and proven business logic.

It is another group of Next Generation decision makers that can wreak major havoc on a business. It is the new executives who decide to replace everything because the existing application isn’t pretty enough. They expect a Windows or GUI-based system, and they are willing to pay for it. It doesn’t matter that the existing application is the most robust and appropriate solution to run their company, and that their employees are highly productive because they know the application inside and out. Nope, if it’s not <insert whatever they had at the last place or whatever they think is the latest thing>, it must go. Because some of these Next Generation decision makers don’t know about the history of the application, the years of customizations, the value of the rich and proven business logic, they decide to throw it out and start with some name-brand, high-end system that costs lots of money, requires new resources (and often makes the current ones obsolete), takes forever to implement and customize (and never achieves the functionality of the original application anyway), and in the end demolishes their business processes — just because the application looks good. We’ve seen companies fold after spending millions down this road. Don’t get me wrong, I like a good-looking application too. But functionality is king, followed by look and feel, not the other way around. Fortunately, with Synergy/DE you can have both.

So what do you do to avoid this fate? Simple. Get your application ready for the Next Generation. It’s much easier to add a new front end to proven business logic than the other way around. You wouldn’t consider tearing down your house if you didn’t like its curb appeal, would you? Get your application current, make it look modern, give it all the look and feel that new Next Generation executives might demand, before they have the chance to come in, take one look, and throw it (and all of your intellectual capital) out because the application doesn’t look like they think it should. Don’t get caught off-guard with an outdated application: Advance it to meet the needs of the Next Generation!


Live from Microsoft PDC: A sneak peek at Windows 7, plus our 64-bit ActiveX list support

By Roger Andrews, Posted on October 29, 2008 at 9:58 pm

This comes to you from the Microsoft PDC in Los Angeles, where I am among over 10,000 attendees. The PDC is Microsoft’s futures conference where they preview some of the technology coming out over the next couple of years.

Microsoft has demonstrated real UI improvements in Windows 7—improvements that made almost every attendee cheer. For example, Windows 7 includes UAC improvements, so you’re no longer limited to simply turning UAC “On” or “Off”. And the new iPhone-like touch support is certainly cool. It looks like within 5 years almost every laptop and LCD monitor will include touch support. The great thing with touch is that no UI Toolkit changes are required to your Synergy/DE Windows applications, because touch translates to normal mouse movements and clicks.

Microsoft has also set a goal to make Windows 7 run faster, boot faster and require less memory than Vista, targeting the new ultra mobile 10" laptops that have flash drives and 1GB of memory. This goes hand in hand with new features in the .NET framework that reduce memory requirements and provide improved interoperability with lower overheads. At Synergex we will be testing Synergy/DE with Windows 7 in the near future—to ensure everything works as well in Windows 7 as it currently does in Vista and Server 2008. Windows 7 also contains the same set of files as Server 2008 R2 so any performance improvements in Windows 7 will also benefit the server platform.

I also want to let you know that we have recently completed our 64-bit ActiveX list implementation, and it will be released in our upcoming 9.1.5a version. This means that 64-bit UI Toolkit applications are now possible on 64-bit native operating systems with the same features as their 32-bit counterparts (provided the ActiveX controls you use are also available in 64-bit versions). This now enables you to take full advantage of the extra memory and scalability available with Server 2008 x64 Edition. (Microsoft has already announced that Server 2008 R2 will be 64-bit only, making Server 2008 the last 32-bit server O/S.)


The Vista Performance Saga Continues

By Roger Andrews, Posted on August 8, 2008 at 5:39 pm


I thought it about time I posted an update regarding my Vista post on the 16th of April. In that post I recommended holding off on Server 2008 deployments until more data was available.

So let’s state the real problem.

“All file operations (read, write, file-position, etc.) are 40% slower on a Vista and Server 2008 system disk than they are on XP or Server 2003 system disks.”

These operations are slowed down even when they are serviced from the O/S cache subsystem. The reason for the 40% overhead is the registration of a driver with the file system filter framework newly introduced in Vista, even if the driver itself performs no work and just returns. Registration can be for a particular device, not just a disk drive. In one case, the UAC file system virtualization driver, UAFV.SYS, registers itself with the filter manager framework to perform the protected-file virtualization feature new in Vista. As a result of the filter manager subsystem overhead, all read/write/seek operations to the C: drive become slower regardless of whether any file virtualization actually occurs. Turning off this UAFV.SYS driver restores system disk performance.

How can you see what this means in practice? You can use the Sysinternals procmon utility to see all the I/O operations occurring on your C: drive; every one of those operations is slowed down on a Vista and Server 2008 system disk. This accounts for some of the CPU bottleneck when your laptop starts. It accounts for slower virus scans on Vista system disks, and so on.

As nearly all laptops, most small business servers, and the majority of current desktops have a single system disk, this problem impacts all current Vista and Small Business Server 2008 users to one degree or another. The problem is exacerbated when other utility and anti-virus software takes advantage of the new Vista filter manager framework, in which case performance on non-system disks is impacted as well.

One solution, of course, is to read and write sequential data in much larger blocks. We changed Synergy/DE to use 4K buffers for sequential output in our recent 9.1.5 release; however, the semantics of sequential input, which allow for random reading, preclude us from doing the same on input without slowing down performance. Random ISAM reads can’t use larger blocks without damaging performance at the disk level, so they incur the CPU overhead. Most of the I/O patterns I see with procmon also don’t meet the bar for larger I/Os, so the real issue is to get the problem fixed in the O/S.
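
As a sketch of what larger-block sequential output buys you, the following C# fragment coalesces many small writes into 4K blocks, so each underlying WriteFile pays the per-operation filter-manager overhead once per block rather than once per record. This illustrates the idea behind the 9.1.5 change; it is not the runtime’s actual implementation.

    using System.IO;
    using System.Text;

    class BufferedOutputSketch
    {
        static void Main()
        {
            byte[] record = Encoding.ASCII.GetBytes("one log record\r\n");

            using (FileStream fs = new FileStream("demo.log", FileMode.Create,
                                                  FileAccess.Write))
            using (BufferedStream buffered = new BufferedStream(fs, 4096))
            {
                // 100,000 logical writes become a few thousand 4K-sized
                // writes to the file system instead of 100,000 small ones.
                for (int i = 0; i < 100000; i++)
                    buffered.Write(record, 0, record.Length);
            } // disposing flushes the final partial block
        }
    }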

If you disable UAC (which we don’t recommend) and you have never virtualized a file (for example, you do this at system installation), you can use the registry editor to make the uafv.sys service visible and then disable it. Doing so also means you can’t re-enable UAC until the service is re-enabled. Alternatively, you can ensure all your data files (this also means your temp and DTKTMP logicals) are placed on a non-system drive, and you won’t see most of the impact of this problem.
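
For reference, disabling a service from code comes down to setting its Start value to 4 (SERVICE_DISABLED) under HKLM\SYSTEM\CurrentControlSet\Services. A hedged sketch follows, using the service name as this post gives it (uafv); back up the key first, and remember the caveat above that UAC can’t be re-enabled until this is undone.

    using Microsoft.Win32;

    class DisableServiceSketch
    {
        static void Main()
        {
            // Requires administrative rights; service key name per this post.
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
                @"SYSTEM\CurrentControlSet\Services\uafv", true))
            {
                if (key != null)
                    key.SetValue("Start", 4, RegistryValueKind.DWord); // 4 = disabled
            }
        }
    }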

We are currently working with Microsoft to provide a fix for this in the next Service Pack and will keep you informed of our results.

As a side note, we also noticed that any scheduled task runs slower in Vista and Server 2008. Typically customers use these to generate reports and run day-end processing overnight. These tasks now run at a low priority class. You would expect an idle system to run them almost the same regardless of priority class (after all, the idea is that low-priority items use available resources when no higher-priority items are running), but it appears that the programs no longer use available resources as they did on prior versions. Microsoft sees this as by design, which is hard to believe. We have introduced a new API in 9.1.5 to allow you to reset the priority class of your scheduled tasks to ensure they retain the performance characteristics of prior operating system versions.
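
The new Synergy API isn’t named in this post, so as an illustration only, here is how the underlying Windows concept looks from .NET: a task launched by the Vista Task Scheduler typically starts below Normal priority, and resetting the process priority class at startup restores the pre-Vista behavior.

    using System;
    using System.Diagnostics;

    class PriorityResetSketch
    {
        static void Main()
        {
            Process self = Process.GetCurrentProcess();
            Console.WriteLine("Started at: " + self.PriorityClass);

            // Restore Normal so the overnight job can use idle resources
            // the way it did on XP/Server 2003.
            self.PriorityClass = ProcessPriorityClass.Normal;

            // ... generate reports / run day-end processing here ...
        }
    }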


Red Alert! DNS Flaw Revealed

By Roger Andrews, Posted on July 31, 2008 at 4:24 pm

Due to the recent online disclosure of technical details to exploit a widespread DNS vulnerability, security researchers are warning users to patch vulnerable systems immediately.

All Linux- and Windows-based DNS servers require a patch, and most routers need one as well; treat this with real urgency.

From InformationWeek.com:

The domain name system translates domain names, like "informationweek.com," into numeric IP addresses and vice versa. The DNS flaw, if exploited, allows what is known as DNS cache poisoning. This involves remapping domain names to different, potentially malicious servers.

US-CERT on Monday warned: "Technical details regarding this vulnerability have been posted to public Web sites. Attackers could use these details to construct exploit code. Users are encouraged to patch vulnerable systems immediately."

"This is a very serious situation, and can possibly lead to widespread and targeted attacks which hijack sensitive information by redirecting legitimate traffic to fraudulent Web sites, due to incorrect (fraudulent) information being injected into the vulnerable caching nameserver(s)," Trend Micro security researcher Paul Ferguson said in a blog post.

Read the full article: http://www.informationweek.com/news/internet/security/showArticle.jhtml?articleID=209401195

For additional information about this type of attack and for details about how to resolve it, visit http://www.kb.cert.org/vuls/id/800113.


The XP era is over – what does that mean to you?

By Roger Andrews, Posted on July 7, 2008 at 9:18 pm

As Windows XP is no longer available as of June 30th, I’d like to talk about your options regarding Synergy/DE support for Windows Vista.

While Microsoft may have pulled the plug on Windows XP as of June 30, it continues to offer the home version for ultra-low-end PCs that can’t run Vista. However, if you go to Dell or HP, you won’t be able to select XP for a new system. Manufacturers can continue to sell XP while “stocks last,” but in today’s rapidly evolving marketplace, who would stock XP just in case someone might buy it one day? Further, volume license customers can’t purchase XP licenses any more—the only way for a business customer to get it is to buy Vista Enterprise and downgrade to XP.

So, where does that leave Synergy/DE customers sitting on the fence and using versions of Synergy/DE prior to 9? Well, as of July 1, the supported route is to upgrade to version 9. Any new machines your customers/users buy will be running Vista, which means you need version 9 for that user (if you want to deploy a supported version). We just shipped our latest version in the 9 series, version 9.1.5, which we recommend using.

So what do you do if you want to use Vista and Server 2008 but your installed base is using 8.3.1# and you don’t want to upgrade them all at once? We have customers who have been accomplishing all of this successfully by continuing to build their .dbr and .elb files with 8.3 and then running those 8.3-built files under Synergy/DE 9. In the rare documented cases where version 9 finds an issue not present in 8.3 (e.g., the new check for duplicate global data sections of differing sizes), the issue can be fixed back in the 8.3 code base, producing a .dbr that runs perfectly on both 8.3 and 9. This same technique should be used if you require a hotfix for a problem in 8.3: Synergex’s policy is to provide the fix in Synergy/DE 9 for deployments rather than as an 8.3 patch.

Now you may ask, what about development? We still recommend you use the latest version 9 tools to build and develop your applications (so you can take advantage of improved error detection and increased developer productivity), but you can rebuild the tested .dbr files under 8.3 for mass deployment.

Given that the XP era has ended, I recommend that all ISVs test their current pre-9 applications under Synergy/DE 9.1.5 so they can be assured of continued customer satisfaction when the inevitable Vista machine is encountered. I also recommend that all new customer installations be version 9 throughout, or at least adopt the built-under-8.3, deployed-under-9 model described above.


Don’t forget support for your non-Synergy/DE products

By Roger Andrews, Posted on May 6, 2008 at 6:07 pm

In my last post, I talked about some issues with Windows Server 2008 and Vista SP1 that caused me to recommend not upgrading to them yet. These issues represent just one example where an operating system problem might hinder performance for our customers.

In another example, we recently had a customer report that it was taking our SQL OpenNet server 20 times longer to retrieve records from a SCO OpenServer 6 or UnixWare system than from SCO OpenServer 5.0.6. We tracked this down to a bug in the SCO implementation of the Nagle algorithm on the TCP/IP stack. We produced a simple C program that was sent to SCO and a fix is pending.
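
The fix in this case belonged in SCO’s TCP/IP stack, but for readers hitting Nagle-related delays in their own small request/response protocols, the standard client-side mitigation is to set TCP_NODELAY on the socket. A minimal C# sketch follows; the host and port are hypothetical.

    using System.Net.Sockets;

    class NoDelaySketch
    {
        static void Main()
        {
            using (TcpClient client = new TcpClient())
            {
                // Disable Nagle coalescing so small packets go out immediately
                // instead of waiting to be batched with later data.
                client.NoDelay = true;
                client.Connect("server.example.com", 12345);
                // ... exchange small request/response messages ...
            }
        }
    }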

While we were able to assist the customer in the above situation, this isn’t always the case. We try hard to reproduce operating system and other layered product problems with our support team even when Synergy/DE is not at fault, but we unfortunately can’t support every OS and product in the field. There is an increasing need for our ISVs and end customers to maintain software support contracts with the vendors they work with to solve problems.

In many cases the problems we come across are third-party interaction issues (like virus scanners) or configuration issues with the OS that are beyond the scope of Synergy/DE support. A prime example of this is the use of operating system virtualization, where Synergy/DE is supported on the target OS and the virtualization software acts as a hardware layer underneath the OS. As we have found out, Microsoft will not entertain any calls being logged if the problem is not reproducible in a non-virtual environment. Just as a server’s device drivers require a maintenance contract with the hardware supplier, the use of virtualization software requires an equivalent (effectively hardware-level) support contract with the virtualization supplier.

So I recommend you evaluate the level of support you may need for your non-Synergy/DE products and then obtain the appropriate support contracts.


Moving to Windows Server 2008, Windows Vista SP1

By Roger Andrews, Posted on April 16, 2008 at 8:50 pm

It’s been some time since I posted. We’ve been busy with our 9.1.3 release, which has some great new .NET interop features and more flexible xfNetLink .NET client capabilities. I’ve also had some international travel.

Windows Server 2008 and Windows Vista SP1 have now shipped (9.1.3 was tested with both). The two share an identical code base and identical DLL and kernel versions, which is great for maintainability.

Unfortunately, it appears that these operating systems are 40% slower at file write operations than Server 2003. This means that writing out a log file, or performing update/insert/delete operations on Synergy databases, is 40% slower than on Server 2003. The degradation is mainly visible with large files, because the average commercial application does small blocks of random I/O. One of our customers provided a Synergy test program and a C# .NET test program that showed significant differences in time taken. We looked into the differences and re-coded the C# program to use the same WriteFile() win32 API that the Synergy runtime uses, and the C# program then showed the same degradation that the Synergy runtime shows. The issue has been logged with Microsoft support to get a resolution.
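
Here is a simplified reconstruction of that comparison (not the customer’s actual program): timing many small writes through the same WriteFile() win32 API via P/Invoke. Run it on Server 2003 and on Server 2008 against the same class of disk to see the difference described above.

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    class WriteFileTimingSketch
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteFile(SafeFileHandle hFile, byte[] lpBuffer,
            uint nNumberOfBytesToWrite, out uint lpNumberOfBytesWritten,
            IntPtr lpOverlapped);

        static void Main()
        {
            byte[] block = new byte[512]; // small writes, as commercial apps do
            using (FileStream fs = new FileStream("timing.dat", FileMode.Create))
            {
                Stopwatch sw = Stopwatch.StartNew();
                for (int i = 0; i < 200000; i++)
                {
                    uint written;
                    // Bypass FileStream buffering and call the win32 API
                    // directly, as the Synergy runtime does.
                    if (!WriteFile(fs.SafeFileHandle, block, (uint)block.Length,
                                   out written, IntPtr.Zero))
                        throw new IOException("WriteFile failed: " +
                            Marshal.GetLastWin32Error());
                }
                sw.Stop();
                Console.WriteLine("200,000 x 512-byte writes took " + sw.Elapsed);
            }
        }
    }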

Why does this matter? Well, several things are affected on a large file server:

  1. Throughput as the number of users increases
  2. Time taken to write large log files
  3. Time taken to create work files and rebuild Synergy DBMS databases
  4. Time taken to sort files

At this time I would recommend not moving to the new servers until Microsoft has had time to fix the performance degradation.

Now you might ask why the initial C# program ran faster. Synergy/DE has never buffered files opened in “O:S” mode, because on VMS we can’t (each record is a separate RMS record), and on Unix and prior Windows operating systems buffering provided minimal if any performance gain—that was the job of the operating system. It turns out the newer Windows operating systems have significant per-operation overhead, so we will look into some buffering for a future release of both the runtime and the compiler. (The linker, librarian, and isutl all already perform large-block I/O.)


What version of Synergy/DE should you be using?

By Roger Andrews, Posted on December 12, 2007 at 7:14 pm

As applications move forward, they support only newer versions of third-party software and remove support for older versions. It’s not really surprising; it’s all to do with QA and being able to take advantage of newer functionality.

One area of the Synergy/DE toolset in particular is constantly moving forward to support newer third-party tools, with extended functionality and in some cases bug fixes, and that’s xfODBC. With newer technologies like ADO.NET and the new client tools that use it, we had to enhance many parts of the whole stack to support ADO.NET properly. This means that if you try to use years-old versions of Synergy/DE with current third-party tools, it most likely won’t work, or at the very least performance won’t be great. If you expect current technologies such as SQL Server reporting tools, Visual Studio 2008, or Office 2007 (Excel) to work with xfODBC, you/your customers really must be using the latest versions of Synergy/DE. Synergex is really no different than Oracle or SQL Server in this regard. We make constant improvements to our optimizer, just like the other database vendors, and these changes can provide dramatic improvements in ad hoc query performance with Synergy databases.

Vista support is another area that requires you to stay current. Just as most third-party software was enhanced to work with Windows Vista, so was Synergy/DE. We had significant work to do to ensure we worked well with Vista and its whole UAC mechanism and to ensure both install and uninstall worked well. I don’t really understand why someone would try to run an older 8.3.1 or 8.1.7e version of Synergy on this newer platform. I understand that there is testing to do with application compatibility, but now that Service Pack 1 is ready to roll, businesses will start to move over more quickly. Our customers have been successful using 9.1.1b as the deployment platform for Server and Vista, with the clients and the .dbr files still using 8.3.1. Any fixes required to allow 8.3.1/9.1.1 co-existence can be made compatible with both while compiling under 8.3.1; however, unless there are bugs in the original code, such fixes are extremely rare.

In conclusion, if a company (or an ISV’s customers) wants to keep current with newer, rapidly changing technology, the base technology (Visual Studio, Synergy/DE) must be current as well.

