Today Microsoft announced that Visual Studio 2015 and .NET 4.6 are available for download. As a member of the Microsoft Visual Studio Industry Partner (VSIP) program, Synergex will soon be supporting this version of Visual Studio with a Developer Build, followed by a fully supported release that also supports Windows 10. Our new release includes support for the new Concord-based debugger and Light Bulb features.
We are excited about all of the new performance tools that Synergy developers can utilize in Visual Studio 2015.
We’ve long fostered a close relationship with Microsoft through their VSIP program, which has enabled us to pioneer many aspects of language integration with Visual Studio. We have pushed the envelope in this area farther than any other non-Microsoft language, and Microsoft continues to be a strategic partner, helping us provide a first-class experience for the Synergy DBL language in Visual Studio. Synergy/DE itself is written in C/C++, and we use Visual Studio Projects internally to develop and build the majority of our product set.
With the release of Visual Studio 2015, we published an article summarizing our history with Visual Studio and showing how we leverage the latest Microsoft tools to enhance our cross-platform core technologies and port to new platforms. We are delighted that Microsoft picked up this article and blogged about it in the VC++ blog. It’s a chapter in our cross-platform history that spans a number of years and includes some of the most interesting development challenges we’ve ever tackled. The article also discusses some issues (particularly with debugging) that we are still working with Microsoft to resolve.
The world of enterprise software has changed for everyone with the introduction of device-first applications. Devices can include PCs, laptops, tablets, and smartphones. The days of operating system and developer tools releases every two years, or even annually, are gone. Competition among the Android, iOS, Windows, and Linux environments, with developer tools supporting all of these platforms, means the OS and tools vendors are adopting an agile approach to release cadences, with monthly updates that are not just hotfixes and security updates. The agile vendor release process is designed to get product updates and improvements out faster, knowing that if something breaks, it can be updated the following month, especially with developer tools.
For example, when Apple releases a new phone/tablet, they update their development stack. This is usually a forced update, and application developers always want to support the new hardware and current iOS release that is automatically available to all devices. This has the knock-on effect that all layered tools (for example Xamarin tools) also have to update to support the new Apple release, usually on the day the release ships. Then companies like Synergex have to update their tools, layered on the layers below them.
The newly announced Windows 10 will have fast and slow cadences for enterprises to choose from, but Synergex will need to test against the fastest cadence to ensure compatibility. Visual Studio updates ship as monthly CTPs, with quarterly release-track updates. For non-Windows development, Xamarin provides bi-weekly updates to its entire stack for iOS and Android.
So what does this mean to the Synergex customer base? Some Synergy customers do not update their Synergy versions regularly. If you are one of these customers, and you move into the world of devices, you will also need to move into this agile development mindset. Just as you are forced to accept monthly .NET Framework, Java, and security updates, you will also have to accept regular Synergy updates.
With Synergy/DE 10.3.1, Synergex has taken a new approach to help customers keep up with the agile world we live in. Our Visual Studio .NET product set no longer includes the traditional Synergy runtime packages. This allows us to ship hotfixes and continual product updates matching those of the products we layer on, as frequently as bi-weekly, while keeping the traditional runtime and tools at a more stable level. We realize that, by its nature, this agile process has the potential to introduce bugs, and sometimes there will be a code-breaking change that requires a quick fix, but we believe it's necessary to align with other vendors in this regard. The Synergy device runtimes for iOS and Android are also NuGet packages, allowing us to update them independently of the core Synergy product as new features and support are required. Customers choose whether to take the latest runtime packages on a per-project basis. Finally, with 10.3.1, Synergy .NET allows the development tools to generate code that's compatible with earlier versions (10.1.1 for 10.3.1), so customers can take advantage of the latest tool and code-generation improvements without having to update every customer's Synergy/DE installation.
In conclusion, the new device-first world is changing the way we develop and ship software, and all who participate in this world will need to change with it. Synergex has been making changes to meet this challenge, and we can help you meet that challenge while still providing stability to your end users.
For more information about Synergy/DE 10.3.1, visit the Synergex web site.
When one of our customers recently upgraded a file server from Windows Server 2008 to Server 2012, their customers complained of significantly increased end-of-day processing times—up to three times slower with Server 2012 than the previous system.
The system used terminal services to allow remote administration and running of tasks such as day-end processing. There were no general interactive users. Running the same software on a Windows 8 client system resulted in better performance, all other things (disk, network, CPU, etc.) being equal. A brand new Server 2012 R2 system with just the basic GUI role performed twice as fast as the Server 2008 system. However, as roles were added, the customer noticed that the RDS role caused the slowdown.
Since Server 2008 R2 was introduced, RDS (Remote Desktop Services, formerly known as Terminal Services) has included a feature called Fairshare CPU Scheduling. With this feature, if more than one user is logged on to a system, processing time for a given user is limited based on the number of sessions and their loads (see Remote Desktop Session Host on Microsoft TechNet). This feature is enabled by default and can cause performance problems if more than one user is logged on to the system. With Server 2012, two more "fairshare" options were added: Disk Fairshare and Network Fairshare (see What's New in Remote Desktop Services in Windows Server 2012 on TechNet). These features are also enabled by default and can come into play even when only one user is logged on. These options proved to be the cause of the slowdown for our customer: they limited I/O for the single logged-on user (or scheduled task), and the day-end processing was heavily I/O bound. We were able to remove this bottleneck by either disabling the RDS role or turning off the fairshare options.
In summary, if a system is used for file sharing services only (no interactive users), use remote administrative mode and disable the RDS role. If the RDS role must be enabled, consider turning off Disk Fairshare if the server runs disk-intensive software (such as database or other I/O-intensive programs), and turn off Network Fairshare if the server hosts services (such as Microsoft SQL Server or xfServer) whose client access would otherwise be throttled. For information on turning off Disk Fairshare and Network Fairshare, see the Win32_TerminalServiceSetting class on Microsoft Developer Network (MSDN).
As Microsoft has stated and many other companies have echoed (see links below), the end of life for Windows XP is April 2014. Yep, in just 7 months, Microsoft, anti-virus vendors, and many software vendors (including Synergex) will no longer provide support or security patches for IE and XP. And the end of life for Windows Server 2003 will follow soon after.
Why is this so important? And, if what you’re using isn’t broken, why fix it?
Let's consider, for example, a doctor's, dentist's, or optician's office running Windows XP and almost certainly Internet-connected, in fact probably using an Internet-based application. All it takes is an infected web site, a Google search gone astray, or a mistyped URL, and the PC is infected, along with everything on it, including all of the office's confidential medical data. Plus, most offices allow their workers to browse the Internet at some point in the day to catch up on emails and IM, conduct searches, surf eBay, etc. If the office is running XP after 2014, it is almost certain to be open to infection by a rootkit or other malicious software, because malware authors will take advantage of every vulnerability Microsoft no longer fixes after end of life. Add the fact that the antivirus vendors will also reduce or eliminate support, and you have a mass bot-like infection waiting to happen. Once a system gets a rootkit, it's nigh on impossible to remove without wiping the system clean. To further complicate things, it usually takes a boot-time scan from a different device to detect many of these infections.
Further, while Windows XP had an impressive 13-year run, it is far less secure than later operating systems, and the hardware it runs on in many cases is also at the end of its life.
If you or your customers are running Windows XP or Server 2003, it’s time to upgrade to a more modern, more secure operating system like Windows 7. At least you can rest assured that Microsoft’s monthly Patch Tuesday will provide protection with security fixes in conjunction with your anti-virus vendors to protect sensitive information and keep the business running.
Today I’m excited to be blogging from the TV studio at the Bell Harbor Convention Center in Seattle for the live Visual Studio 2012 launch.
Since the Build conference last September, Synergex has been working closely with the Microsoft development teams to ensure that Synergy/DE works seamlessly with all the exciting new Microsoft technologies being released this fall: Visual Studio 2012, Windows 8, and Synergy for Windows Store applications on both ARM and Intel processors. Our team has made several visits to Redmond to work directly with Microsoft engineers to enhance Visual Studio, Windows 8, and Synergy/DE.
You can download Synergy/DE 10.0.3 today to start using the latest Visual Studio 2012 features, including the new async and await functionality demonstrated by Microsoft at the Visual Studio launch event.
I'm also incredibly pleased to talk about our new KitaroDB NoSQL database for Windows Store applications, which we are releasing today. Built on our solid, high-performance Synergy DBMS product, KitaroDB is the first on-disk NoSQL database that runs in the Windows 8 sandbox on x86, x64, and ARM processors.
We have a Netflix sample application that uses KitaroDB, and in the next few weeks will be launching a great new Windows Store application that takes advantage of KitaroDB for its local persistent storage.
We have been working hard at Synergex since Microsoft's BUILD conference last fall to ensure that Synergy works well with the soon-to-be-released Visual Studio 11 and .NET Framework 4.5 beta.
I am pleased to tell you that we will sim-ship with Microsoft on their announced February 29 beta release date with a released version of Synergy/DE 9.5.3a to allow those of you interested to test drive the new release. Synergex customers can expect to see performance improvements in editing – especially when using large Synergy Language source files – among other improvements.
You can find more information on these Microsoft blogs:
Some recent posts on our synergy-l listserv made me realize there are still some misperceptions about Microsoft network shares (mapped drives), so I thought I would address those here.
Microsoft designed network shares for single-user access to shared resources such as Word documents. The locking and caching algorithms they use assume that a local cache is desirable, since multiple users are unlikely to do concurrent updates (though the algorithms try to cope with this situation). Of course, the use of network shares has grown well beyond single-user systems. Many of our customers have used them and are still using them, and many of these customers have unfortunately encountered performance and file corruption issues. Most of these issues are associated with concurrent updates and cache flushing, and using mapped drives (as opposed to UNC paths) seems to exacerbate the problem. With older Windows versions (prior to Vista and Server 2008), you can mitigate file corruption issues by disabling oplocks on the server, which disables the local caching. (Syncksys, a utility we used to check settings on these older Windows systems, always checked for this.) Unfortunately, you can't disable oplocks with SMB2 redirectors on the newer Windows systems.
Because of the number of issues our customers have encountered, Synergex can provide only very limited support for Synergy database access through network shares. (See the Synergex KnowledgeBase article listed below.) We have traced these problems to errors in the Microsoft SMB (mrxsmb10.sys, mrxsmb20.sys, mrxsmb.sys) and Redirector (rdbss.sys, srv.sys) subsystems. We find that these problems get worse with network shares over a WAN and with multi-user access. It is fair to say that Microsoft has fixed many problems with Windows XP and Server 2003 over the years, but the problems have resurfaced with newer Windows operating systems. And now many organizations are using Windows Vista and Windows 7 machines as clients (alongside their Windows XP clients) to Server 2003 or Server 2008 servers, introducing newer operating systems with mapped drive subsystems that have regressed in functionality and performance.
We recently (and mistakenly) used a mapped drive internally for a project, and the ISAM files for the project were continually corrupted. (We have logged a premier support call to Microsoft for this issue.) We fixed the corruption with the recommended solution, xfServer, which not only solved the issue but improved performance.
File corruption issues aside, in most commercial situations with multi-user access, xfServer will significantly outperform a mapped drive when it is set up to use SCSCOMPR and SCSPREFETCH correctly, in conjunction with correctly opening files for input when just reading data sequentially. The known cases where xfServer is slower than a mapped drive are when a file is opened for update by a single user or with exclusive access, or for output when the file is not large and/or the records are small (or ISAM compression makes them small), allowing the redirector to cache the data blocks locally. If oplocks are turned off on Server 2003, as recommended, this caching is disabled and performance degrades, though reliability increases. We are investigating an xfServer performance improvement that would provide comparable or better performance than a mapped drive in additional scenarios by allowing users to enable the cache on stores and writes to a file opened for output, append, or exclusive access.
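Much of the SCSPREFETCH win on sequential reads comes from cutting client/server round trips. Here is a simplified model of that effect (the batch size is purely illustrative, not the actual protocol value):

```python
def round_trips(records, prefetch_batch=1):
    """Client/server round trips needed to read `records` records
    sequentially, given a prefetch batch size (simplified model)."""
    return -(-records // prefetch_batch)  # ceiling division

# Without prefetch, 100,000 sequential reads cost 100,000 round trips;
# a hypothetical 32-record prefetch cuts that to 3,125.
print(round_trips(100_000))      # 100000
print(round_trips(100_000, 32))  # 3125
```

Compression (SCSCOMPR) attacks the other cost, bytes per round trip, which is why the two settings together make such a difference against a mapped drive.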
We have provided a test utility in CodeExchange, IsamBenchMark, to help you test performance with your own physical files and network. Using this utility, we saw the results described below.
The following tests were run on a Windows 7 machine connected to another Windows 7 machine over a 1 Gb network. Each machine had 6 GB of memory and a 3 GHz Core 2 CPU. The Windows operating system cache was flushed on the server machine using the Sysinternals CacheSet utility before each run. Neither machine had an antivirus program running or any other software that accessed the network. Both machines were connected to the same physical switch.
Two files were used: one with eight keys and 128-byte records and the other with eight keys and 512-byte records. Each file was filled with 100,000 records during the test, and the files were created without ISAM file compression. We used Task Manager’s networking tab to generate the diagrams below (though we’ve added red vertical lines in one diagram).
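For a sense of scale, the raw record payload moved in each 100,000-record pass is easy to compute (record data only; keys and protocol overhead excluded):

```python
def payload_mb(records, record_size_bytes):
    """Raw record data transferred, in decimal megabytes."""
    return records * record_size_bytes / 1_000_000

print(payload_mb(100_000, 128))  # 12.8
print(payload_mb(100_000, 512))  # 51.2
```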
Diagram 1 shows network overhead for our first test, which used xfServer to access the file with 512-byte records. The file was first accessed without compression and then with compression (i.e., with SCSCOMPR set).
Using 4-5% of a gigabit link is equivalent to using 40-50% of a 100 Mb Ethernet, so the performance gained by setting compression for xfServer would be even greater on a slower Ethernet or WAN.
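That equivalence is simple arithmetic (assuming 1,000 Mb/s for the gigabit link and 100 Mb/s for the slower Ethernet):

```python
def equivalent_utilization(pct_of_gigabit, slow_link_mbps=100):
    """Translate a utilization percentage on a 1 Gb link into the
    equivalent utilization of a slower link."""
    used_mbps = 1000 * pct_of_gigabit / 100  # Mb/s actually consumed
    return 100 * used_mbps / slow_link_mbps

print(equivalent_utilization(4))  # 40.0
print(equivalent_utilization(5))  # 50.0
```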
Diagram 2 shows network overhead for our second test, which accessed the 512-byte-record file in three ways:
via a mapped drive
using xfServer without compression
using xfServer with compression and READS prefetch support enabled (i.e., with SCSCOMPR and SCSPREFETCH set)
The test program stored 100,000 records, then re-opened the file and read 100,000 records by random key, then re-opened it and read 100,000 records sequentially (READS). Note the change in scale from the previous diagram, and note that the overhead of the stores is high (with the flush-on-close peaking with the mapped drive). The next segment is the random reads, and the final peak is the sequential reads. Also notice that the setup with SCSCOMPR and SCSPREFETCH makes the sequential reads almost as fast as local disk access and far faster than the mapped drive.
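The access pattern of the test program can be sketched like this (an illustration in Python against an in-memory store; not the actual IsamBenchMark source, which is in CodeExchange):

```python
import random

def run_phases(n=100_000, record_size=512):
    """The three benchmark phases: store n records, read them all back
    in random key order, then read them all sequentially."""
    store = {}
    for key in range(n):                                          # phase 1: stores
        store[key] = b"x" * record_size
    keys = list(store)
    random.shuffle(keys)
    random_reads = sum(1 for k in keys if store[k])               # phase 2: random reads
    sequential_reads = sum(1 for k in sorted(store) if store[k])  # phase 3: READS
    return len(store), random_reads, sequential_reads
```

Each phase stresses the transport differently: stores are write-heavy, random reads defeat read-ahead, and sequential reads are where prefetch and compression pay off.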
If this test had been run over a slower network, the stores would be slower on a mapped drive than with xfServer, and the SCSCOMPR form would outperform all other methods, given records with some degree of compressibility. If you have an ISAM file with ISAM compression, isutl -v can give you an idea of how much compression of the data can help with xfServer.
Diagram 3 shows network overhead when accessing the file with 128-byte records in the same three ways (mapped drive, xfServer without compression, and then xfServer with compression) and with the same program.
Diagram 4 shows the network overhead for… Well, there is no diagram 4. Our final test used two systems, each with the remote 512-byte-record file opened on the network share. One just had the file open. The other ran the test. This setup used 50 MB of bandwidth constantly for several minutes, so the diagram would run off the page and would show only a pegged green line. Instead, here's a table that summarizes our findings. The last row documents the results for the multi-user mapped drive setup and illustrates how much worse things get when using a mapped drive in a multi-user environment. Bandwidth is quickly saturated, and it becomes difficult and time-consuming for the network to accommodate large groups of packets. As pointed out earlier, these tests used a physical switch; xfServer becomes even more important when the network involves a hub.
Store (in seconds)
Random read (in seconds)
Reads (in seconds)
Mapped drive, 1 user
xfServer, no SCSCOMPR
Mapped drive, 2 users
One final thing to be gleaned from this is that programs that transfer large amounts of data across a network (such as month-end or day-end processing or reports) can quickly overload the network. Do what you can on the server, by using xfServerPlus, the new Synergy Language Select class, or by running the program on the server itself.
If your xfServer system is not performing in line with the results described above, I encourage you to contact our Developer Support team so we can assist you in optimizing your system. If you have questions or would like more information on this topic, contact Synergy/DE Developer Support or your account manager.
On this topic, a customer recently reported an ELOAD/system error 64, "The specified network name is no longer available." When they changed the UNC path to use the IP address rather than the name of the machine, the problem appeared to be resolved, which suggests that poor DNS lookup performance was causing the error. The problem appeared to be disconnecting client machines on the network, and the redirector's recovery mechanism after such a failure also depends on DNS working. Compounding this, Microsoft caches DNS lookup failures, so one failed lookup can cause further issues.
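The workaround works because a literal IP address short-circuits name resolution entirely. A small sketch of the distinction:

```python
import ipaddress
import socket

def resolve_target(host):
    """Return an IP for host. A literal IP address is returned as-is
    (no DNS round trip); a name requires a DNS/hosts lookup, which is
    where slow or failing DNS servers come into play."""
    try:
        return str(ipaddress.ip_address(host))  # already an IP literal
    except ValueError:
        return socket.gethostbyname(host)       # name: DNS/hosts lookup

print(resolve_target("127.0.0.1"))  # 127.0.0.1  (no lookup performed)
```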
We get quite a number of support calls with either performance or system-down issues related to installed security suites, mostly involving antivirus software. In most cases the culprit ends up being incorrect setup of the antivirus software.
Let’s first consider what antivirus software has to do and how it ships by default.
In today's cat-and-mouse game, the security software vendors are trying to keep up with all of the malware generators that pop up daily. A typical antivirus signature file contains over 80 MB of compressed signatures, and the major players like Trend Micro, McAfee, Symantec, VIPRE, and Kaspersky provide multiple signature updates daily. The problem then is deciding what to scan and when to scan: you obviously don't want to miss an infected file that's downloaded between updates to the scan databases, but you also don't want to bog down your system unnecessarily. By default, most security products scan all files once daily and use real-time scanning to scan infectable files on both read and write. Some even default to continuously scanning all files. Though each vendor has different terminology for "scan on read" and "scan on write" (in fact, some confuse read with write and write with read), "scan on read" effectively means scan every time a file is opened, and "scan on write" effectively means scan only when a file opened for write is closed. Some vendors even have a flag to scan all files on close. And some products, like VIPRE, have no concept of scanning on write only.
Now that we know how these products handle file access, let’s consider some scenarios on live systems.
Scenario 1 – When “scan all files” is set
In this scenario, every file may be scanned for a virus on open and on close, regardless of writeability. Consider scanning a .vhd virtual image, or a Synergy DBMS file, every time a user opens or closes it. (Both file types are usually opened for write.) The same would apply to every file accessed in your SQL Server and Oracle databases, and to all of your Synergy .dbr and .elb files. The implications for your system performance are obvious.
Scenario 2 – Scan only infectable files
In this scenario, infectable files may be scanned on open and close. By default in most vendors' products, this includes Synergy .ism files as well as .vhd files. This scenario, too, has a significant impact on your system performance due to the overhead of scanning large files.
Scenario 3 – Scan only infectable files on write
In this case, .exe and .dll files are scanned only when updated, but a .vhd or a Synergy .ism file would still be scanned on close, because such files are usually opened for write. This technique might be good for a general-purpose file server of Word documents, for example, but not for a data server.
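The three scenarios reduce to a small decision rule. Here is one way to model it (terminology simplified; real products differ, as noted above):

```python
def should_scan(policy, event, infectable):
    """Whether a real-time scan fires for a file event.
    policy: 'all' (scenario 1), 'infectable' (scenario 2),
            'infectable_on_write' (scenario 3)
    event:  'open_for_read' or 'close_after_write'
    """
    if policy == "all":
        return True
    if policy == "infectable":
        return infectable
    if policy == "infectable_on_write":
        return infectable and event == "close_after_write"
    raise ValueError(f"unknown policy: {policy}")

# A Synergy .ism file (treated as infectable by default in most
# products) opened for write still gets scanned on close under
# scenario 3:
print(should_scan("infectable_on_write", "close_after_write", True))  # True
```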
As you can see, without some degree of tuning, virus scanning products can have disastrous effects on system performance. (You can use the Sysinternals Process Monitor to see the overhead your virus scanning tool is causing.)
For obvious reasons, scanning of files takes place at high priority in the kernel mode of the operating system. This usually impacts both system time and user processing time. Additionally, many vendors now use the Vista filter manager, and I previously blogged about the performance penalties of such hooking on Vista and Server 2008. Luckily, the overhead is significantly reduced in Server 2008 R2 and Windows 7.
In our recent internal use of Microsoft's SharePoint server, we were seeing dramatic performance problems when installing and uninstalling software, and even when the IIS SharePoint services (which are .NET-based) were loading and jitting. Correctly disabling the "scan on open for read" options significantly improved performance. We also tried the VIPRE product, and this improved performance even further, though for a very specific reason. VIPRE, as stated previously, scans all files on open and close, and it gains its performance edge by recognizing signed, read-only EXE/DLL files and caching the results when they have not changed, so a re-scan is not required. This is what gives it a seemingly large performance gain. However, once you throw in files that are not signed, its scanning requires significantly more resources, because the "scan on read" functionality can't be disabled (which means files moved around by products such as Diskeeper must be re-scanned). Additionally, VIPRE also scans (but does not report issues with) otherwise excluded files, so the overhead is pretty much permanent for unversioned files like Synergy DBMS files.
The key is, after you have a clean full-file scan on a system, set scan on write only, scan only infectable files, and make sure that the file extensions of your databases and VHD files are set to no scan. And, due to its inability to limit scanning to writes, we do not recommend VIPRE for use with Synergy/DE installations.
(Of course I’m providing this information for information purposes only, and it is up to each company to set its security policies.)
Several weeks ago we had a new Ikon color printer installed. It has a separate Kodak PC running the printer drivers and color matching software. I noticed that it was Internet connected and that software updates were not being applied.
When we contacted the manufacturer, we were told the PC was an embedded XP device and did not need XP SP3 or the security patches. We immediately disabled the Internet connection (embedded XP devices are susceptible to viruses too), but that's not really good enough. To date the manufacturer still has not authorized XP SP3 or the regular monthly security patches, yet all printed documents go through this machine, and users can go to the console and copy documents from a USB drive or internal network locations. Once it's infected with a virus or worm, or even a botnet, we're SOL, because the manufacturer of the device doesn't support installing anti-virus software, and any such changes would require an engineer to reload the system from scratch.
The problems are not just with Microsoft. Adobe has had to patch its Flash Player and Reader already this year, and another Reader patch is due. How many of us keep the Adobe Reader and Flash players up to date?
Why is this such a big issue? Well, the problem is that these embedded XP systems can get infected. One example is the Conficker worm. In most cases Conficker is benign until it is woken up by its creators. Users don’t even know they have it, may not even have Internet access (or may not know that they do), and/or may have been infected internally. The only way to detect these kinds of issues other than with a virus scanner is to look at network traffic going back to “phone home.” I think an article from the San Jose Mercury News illustrates the problem well. Even if you have a patch available to avoid infecting a machine, what if every patch and/or daily antivirus update required a 90-day approval process?
My recommendation is that you work with the manufacturers of all embedded XP devices connected to your network to get the regular updates and XP SP3, and ensure that Internet Explorer is disabled in such a way that the machine's users cannot re-enable it. And be sure to keep your Adobe Reader, Flash Player, and similar products up to date.
In January we finally determined why file I/O on Vista and Server 2008 disks is slower than on Windows Server 2003. In a previous blog post I stated:
“The performance problem on disks that have been hooked by applications that use the new Vista/Server 2008 filter manager infrastructure can cause CPU overheads of at least 40% on all I/O operations, including cached I/O and locks, reducing throughput.”
So what applications use the new filter manager? Well, UAC's file system redirection on system disks (the luafv.sys driver) uses the filter manager, and many current antivirus applications use the filter manager on all the disks where they are set to perform real-time scanning.
In Vista, the initial hit of registering any application to use the filter manager on a volume is high, and it rises even higher for every operation type hooked. The UAC file system redirector ensures that writes to Windows-protected directories like \windows\system32 and \program files are redirected to the user's local path, which the user does have access to. If you use Yahoo Messenger on a Vista system, you will see it has this problem, because it always assumes it can write to program files. The reason the luafv.sys file system redirector hooks every file I/O operation on the system disk is that it tries to cache these redirected operations to avoid ever creating and writing the temporary redirected file to disk; however, this causes the performance issue on Vista unless file system redirection is turned off by disabling the service (which may cause applications like Yahoo Messenger to fail unless UAC is also turned off).
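For illustration, UAC file virtualization redirects blocked writes into the per-user VirtualStore directory. A sketch of the path mapping (the user profile path here is hypothetical):

```python
import ntpath

PROTECTED_ROOTS = ("C:\\Program Files", "C:\\Windows")

def virtualized_path(path, local_appdata="C:\\Users\\me\\AppData\\Local"):
    """Where a UAC-virtualized write to a protected directory lands."""
    for root in PROTECTED_ROOTS:
        if path.lower().startswith(root.lower()):
            relative = path[len("C:\\"):]  # drop the drive prefix
            return ntpath.join(local_appdata, "VirtualStore", relative)
    return path  # unprotected locations are written in place

print(virtualized_path("C:\\Program Files\\App\\settings.ini"))
# C:\Users\me\AppData\Local\VirtualStore\Program Files\App\settings.ini
```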
I had turned luafv.sys off on my Vista system; however, performance traces in Intel's VTune performance analyzer showed that I was still getting performance degradation from the filter manager when running our test suites. It turns out that the latest Trend Micro antivirus engine follows Microsoft's best practices and uses the new filter manager on all disks, so the previous work-around of using a non-system disk did not work on my machine.
In my dialogue with Microsoft, they indicated that they did not expect the data drives of an internal file server to always need an antivirus scan (by this I don't mean a file server in the Word-document sense, but rather a dedicated database server that has no Internet access), so the overheads related to the virus scanner would not apply to non-system disks; and even if a virus scanner were installed, it would only be set to scan the system disk in real-time mode.
The good news is that Windows 7/Server 2008 R2 significantly improve this situation. Though there is some overhead for the initial attach to the filter manager, additional attaches cause much less overhead, and the overall figure is far better than Vista's. Microsoft will continue to look at this area during the release cycle of Server 2008 R2 because of the impact it has when virus scanners use the filter manager and are set to real-time scan all disks on a system.
Over the years, Microsoft has provided many different ways to access data–ODBC, DAO, ADO, and ADO.NET (with data sets and data readers). The next data access technology is the Entity Framework with the 3.5 SP1 version of ADO.NET. Synergex has provided access to all of these technologies through the baseline ADO.NET 2.0 with its xfODBC driver. Synergex has developed its own ADO.NET 3.5 provider with the extended capabilities needed to interoperate with the Entity Framework and the Entity designers in Visual Studio 2008 SP1.
Microsoft views the Entity Framework as the future of all of its data access technologies – and products like SQL Server, Office, and the Visual Studio designers are all either upgraded or being upgraded to require access to databases via the Entity Framework.
Here is how Microsoft describes the ADO.NET Entity Framework:
“Database development with the .NET framework has not changed a lot since its first release. Many of us usually start by designing our database tables and their relationships and then creating classes in our application to emulate them as closely as possible in a set of Business Classes or (false) "Entity" Classes, and then working with them in our ADO.NET code. However, this process has always been an approximation and has involved a lot of groundwork.
This is where the ADO.NET Entity Framework comes in; it allows you to deal with the (true) entities represented in the database in your application code by abstracting the groundwork and maintenance code work away from you. A very crude description of the ADO.NET Entity Framework would be that it allows you to deal with database concepts in your code.”
The ADO.NET Entity Framework is designed to enable developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:
Applications can work in terms of a more application-centric conceptual model, including types with inheritance, complex members, and relationships.
Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
Developers can work with a consistent application object model that can be mapped to various storage schemas, possibly implemented in different database management systems.
Multiple conceptual models can be mapped to a single storage schema.
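The Entity Framework itself is a .NET technology, but the core idea, programming against a conceptual model that is mapped onto a storage schema, can be sketched in a few lines of Python. The table, column, and class names below are invented purely for illustration; this is an analogy, not the Entity Framework API:

```python
import sqlite3

# Storage schema: a legacy table with terse column names.
# The application never references these names directly.
STORAGE_DDL = "CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, emp_nm TEXT, dept_cd TEXT)"

# Mapping layer: conceptual property -> storage column.
# If the storage schema changes, only this map is updated.
EMPLOYEE_MAP = {"id": "emp_id", "name": "emp_nm", "department": "dept_cd"}

class Employee:
    """Conceptual entity the application code works with."""
    def __init__(self, id, name, department):
        self.id, self.name, self.department = id, name, department

def fetch_employees(conn):
    # Build the SELECT from the mapping, so a storage-schema rename
    # requires no change to application code that uses Employee.
    cols = ", ".join(EMPLOYEE_MAP.values())
    rows = conn.execute(f"SELECT {cols} FROM emp")
    return [Employee(*row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute(STORAGE_DDL)
conn.execute("INSERT INTO emp VALUES (1, 'Ada', 'ENG')")
for e in fetch_employees(conn):
    print(e.name, e.department)   # application sees conceptual names only
```

The point of the sketch is the indirection: the application deals in `Employee.name` and `Employee.department`, while the mapping layer owns the knowledge that these live in `emp_nm` and `dept_cd`.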
We continually review customer applications to assist with support and development issues, and in doing so we often come up with ideas to help customers debug problems they encounter. We use a product from Compuware called DevPartner Studio to help us track down C variable access problems in the Synergy components that sometimes cause instability in the runtime. I like to run customer applications with a special runtime built with DevPartner, which allows us to check boundary conditions while running "real" customer applications. DevPartner enables us to detect use of memory that has already been freed (dangling pointers) and access to memory before it has been written (a common cause of symptoms that move around depending on memory layout and time of day).
One recent application we examined was accessing uninitialized memory before writing to it. As we tracked this down, we realized the customer was using stack records and %MEM_PROC memory that had never been written. In certain cases this would produce random results, and in this particular case it caused the customer's application to fail when run under the DevPartner tool, because the memory now held a consistent but unexpected value.
As a test, we added support in Synergy/DE to see whether the Synergy runtime could also detect this use of uninitialized memory with minimal overhead when running in debug mode. It turns out that we can perform similar checking for assignment statements and "if" tests, and we can differentiate between stack memory and %MEM_PROC memory. This functionality also enables a developer to break in the debugger immediately after the statement that uses the uninitialized memory.
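One common way debug runtimes detect this class of error is a fill pattern: newly allocated memory is filled with a known sentinel value, and the debug checks flag any read of a location that was never subsequently written. The sketch below is a hypothetical Python model of that technique, not the actual Synergy or DevPartner implementation:

```python
# Fill-pattern detection, modeled in Python. A real debug runtime
# does this at the C level with a sentinel byte such as 0xCD.
SENTINEL = 0xCD  # fill byte meaning "allocated but never written"

class DebugMemory:
    def __init__(self, size):
        # Fill fresh memory with the sentinel and track writes.
        self._mem = bytearray([SENTINEL] * size)
        self._written = [False] * size

    def write(self, addr, value):
        self._mem[addr] = value
        self._written[addr] = True

    def read(self, addr):
        # The debug check: reading a never-written slot is an error.
        if not self._written[addr]:
            raise RuntimeError(f"uninitialized read at offset {addr}")
        return self._mem[addr]

mem = DebugMemory(16)
mem.write(0, 42)
print(mem.read(0))        # 42
try:
    mem.read(1)           # never written: the debug check fires here
except RuntimeError as e:
    print(e)
```

This is also why such bugs "move around": in a normal run the never-written slot holds whatever happened to be there, while under a debug tool it holds the consistent (but unexpected) fill value.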
We are considering adding this new debugging functionality to a future release of Synergy/DE. However, so that we can get this useful tool into your hands sooner, we are planning to include it as an “experimental feature” in an upcoming patch.
“Experimental features” are features that are under evaluation. They are for early adopters to use and provide us with feedback on. They will be supported, but they may be modified or even removed in subsequent releases.
So look for this new experimental debugging feature in an upcoming patch, and consider trying it out. Like the recent feature that detects mismatched global data-section sizes (which can cause runtime crashes), uninitialized-memory detection continues our effort to add debug-time detection of coding errors and help you produce more reliable applications.
This comes to you from the Microsoft PDC in Los Angeles, where I am among over 10,000 attendees. The PDC is Microsoft’s futures conference where they preview some of the technology coming out over the next couple of years.
Microsoft has demonstrated real UI improvements in Windows 7, improvements that made almost every attendee cheer. For example, Windows 7 improves UAC so that you are no longer limited to a simple "On" or "Off" choice. And the new iPhone-like touch support is certainly cool; it looks like within five years almost every laptop and LCD monitor will include touch support. The great thing about touch is that no UI Toolkit changes are required in your Synergy/DE Windows applications, because touch input translates to normal mouse movements and clicks.
Microsoft has also set a goal of making Windows 7 run faster, boot faster, and require less memory than Vista, targeting the new ultra-mobile 10" laptops that have flash drives and 1 GB of memory. This goes hand in hand with new features in the .NET framework that reduce memory requirements and provide improved interoperability with lower overheads. At Synergex we will be testing Synergy/DE with Windows 7 in the near future to ensure everything works as well on Windows 7 as it currently does on Vista and Server 2008. Windows 7 also shares the same core binaries as Server 2008 R2, so any performance improvements in Windows 7 will benefit the server platform as well.
I also want to let you know that we have recently completed our 64-bit ActiveX list implementation, which will be released in our upcoming 9.1.5a version. This means that 64-bit UI Toolkit applications are now possible on 64-bit operating systems with the same features as their 32-bit counterparts (provided the ActiveX controls you use are also available in 64-bit versions). This enables you to take full advantage of the extra memory and scalability available with Server 2008 x64 Edition. (Microsoft has announced that Server 2008 R2 will be 64-bit only, making Server 2008 the last 32-bit server O/S.)
I thought it was about time I posted an update to my Vista post of April 16. In that post I recommended holding off on Server 2008 deployments until more data was available.
So let’s state the real problem.
“All file operations (read, write, file-position, etc.) are 40% slower on a Vista and Server 2008 system disk than they are on XP or Server 2003 system disks.”
These operations are slowed down even when they are serviced from the O/S cache subsystem. The cause of the 40% overhead is the registration of a driver with the file system filter manager framework newly introduced in Vista; the overhead is incurred even if the driver itself performs no work and simply returns. Registration can be for a particular device, not just a disk drive. In particular, the UAC file system virtualization driver, UAFV.SYS, registers itself with the filter manager framework to implement the protected-file virtualization feature new in Vista. As a result of the filter manager overhead, all read/write/seek operations to the C: drive become slower regardless of whether any file virtualization actually occurs. Disabling the UAFV.SYS driver restores system disk performance.
How can you see this for yourself? Use the Sysinternals Process Monitor (procmon) utility to watch all the I/O operations occurring on your C: drive; every one of those operations is slowed down on a Vista or Server 2008 system disk. This accounts for some of the CPU bottleneck when your laptop starts, for slower virus scans on Vista system disks, and so on.
Because nearly all laptops, most small business servers, and the majority of current desktops have a single system disk, this problem impacts all current Vista and Small Business Server 2008 users to some degree. It is exacerbated when other utility and anti-virus software takes advantage of the new Vista filter manager framework, at which point performance on non-system disks is impacted as well.
One mitigation is to read and write sequential data in much larger blocks. We changed Synergy/DE to use 4 KB buffers for sequential output in our recent 9.1.5 release; however, the semantics of sequential input (which allow random reads) preclude us from doing the same on input without slowing down performance. Random ISAM reads can't use larger blocks without damaging performance at the disk level, so they incur the CPU overhead. Most of the I/O patterns I see with procmon also don't meet the bar for larger I/Os, so the real issue is to get the problem fixed in the O/S.
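The effect of the 4 KB buffering change can be illustrated with a short sketch: instead of issuing one OS call per record, records accumulate in a 4096-byte buffer and reach the kernel in large blocks, cutting the number of (slowed-down) per-call transitions. The file name and record size below are invented for illustration; this is not the Synergy/DE implementation:

```python
import io
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "seq.dat")

# Open an unbuffered OS-level file, then wrap it in a 4 KB buffer,
# mirroring the idea of buffering sequential output in larger blocks.
raw = open(path, "wb", buffering=0)
buffered = io.BufferedWriter(raw, buffer_size=4096)

record = b"x" * 80                 # an 80-byte sequential record
for _ in range(1000):
    buffered.write(record)         # usually just a memory copy
buffered.close()                   # flushes the final partial block

print(os.path.getsize(path))       # 80000
```

With the buffer, the 1,000 record writes collapse into roughly twenty 4 KB kernel writes, so the fixed per-operation overhead described above is paid far less often.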
If you disable UAC (which we don't recommend) and you have never virtualized a file (for example, if you do this at system installation), you can use the registry editor to make the uafv.sys service visible and then disable it. Doing so also means you can't re-enable UAC until the service is re-enabled. Alternatively, you can ensure that all your data files (including your temp and DTKTMP logicals) are placed on a non-system drive, and you won't see most of the impact of this problem.
We are currently working with Microsoft to provide a fix for this in the next Service Pack and will keep you informed of our results.
As a side note, we also noticed that scheduled tasks run slower on Vista and Server 2008. Customers typically use scheduled tasks to generate reports and run day-end processing overnight. These tasks now run at a low priority class. You would expect an idle system to run them at almost the same speed regardless of priority class (after all, the idea is that low-priority items use available resources when no higher-priority items are running), but it appears the programs no longer consume available resources as they did on prior versions. Microsoft sees this as by design, which is hard to believe. We have introduced a new API in 9.1.5 that lets you reset the priority class of your scheduled tasks so they retain the performance characteristics of prior operating system versions.
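The idea behind such an API can be sketched with the POSIX nice value, which is the Unix analog of the Windows priority class. On Windows the real mechanism would be SetPriorityClass; the function name and values below are invented for illustration and are not the Synergy 9.1.5 routine:

```python
import os

def set_task_priority(nice_value):
    """Adjust this process's priority (POSIX nice value) and
    return the value now in effect. A scheduled job that finds
    itself demoted could call something like this to make its
    resource usage predictable. Note that an unprivileged POSIX
    process can only lower its priority (raise its nice value)."""
    os.setpriority(os.PRIO_PROCESS, 0, nice_value)  # 0 = this process
    return os.getpriority(os.PRIO_PROCESS, 0)

# Demote this process the way a task scheduler might demote a job.
print(set_task_priority(10))
```

The Windows case is the mirror image: Task Scheduler demotes the job, and the new API restores the priority class the application had on earlier operating system versions.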