
Synergex Blog


XP: The O’Hare of Computer Network Traffic

By synergexadmin, Posted on July 30, 2010 at 4:57 pm

Not long ago, I found myself with a lot of time to kill while sitting in an airport and waiting out a flight delay. The culprit, I was told, was a weather system circling over Chicago. At first this seemed odd, since the plane I was awaiting had neither originated in nor connected through O’Hare.

Chicago’s O’Hare Airport is one of the worst places you can find yourself when trying to make a connection. Whenever I see O’Hare on a flight itinerary, I immediately offer up the prayer: “ORD, have mercy.”

I’d intentionally avoided Chicago in my travel arrangements, so I was a little perturbed that O’Hare was still causing me a headache. I began musing on the ripple effect it has on the nation’s air traffic network, and a sudden thought occurred to me: “O’Hare does to the air travel network what XP does to data networks.”

I knew I was being unfair to Chicago, but hey: I was irritated. In reality, O’Hare is a well-oiled machine when compared to what Windows XP can do to a network. But thinking of the nation’s air traffic as a computer network really got my Analogy Engine started. I could see every plane as a network packet carrying bits and bytes of information. I saw traffic control towers as network controllers, and airports as individual computers (or hubs, as the case may be). I saw it all in a whole new light…and then wondered, “What would happen if an airport really handled network traffic in a manner similar to XP?”  It was a chilling thought.

For all of its apparent problems, O’Hare still has a significant networking advantage over the default operations of XP (and Windows 2000 and Server 2003): Selective Acknowledgements. The concept at the heart of SACKS allows the controllers at O’Hare to bring planes in for landings without regard for the order in which they were supposed to arrive.

If you’ve ever found yourself trying to diagnose why an XP user is complaining that “the Internet is slow” – even while everyone else on the network seems to be enjoying good speeds – then you’ve already seen what can happen when SACKS aren’t enabled for TCP communications. In fact, in this day and age, SACKS are so vital that it’s amazing Microsoft has never put out a fix on Windows Update – even as an optional one – that enables Selective Acknowledgements. Instead, they leave it up to systems administrators to manually edit the registry of each user’s machine – if, that is, they’re even aware of the problem or its solution.

I should warn you now that because I’m not interested in being the unwitting cause of a registry foul-up that destroys someone’s computer, I’m not going to tell you what those registry tweaks are. There are plenty of articles that will walk you through the process of enabling Selective Acknowledgement on Windows XP, as well as tips on setting the TCPWindowSize to further enhance performance. If you’re just reading this in hope of finding a quick fix to your XP networking woes, you might want to move along now.

On the other hand, if you’d like to learn a little more about SACKS and why it’s so important, then stick around and read on…

Understanding the importance of Selective Acknowledgments is perhaps easier if you understand what happens when they’re not enabled. Imagine that you’re Xavier Purdydumm (XP to your friends), and you’re the “receiving” traffic controller at Nitwit Sacksless Airport (airport code: NTSX)…

You arrive at work before a single plane has entered your airspace. You then review a list of scheduled arrivals for the day. You note the number of planes you expect today, and the order in which they will land. It looks like a light traffic day – you’re only expecting 25 planes before your shift ends.

You’re now ready, so you send out a notification that traffic operations can now commence. And look here! The first plane is on approach. So far, so good.

Plane One lands without incident, as do planes Two and Three. But uh-oh, now there’s a problem: Plane Five just landed…but where’s Plane Four?

You immediately send out a plane of your own. Its instructions are to fly to Plane Four’s origination point, and to let its traffic controller know that Plane Four never arrived. In the meantime, Plane Six comes in, lands, and heads to the gate.

Still no Plane Four. However, regulations are regulations, so you send out yet another plane to once again request that Plane Four be sent over. And aha! A new plane arrives. It’s…

…Plane Seven.

You repeat the process again and again, once for each time a new plane arrives. Finally, after you’ve sent off your 15th plane to request the location of the missing aircraft, plane Four suddenly lands. Turns out that plane Four got diverted when it ran into a storm system over Lake Cisco – the same storm system that you just sent your last 15 planes flying into.

Well, that’s not your concern. You’re here to count planes coming in. And speaking of which, here comes another. It touches down, rolls past you, and you see that it’s plane Five. You shake off a sense of déjà vu and cross it off your list.

You also cross plane Six off of your list – almost (but not quite) certain you’ve seen it before, too – when something odd happens: plane Four lands and taxis by.

Now how could that have happened? You’ve already crossed it off of your list, but there it is (again), plain as day. Deciding you must have made a mistake, you erase the check marks next to planes Five and Six, since there’s no way they could have landed if plane Four hadn’t landed yet.

And just to prove it: Here come planes Five, Six, and…Four? Again?!?

By now, you’re completely confused, and the fact that one of your underlings keeps reporting that there are already planes sitting at Gates 4 through 6 is really getting on your nerves. He should clearly see that those planes haven’t been checked off your list, so why is he bugging you? You tell him to take care of it, as you’re very busy counting planes at the moment, so he tells them all to get lost. Taking a bit of initiative, he shoos away the planes waiting to park at Gates 7 through 18, too.

This process repeats itself again and again – a total of five times – before the count continues normally with planes Seven, Eight, Nine, Ten, and so forth. By the time plane Twenty-Five successfully touches down and unloads at the gate, you feel that your easy traffic day has somehow become a nightmare.

And it was a nightmare – but not just for poor Xavier Purdydumm. It was a grueling day for the traffic controller at the airport from which all twenty-five planes departed, as well as for everyone else using the air traffic network – including the folks that never even passed through NTSX.

Let’s take a quick look at the network traffic as shown in the above scenario, versus how it might have looked if Xavier had been able to work with Selective Acknowledgements:

No SACKS – 51 packets received:
1-2-3-5-6-7-8-9-10-11-12-13-14-15-16-17-18-4-5-6-4-5-6-4-5-6-4-5-6-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25

SACKS – 29 packets received:
1-2-3-5-6-7-8-9-10-11-12-13-14-15-16-17-18-4-19-20-4-21-22-4-23-24-4-25-4

You’ll notice that the first 18 packets are identical, but after that things start to go awry. With only 25 packets in the total communication, a disruption on the 4th packet caused the network to carry 75% more traffic than would have been necessary had SACKS been enabled. Why?

Under TCP, it’s the duty of the receiver to notify the sender when a packet arrives out of order. Unfortunately, SACKS-less communications require that all traffic arrive in order. And just to make matters worse, the originator of the traffic has its own set of rules to follow – the default being to resend a packet only once it has been requested in triplicate.

Now, in the example above, one might say that it was the storm over Lake Cisco that caused the issue, but it’s hard to pin the blame on the latency the weather introduced. Sure, it certainly caused the disappearance of the original “plane Four.” It also slowed down the traffic both to and from Nitwit Airport, thus allowing fifteen requests to be sent from NTSX before the originator ever received the first three and re-sent plane Four.

But note that the storm caused an equal number of duplicates to be dispatched, whether the protocol had SACKS enabled or not, so as a “factor” the weather pretty much washes out; everyone has to deal with it.

So while the weather causes things to slow down a bit (latency), the root problem is that SACKS-less communications require the sender to resend packets that have already been received, in addition to the lost packet. It’s bad enough in the tiny scenario shown above…but consider the impact if there had been 1,000 packets sent with multiple packet losses.
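
If you prefer arithmetic to analogies, here’s a minimal back-of-the-envelope sketch of that difference, written as a tiny Synergy DBL program. It’s an illustration of my own rather than a model of the full scenario – it assumes a single lost packet and ignores the extra storm-induced duplicates, so its totals come out lower than the 51 and 29 above – and the packet counts and window size are made up purely for demonstration.

    ; Toy comparison: 25 packets total, packet 4 lost, 18 already in flight
    ; when the hole is noticed. All figures are illustrative assumptions.
    record
        total_packets   ,i4,    25      ; packets in the whole conversation
        lost_packet     ,i4,    4       ; the packet that never arrives
        in_flight       ,i4,    18      ; packets already sent when the loss is spotted
        no_sacks        ,i4             ; wire traffic without Selective Acknowledgements
        with_sacks      ,i4             ; wire traffic with Selective Acknowledgements
    proc
        open(1, O, 'TT:')

        ; Without SACKS the receiver throws away everything that arrived after the
        ; hole, so the sender resends the lost packet plus everything behind it.
        no_sacks = total_packets + (in_flight - lost_packet + 1)

        ; With SACKS the receiver keeps the out-of-order packets, so only the one
        ; missing packet has to cross the wire a second time.
        with_sacks = total_packets + 1

        writes(1, "Packets on the wire without SACKS: " + %string(no_sacks))
        writes(1, "Packets on the wire with SACKS:    " + %string(with_sacks))

        close 1
        stop
    end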

As I mentioned before, there’s a fix that allows you to turn on Selective Acknowledgements, but it’s not easy to implement – particularly if you’re a developer with multiple software installations at customer sites. The only way around the problem (and remember that it affects Windows 2000 and Server 2003 as well) is to modify the registry. You may find resistance from your customers when you tell them that they’re going to need to launch RegEdit and start adding DWORD values on every XP workstation they own.

For those of you who are wondering why SACKS-less networking is even in use on XP, remember that “Selective Acknowledgement” is a networking paradigm that came about long after the NT operating system had been created. Back then, when NT launched, there was no such thing as a “high-speed” internet. Networking technology was designed primarily to deal with LANs, which generally meant low-latency communications and fewer lost packets.

Years later, Windows 2000, Windows XP and Server 2003 were introduced. Everyone would probably agree that they were huge steps forward, but unfortunately they all borrowed heavily from NT networking services. That meant that they also adopted a SACKS-less networking default – even as internet speeds, overall network traffic and latency potentials were skyrocketing.

So the next time the skies are full over Chicago, the clouds are massing above Lake Michigan and you figure you’re either going to be late for dinner or late for your connection, remember Xavier Purdydumm…and thank the ORD for Selective Acknowledgements and the fact that O’Hare, at least, has made an effort to keep up with the times.


Learning from Stupidity

By synergexadmin, Posted on July 9, 2010 at 5:56 pm

I'll be the first to admit that I've done some really stupid things in my life.

Like the time I decided to paddle a canoe across a mile-wide river even as threatening clouds loomed on the horizon.  Or the time I got stuck on a hike by taking a "short-cut" which involved shimmying around an overhang, leaving me suspended over a 70-foot drop to some very sharp, very hard rocks below.

Then there was the time I remoted into a production Alpha server and decided to shut down TCP/IP for a few seconds. Now that was fun; I figured my first month on the job was also going to be my last.

But all of these dumb moves — and many others — have at least one thing in common: Though I learned something, everything was over so quickly that I never had much time to worry about the repercussions.

Not so, yesterday.

My laptop had been causing me more and more grief lately, so I decided it was time to re-install the OS and start from scratch. I wasn't in a huge rush, however, and I had other things to do anyways. So it took me several days to complete my preparations for the wipe, during which time I methodically moved files from my laptop to a backup device.

Yesterday, before lunch, I declared my system ready for reinstall, and pulled the proverbial trigger. I reformatted the hard drive, installed a clean copy of Windows 7, and then ran Windows Update to get everything up to snuff. Success! I was really rocking, and I realized that if I hurried, I could get a full backup of my "clean" build before I left for lunch. So of course, I did something incredibly, unbelievably stupid.

Lesson #1: Do NOT destroy a completely valid, disk image backup to make room for a "fresh" disk image backup.

Turns out that my backup device — a 300GB external drive — was getting a little full. I'd been faithfully (more or less) doing disk image backups for quite a while, with the most recent being dated just last Friday. But those files were just SO BIG and I really needed the space for a new backup set.

My rationalization was pretty solid: I'd backed up copies of only those files that I needed, they were all organized well, and I had ISO images of all the programs I was going to need to re-install, so what's the point in keeping a backup of a system I'm never going to use again anyways?

Plus, I really needed the space.

So I deleted the disk image backup, started a new one from scratch, and went to lunch. When I returned, the backup was complete. Moving right along, I quickly copied my well-organized backup files into place, and started with software installations.

Someone upstairs was watching out for me, however, because the first software I re-installed was a tiny little program that allowed me to access very special, very important and very irreplaceable encrypted files. And though it installed without a hitch, I quickly found that the encrypted files it opens…

…weren't there.

They weren't in the folders I'd copied back to my laptop, and they weren't on the backup drive. I searched network drives, network computers, and even checked a USB flash drive on the off chance that I'd momentarily lost my mind, transferred them there, and then forgotten about it. Perhaps the worst part was that I had specifically made sure that those files had been backed up two or three days earlier, and I knew everything was OK.

Hadn't I?

I finally gave up on locating them the "easy" way, and started downloading software that scanned hard disks to recover deleted files. After trying five different freebie versions, each of which was a dismal failure, I'd almost given up hope. So just before midnight, I gave in and downloaded a try-before-you-buy piece of software called File Scavenger.

The demo version offers the ability to scan a hard drive and locate darn near everything that was ever on it and not overwritten, but only lets you recover 64K of a file before it asks you to pay. Knowing I'd happily pay the $49 if it worked, I downloaded and installed it. Upon running it, however, it looked as if it was going to take at least a couple of hours to scan whatever was left of my hard drive after the format/reinstall, so I decided to retire for the night and get some sleep.

Lesson #2: You can't sleep when you've probably just lost something that's irreplaceable.  (It's the Not Knowing and the What-If's that will keep snapping you back to full consciousness…again and again and again.)

Early this morning, I was back at my desk, with the knowledge that if the scan that I had left running was going to find something, it would have done so by now. I watched the ribbons bounce around on my laptop's monitor. I probably stared at them for a full minute before steeling myself for the worst, taking a last sip of coffee, and moving the mouse to break the spell of the screen saver.

There they were. I almost couldn't believe it. All three of the large, encrypted files that contained countless (or at least, well in excess of 150,000) other files.

Lesson #3: When pulled from a wallet too quickly, a credit card can cut through the pocket that holds it. Sometimes, it's safer for your wallet — and easier on you — to try the Pay-For "premium" solution before you waste hours hunting down the free alternative.

It was the fastest online purchase I've ever made. And within 30 minutes, I'd recovered all three files and confirmed that they were intact and had all of their information. I'd also backed them up (again) and re-confirmed their existence on the backup drive. I then put them on the hard drive of two other computers. Gun-shy? Absolutely.

But I've got to say, this software is amazing — and more than a little scary, too. While scanning my laptop's hard drive, I found a lot of stuff that shouldn't be there. Like stuff that's been deleted for years. Scanning my backup drive, a networked personal drive we use to keep copies of pictures, music, movies and (eek!) bank account information, and my little USB flash drive, I found lots and lots and lots of stuff that simply shouldn't be there.

Lesson #4 (Unintended): Deleted isn't deleted until you make it so.

Turns out that NTFS is really, really good at keeping your files intact even after they've been deleted — or even subjected to a quick re-format. FAT32 is as fragile as crystal by comparison, but still has the potential to leave a file intact long after you've deleted it. And while most everyone who's reading this already knows to use a disk utility to overwrite "unused" disk space before getting rid of a drive, remember that until you do, the data is likely still there.

And by the by…did you know that most printers and copiers have hard drives in them? Think twice before you donate or sell them, because the person who takes them off your hands may have File Scavenger (or something similar) in their possession! What I've learned — and now purchased — brings a whole new world of (shady) opportunities to the table. For instance, my neighbor down the street actually has a bunch of printers sitting in his yard under a big sign that says "Take Me I'm Free" (no kidding). It's suddenly tempting to pick them up and take a little peek inside, but fortunately (for both of us) I don't have the time right now, as I'm heading out the door on holiday in only a few short hours.

Now, if only I could just learn…

Lesson #5: Don't post a blog entry about your stupidity where the director of your department is likely to read about it.

Then I could be reasonably sure that my job will still be secure when I return from a much-needed vacation…

And yes: I'm going to Disneyland.


Another TechEd Sticky Note

By synergexadmin, Posted on June 10, 2010 at 11:58 pm

The other night, I discovered the way to beat the heat while here in New Orleans. It’s a fruity little concoction known as the Hurricane, and while it doesn’t actually affect the climate around you, it sure makes feeling hot and sticky a lot more enjoyable. I’m also pretty sure I know how it got its name: in the morning, you find yourself trying to reconstruct the previous 12 hours of your life by putting together the pieces and fragments of your memory.

TechEd 2010 draws to a close this evening, and though it’s been increasingly difficult to find sessions that seem pertinent to us Synergexians, it’s still been a worthwhile experience. I’ve learned a lot just by watching presenters step through the build of a Silverlight UI using Microsoft Expression, or show off the latest features of Visual Studio 2010 and how it can be used to quickly create a web app, or walk through the use of new simplified Windows Communication Foundation 4 features. I’ve even filled in the holes in my schedule with sessions on interesting (to me) topics, such as IPv6, trends in cybercrime, and hacker techniques.

Which all brings me to the point of this little blog entry: It seems to me that the value of conferences lies not in the number of sessions that directly apply to you, but in the quantity and quality of the little tidbits you pick up each day. It’s in the discussions you have with other developers and like-minded individuals – whether they take place while sitting down over a cup of coffee, or simply during a quick ride in the elevator. It’s in the creative ideas that spring up when you see a clever implementation and wonder if you can apply the same techniques to an unrelated solution of your own. It’s in the tips, tricks and techniques that you pick up, which will not only save you hours, days, and even weeks of effort in the year ahead, but which can also be shared with the rest of your team to make them more productive as well.

Just a sales pitch for SPC2010? Perhaps…but that wasn't the intent. After all, this is my blog, and with it I get to share helpful experiences from my time “out in the field.” If writing about it all means I’ll get to see more of you when we set up shop in October at the Citizen Hotel, then so much the better. But in the end, my little revelation about the value of coming to TechEd – even with so much focus on technologies that I can’t use – is helping me to sit back and enjoy this final day of the conference, secure in the knowledge that I’m going to be learning something interesting at every turn. And isn’t that what attending the conference is all about?

That, and the Hurricanes, of course…


A Sticky Note from TechEd 2010

By synergexadmin, Posted on June 9, 2010 at 12:00 am

So, I’m here at TechEd 2010 in the hot, muggy, all-around sticky town of New Orleans. I’m pretty sure that the person who decided that holding a summer conference in the bayou was a good idea is not here, as I’ve yet to hear of any lynchings.

Fortunately, the conference center is nice and cool (I’m sure the air conditioning bill is staggering), and the fact that I’m surrounded by thousands of techies – mostly of the male variety – is somehow less onerous when combined with the cool, climate-controlled breeze swirling about me.

TechEd is most certainly a Microsoft conference, and it can be difficult to find the right sessions to attend. Sure, we want to keep up on the latest and greatest uses of Microsoft technology, but only as they relate to the needs of Synergex’s customers. Learning all there is to know about SQL Azure, or figuring out how to take advantage of SharePoint SuperDuper Edition, just isn’t going to help many of us.

However, there’s been at least one session in every schedule slot that highlights some product, feature or design pattern that can assist Synergex customers who employ Microsoft technologies. Surprisingly, there have even been a few presentations that contained nuggets of good material that can be extended to some of our OpenVMS and Linux/Unix customers as well.

I’ll be following up in the days and weeks to come with some “Tech Tips” that will hopefully save some of you a headache or two. From diagnosing network problems that affect the performance of xf-enabled solutions (you’ve just gotta love what XP does to networks), to using Visual Studio 2010 to quickly set up a working CRUD application (which pretty much looks like it sounds, but at least it works!), to Silverlight desktop deployments (anyone for providing a Mac solution?), I’ll be trying to share some of the knowledge with those fortunate enough not to be trapped in the sauna known as the Big Easy.

Until next time!


Challenges Facing Synergy Developers

By synergexadmin, Posted on May 19, 2010 at 4:09 pm

I was recently involved in a discussion concerning Synergy tools and technologies, and how our customers can best position themselves to take advantage of them. Somewhere along the way, I was asked for (or I volunteered – I don’t exactly remember) my opinion about the current software development landscape, and the challenges that face so many of our customers today.

I identified three areas of concern for our customer base and the future of their applications. And since you’re reading this, I assume that you are a Synergy customer, so hopefully what I had to say will strike a chord with you – either as a challenge you’ve already faced and overcome, or as an issue with which you’re currently grappling.

GUI is King

Yes, you’ve heard it a million times before, but unfortunately you’re going to keep hearing about it until you’ve updated the look and feel of your application. The continued survival of almost every solution boils down to implementing one of two main GUI choices: web technologies or Windows .NET (which means WPF, Silverlight or WinForms).

If you read that last part carefully, you’ll note that this discussion is not targeted solely at the *nix and OpenVMS operations; if you’re on Windows already, but are still using the same UI Toolkit displays that your app was using years ago (i.e., you’re not incorporating .NET technologies with the Synergy/DE .NET API), then you’re in the same boat as everyone else.

Arguments about the efficiency of character-based data entry are still valid, but they’re becoming less and less relevant (or realistic) – particularly if you’re using the speed of data entry as an excuse to allow the rest of your application to wallow. While you may have several “workhorse” data entry screens, chances are they make up a very small percentage of your overall application. Information displays, reports, “look-ups” or maintenance utilities have no reason to be tied down to a cell-based presentation.

GUIs are simply too pervasive and too familiar to ignore, and purely cell-based software solutions cannot hope to compete for much longer. Even the most enthusiastic supporters of your application, the ones who can see beyond the green screen or the pseudo-Windows of Toolkit to the power of your application, will readily agree that it’s becoming more and more difficult to convince new hires (or prospective customers) of the superiority of your Synergy solution.

And I’m not just talking to the Synergy shops that develop vertical applications which are then distributed and sold; Synergy “end users,” the shops that have an in-house, custom-built Synergy solution, need to take note as well. Serious scrutiny of your Synergy application is coming with the next management or executive-level turnover.

Perception is Reality

We’ve all heard the term before, but I think it would be better stated as “People act on perception as if it is reality.”

One of the most common (mis)perceptions in today’s society is “newer is better.” It’s beaten into us at every turn and in every advertisement, from New and Improved Acme glass cleaner to Next Best Thing flat panel televisions. And unfortunately, a cell-based UI isn’t doing you any favors in this department. Cell-based apps are generally looked at as “old,” “outdated,” “legacy,” or simply (my favorite) “DOS.” It’s this perception that gives rise to a host of other concerns and questions. Interoperability. Reliability. Power.

But perhaps the most dangerous questions are the ones that aren’t asked. If the users of your application generally see it as a throwback to a bygone era, then it’s possible – nay, likely – that it never even occurs to them to ask the right questions. Can it communicate with Brand X’s software? Can it display graphs that give a visual indication of the state of your sales? Does it offer a web service? Does it support ad hoc reporting? These questions are the sales points from competing software vendors, and rest assured that management has heard the pitches – whether it be the CEO at your customer’s company, or the management team of your own.

Remember: Your UI is the gateway to the power of your application. If it looks like something that was distributed on a 5 ¼” floppy, then it’s time to seriously rethink the front end of your application.

Now is the Time to Act

The current recession has made life difficult for almost everyone. Sales are down, money is tight, and thoughts of corporate expansion have been put on hold.

For the past two years or so, companies simply haven’t had the money to invest in a brand new software solution, whether it be an ERP package to replace your in-house software, or a glossy, glitzy, Windows-and-SQL-based solution from your competitor. If you’re wise, you’ll look at this period as a reprieve from increasingly heavy competition, and as an opportunity to throw development efforts into high gear.

Use the time to ensure that your software has every last competitive edge when the wheels of the economy start moving again. Take advantage of the lull in sales and expansion, and incorporate the latest and greatest that Synergy has to offer. Put a GUI on your application – even if it needs to be one screen at a time. Use the Synergy.NET API to enhance the look and feel of the most commonly-used screens of your Toolkit application. Make the small investment in Synergy SQL Connect, and set up a data warehouse that can be used by SQL Reporting Services, or open up the data in your system to other commonly-used software by utilizing Synergy xfODBC. Create a web portal, add a web service or two, and take advantage of the APIs and web services of other software solutions to enhance your own.

In Conclusion…

I want it known for the record that I’m a huge fan of Unix/Linux, and absolutely love OpenVMS, so understand that I’m not advocating a switch to the Windows OS. Remember that you can still take advantage of the reliability and stability of both *nix and VMS backend systems, and just bolt on a Windows GUI client wherever appropriate. Heck, chances are that your users are already using Windows boxes running VT emulator software anyways, so there’s probably no hardware investment that need concern you.

And there are plenty of methods available to leverage your core systems and routines if you’re one of the many Synergy shops currently running a “green screen” solution. Investigate xfODBC, SQL Connect, xfServer and xfServerPlus. Install and play around with the (free) Synergy Data Provider for .NET, and get yourself acquainted with Windows- or web-based technologies, programming languages and development environments. Start cleaning up your code and separate that business logic from the UI components – it may not be as hard as you think.

A little research and a small investment in time will go a long way toward illuminating the path of a better, more robust and more marketable software solution.

Do you agree? Disagree? Have a story that’s relevant, or want to share your own challenges and solutions to the GUI problems or perceptions you’ve faced? Let us know about them, and let’s get the discussion started.


New select classes are a home run

By synergexadmin, Posted on September 23, 2009 at 4:18 pm

Early last week, I was given a copy of the beta build of Synergy/DE 9.3.  My task was to do some testing of one of the exciting new features it includes: the Select classes.

Now, testing isn’t always fun, and it can be frustrating trying to figure out if a bug is really a bug, or just a problem born of having no clue what I’m doing.  This time, however, any minor problems I encountered were completely overshadowed by the sheer awesomeness of the new classes.

The Select classes provide a SQL-like syntax for communicating with Synergy databases, and it’s amazing just how simple they are to use.  Once I had a basic understanding of how they worked, I was able to compress a simple READS loop – complete with “filters” and error checking – into a single line.

Consider the following code, which loops through active customer records and prints out the customer number, name and last sales date of anyone with no sales for more than a year:

    repeat
    begin
        reads(ch_cusmas, cusmas) [err=eof]
        if (cusmas.status .ne. 'A')             ; If customer is not Active, ignore it
            nextloop
        if (cusmas.last_sale.year < lastYear)
            call printLine
    end
eof,    etc…

The basic syntax and usage of the Select Class is:

    foreach myRecord in @Select(@From[, @Where][, @OrderBy])
        doSomething

And so, using the Select classes, I condensed everything into:

customers = new From(ch_cusmas, cusmas)
noNewSales = new Select(customers, (where)status.eq.'A' .and. last_sale.year < lastYear)
foreach cusmas in noNewSales
    call printLine

(I actually condensed the first three lines into just one foreach statement, but the result is a line of code that doesn’t fit nicely into a blog entry, and therefore becomes more difficult to read.)

The syntax is neat, but it’s not the best part; the really cool stuff is happening under the hood.  The actual database I/O layer is now handling all of the “filter” logic, and it’s doing it faster than a regular READS loop can.  In fact, during my tests, a filtered return of around 18,375 records showed a performance benefit that ranged from 11 to 21 percent.  Now, that’s a small data set and we’re only talking about milliseconds, but it demonstrates a performance boost nevertheless – and that’s for a local application, running against a local database.  The savings over a network connection to a remote database (i.e., xfServer) are likely to be enormous, as the I/O layer on the server is now doing the filtering, rather than returning the data to the client to handle.

Other features include the OrderBy class, which (as expected) sorts the returned data in either ascending or descending order based on the key being read.  The classes also provide for a sparse record population, in which only the fields needed by the application are actually returned.  There are even methods available to get at each individual record returned in the set, write back to the file, etc.
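
Just to give a flavor of how that might look in practice, the earlier example could be sorted so that the longest-neglected customers print first. Treat this as a rough sketch rather than verified code: I’m assuming here that last_sale is a key on the customer file and that the OrderBy class exposes an Ascending() method – both of those are my assumptions, not something pulled from the beta documentation.

customers = new From(ch_cusmas, cusmas)
noNewSales = new Select(customers, (where)status.eq.'A' .and. last_sale.year < lastYear, OrderBy.Ascending(last_sale))
foreach cusmas in noNewSales            ; oldest last-sale dates come back first
    call printLine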

The fact that an update to Synergy/DE 9.3 is all that’s required is impressive as well.  There’s no need to perform any database conversions, or add additional services or products; the Select classes work right out of the box.

The Select classes represent a significant addition to the language, and I can imagine a time in the not-too-distant future when they become the primary Synergy database access mechanism.  My hat’s off to the Synergex development team; it appears that they’ve hit this one out of the park.

