Synergex Blog


XP: The O’Hare of Computer Network Traffic

By synergexadmin, Posted on July 30, 2010 at 4:57 pm

Not long ago, I found myself with a lot of time to kill while sitting in an airport and waiting out a flight delay. The culprit, I was told, was a weather system circling over Chicago. At first this seemed odd, since the plane I was awaiting had neither originated in nor connected through O’Hare.

Chicago’s O’Hare Airport is one of the worst places you can find yourself when trying to make a connection. Whenever I see O’Hare on a flight itinerary, I immediately offer up the prayer: “ORD, have mercy.”

I’d intentionally avoided Chicago in my travel arrangements, so I was a little perturbed that O’Hare was still causing me a headache. I began musing on the ripple effect that it plays on the nation’s air traffic network, and a sudden thought occurred to me: “O’Hare does to the air travel network what XP does to data networks.”

I knew I was being unfair to Chicago, but hey: I was irritated. In reality, O’Hare is a well-oiled machine when compared to what Windows XP can do to a network. But thinking of the nation’s air traffic as a computer network really got my Analogy Engine started. I could see every plane as a network packet carrying bits and bytes of information. I saw traffic control towers as network controllers, and airports as individual computers (or hubs, as the case may be). I saw it all in a whole new light…and then wondered, “What would happen if an airport really handled network traffic in a manner similar to XP?”  It was a chilling thought.

For all of its apparent problems, O’Hare still has a significant networking advantage over the default operations of XP (and Windows 2000 and Server 2003): Selective Acknowledgements. The concept at the heart of SACKS allows the controllers at O’Hare to bring planes in for landings without regard for the order in which they were supposed to arrive.

If you’ve ever found yourself trying to diagnose why an XP user is complaining that “the Internet is slow” – even while everyone else on the network seems to be enjoying good speeds – then you’ve already seen what can happen when SACKS aren’t enabled for TCP communications. In fact, in this day and age, SACKS are so vital that it’s amazing Microsoft has never put out a fix on Windows Update – even as an optional one – that enables Selective Acknowledgements. Instead, they leave it up to systems administrators to manually edit the registry of each user’s machine – if, that is, they’re even aware of the problem or its solution.

I should warn you now that because I’m not interested in being the unwitting cause of a registry foul-up that destroys someone’s computer, I’m not going to tell you what those registry tweaks are. There are plenty of articles that will walk you through the process of enabling Selective Acknowledgement on Windows XP, as well as tips on setting the TCPWindowSize to further enhance performance. If you’re just reading this in hope of finding a quick fix to your XP networking woes, you might want to move along now.

On the other hand, if you’d like to learn a little more about SACKS and why it’s so important, then stick around and read on…

Understanding the importance of Selective Acknowledgements is perhaps easier if you understand what happens when they’re not enabled. Imagine that you’re Xavier Purdydumm (XP to your friends), and you’re the “receiving” traffic controller at Nitwit Sacksless Airport (airport code: NTSX)…

You arrive at work before a single plane has entered your airspace. You then review a list of scheduled arrivals for the day. You note the number of planes you expect today, and the order in which they will land. It looks like a light traffic day – you’re only expecting 25 planes before your shift ends.

You’re now ready, so you send out a notification that traffic operations can now commence. And look here! The first plane is on approach. So far, so good.

Plane One lands without incident, as do planes Two and Three. But uh-oh, now there’s a problem: Plane Five just landed…but where’s Plane Four?

You immediately send out a plane of your own. Its instructions are to fly to Plane Four’s origination point, and to let its traffic controller know that Plane Four never arrived. In the meantime, Plane Six comes in, lands, and heads to the gate.

Still no Plane Four. However, regulations are regulations, so now you send out yet another plane to again request that Plane Four be sent over. And aha! A new plane arrives. It’s….

…Plane Seven.

You repeat the process again and again, once for each time a new plane arrives. Finally, after you’ve sent off your 15th plane to request the location of the missing aircraft, plane Four suddenly lands. Turns out that plane Four got diverted when it ran into a storm system over Lake Cisco – the same storm system, as it turns out, that you just sent your last 15 planes flying into.

Well, that’s not your concern. You’re here to count planes coming in. And speaking of which, here comes another. It touches down, rolls past you and you see that it’s plane Five. You shake off a sense of déjà vu and cross it off your list.

You also cross plane Six off of your list – almost (but not quite) certain you’ve seen it before, too – when something odd happens: plane Four lands and taxis by.

Now how could that have happened? You’ve already crossed it off of your list, but there it is (again), plain as day. Deciding you must have made a mistake, you erase the check marks next to planes Five and Six, since there’s no way they could have landed if plane Four hadn’t landed yet.

And just to prove it: Here come planes Five, Six, and…Four? Again??!?

By now, you’re completely confused, and the fact that one of your underlings keeps reporting that there are already planes sitting at Gates 4 through 6 is really getting on your nerves. He should clearly see that those planes haven’t been checked off your list, so why is he bugging you? You tell him to take care of it, as you’re very busy counting planes at the moment, so he tells them all to get lost. Taking a bit of initiative, he also shoos away the planes waiting to park at Gates 7 through 18, too.

This process repeats itself again and again – a total of five times – before the count continues normally with planes Seven, Eight, Nine, Ten, and so forth. By the time plane Twenty Five successfully touches down and unloads at the gate, you feel that somehow your easy traffic day became a nightmare.

And it was a nightmare – but not just for poor Xavier Purdydumm. It was a grueling day for the traffic controller at the airport from which all twenty-five planes departed, as well as for everyone else using the air traffic network – including the folks that never even passed through NTSX.

Let’s take a quick look at the network traffic as shown in the above scenario, versus how it might have looked if Xavier had been able to work with Selective Acknowledgements:

Protocol    Packets Received                                                    Total
No SACKS    1-2-3-5-6-7-8-9-10-11-12-13-14-15-16-17-18-4-
            5-6-4-5-6-4-5-6-4-5-6-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18-
            19-20-21-22-23-24-25                                                51
SACKS       1-2-3-5-6-7-8-9-10-11-12-13-14-15-16-17-18-4-
            19-20-4-21-22-4-23-24-4-25-4                                        29

You’ll notice that the first 18 packets are identical, but after that things start to go awry. With only 25 packets in the total communication, a disruption on the 4th packet caused the network to carry 75% more traffic than would have been necessary had SACKS been enabled. Why?

Under TCP, it’s the duty of the receiver to notify the sender when a packet arrives out of order. Unfortunately, SACKS-less communications require that all traffic arrive in order. And just to make matters worse, the originator of the traffic has its own set of rules to follow – the default being to resend a packet only after it has been requested in triplicate (that is, after three duplicate acknowledgements).

Now, in the example above, one might say that it was the storm over Lake Cisco that caused the issue, but the latency introduced by the weather is only part of the story. Sure, it certainly caused the disappearance of the original “plane Four.” It also slowed down the traffic both to and from Nitwit Airport, thus allowing fifteen requests to be sent from NTSX before the originator ever received the first three and re-sent plane Four.

But note that the storm caused an equal number of duplicates to be dispatched, whether the protocol had SACKS enabled or not, so as a “factor” the weather pretty much washes out; everyone has to deal with it.

So while the weather causes things to slow down a bit (latency), the root problem is that SACKS-less communications require the sender to resend packets that have already been received, in addition to the lost packet. It’s bad enough in the tiny scenario shown above…but consider the impact if there had been 1,000 packets sent with multiple packet losses.
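Those totals are easy to double-check. Here’s a quick sanity check in plain shell (my own sketch, not anything official) that recounts the two packet traces from the table above and computes the overhead:

```shell
# Recount the two packet traces from the table (a sanity check on the
# numbers, not a capture of real traffic). Each hyphen-separated entry
# is one packet crossing the network, retransmissions included.
no_sacks="1-2-3-5-6-7-8-9-10-11-12-13-14-15-16-17-18-4-5-6-4-5-6-4-5-6-4-5-6-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25"
sacks="1-2-3-5-6-7-8-9-10-11-12-13-14-15-16-17-18-4-19-20-4-21-22-4-23-24-4-25-4"

# Count the entries in a trace by splitting on hyphens.
count() { echo "$1" | tr '-' '\n' | wc -l | tr -d ' '; }

n1=$(count "$no_sacks")
n2=$(count "$sacks")
echo "No SACKS: $n1 packets"
echo "SACKS:    $n2 packets"
echo "Overhead: $(( (n1 - n2) * 100 / n2 ))%"
```

Run under any POSIX shell, it reports 51 and 29 packets respectively, confirming the roughly 75% extra traffic in the SACKS-less case.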

As I mentioned before, there’s a fix that allows you to turn on Selective Acknowledgements, but it’s not easy to implement – particularly if you’re a developer with multiple software installations at customer sites. The only way around the problem (and remember that it affects Windows 2000 and Server 2003 as well) is to modify the registry. You may find resistance from your customers when you tell them that they’re going to need to launch RegEdit and start adding DWORD values on every XP workstation they own.

For those of you who are wondering why SACKS-less networking is even in use on XP, remember that “Selective Acknowledgement” is a networking paradigm that came about long after the NT operating system had been created. Back then, when NT launched, there was no such thing as a “high-speed” internet. Networking technology was designed primarily to deal with LANs, which generally meant low-latency communications and fewer lost packets.

Years later, Windows 2000, Windows XP and Server 2003 were introduced. Everyone would probably agree that they were huge steps forward, but unfortunately they all borrowed heavily from NT networking services. That meant that they also adopted a SACKS-less networking default – even as internet speeds, overall network traffic and latency potentials were skyrocketing.

So the next time the skies are full over Chicago, the clouds are massing above Lake Michigan and you figure you’re either going to be late for dinner or late for your connection, remember Xavier Purdydumm…and thank the ORD for Selective Acknowledgements and the fact that O’Hare, at least, has made an effort to keep up with the times.


Starting Services on Linux

By Steve Ives, Posted on July 24, 2010 at 3:14 pm

For a while now I’ve been wondering about what the correct way is to start boot time services such as the Synergy License Manager, xfServer and xfServerPlus on Linux systems. A few years ago I managed to “cobble something together” that seemed to work OK, but I had a suspicion that I only had part of the solution. For example, while I could make my services start at boot time, I’m not sure that they were getting stopped in a “graceful” way during system shutdown. I also wondered why my “services” didn’t show up in the graphical service management tools.

My cobbled together solution involved placing some appropriately coded scripts into the /etc/rc.d/init.d folder, then creating symbolic links to those files in the appropriate folders for the run levels that I wanted my services started in, for example /etc/rc.d/rc5.d.

This week, while working on a project on a Linux system, I decided to do some additional research and see if I couldn’t fill in the blanks and get things working properly.

My previous solution, as I mentioned, involved placing an appropriately coded script in the /etc/rc.d/init.d folder. Turns out that part of my previous solution was correct. For the purposes of demonstration, I’ll use the Synergy License Manager as an example; my script to start, stop, restart and determine the status of License Manager looked like this:

#!/bin/sh
#
# synlm - Start and stop Synergy License Manager
#

. /home/synergy/931b/setsde

case "$1" in

    start)
        echo -n "Starting Synergy License Manager"
        synd
        ;;

    stop)
        echo -n "Stopping Synergy License Manager"
        synd -q
        ;;

    restart)
        $0 stop
        $0 start
        ;;

    status)
        if ps ax | grep -v grep | grep -v rsynd | grep synd > /dev/null
        then
            echo "License Manager is running (pid is `pidof synd`)"
        else
            echo "License Manager is NOT running"
        fi
        ;;

    *)
        echo "Usage: synlm {start|stop|restart|status}"
        exit 1
        ;;
esac

exit 0

If you have ever done any work with UNIX shell scripts then this code should be pretty self-explanatory. The script accepts a single parameter of start, stop, restart or status, and takes appropriate action. The script conforms to the requirements of the old UNIX System V init subsystem, and if placed in an appropriate location will be called by init as the system run level changes. As mentioned earlier, I had found that if I wanted the “service” to start, for example when the system went to run level 5, I could create a symbolic link to the script in the /etc/rc.d/rc5.d folder, like this:

ln -s /etc/rc.d/init.d/synlm /etc/rc.d/rc5.d/S98synlm

Init seems to process files in a run level folder alphabetically, and the existing scripts in the folder all seemed to start with S followed by a two digit number. So I chose the S98 prefix to ensure that License Manager would be started late in the system boot sequence.
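That lexical ordering is easy to demonstrate with a scratch directory. (This is just a sketch: the S10network and S55sshd names are made-up stand-ins for typical scripts; only S98synlm comes from this post.)

```shell
# init processes the S* scripts in a run-level folder in lexical
# (alphabetical) order, so the two-digit number after the S controls
# the start order. Simulated here with dummy files in a temp directory.
dir=$(mktemp -d)
touch "$dir/S98synlm" "$dir/S10network" "$dir/S55sshd"
order=$(ls "$dir")
echo "$order"
rm -rf "$dir"
```

However they were created, ls lists the files as S10network, S55sshd, S98synlm, so the S98 prefix guarantees License Manager starts after the lower-numbered scripts.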

This approach seemed to work pretty well, but it was kind of a pain having to create all those symbolic links … after all, on most UNIX and Linux systems, run levels 2, 3, 4 and 5 are all multi-user states, and all probably require License Manager to be started.

Then, almost by accident, I stumbled across a command called chkconfig. Apparently this command is used to register services (or more accurately init scripts) to be executed at various run levels. PERFECT … I thought! I tried it out:

# chkconfig --level 2345 synlm on

service synlm does not support chkconfig

Oh! … back to Google… Turns out my script was missing something really critical, and believe it or not, what it was missing was a bunch of comments! After doing a little more research I added these lines towards the top of the script:

# chkconfig: 2345 98 20
# description: Synergy/DE License Manager
# processname: synd
Lo and behold, this was the missing piece of the puzzle! Comments … you gotta love UNIX! (The three values on the chkconfig line are the default run levels, the start priority and the stop priority.) So now all I have to do to start License Manager at boot time, and stop it at system shutdown, is use the chkconfig command to “register” the service.

And there’s more … With License Manager registered as a proper service, you can also use the service command to manipulate it. For example, to manually stop the service you can use the command:

# service synlm stop

And of course you can also use similar commands to start, restart, or find the status of the service. Basically, whatever operations are supported by the init script that you provide.

Oh, by the way, because License Manager is now running as a proper service it also shows up in the graphical management tools, and can be manipulated by those tools … very cool!

Of course License Manager is just one of several Synergy services that you could use this same technique with. There’s also xfServer, xfServerPlus and the SQL OpenNet server.


Visual Studio 2008 SP1 Hangs After Office Upgrade

By Steve Ives, Posted on July 22, 2010 at 5:55 pm

Just in case you run into the same issue…

This week I had to revert back to using Visual Studio 2008 while working on a customer project, and I pretty quickly found that I had a problem. I was working on an ASP.NET web project, and found that each time I opened a web page for editing, Visual Studio would appear to hang. Clicking anywhere on the Visual Studio window resulted in the ubiquitous Windows “beep” sound.

On researching the problem in the “Universal Documentation System” (Google) I quickly found that I was not alone in my frustrations … in fact it seems like this is a common issue right now.

Turns out that the problem is related to the fact that I recently updated from Office 2007 to Office 2010. I guess Visual Studio 2008 uses some components from Office 2007 when editing HTML and ASPX pages, and I guess that component got screwed up by the Office 2010 upgrade. If you encounter this problem you will likely find that when Visual Studio 2008 hangs it has started a SETUP.EXE process, but that process never seems to complete. Apparently it’s attempting to do a repair of the “Microsoft Visual Studio Web Authoring Component”, but for some reason can’t.

The solution seems to be to manually run the setup program and select “Repair”. On my system the setup program was C:\Program Files (x86)\Common Files\microsoft shared\OFFICE12\Office Setup Controller\Setup.exe. My system runs a 64-bit O/S … if you’re using a 32-bit O/S you’ll presumably just need to drop the (x86) part.

The repair took about two or three minutes, and lo and behold I have my Visual Studio 2008 installation working just fine again!


Linux ls Color Coding

By Steve Ives, Posted on July 20, 2010 at 4:32 pm

It’s always driven me CRAZY the way that RedHat, Fedora, and presumably other Linux systems apply color coding to various types of files and directories in the output of the ls command. It wouldn’t be so bad, but it seems like the default colors for various file types and protection modes are just totally unreadable … for example, black on dark green doesn’t show up that well!

Well, today I finally got around to figuring out how to fix it … my preference being to just turn the feature off. Turns out it was pretty easy to do: open a terminal, su to root, and edit /etc/DIR_COLORS. Towards the top of the file there is a directive that was set to COLOR tty, and to disable the colorization all I had to do was change it to COLOR none. Problem solved!
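If you’d rather script the change than edit the file by hand, a one-line sed does it. The sketch below runs against a mock copy so nothing system-wide gets touched; on a real system you’d run the sed command as root against /etc/DIR_COLORS itself (GNU sed’s -i option is assumed, and the two sample lines are just plausible stand-ins for the real file’s contents):

```shell
# Flip the COLOR directive from "tty" to "none", which disables ls
# colorization. DIR_COLORS.sample stands in for /etc/DIR_COLORS here.
printf 'COLOR tty\nOPTIONS -F -T 0\n' > DIR_COLORS.sample
sed -i 's/^COLOR tty$/COLOR none/' DIR_COLORS.sample
result=$(grep '^COLOR' DIR_COLORS.sample)
echo "$result"
rm DIR_COLORS.sample
```

After the edit, the file’s COLOR line reads COLOR none, which is exactly the change described above.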

Of course if you look further down in the file you’ll see that there are all kinds of settings for the color palettes to be used for various file types, file protection modes, etc. You could spend time “refining” the colors that are used … but personally I’m happier with the feature just GONE!


Winner Winner Chicken Dinner!

By Don Fillion, Posted on July 12, 2010 at 12:45 pm

Spain isn’t the only winner…
Congratulations to Graeme Harris of G. Harris Software—the lucky winner of the Official World Cup Jabulani ball!
Graeme, your name was picked in the random drawing of all PSG blog subscribers who entered to win. Your official World Cup soccer ball is already en route to your doorstep. (Let's hope it flies true and doesn't sail over the goal…)
Thanks to everyone who subscribed and participated. We hope that you continue to read and enjoy the blog!


Learning from Stupidity

By synergexadmin, Posted on July 9, 2010 at 5:56 pm

I'll be the first to admit that I've done some really stupid things in my life.

Like the time I decided to paddle a canoe across a mile-wide river even as threatening clouds loomed on the horizon.  Or the time I got stuck on a hike by taking a "short-cut" which involved shimmying around an overhang, leaving me suspended over a 70-foot drop to some very sharp, very hard rocks below.

Then there was the time I remoted into a production Alpha server and decided to shut down TCP/IP for a few seconds. Now that was fun; I figured my first month on the job was also going to be my last.

But all of these dumb moves — and many others — have at least one thing in common: Though I learned something, everything was over so quickly that I never had much time to worry about the repercussions.

Not so, yesterday.

My laptop had been causing me more and more grief lately, so I decided it was time to re-install the OS and start from scratch. I wasn't in a huge rush, however, and I had other things to do anyways. So it took me several days to complete my preparations for the wipe, during which time I methodically moved files from my laptop to a backup device.

Yesterday, before lunch, I declared my system ready for reinstall, and pulled the proverbial trigger. I reformatted the hard drive, installed a clean copy of Windows 7, and then ran Windows Update to get everything up to snuff. Success! I was really rocking, and I realized that if I hurried, I could get a full backup of my "clean" build before I left for lunch. So of course, I did something incredibly, unbelievably stupid.

Lesson #1: Do NOT destroy a completely valid, disk image backup to make room for a "fresh" disk image backup.

Turns out that my backup device — a 300GB external drive — was getting a little full. I'd been faithfully (more or less) doing disk image backups for quite a while, with the most recent being dated just last Friday. But those files were just SO BIG and I really needed the space for a new backup set.

My rationalization was pretty solid: I'd backed up copies of only those files that I needed, they were all organized well, and I had ISO images of all the programs I was going to need to re-install, so what's the point in keeping a backup of a system I'm never going to use again anyways?

Plus, I really needed the space.

So I deleted the disk image backup, started a new one from scratch, and went to lunch. Upon returning, the backup was complete. Moving right along, I quickly copied my well-organized backup files into place, and started with software installations.

Someone upstairs was watching out for me, however, because the first software I re-installed was a tiny little program that allowed me to access very special, very important and very irreplaceable encrypted files. And though it installed without a hitch, I quickly found that the encrypted files it opens…

…weren't there.

They weren't in the folders I'd copied back to my laptop, and they weren't on the backup drive. I searched network drives, network computers, and even checked a USB flash drive just against the chance that I'd momentarily lost my mind, transferred them there, and then forgotten about it. Perhaps the worst problem was that I had specifically made sure that those files had been backed up two or three days ago, and I knew everything was ok.

Hadn't I?

I finally gave up on locating them the "easy" way, and started downloading software that scanned hard disks to recover deleted files. After trying five different freebie versions, each of which was a dismal failure, I'd almost given up hope. So just before midnight, I gave in and downloaded a try-before-you-buy piece of software called File Scavenger.

The demo version offers the ability to scan a hard drive and locate darn near everything that was ever on it and not overwritten, but only lets you recover 64K of a file before it asks you to pay. Knowing I'd happily pay the $49 if it worked, I downloaded and installed it. Upon running it, however, it looked as if it was going to take at least a couple of hours to scan whatever was left of my hard drive after the format/reinstall, so I decided to retire for the night and get some sleep.

Lesson #2: You can't sleep when you've probably just lost something that's irreplaceable.  (It's the Not Knowing and the What-If's that will keep snapping you back to full consciousness…again and again and again.)

Early this morning, I was back at my desk, with the knowledge that if the scan that I had left running was going to find something, it would have done so by now. I watched the ribbons bounce around on my laptop's monitor. I probably stared at them for a full minute before steeling myself for the worst, taking a last sip of coffee, and moving the mouse to break the spell of the screen saver.

There they were. I almost couldn't believe it. All three of the large, encrypted files that contained countless (or at least, well in excess of 150,000) other files.

Lesson #3: When pulled from a wallet too quickly, a credit card can cut through the pocket that holds it. Sometimes, it's safer for your wallet — and easier on you — to try the Pay-For "premium" solution before you waste hours hunting down the free alternative.

It was the fastest online purchase I've ever made. And within 30 minutes, I'd recovered all three files and confirmed that they were intact and had all of their information. I'd also backed them up (again) and re-confirmed their existence on the backup drive. I then put them on the hard drive of two other computers. Gun-shy? Absolutely.

But I've got to say, this software is amazing — and not just a little scary, too. While doing my scans of my laptop's hard drive, I found a lot of stuff that shouldn't be there. Like stuff that's been deleted for years. Doing a scan of my backup drive, a networked personal drive we use to keep copies of pictures, music, movies and (eek!) bank account information, and my little USB flash drive, I found lots and lots and lots of stuff that simply shouldn't be there.

Lesson #4 (Unintended): Deleted isn't deleted until you make it so.

Turns out that NTFS is really, really good at keeping your files intact even after they've been deleted — or even subjected to a quick re-format. FAT32 is as fragile as crystal by comparison, but still has the potential to leave a file intact long after you've deleted it. And while most everyone who's reading this already knows to use a disk utility to overwrite "unused" disk space before getting rid of a drive, remember that until you do, the data is likely still there.

And by the by…did you know that most printers and copiers have hard drives in them? Think twice before you donate or sell them, because the person that takes them off of your hands may have File Scavenger (or something similar) in their possession! With what I've learned — and now purchased — it brings a whole new world of (shady) opportunities to the table. For instance, my neighbor down the street actually has a bunch of printers sitting in his yard under a big sign that says "Take Me I'm Free" (no kidding). It's suddenly tempting to pick them up and take a little peek inside, but fortunately (for both of us) I don't have the time right now, as I'm heading out the door on holiday in only a few short hours.

Now, if only I could just learn

Lesson #5: Don't post a blog entry about your stupidity where the director of your department is likely to read it

…so I could be reasonably sure that my job will still be secure when I return from a much-needed vacation…

And yes: I'm going to Disneyland.


“Applemania” and iPhone 4

By Steve Ives, Posted on July 1, 2010 at 1:22 pm

So I finally did what I said I would never do … I set out from home, in the wee hours of the morning, to stand in line for hours in order to buy something!  The venue? … my local AT&T store. The event? … the first in store availability of iPhone 4. In my lifetime I have never done this before, but I figured … what the heck! I grabbed my iPad for a little entertainment while in line and headed out around 3am.

I stopped off at the 24-hour Starbucks drive-through on the way there and stocked up with a large black coffee and a sandwich, and by 3.15am I had staked my place in line. I was surprised that there were only around 20 people ahead of me in line; I was expecting more. Apparently the first guy in the line had been there since 7pm the previous evening … a full twelve hours before the store was due to open at 7am!

So how was the wait? Actually it was kind of fun. There were all kinds of people in line, from technology geeks like me, to teens with too much money on their hands, to families, and even a few retired seniors. Everyone was chatting away, and the time passed pretty quickly. It was a beautiful night out, not too cold, not too hot, and before we knew it the sun was rising at 5.30am. By this time the line had grown considerably longer, and by the time the store opened at 7 it had probably grown to two or three hundred people! I remember thinking to myself that if the same thing was being repeated at every AT&T store in the country then there were a LOT of people standing in line.

Opening hour arrived and within a few minutes I was walking out of the store with my new phone and heading off for a day in the office.

So … was the new iPhone worth the wait? … ABSOLUTELY! I’ve been using an iPhone 3G for quite a while now and I was already in love with the thing. I’d skipped the whole 3GS iteration of the device, so the differences between my old phone and my new one were … staggering!

The new Retina display, with a resolution of 960 x 640 (vs. the 480 x 320 of earlier models) means that there are four times the number of pixels packed into the same amount of screen real estate. This translates to a screen which looks fabulous, and photos and videos which look considerably better.

Speaking of photos and videos, the upgrade to a 5MP camera and the addition of an LED flash finally make it possible to take reasonably good pictures and videos with an iPhone. There is also a new camera on the front of the phone; it is a much lower resolution (only VGA in fact) but actually that's perfect if you want to take a quick photo, or record a short video and then email it out to someone (especially if you're on the new 200MB data plan … but more about that later).

The iPhone 4, like the iPad, uses Apple’s new proprietary A4 (computer-on-a-chip) silicon, and I must say, the performance gain does seem to be considerable, even compared to the more recent 3GS models. Another benefit of this is that, despite the fact that the new device is smaller than previous iPhones, there is more room inside for a bigger battery! This is great news, because battery endurance has never been one of the iPhone’s strong points to date.

Of course one of the coolest new features is FaceTime … video calling between iPhone 4 users. I haven’t had a chance to try this out yet, but I’m looking forward to doing so soon. Apparently FaceTime only works over Wi-Fi networks, which is probably a good thing both from a performance point of view, and also potentially a cost point of view … which brings me to the subject of data plans.

In the past, in the US at least with AT&T, all iPhone users had to cough up $30/month for their data plan, and in return were able to use an unlimited amount of data. This was great because it meant that you could happily use your shiny iPhone to the full extent of its considerable capabilities and not have to worry about how much bandwidth you were actually consuming. But now … things have changed!

New iPhone customers now have a choice of two data plans. There is a $15/month plan which allows for 200MB of data transfer, or a $25 plan providing 2GB. AT&T claim that the 200MB plan will cover the data requirements of 65% of all iPhone users, which may or may not be true. Even if you opt for the more expensive 2GB plan you still have a cap, and may need to be careful. Personally I don’t think I’d be very happy on the 200MB plan, mainly because of things outside my control, like email attachments, which iPhone users don’t really have any control of.

I have been trying to find out what happens when you reach your monthly limit, but so far without success. One AT&T employee told me that on reaching your data limit the account will simply be charged for another “block” of data, without any requirement for the user to “opt in”. Another AT&T employee told me essentially the opposite; that network access would be suspended until the user opts in to purchase more data (similar to the way the iPad works). What I do know is that as you draw close to your limit you should receive free text messages (three I believe, at various stages) warning you of the issue. All I can suggest right now is … watch out for those text messages!

For existing iPhone customers, the good news is that your existing unlimited plan will be “grandfathered in” at the same rate that you currently pay, so we can all continue to consume as much bandwidth as we like and not worry too much about it!

Apple seems to have done a pretty nice job with the implementation of the recently introduced iOS 4. The platform finally has multi-tasking capabilities, which some may not immediately appreciate the benefit of, but it just makes the whole user experience so much more streamlined.  Also the new folders feature makes it easy to organize your apps logically without having to flip through endless screens of icons. Pair the advances in the operating system with the significant advances in the hardware of the new device and the overall impact is really quite significant.

Overall, I think Apple did a good job with the iPhone 4, but there are a couple of things I don't like. The main one is … well, with its "squared off" edges … the new device just doesn't feel as good in your hand as the older models. Also, no doubt you'll have heard all the hype about lost signal strength if the device is held in a certain way … well, I must say that it seems like there could be something to that. Unfortunately, when using the device for anything other than making a call, I reckon that most people hold the phone in a way that causes the problem! Of course Apple has offered two solutions to the problem … 1) don't hold the device that way … and 2) purchase a case!

But on balance I think the upgrade was worth it. There are so many cool new things about iPhone 4 but I’m not going to go into more detail here … there are hundreds of other blogs going into minute detail about all the features, and if you want to find out more a good place to start is http://www.apple.com/iphone/features.

