We have just released a CodeGen update that includes a fix for a recently discovered problem related to the processing of enumerated fields. If your repository includes enumerated fields and you use the field selection loop token <SELECTION_VALUE> (or the Symphony Framework custom token <SYMPHONY_SELECTION_VALUE>), then we recommend that you update to the new version and re-generate your code. As a reminder, CodeGen recently moved to GitHub; you can find the new release at https://github.com/Synergex/CodeGen/releases.
Today we are announcing that we have moved the open source CodeGen project from its former home on CodePlex to a new home on GitHub. We made the decision to do this for several reasons, not least of which is the fact that GitHub has effectively become the de facto standard place for hosting open source projects. Even Microsoft, who built and operates the CodePlex site using their own Team Foundation Server source control technologies, seems to have lost interest in it; in the last 18 months or so they have moved pretty much all of their own considerable number of open source projects to GitHub as well! Git also has several very nice features over and above what TFS has to offer, and has the benefit of being considerably faster to use. Related to the move is a new version (CodeGen 5.1.1), but the only changes in the new version relate to the move from CodePlex to GitHub; there is no new functionality over the 5.1.0 version that was released a few days ago.
If you don’t already have one, we encourage you to create a GitHub account and, once logged in, to “watch” CodeGen. If you wish to receive notifications about new CodeGen releases you can also subscribe to the CodeGen Releases Atom feed. CodeGen is still distributed under the terms of the New BSD License. For the time being we plan to leave the CodePlex environment intact, but no new changes will be checked in there and no new releases will be published there.
Here are a few useful GitHub URLs related to our new home:
- Download latest version: https://github.com/Synergex/CodeGen/releases/latest
- Releases Atom feed: https://github.com/Synergex/CodeGen/releases.atom
Just a quick note to announce that we have today released CodeGen 5.1. This release has but one new feature, but it does allow me to solve a challenging problem that I faced while working on a customer project recently. I have dubbed this new feature conditional processing blocks. Essentially it is the ability to conditionally include (or exclude) parts of a template file based on the presence or absence of identifiers that can be declared on the command line. It allows you to achieve the same kind of results that you would when using .DEFINE, .IFDEF and .IFNDEF in DBL source code, but within template files. For example a developer could include code like this in a template file:
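For instance, a template might wrap optional I/O hooks logic in a conditional block along these lines. To be clear, this is my own illustrative sketch: the identifier name matches the example below, but the exact conditional token syntax and the DBL code inside the block are assumptions on my part, so refer to the CodeGen documentation for the precise form:

```
<IF DEFINED_ATTACH_IO_HOOKS>
    ;Attach an I/O hooks object to the channel we just opened
    ;(illustrative code, not taken from an actual template)
    ioHooks = new MyIoHooks(<structure_name>Channel)
</IF>
```

When the identifier is not defined on the command line, everything between the opening and closing tokens is simply omitted from the generated code.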
The developer would then have the ability to choose whether to include or exclude the code that assigns the I/O hooks object to the channel that was opened at the time they generate the code. By default the I/O hooks code would not be included; if it were needed, the developer would define the ATTACH_IO_HOOKS identifier when generating the code. They would do this by using the new -define command line option:
codegen -s EMPLOYEE -t FILE_IO_CLASS -r -define ATTACH_IO_HOOKS
This may seem like a very simple change, and it is, but my mind is now racing thinking about all of the new possibilities it opens up.
I recently attended the OpenVMS Boot Camp in Nashua New Hampshire. I am pleased to report (with a nod to Mark Twain) that rumors of the death of OpenVMS were greatly exaggerated! VMS Software Incorporated (VSI) has taken over the product and appears to have the situation well in hand.
There were over 100 companies in attendance at the Boot Camp, with Hewlett Packard a very visible participant. The conference was quite lively, with multiple tracks running from 8:00 to 6:00 daily, and events planned each evening. At the conference, there was an undercurrent of optimism and energy, which was no doubt tied to the future of VMS. VSI has already released OpenVMS 8.4-1H1, which provides support for HP Integrity i4 server models based on the Intel® Itanium® 9500 series processors. Moving forward, VSI presented at the Boot Camp a rolling roadmap that provides for at least one release per year for the next few years, improving and extending the software on its current HP platforms—including new versions of TCP/IP and Java, a new file system, and CLI improvements. Concurrently, they are working on VSI OpenVMS 9, which will add support for x86-64 processors (slated for 2018). They are planning to support select HP (Intel and AMD) servers first, then Dell and others as well. ARM support is slated to be considered after x86-64.
VSI has pledged at least 5 years of active product support per release, followed by a minimum of 2 years of prior-version support. With releases planned into 2018, this provides a viable, supported future for OpenVMS at least into 2025 and likely well beyond.
The future of OpenVMS is now being tended to by some very experienced engineers—many have come from HP and have been with the O/S throughout its various versions and ownership.
So, VMS users, the immediate takeaway is to listen to the words of the late great Douglas Adams: “Don’t Panic!” OpenVMS is not going away anytime soon.
A few weeks ago I announced that a new API called SynPSG_PDF had been added to the code exchange. Today I am pleased to announce that the API has been updated and, in addition to Windows, is now also supported on systems running Linux (32-bit and 64-bit), OpenVMS (AXP and IA64) and Synergy .NET.
Also, as a direct result of recent customer feedback, I have added a mechanism that allows a PDF file to be easily created from an existing text file with just a few lines of code. This means that existing report programs that already produce plain text output can be easily modified to produce PDF output with a small amount of code like this:
If you would like to check out the API you can download the code from https://resourcecenter.synergex.com/devres/code-exchange-details.aspx?id=245.
Years ago I used to say to our direct corporate end-user customers, “You’re lucky. It doesn’t matter what your application(s) look like because you’re not selling to compete for new business—all that matters is that they work well and meet your business needs.” End-users plugged merrily along, content to focus on functionality and substance, often in the form of a green-screen front end. In fact, many of those customers claimed that a character-based/green-screen application was much more efficient than using a “cumbersome mouse”—especially when it came to data entry. In the ’90s, when Windows, GUI, and the like came on the scene, our Independent Software Vendors (ISVs) had a different story—to be competitive, the ISVs suddenly had to worry about both how well their applications functioned AND how they looked. People and companies didn’t want to buy applications that weren’t shiny and new with a great user interface (UI)—even if a sophisticated UI didn’t always correlate with a sophisticated application under the hood. It became a game of how flashy you could make it as opposed to how well it functioned.
Fast-forward a few years, and now everyone has to play on the same field—ISVs and corporate end-users. In today’s world, even corporate end-users need to make the move to modernization. If they don’t, the next generation of decision makers will. And when that happens, it’s likely the existing, time-proven solution that has been customized and fine-tuned over the past 30+ years, the one that makes the business unique and competitive, the one that has solved—and continues to solve—everyday business issues, will not survive. Yep, this new generation of decision makers will judge the book by its cover and determine the value of the application based on the way it looks and not what it does. It makes sense, because this new generation grew up knowing only great-looking applications—applications that are generally simpler and more discrete in functionality than complete, integrated solutions that touch every part of the organization but appear less shiny and sophisticated.
So, the bottom line is that if your application doesn’t look great, it will be perceived as less than great, and when that new decision maker comes in, it may be too late to save what you’ve spent so many years perfecting. Needless to say, I strongly recommend that all customers invest in modernizing their application(s) with a great-looking UI and UX (user experience). As Billy Hollis affirmed at the recent Synergy DevPartner Conference, UX is equally important. It’s not just the look and feel, but also the experience of the user that’s critical. It’s important to emphasize here too that a great UI/UX design and a high-performing/highly productive solution are not mutually exclusive. Having a well-designed GUI-based application can only add to the functionality and power of your solution. So even if you feel your character-based solution is really the best one for your business, it’s rare for look and feel to be overlooked in favor of substance. I can’t stress enough the importance of making this investment.
A significant benefit of having a Synergy-based application is that you can separate the UI from the logic and data. This means you can adopt future UIs without sacrificing the years of investment you have put into your business application. While the look and feel is what everyone sees, in reality the business logic is the true value. And once these two are separated, you can extend the life of your application(s) indefinitely, taking advantage of the ever-evolving UI trends that come along. Although it may take some effort initially to separate the UI from the back end, this is the course of least resistance and investment, and it will offer the largest and longest return.
At Synergex, our main focus is to develop solutions to help you advance and leverage your investment to take advantage of the latest modern technologies. In fact, with our recent release of Synergy DBL, we are venturing into the Universal Windows Platform (UWP), the latest UI experience. And while none of us can be certain what UI trends will be popular 10 years from now, just as none of us back in the ‘80s could have imagined what today’s UI would look like, I’m confident that we will be able to help you leverage your back-end and take advantage of whatever the future holds.
So Hilton’s latest thing is the “Digital Key”; while standing within 5 feet of the door to your hotel room it is now possible (in certain locations) to click a virtual button in the Hilton App on your smart phone and have the door to your hotel room unlock, as if by magic. The digital key also knows about other areas of the hotel that you have access to, such as the Executive lounge (I tried it, it works) and gymnasium (apparently) and provides access to those places too.
Last week I used the app to make my reservation. Yesterday I used the app to check in for my stay and also to select my room. And today, having already checked in electronically, I was able to totally bypass the reception desk and proceed directly to my room.
Tomorrow morning the credit card associated with my profile will be automatically charged, and I will walk out of the front door and drive to the airport a few exits down the freeway.
If it hasn’t dawned on you what my point is here, it is that I will have booked and totally completed a stay in a hotel … without ever having the need to interact with a single other human being; all of which seems to me to be a pretty sad state of affairs! Maybe we’re taking this whole technology thing a little too far in some areas?
For some time now the Synergy/DE Code Exchange has included an item called PDFKIT, which essentially contains a set of DBL wrapper code that allows the open source Haru PDF Library to be used from DBL. The work done in PDFKIT was a great start and has been used successfully by several developers, but I don’t think that anyone would disagree with me if I were to suggest that it’s not exactly the most intuitive software to use, and it’s not exactly what you would call well documented either, just like the underlying Haru library!
So, as time has permitted over the last few weeks, I have been working on what I hope is an improved solution. I certainly didn’t want to totally reinvent the wheel by starting from scratch; as I mentioned, PDFKIT was a great start. But I did want to take a slightly different approach that I thought would be more useful to a wider range of developers, and I did want to make sure that complete documentation was included. What I came up with is called SynPSG.PDF, and it is available in Code Exchange now.
When you download and extract the zip file (SynPSG_PDF.zip) you will find that it contains these main elements:
- pdfdbl.dbl - The DBL code that wraps the Haru PDF library, taken directly from the latest version of PDFKIT.
- Haru PDF Library DLLs - The same DLLs that are distributed with PDFKIT. Refer to the documentation for instructions on where to place these DLLs.
- SynPSG.PDF.dbl - A source file containing the new API that I have created.
- A Synergy/DE Workbench workspace that can be used to build the code, as well as build and run several sample programs that are also included (this is a Workbench 10.3.1 workspace and will not work with earlier versions of Workbench).
- A Windows help file containing documentation for the new API.
You don’t need to use the Workbench configuration that I have provided; if you prefer, you can simply include the pdfdbl.dbl and SynPSG.PDF.dbl files in the build of your subroutine library. But remember that both of these files contain OO code, so you will need to prototype that code with DBLPROTO.
As you will see when you refer to the documentation, most things in the API revolve around a class called PdfFile. This class lets you basically do four things:
- Create a PDF file.
- Save the PDF file to disk.
- View the PDF file by launching it in a PDF viewer application.
- Print the PDF file to a printer.
I’m not going to go into a huge amount of detail about creating PDF documents or using the API here because these topics are discussed in the documentation, but I will mention a couple of basic things.
PDF documents inherently use an X,Y coordinate system that is based on a unit called a device independent pixel. These pixels are square and measure 1/72 of an inch in each direction. The coordinate system used within the pages of a PDF document is rooted in the lower left corner of the page, which is assigned the X,Y coordinate 0,0. The width and height of the page in pixels depend on the page type as well as the orientation. So, for example, a standard US Letter page in portrait orientation is 8.5 x 11 inches, so in device independent pixels it has the dimensions 612 x 792.
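As a quick illustration of the arithmetic (sketched here in Python purely for convenience; the character and line pitch values in the row/column mapping are my own assumptions for a fixed-pitch layout, not values taken from the API):

```python
# PDF pages are measured in units of 1/72 of an inch
# ("device independent pixels", often called points).
POINTS_PER_INCH = 72

def page_size_in_points(width_inches, height_inches):
    """Convert a page size in inches to PDF units."""
    return (width_inches * POINTS_PER_INCH, height_inches * POINTS_PER_INCH)

def row_col_to_xy(row, col, page_height, char_width=7.2, line_height=12.0):
    """Map a 1-based row/column position (row 1 = top of page) to an
    X,Y position in PDF units, where Y is measured from the bottom.
    The char_width/line_height defaults are illustrative assumptions."""
    x = (col - 1) * char_width
    y = page_height - row * line_height
    return (x, y)

# US Letter in portrait orientation: 8.5 x 11 inches = 612 x 792 units
print(page_size_in_points(8.5, 11))
```

This kind of mapping is why a rows-and-columns API is so convenient: the report program thinks in character cells while the library quietly does the coordinate math.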
With most PDF APIs you work directly with this coordinate system, and you can do so with this API too, but that can require a lot of complex calculations and hence can be a slow process. Often, though, when we’re writing software it is convenient to work in simple “rows and columns” of characters, using a fixed-pitch font. The new API makes it very easy to do just that, meaning that results can be produced very quickly, and that existing report programs (which already work in terms of rows and columns) can be easily modified to produce PDF output.
Here is an example of a simple row/column-based report that took only a few minutes to create:
Of course there are times when you need to produce more complex output, and the new API lets you do that too. To give you an idea of what it is capable of, here’s a screenshot of a mockup of a delivery ticket document that I created while working on a recent customer project:
As you can see this second example is considerably more complex; it uses multiple fonts and font sizes, line drawing, box drawing, custom line and stroke colors, etc. And although not shown on these examples, there is of course support for including images also.
The new API is currently available on Windows under traditional Synergy. It should be possible to make the code portable to other platforms in the near future, and .NET compatibility is definitely in the pipeline. The software requires the latest version of Synergy which at the time of writing is V10.3.1b. You can download the code from here:
It is early days for this new API and I have many ideas for how it can be extended and enhanced. I am looking forward to working on it some more soon, and also to receiving any feedback or suggestions that you may have.
Just a quick note to let all of you CodeGen users out there know that a new version (CodeGen 5.0.5) has just been released. You can get more information about the changes and download the release from https://codegen.codeplex.com/releases.
If you would like to receive email or RSS notifications when new CodeGen versions are released, there are links on the above-mentioned page that allow you to set that up, and we encourage you to do so.
“Hiring good programmers is hard.” I can’t count the number of times I’ve heard this phrase during the past 30+ years I’ve been in this business. And, from my experience and research, I agree. A few customers have also told me that good Synergy programmers are harder to find than others, but over the years I’ve found that it doesn’t matter whether you’re looking for developers experienced in Synergy DBL, C#, Java, VB.NET, or any other language… hiring good programmers is just hard. The exception, of course, is the gaming industry, where a plethora of young talented programmers are excited to spend countless hours writing games for almost no money. Sort of reminds me of the early programmers who wrote business application solutions back in the day!
So, how do you find a good Synergy developer? Well, for starters, don’t limit your pool to developers experienced in Synergy. Find a great programmer and make him/her a master in the language you use. Any good programmer can learn Synergy, or C#, or Java, etc. But not every programmer who knows Synergy or C# or Java is or will become a great programmer. Seek out developers who have current, modern-day developer skills such as OO, .NET, etc. If they don’t already know Synergy, they’ll pick it up quickly and will appreciate that it is a modern OO language that runs on virtually all platforms, including mobile, and is fully integrated with Visual Studio. Then, send your new developers to a Synergex class, have PSG come on-site to get them up to speed, and (of course) send them to the annual Synergy DevPartner conference.
Tip: Consider domain knowledge specific to your industry. You are much better off hiring a good developer who is knowledgeable in your particular vertical market and teaching him/her DBL than vice versa.
Also, open the door to hiring developers with programming experience (vs. just having a computer science degree). When you look back at the early years of our industry, there were very few universities offering programming degrees—most of the original developers of what are now world-class enterprise applications had no formal education in programming. These developers had raw talent and enthusiasm to solve problems and create solutions. (Some of you reading this blog are likely those original developers!) This too is how Synergex started. In fact, many of our top talent never received formal education in programming. That said, I’m not recommending that you seek developers without formal degrees, but I am encouraging you to focus on smart, eager developers whom you can train and educate to be part of your next generation of leaders. Here at Synergex we’ve developed and use a variety of third-party tests that can help vet sharp young talent—this talent has made a big impact on our development team. We would be happy to share the tools we use.
So you’ve advertised for a talented, trainable, language-agnostic developer, interviewed your candidates to confirm a good fit with your culture, vetted their analytical aptitude, and are convinced that your candidate will be a great addition to your team… What if the candidate turns the tables on you and asks, “Why would I want to program in Synergy DBL?” What do you say?
I recommend that you have this question answered in their minds long before they have the opportunity to ask it. Make sure your candidates all understand the value of Synergy DBL and the exciting opportunities they will have to work with these modern development tools. Let them know that the skills they will gain using Synergy DBL will provide a lifetime of employment opportunities any place in the world they want to live and work.
Congratulations on hiring your next great programmer!
Today Microsoft announced that Visual Studio 2015 and .NET 4.6 are available for download. As a member of the Microsoft Visual Studio Industry Partner (VSIP) program, Synergex will soon be supporting this version of Visual Studio with a Developer Build, followed by a fully supported release that also supports Windows 10. Our new release includes support for the new Concord-based debugger and Light Bulb features.
We are excited about all of the new performance tools that Synergy developers can utilize in Visual Studio 2015.
Two of Synergex’s senior developers were quoted in today’s eWeek article announcing the release: http://www.eweek.com/developer/microsoft-ships-visual-studio-2015-.net-4.6.html
If you have ever developed and worked with a WCF service, you may have noticed that the very first time you connect to a newly started instance of the service there can be a noticeable delay before the service responds, while subsequent operations often seem almost instantaneous. Usually the delay is relatively short, perhaps even just a fraction of a second, but still noticeable. Well, earlier this week I encountered a WCF service that exhibited this behavior, but the delay for the first operation was almost three minutes! Something had to be done.
Some time later, after much debugging, web searching, and more than a little head scratching, we realized that the “problem” we were seeing was actually “by design” in WCF and was related to the generation of metadata for the service. It turns out that if “metadata exchange” is enabled for the service, then WCF generates the metadata at the time the first operation is requested by a client, regardless of whether anyone is currently requesting the metadata or not. Often the generation of the metadata takes almost no time at all, but as the size and complexity of a service grows (in terms of the number of operations exposed, the number of parameters, the number and nature of complex types exposed, etc.), the time taken to generate the metadata grows. In the case of this particular service there were over 800 individual operations defined, with lots and lots of complex types being exposed, and the service was still growing!
The only time you need metadata exchange enabled is when you need to access the WSDL for the service; in simple terms, whenever you need to do an “Add Service Reference” or “Update Service Reference”. The rest of the time, having it enabled just slows things down at runtime.
I can’t tell you exactly how to enable and disable metadata exchange with your service, because there are several different ways it can be configured, but it’s likely going to be one of these:
- A <serviceMetadata/> element used in the <serviceBehaviors> section of a Web.config or App.config file.
- An <endpoint/> element that uses the IMetadataExchange contract, defined in a <service/> section of a Web.config or App.config file.
- Code that does the equivalent of one of the two options above.
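As an illustration of the first option, a Web.config might control HTTP metadata publishing like this (a sketch only; the names and nesting of your behavior configuration may differ, and whether you use the HTTP or HTTPS variant depends on your bindings):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Set to "true" only while you need Add/Update Service Reference;
             "false" (or removing the element) stops WSDL publishing over HTTP GET -->
        <serviceMetadata httpGetEnabled="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

Flipping a setting like this (or removing the metadata endpoint entirely) is typically all it takes to stop the metadata generation work described above.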
So the lesson learned was to enable metadata exchange only when it is needed, for the purpose of creating or updating client proxy code; the result was an almost instantaneous response from the service once metadata exchange had been disabled. Of course it goes without saying that metadata exchange should NEVER be enabled on production services.
The old adage tells us that you can’t teach an old dog new tricks. But after the last three days, I beg to differ! It’s been an interesting few days for sure; fun, challenging, rewarding, and heated are all words that come to mind when reflecting on them. But at this point, three days into a four-day engagement, I think we may just have dispelled that old adage. For one, this “old dog” certainly feels like he has learned several new tricks.
So what was the gig? It was to visit a company that has an extensive application deployed on OpenVMS, and to help them explore possible ways to extend the reach of that application beyond the current OpenVMS platform. Not so hard, I hear you say; there are any number of ways of doing that. xfServerPlus immediately comes to mind, as do xfODBC and the SQL Connection API, and even things like the HTTP API, which could be used to allow the OpenVMS application to do things like interacting with web services. All true, but there was one thing that was threatening to throw a “spanner (wrench) in the works”. Did I mention that the application in question was developed in COBOL? That’s right, not a line of DBL code anywhere in sight! Oh, and by the way, until about a week ago I’d never even seen a single line of COBOL code.
Now perhaps you understand why challenging was one of the words I mentioned earlier. But I’m up for a challenge, as long as I think I have a fighting chance of coming up with something cool that addresses a customer’s needs. And in this case I did. I didn’t yet know all of the details, but I figured the odds of coming up with something were pretty good.
Why all of this confidence? Well, partly because I’m really good at what I do (can’t believe I just said that), but seriously, it was mainly because a lot of the really cool things that we developers just take for granted these days, like the ability to write Synergy .NET code and call it from C#, or write VB.NET code and call it from Synergy .NET, have their roots in innovations that were made 30+ years ago by a company named Digital Equipment Corporation (DEC).
You see, OpenVMS had this little thing called the Common Language Environment. In a nutshell, this meant that the operating system provided a core environment in which programming languages could interoperate. Any language that chose to play in that ballpark would be compatible with other such languages, and most languages on OpenVMS (including DIBOL and DBL) did just that. This meant that BASIC could call FORTRAN, FORTRAN could call C, C could call PASCAL, and … well, you get the idea. And YES, it means that COBOL can call DBL and DBL can call COBOL. OK, now we’re talking!
So why is this such a big deal? Well, it turns out that Digital, later Compaq, and later still HP didn’t do such a great job of protecting their customers’ investments in their COBOL code. It’s been quite a while since there was a new release of COBOL on OpenVMS, so it’s been quite a while since OpenVMS COBOL developers had access to any new features. This means that there isn’t a way to call OpenVMS COBOL routines from .NET or Java, there isn’t a way for OpenVMS COBOL code to interact with SQL Server or Oracle, and there isn’t an HTTP API … so don’t even think about calling web services from COBOL code.
But wait a minute, COBOL can call DBL … and DBL can call COBOL … so YES, COBOL CAN do all of those things … via DBL! And that fact was essentially the basis for my visit to Toronto this week.
I’m not going to get into lots of details about exactly what we did. Suffice it to say that we were able to leverage two core Synergy/DE technologies in order to implement two main things:
- A generic mechanism allowing COBOL code executing on OpenVMS to interact with Windows “stuff” on the user’s desktop (the same desktop that their terminal emulator is running on).
- A generic mechanism allowing Windows “stuff” executing on the user’s desktop to interact with COBOL code back on the OpenVMS system.
The two core technologies have already been mentioned. Outbound from OpenVMS was achieved by COBOL calling a DBL routine that in turn used the Synergy HTTP API to communicate with a WCF REST web service hosted in a Windows application running in the user’s system tray. Inbound to OpenVMS was of course achieved with a combination of xfNetLink .NET and xfServerPlus.
So just who is the old dog? Well as I mentioned earlier I probably fall into that category at this point, as do several of the other developers that it was my privilege to work with this week. But as I set out to write this article I must admit that the main old dogs in my mind were OpenVMS and COBOL. Whatever, I think that all of the old dogs learned new tricks this week.
It’s been an action-packed three days, but I’m pretty pleased with what has been accomplished, and I think the customer is too. I have one more day on site tomorrow to wrap up the more mundane things like documentation (yawn) and code walkthroughs to ensure that everyone understands what was done and how all the pieces fit together. Then it’s back home on Friday before a well-deserved vacation next week, on a beach, with my wife.
So what did I learn this week?
- I really, really, REALLY don’t like COBOL!
- OpenVMS was WAY ahead of its time and offered LOTS of really cool features. Actually, I didn’t just learn this, I always knew it, but I wanted to recognize it in this list … and it’s MY BLOG so I can.
- Synergy/DE is every bit as cool as I always believed, and this week I proved it to a bunch of people who had never even heard of it before.
- New fangled elevators are very confusing for old dogs!
Those of you who attended the recent DevPartner conference in Philadelphia will no doubt remember the excellent presentation on UX Design that was given by guest speaker, Billy Hollis. During his presentation Billy cited photographs of a couple of elevator control panels. He used one as an example of bad design, the other an example of good.
I won’t show the actual photos that Billy used (sorry, you had to be there for that!) but in a nutshell the layout of the buttons and other information (floor numbers, etc.) on the first panel was at best confusing. There was clear physical evidence that users had been confused by the panel and frequently had not understood how to operate the elevator!
The second example was a much better panel design. The designer had successfully used techniques such as visually grouping related things together in a way that made the correct operation of the elevator a much more obvious task … intuitive even.
Well, upon arriving at a customer’s office building in Toronto, Canada earlier today, I encountered an elevator control panel that, for me at least, took the confusion to a whole new level.
I should make it clear that the elevator in question was one of a cluster of four in the lobby of a shared office building, and that I was arriving at the customer site at about the same time that everyone else was arriving at work. The point is that the lobby was pretty busy at the time; it wasn’t as simple as just walking up and pressing an “I want to go up” button.
No problem I thought, it may be two or three elevator cars before I get to make the final step of my journey up to the 4th floor. I’m a few minutes early and all is good.
Finally my turn came; I waited while a few other people stepped on, then took my place in the elevator car. Intuitively I spun around to determine whether one of my elevator buddies had already pressed the 4th floor button, ready to press it myself if not. The panel opposite is what I encountered.
Now I like to think of myself as a reasonably bright guy, so I instantly figured it out; the buttons would be on the OTHER SIDE of the door. And I was correct … well … kind of. I glanced to the opposite side of the elevator door … and saw an identical panel on that side too!
Not wanting to appear totally inept I just waited quietly until the other people got off at their (somehow) chosen floors … and no, unfortunately nobody else was going to 4.
The doors swished closed and I was finally alone in the elevator. I don’t remember exactly what my out loud remark to myself was, but I believe it started something along the lines of “WHAT THE ….”. So, patiently I waited and sure enough after a little while the doors once again swished open and I was back where I started from in the lobby!
I’ll be honest with you, I was getting a little “pissed” at this point (excuse my language, but it’s true). But not wanting to appear like a total fool I stepped away as if I had intentionally returned to the lobby, and waited for the crowd to clear … all the time subtly (I thought) observing to see HOW THE HECK THESE FREAKING ELEVATORS WORKED!!! And then … I saw it … everything instantly became clear. The floor selector buttons were indeed on the other side of the elevator door … they were on the OUTSIDE!!!!
And further, having selected your intended destination on the small touch-screen display in the lobby, you are then instructed WHICH of the four elevators (conveniently labeled A, B, C and D) you should step onto in order to reach your desired floor!
Actually this is a pretty clever system, but other than the fancy 6” touch-screen display there was absolutely nothing to indicate that anything was different here. Brilliant system, but totally unintuitive … and so very frustrating for first-time users. Which, I guess, was one of the points that Billy was making in the first place.
During DevPartner 2015 a number of people ran through the Utilizing the Repository tutorial, which sets out to demonstrate how the metadata stored in the repository describing your Synergy database can be utilized when building a modern Windows Presentation Foundation desktop application using Synergy and the Symphony Framework.
Using your repository, CodeGen and the associated Symphony Framework templates you can build, from the ground up, a complete WPF application, and this is exactly what you do during the tutorial.
Using the Model-View-ViewModel pattern you code-generate the model elements as repository-based data objects that extend the base Symphony Framework DataObjectBase – this provides field-level properties with validation and data binding. Then you code-generate the view – the UI element the user interacts with. The view comprises windows containing the individual edit controls, which in turn use code-generated styles. These styles define the visual attributes and data bindings of each field in the repository.
Great – you would think. But I’ve been asked a number of times – and again at the conference – about a default behaviour of WPF applications: the fact that edit controls, specifically text boxes, don’t automatically select all of their content when they receive focus. I also find it frustrating, but thus far had been unable to think of a solution. “It’s a deal breaker,” according to Gayle, who’d just completed the tutorial. Well, considering Gayle is a rather fine chap, I guess it’s time for me to look at the problem again. I spoke with Jeff at Synergex, who pointed me to a blog by Oliver Lohmann which addresses just this problem.
The solution is to register a behaviour against the TextBox control and handle the GotFocus event – and in the event handler force the selection of the data in the TextBox control. Simple!
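For anyone who wants to see what that looks like in practice, here is a minimal sketch of the attached-behaviour pattern in C#. To be clear, this is my own illustration of the general technique rather than the actual Symphony Framework or Oliver Lohmann code, and the class and property names (`TextBoxSelectAllBehavior`, `SelectAllOnFocus`) are invented for the example:

```csharp
using System.Windows;
using System.Windows.Controls;

// An attached property that, when set to true on a TextBox,
// hooks GotFocus and selects all of the control's text.
public static class TextBoxSelectAllBehavior
{
    public static readonly DependencyProperty SelectAllOnFocusProperty =
        DependencyProperty.RegisterAttached(
            "SelectAllOnFocus",
            typeof(bool),
            typeof(TextBoxSelectAllBehavior),
            new PropertyMetadata(false, OnSelectAllOnFocusChanged));

    public static bool GetSelectAllOnFocus(DependencyObject obj) =>
        (bool)obj.GetValue(SelectAllOnFocusProperty);

    public static void SetSelectAllOnFocus(DependencyObject obj, bool value) =>
        obj.SetValue(SelectAllOnFocusProperty, value);

    // Attach or detach the GotFocus handler as the property is toggled.
    private static void OnSelectAllOnFocusChanged(
        DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        if (d is TextBox textBox)
        {
            if ((bool)e.NewValue)
                textBox.GotFocus += TextBox_GotFocus;
            else
                textBox.GotFocus -= TextBox_GotFocus;
        }
    }

    // Force selection of the data in the TextBox when it receives focus.
    private static void TextBox_GotFocus(object sender, RoutedEventArgs e) =>
        ((TextBox)sender).SelectAll();
}
```

In XAML the behaviour would then be applied per control (or via a style setter), along the lines of `<TextBox local:TextBoxSelectAllBehavior.SelectAllOnFocus="True" />`.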
And simple it was – as it usually is when you are looking for that “complex” answer. I’ve not done much with behaviours so far, but I think that is about to change! The Symphony Framework has been updated (I did that on the plane home) and I’ll be releasing that to NuGet very shortly. The Symphony Framework “style” template – now released as part of CodeGen – will be updated to reflect the new capabilities, and normal “behaviour” will be resumed.