
Synergex Blog


PDF API Enhancements

By Steve Ives, Posted on March 28, 2016 at 1:38 pm


Last year I announced that we had created a new PDF API and made it available via CodeExchange in the Synergy/DE Resource Center. Now I am pleased to announce some enhancements to the original API, which add the ability to:

  • View existing PDF documents (Windows only).
  • Print existing PDF documents (Windows only).
  • Draw circles.
  • Draw pie charts.
  • Draw bar charts (with a single or multiple data series).

Here’s an example of a pie chart that was drawn with the new API:

[Image: pie chart generated with the PDF API]

Here’s an example of a bar chart:

[Image: bar chart generated with the PDF API]

And here’s an example of a multi-series bar chart:

[Image: multi-series bar chart generated with the PDF API]
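
For a sense of how the API might be used, here is a minimal sketch of generating a pie chart from DBL. The class, method, and parameter names (PdfDocument, AddPieSlice, DrawPieChart, and so on) are purely illustrative assumptions, not the API's actual names; the CodeExchange download includes the real interface and examples.

    ; Hypothetical sketch only -- class, method, and parameter names are
    ; assumed for illustration and are NOT the actual CodeExchange PDF API.
    main
        record
            doc         ,@PdfDocument       ;assumed document class
    proc
        doc = new PdfDocument()
        doc.AddPage()
        ;Describe three slices, then draw the chart at an assumed position and size
        doc.AddPieSlice("East", 45)
        doc.AddPieSlice("West", 30)
        doc.AddPieSlice("Central", 25)
        doc.DrawPieChart(100, 400, 300)
        doc.Save("SalesByRegion.pdf")
    endmain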

It’s early days for chart support, and I plan to make several additional enhancements as time permits, but I wanted to get the work that has been done so far out into the wild and hopefully gather some feedback to help me decide what else needs to be done.

If you’re interested in learning how to use the PDF API, I’ll be presenting a session that will teach you all about it at our upcoming DevPartner conference in May. So if you haven’t already done so, head on over to http://conference.synergex.com to reserve your spot at the conference now.



CodeGen 5.1.2 Released

By Steve Ives, Posted on January 28, 2016 at 9:55 am


We have just released a CodeGen update that includes a fix for a recently discovered problem related to the processing of enumerated fields. If your repository includes enumerated fields and you use the field selection loop token <SELECTION_VALUE> (or the Symphony Framework custom token <SYMPHONY_SELECTION_VALUE>), we recommend that you update to the new version and re-generate your code. As a reminder, CodeGen recently moved to GitHub; you can find the new release at https://github.com/Synergex/CodeGen/releases.
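
For anyone who hasn't used these tokens, the affected construct looks roughly like the fragment below. This is only a sketch: <FIELD_LOOP> and <SELECTION_VALUE> are taken from the post, while the <SELECTION_LOOP> wrapper shown here is an assumption about how the selection loop is spelled, so check the CodeGen documentation for the exact syntax.

    <FIELD_LOOP>
    ;Sketch only: emit one line per selection value defined for <FIELD_NAME>
    <SELECTION_LOOP>
    ; <FIELD_NAME> allows the value "<SELECTION_VALUE>"
    </SELECTION_LOOP>
    </FIELD_LOOP>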


CodeGen Has a New Home

By Steve Ives, Posted on December 9, 2015 at 1:45 pm


Today we are announcing that we have moved the open source CodeGen project from its former home on CodePlex to a new home on GitHub. We made the decision to do this for several reasons, not least of which is the fact that GitHub has effectively become the de facto standard place for hosting open source projects. Even Microsoft, who built and operate the CodePlex site using their own Team Foundation Server source control technologies, seem to have lost interest in it; in the last 18 months or so they have moved pretty much all of their own considerable number of open source projects to GitHub as well! Git also has several very nice features over and above what TFS has to offer, and has the benefit of being considerably faster to use. Related to the move is a new version (CodeGen 5.1.1), but the only changes in it relate to the move from CodePlex to GitHub; there is no new functionality over the 5.1.0 version that was released a few days ago.

If you don’t already have a GitHub account we encourage you to create one and, once logged in, to “watch” CodeGen. If you wish to receive notifications about new CodeGen releases you can also subscribe to the CodeGen Releases Atom feed. CodeGen is still distributed under the terms of the New BSD License. For the time being we plan to leave the CodePlex environment intact, but no new changes will be checked in there and no new releases will be published there.

Here are a few useful GitHub URLs related to our new home:

Project home: https://github.com/Synergex/CodeGen
Wiki (information): https://github.com/Synergex/CodeGen/wiki
Download latest version: https://github.com/Synergex/CodeGen/releases/latest
Issue tracking: https://github.com/Synergex/CodeGen/issues
Releases Atom feed: https://github.com/Synergex/CodeGen/releases.atom

CodeGen 5.1 Released

By Steve Ives, Posted on December 4, 2015 at 4:07 pm


Just a quick note to announce that we have today released CodeGen 5.1. This release has just one new feature, but it allowed me to solve a challenging problem that I faced while working on a customer project recently. I have dubbed this new feature conditional processing blocks. Essentially it is the ability to conditionally include (or exclude) parts of a template file based on the presence or absence of identifiers that can be declared on the command line. It allows you to achieve the same kind of results that you would when using .DEFINE, .IFDEF and .IFNDEF in DBL source code, but within template files. For example, a developer could include code like this in a template file:

    open(channel=0,u:i,"<FILE_NAME>")
    <IF DEFINED_ATTACH_IO_HOOKS>
    new <StructureName>Hooks(channel)
    </IF>

The developer can then choose, at the time they generate the code, whether to include or exclude the code that assigns the I/O hooks object to the channel that was opened. By default the I/O hooks code would not be included; if it is needed, the developer defines the ATTACH_IO_HOOKS identifier when generating the code, using the new -define command line option:

    codegen -s EMPLOYEE -t FILE_IO_CLASS -r -define ATTACH_IO_HOOKS

This may seem like a very simple change, and it is, but my mind is now racing thinking about all of the new possibilities it opens up.


Old Dog … New Tricks … Done!

By Steve Ives, Posted on June 3, 2015 at 3:59 pm


The old adage tells us that you can’t teach an old dog new tricks. But after the last three days, I beg to differ! It’s been an interesting few days for sure; fun, challenging, rewarding and heated are all words that come to mind. At this point, three days into a four-day engagement, I think we may just have dispelled that old adage. For one, this "old dog" certainly feels like he has learned several new tricks.

So what was the gig? It was to visit a company that has an extensive application deployed on OpenVMS, and to help them explore possible ways to extend the reach of that application beyond the current OpenVMS platform. Not so hard, I hear you say; there are any number of ways of doing that. xfServerPlus immediately comes to mind, as do xfODBC and the SQL Connection API, and even things like the HTTP API that could be used to allow the OpenVMS application to interact with web services. All true, but there was one thing threatening to throw a "spanner (wrench) in the works". Did I mention that the application in question was developed in COBOL? That’s right, not a line of DBL code anywhere in sight! Oh, and by the way, until about a week ago I’d never even seen a single line of COBOL code.

Now perhaps you understand why challenging was one of the words I mentioned earlier. But I’m up for a challenge, as long as I think I have a fighting chance of coming up with something cool that addresses a customer’s needs. And in this case I did. I didn’t yet know all of the details, but I figured the odds of coming up with something were pretty good.

Why all of this confidence? Well, partly because I’m really good at what I do (can’t believe I just said that), but seriously, it was mainly because of the fact that a lot of the really cool things that we developers just take for granted these days, like the ability to write Synergy .NET code and call it from C#, or write VB.NET code and call it from Synergy .NET, have their roots in innovations that were made 30+ years ago by a company named Digital Equipment Corporation (DEC).

You see, OpenVMS had this little thing called the Common Language Environment. In a nutshell this meant that the operating system provided a core environment in which programming languages could interoperate. Any language that chose to play in that ballpark would be compatible with other such languages, and most languages on OpenVMS (including DIBOL and DBL) did just that. This meant that BASIC could call FORTRAN, FORTRAN could call C, C could call PASCAL and … well, you get the idea. And YES, it means that COBOL can call DBL and DBL can call COBOL. OK, now we’re talking!
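
To make that concrete, here is a minimal sketch of the idea, with a routine name and arguments invented purely for illustration. The routine below is plain DBL; on OpenVMS a COBOL program could invoke it with an ordinary CALL statement (CALL "ADD_TWO_VALUES" USING ...), exactly as it would call any other external routine in the common language environment.

    ; Hypothetical DBL routine callable from OpenVMS COBOL (or any other
    ; common language environment language). Name and arguments are illustrative.
    subroutine ADD_TWO_VALUES
        a_first     ,d          ;first value (passed in)
        a_second    ,d          ;second value (passed in)
        a_total     ,d          ;sum (returned)
    proc
        a_total = a_first + a_second
        xreturn
    end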

So why is this such a big deal? Well, it turns out that Digital, later Compaq, and later still HP didn’t do such a great job of protecting their customers’ investments in their COBOL code. It’s been quite a while since there was a new release of COBOL on OpenVMS, which means it’s been quite a while since OpenVMS COBOL developers had access to any new features. There isn’t a way to call OpenVMS COBOL routines from .NET or Java, there isn’t a way for OpenVMS COBOL code to interact with SQL Server or Oracle, and there isn’t an HTTP API … so don’t even think about calling web services from COBOL code.

But wait a minute, COBOL can call DBL … and DBL can call COBOL … so YES, COBOL CAN do all of those things … via DBL! And that fact was essentially the basis for my visit to Toronto this week.

I’m not going to get into lots of details about exactly what we did. Suffice it to say that we were able to leverage two core Synergy/DE technologies in order to implement two main things:

  1. A generic mechanism allowing COBOL code executing on OpenVMS to interact with Windows “stuff” on the user’s desktop (the same desktop that their terminal emulator is running on).
  2. A generic mechanism allowing Windows “stuff” executing on the user’s desktop to interact with COBOL code back on the OpenVMS system.

The two core technologies have already been mentioned. Outbound from OpenVMS was achieved by COBOL calling a DBL routine that in turn used the Synergy HTTP API to communicate with a WCF REST web service hosted in a Windows application running in the user’s system tray. Inbound to OpenVMS was of course achieved with a combination of xfNetLink .NET and xfServerPlus.

So just who is the old dog? Well, as I mentioned earlier, I probably fall into that category at this point, as do several of the other developers that it was my privilege to work with this week. But as I set out to write this article I must admit that the main old dogs in my mind were OpenVMS and COBOL. Whatever, I think that all of the old dogs learned new tricks this week.

It’s been an action-packed three days, but I’m pretty pleased with what has been accomplished, and I think the customer is too. I have one more day on site tomorrow to wrap up the more mundane things like documentation (yawn) and code walkthroughs, to ensure that everyone understands what was done and how all the pieces fit together. Then it’s back home on Friday before a well-deserved vacation next week, on a beach, with my wife.

So what did I learn this week?

  1. I really, really, REALLY don’t like COBOL!
  2. OpenVMS was WAY ahead of its time and offered LOTS of really cool features. Actually I didn’t just learn this, I always knew it, but I wanted to recognize it in this list … and it’s MY BLOG so I can 🙂
  3. Synergy/DE is every bit as cool as I always believed; and this week I proved it to a bunch of people that had never even heard of it before.
  4. New fangled elevators are very confusing for old dogs!

DevPartner 2015 – WOW!

By , Posted on May 15, 2015 at 6:37 pm


That was the week that was: the DevPartner 2015 conference in Philadelphia. OK, so I’m biased, but I really have to say this was one of the best conference weeks I’ve had the pleasure to be part of for many years. There were some really great sessions: the HBS customer demonstration rocked! They came to a conference a couple of years ago, did a tutorial on xfServerPlus, and with this newfound knowledge (and some PSG guidance) created a cool web bolt-on to their existing Synergy app.

We saw some fresh new faces from Synergex: Marty blasted through the Workbench and Visual Studio development environments we provide and showed some really great tools and techniques. Phil gave us a 101 introduction to many of the “must know” features and capabilities of Synergy SDBMS – and of course was able to address Jeff’s and my performance issues – you had to be there :). Roger demonstrated his wizardry to enlighten everyone as to the issues you need to consider when transferring your data within local and wide area networks – I was the bad router!

Bill Mooney set the whole tone of the conference with a great opening presentation showing just how committed Synergex are to empowering our customers with the best software development capabilities available.

My first day’s session followed and gave me the opportunity to demonstrate how you actually can bring all our great tools together to create true single-source, cross-platform applications which run on platforms as diverse as OpenVMS, UNIX and Microsoft Windows and onto a Sony watch running Google Wear!

Steve Ives went 3D holographic with videos from his recent trip to the Microsoft Build conference that showed just how amazing the Microsoft platform is becoming – and we aim to continue to be a first class player in that arena.

So many of our products are reaching a level of maturity that blows the competition away. Gary Hoffman from TechAnalysts presented a session showing how to use CodeGen and Symphony in the real world and showed just what you can achieve today in Synergy.

Jeff Greene (Senior .NET engineer @ Synergex) and I presented a rather informal (read: written the night before) presentation showing the performance and analysis tools in Visual Studio 2015 that you can use to identify problem areas and memory leaks in your application. Within minutes, Brad from Automated System forwarded me an email he’d just sent to his team:

“At the Synergex conference just this morning, they just showed fantastic new diagnostics tools in Visual Studio 2015.  I just put the Team on the trail of potential memory issues with these new tools in a Virtual PC environment so we don’t alter our current developer stations. This could both reduce the memory footprint and improve performance.” – You can’t beat such instant feedback!

The tutorial time gives attendees the opportunity to play with the latest tools on a pre-configured virtual machine – plug in and code! And we continued the hands-on theme with Friday’s post conference workshop – where we built the DevPartner 2015 App from the ground up!

[Image: the DevPartner 2015 app running on a Nexus device]

Thanks to everyone for coming and making the conference such a great success. It’s our 30th conference next year so keep your eyes and ears open for dates and details – it will be a conference not to miss!


Conference Time

By , Posted on March 23, 2015 at 2:26 pm


Day one and a 9:30 start, which is quite early for a Monday morning – and I have just hustled my way through the London underground commuter traffic!  Front row seat for the Xamarin multi-targeting apps workshop, and the presenter is a fellow “Evangelist” so things are looking good.

My goal for today is to squeeze as much knowledge from James, the Developer Evangelist from Xamarin, to ensure everything we are doing at Synergex is going to take us down the multi-targeting utopia highway.

DevWeek in London is an annual conference I try to attend because over the years it’s given me many thought provoking ideas that have made their way into Synergy, Symphony and the applications I write and assist on.  I’m especially looking forward to this year because everything is no longer just about C#, it’s now about multi-platform targeted development – you know, just like what we have been doing with Synergy for 30 years!!

Today it’s a pre-conference Xamarin workshop showing how to provide true native applications on Android, iOS and Windows without compromising the UI or having to duplicate your application code across multiple development projects.  The trick is Portable Class Libraries (PCLs) – once you have these you can plug them into your chosen UI project (Android, Windows, iOS).  Of course Synergy allows for the creation of these PCLs, so out-of-the-box your development team have the tools to “go mobile”.

Many years ago Synergex developed the UI Toolkit – and I know many people reading this will still have applications running using it.  It provided a single source for UI design that targeted multiple platforms.  You could write your UI pieces into a script file which would be “compiled” on the target platforms, and the same Synergy code would run and display according to the host environment – OpenVMS and UNIX were “green screen”, Microsoft Windows was just that, native Microsoft Windows.

Today’s multi-targeting tools come from Xamarin, are built into Microsoft Visual Studio, and utilise the Microsoft .NET Framework – just like Synergy.  They provide developers with the ability to write-once-deploy-many across all the latest must-have devices including iOS, Android and Windows Phone and Store.  And did I mention Google Wear?

As a Synergy developer you are perfectly placed to take full advantage of these opportunities.  As well as being able to build new applications from Synergy templates built into Visual Studio for Android and iOS, you can also plug your Synergy Portable Class Libraries directly into Windows Phone and Store applications.  You could even be sending push notifications to the latest Google Watch technology on your wrist, if that’s what your application needs.

We will be showing you just how easy it is to create great cross-device apps using Synergy and the latest Xamarin tools at this year’s DevPartner 2015 conference.

And if you want to get right up to speed on all things Synergy, don’t forget to sign up for the pre- and post-conference workshops.  These workshops will take you from zero to hero when it comes to building great Synergy applications using all the latest tools and techniques, and understanding what all the buzz words and jargon mean.


Symphony Framework Basics: Control Styling

By , Posted on September 6, 2013 at 5:05 am


In my previous article (Symphony Framework Basics: Data Binding) I demonstrated how to perform simple data binding between your XAML UI controls and your Data Objects.  This article demonstrates how to build powerful styles to define and control your user interface and provide automated data binding to your Data Objects.

Before we look at styles, let’s recap how we do data binding.  Consider the following simple repository structure:

    Record group_record
        GROUP_ID    ,A20   ; (1,20) group id
        DESCRIPTION ,A100  ; (21,120) description

When created as a Data Object this creates two properties:

    public property Group_id, a20
    public property Description, a100

In the XAML code we can data bind the properties exposed by the Data Object to standard UI controls:

    <TextBox Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter}}"/>
    <TextBox Text="{Binding Path=Description, Converter={StaticResource alphaConverter}}"/>

There are a number of issues here, and not all of them are obvious.  Although we have performed the data binding, there is no code in the XAML to prevent the user typing more characters than the underlying data allows.  The Group_id property, for example, only allows up to twenty characters, so we need to add code to prevent more being entered.  In the repository we’ve defined the field to contain only uppercase characters, and again the XAML is not honouring this requirement.  When a field is in error, for example a required field that is blank, the underlying Data Object exposes this information, but we are not utilising it here.  The same applies to controlling whether the field is read-only, whether entry is disabled, and so on.  All these settings and more can be configured against the field in the Synergy Repository.

Using CodeGen and the correct Symphony templates we can generate styles that define exactly how we require field entry to be controlled.

Generating the style files is very simple.  The CodeGen command line is:

    codegen -s GROUP -t Symphony_Style -n GroupMaint -ut ASSEMBLYNAME=GroupMaint -cw 16

One interesting item on the CodeGen command line is the “-cw 16”.  This simply defines the standard width as 16 pixels for each character and is used when defining the size of a control.

The generated style file contains individual styles for each field in the repository structure, as well as a style for the prompt.  Here is an example of a prompt style:

    <Style x:Key="Group_Group_id_prompt" TargetType="{x:Type Label}">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="{x:Type Label}">
                    <Label
                        Content="Group ID"
                        IsEnabled="{Binding Path=Group_idIsEnabled}">
                    </Label>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>

And a field style:

    <Style x:Key="Group_Group_id_style" TargetType="{x:Type symphonyControls:FieldControl}">
        <Setter Property="FocusVisualStyle" Value="{x:Null}"/>
        <Setter Property="Focusable" Value="False"></Setter>
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="{x:Type symphonyControls:FieldControl}">
                    <TextBox Name="ctlGroup_Group_id"
                             Text="{Binding Path=Group_id, Converter={StaticResource alphaConverter},
                                    UpdateSourceTrigger=PropertyChanged,
                                    ValidatesOnDataErrors=True}"
                             Validation.ErrorTemplate="{StaticResource validationTemplate}"
                             MaxLength="20"
                             Width="320"
                             CharacterCasing="Upper"
                             IsEnabled="{Binding Path=Group_idIsEnabled}"
                             IsReadOnly="{Binding Path=Group_idIsReadOnly}"
                             VerticalAlignment="Center"
                             HorizontalAlignment="Left"
                             ToolTip="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors), Converter={StaticResource errorConveter}}">
                        <TextBox.Style>
                            <Style>
                                <Style.Triggers>
                                    <DataTrigger Binding="{Binding Path=Group_idIsFocused}" Value="true">
                                        <Setter Property="FocusManager.FocusedElement"
                                                Value="{Binding ElementName=ctlGroup_Group_id}"></Setter>
                                    </DataTrigger>
                                    <DataTrigger Binding="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.HasError)}" Value="True">
                                        <Setter Property="TextBox.Background">
                                            <Setter.Value>
                                                <LinearGradientBrush StartPoint="0.5,0" EndPoint="0.5,1">
                                                    <LinearGradientBrush.GradientStops>
                                                        <GradientStop Offset="0.2" Color="WhiteSmoke" />
                                                        <GradientStop Offset="3" Color="Red" />
                                                    </LinearGradientBrush.GradientStops>
                                                </LinearGradientBrush>
                                            </Setter.Value>
                                        </Setter>
                                    </DataTrigger>
                                </Style.Triggers>
                            </Style>
                        </TextBox.Style>
                    </TextBox>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>

This code may look a little verbose, but it enables a number of capabilities, including:

  • Data binding of the underlying UI control to the Data Object property.
  • Control of features like field length, character casing, and read-only status.
  • Use of the tooltip to display error information when the field is in error.
  • A red control background when the field is in error.

Once you have created your styles and added them to your Visual Studio project you can then reference and use them in your UI design.  To reference the style:

    <ResourceDictionary Source="pack:/GroupMaint;component/Resources/Group_style.CodeGen.xaml"/>

Each style is based on a control in the Symphony Framework called “FieldControl”, which can be found in the Symphony.Conductor.Controls namespace.  You must add a reference to this namespace in your XAML code:

    xmlns:symphonyControls="clr-namespace:Symphony.Conductor.Controls;assembly=SymphonyConductor"

Now you can reference the FieldControl and apply the required style to it:

    <symphonyControls:FieldControl DataContext="{Binding Path=MasterData}"
                                   Style="{StaticResource Group_Group_id_style}">
    </symphonyControls:FieldControl>

And to add the prompt, or label, style use:

    <Label Style="{StaticResource Group_Group_id_prompt}"
           DataContext="{Binding Path=MasterData}" />

Because the styles are linked to the same property in the same Data Object, when your code disables the input control the prompt will be greyed out as well.

The code snippets here are just part of the overall solution.  To see the full details you can watch a short video at http://youtu.be/FqWpMRrSb4w.  This article covers styling of the user interface.  The next article will demonstrate using all of the different Synergy field types and utilising controls like date pickers, check boxes, etc.


HTTP API Enhancements in DBL 10.1

By Steve Ives, Posted on January 14, 2013 at 11:24 pm


In addition to introducing several totally new features, DBL 10.1 also includes enhancements to the client portion of the HTTP API. These enhancements make the API significantly easier to use, and also make it possible to achieve things that were not previously possible.

Since the HTTP API was introduced in DBL 7.5, the client part of the API has consisted of two routines: HTTP_CLIENT_GET and HTTP_CLIENT_POST. As suggested by their names, these routines allow you to issue GET and POST requests to an HTTP server. A GET request is a simple request in which a URI is sent to the server and a response (which may include data) comes back. A POST request is slightly different in that, in addition to the URI, additional data may also be sent to the server in the body of the HTTP request.

When dealing with an HTTP server it isn’t always possible to pre-determine the amount of data to be sent to the server, and it’s certainly not possible to know how much data will come back from the server for any given request. So in order to implement the HTTP API it was necessary to have a mechanism to deal with variable length data of any size, and at that time the only solution was to use dynamic memory.

Using dynamic memory worked fine: any data to be sent to the HTTP server as part of a POST request was placed into dynamic memory and the memory handle passed to the API, and any data returned from a GET or POST request was placed into dynamic memory by the API and the handle returned to the application. Dealing with variable length strings using dynamic memory isn’t particularly hard, but the fact of the matter is that while only a single line of code is required to perform an HTTP GET or POST, several lines of code were typically required to marshal data into and out of memory handles.

When the System.String class was introduced in DBL 9.1, so was the opportunity to simplify the use of the HTTP API, and that became a reality in DBL 10.1.

In order to maintain compatibility with existing code the HTTP_CLIENT_GET and HTTP_CLIENT_POST routines remain unchanged, but they are joined by two new siblings named HTTP_GET and HTTP_POST. These routines are similar to the original routines, essentially performing the same task, but they are easier to use because they use string objects instead of dynamic memory. And because the string class has a length property it is no longer necessary to pass separate parameters to indicate the length of the data being sent, or to determine the length of the data that was received. String objects are also used when passing and receiving HTTP headers.
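
As an illustration, a call to one of the new routines might look something like the sketch below. To be clear, HTTP_GET itself is real, but the exact argument list shown here is an assumption made for the example; consult the DBL Language Reference for the routine’s actual signature before using it.

    ; Rough sketch only -- %HTTP_GET exists in DBL 10.1, but the argument
    ; order shown here is assumed, not taken from the reference manual.
    main
        record
            term        ,i4
            status      ,i4
            response    ,string
            errtxt      ,string
    proc
        open(term=0, o, "tt:")
        status = %http_get("http://example.com/api/customers/123", 30, response, errtxt)
        if (status == 0) then
            writes(term, response)      ;response body arrives as a string object
        else
            writes(term, errtxt)        ;error text also arrives as a string
        close term
    endmain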

So the new HTTP_GET and HTTP_POST routines make the HTTP API easier to use, but there is a second part to this story, so read on.

One of the primary use cases for the HTTP API is to implement code that interacts with Web Services, and in recent years a new flavor of Web Services called REST Services (REST stands for Representational State Transfer) has become popular. With traditional Web Services all requests were typically sent to the server via either an HTTP GET or POST request, but with REST Services two additional HTTP methods are typically used: the HTTP PUT and DELETE methods.

Many of you will be familiar with the term “CRUD” which stands for “Create, Read, Update and Delete”. Of course these are four operations that commonly occur in software applications. The code that we write often creates, reads, updates or deletes something. When designing traditional Web Services we would often indicate the type of operation via a parameter to a method, or perhaps even implement a separate method for each of these operations. With REST based web services however, the type of operation (create, read, update or delete) is indicated by the type of HTTP request used (PUT, GET, POST or DELETE).

To enable DBL developers to use the HTTP API to interact with REST services an extension to the HTTP API was required, and DBL 10.1 delivers that enhancement in the form of another two new routines capable of performing HTTP PUT and DELETE requests. As you can probably guess the names of these two new routines are HTTP_PUT and HTTP_DELETE. And of course, in order to make these new routines easy to use, they also use string parameters where variable length data is being passed or received.

You can find much more information about the HTTP API in the DBL Language Reference Manual, which of course you can also find on-line at http://docs.synergyde.com. In fact, if you’re feeling really adventurous you could try Googling something like “Synergy DBL HTTP_PUT”.


Unit Testing with Synergy .NET

By Steve Ives, Posted on at 11:02 pm


One of the “sexy” buzz words, or more accurately “buzz phrases”, that is being bandied around with increased frequency is “unit testing”. Put simply, unit testing is the ability to implement specific tests of small “units” of an application (often down at the individual method level) and then automate those tests in a predictably repeatable way. The theory goes that if you are able to automate the testing of all of the individual building blocks of your application, ensuring that each component behaves as expected both when used as intended and when used in ways it is not supposed to be used, then you stand a much better chance of the application as a whole behaving as expected.

There are several popular unit testing frameworks available and in common use today, many of which integrate directly with common development tools such as Microsoft Visual Studio. In fact some versions of Visual Studio have an excellent unit testing framework built in; it’s called the Microsoft Unit Test Framework for Managed Code and it is included in the Visual Studio Premium and Ultimate editions. I am delighted to be able to tell you that in Synergy .NET version 10.1 we have added support for unit testing Synergy applications with that framework.
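
To give a flavor of what this looks like, here is a minimal sketch of a Synergy .NET test class written against that framework. The class under test (MyApp.StringHelper) and its Reverse method are hypothetical stand-ins; the attribute and assertion usage follows the normal Microsoft Unit Test Framework pattern.

    import Microsoft.VisualStudio.TestTools.UnitTesting

    namespace MyApp.Tests

        {TestClass}
        public class StringHelperTests

            ;Hypothetical test of a hypothetical MyApp.StringHelper.Reverse method
            {TestMethod}
            public method Reverse_ReversesCharacters, void
            proc
                Assert.AreEqual("cba", MyApp.StringHelper.Reverse("abc"))
            endmethod

        endclass

    endnamespace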

I’ve always been of the opinion that unit testing is a good idea, but it was never really something that I had ever set out to actually do. But that all changed in December, when I found that I had a few spare days on my hands. I decided to give it a try.

As many of you know I develop the CodeGen tool that is used by my team, as well as by an increasing number of customers. I decided to set about writing some unit tests for some areas of the code generator.

I was surprised by how easy it was to do, and by how quickly I was able to start to see some tangible results from the relatively minimal effort; I probably spent around two days developing around 700 individual unit tests for various parts of the CodeGen environment.

Now bear in mind that when I started this effort I wasn’t aware of any bugs. I wasn’t naive enough to think that my “baby” was bug free, but I was pretty sure there weren’t many bugs in the code; I pretty much thought that everything was “hunky dory”. Boy, was I in for a surprise!

By developing these SIMPLE tests … call this routine, pass these parameters, expect to get this result type of stuff … I was able to identify (and fix) over 20 bugs! Now to be fair most of these bugs were in pretty remote areas of the code, in places that perhaps rarely get executed. After all there are lots of people using CodeGen every day … but a bug is a bug … the app would have fallen over for someone, somewhere, sometime, eventually. We all have those kind of bugs … right?

Anyway, suffice it to say that I’m now a unit testing convert. So much so in fact that I think that the next time I get to develop a new application I’m pretty sure that the first code that I’ll write after the specs are agreed will be the unit tests … BEFORE the actual application code is written!

Unit testing is a pretty big subject, and I’m really just scratching the surface at this point, so I’m not going to go into more detail just yet. So for now I’m just throwing this information out there as a little “teaser” … I’ll be talking more about unit testing with Synergy .NET at the DevPartner conferences a little later in the year, and I’ll certainly write some more in-depth articles on the subject for the BLOG also.


Release Notifications for CodeGen and Symphony Framework

By Steve Ives, Posted on July 31, 2012 at 5:19 pm


At the DevPartner conference I told people that in order to receive notifications for new releases of the open source CodeGen and Symphony Framework projects they should “Follow” the projects on CodePlex.

It turns out that “following” a project doesn’t send release notifications. If you want to get release notifications then you must go to the “Downloads” page for each project and subscribe for notifications. If you are interested in either of these two projects then I would recommend that you do just that, as we’ve done some great enhancements to both recently, and there are more great things still in the pipeline.

The downloads page for CodeGen is at http://codegen.codeplex.com/releases and the downloads page for Symphony Framework is at http://symphonyframework.codeplex.com/releases.


Synergy .NET Without Purchasing Visual Studio

By Steve Ives, Posted on June 20, 2012 at 1:16 pm


It occurs to me that we may have forgotten to tell you about something important … sorry! Of course if you were at either of the recent Synergex DevPartner conferences then you will already know this, but if not then “listen up” because this could save you some money!

The development environment for Synergy .NET is provided by Microsoft Visual Studio. For Synergy 9 we support Visual Studio 2010 Professional or higher. If a developer wants to develop with Synergy .NET then they would install Synergy/DE and Visual Studio 2010, and then install “Synergy Language Integration for Visual Studio” (we call it SLI because it’s less of a mouthful) to add all of the Synergy .NET capabilities and templates alongside the other Microsoft languages like C# and Visual Basic. Many developers already have Visual Studio 2010 so this isn’t a problem … but what if you don’t?

Well … good news! In addition to purchasing Visual Studio 2010 there is also a free solution. It’s called Visual Studio 2010 Shell (Integrated) and you can download it directly from Microsoft. Basically the Integrated Shell is a bare bones version of Visual Studio, with all of the other languages stripped out. It’s not much use on its own, but if you install it and then install SLI … hey presto you have a full Synergy .NET development environment!

If you do decide to try this out then please remember that Synergy .NET requires Visual Studio 2010 Service Pack 1, so you’ll need to install that after installing Integrated Shell, but before installing SLI.

You can download the files that you’ll need from these locations:

Visual Studio 2010 Shell (Integrated)

http://www.microsoft.com/en-us/download/details.aspx?id=115

Visual Studio 2010 Service Pack 1

http://www.microsoft.com/en-us/download/details.aspx?id=23691

By the way, when Synergy 10 is released later this year you will have a choice of two development environments. We’ll continue to support Visual Studio 2010, but we’ll also support Visual Studio 2012 … and there is an Integrated Shell available for it too, but the requirements are a little different. There is also something called Visual Studio 2012 RC Shell (Isolated), and you have to install that before you can install the Integrated Shell.

Visual Studio 2012 RC Shell (Isolated)

http://www.microsoft.com/en-us/download/details.aspx?id=29927

Visual Studio 2012 RC Shell (Integrated)

http://www.microsoft.com/en-us/download/details.aspx?id=29912

By the way, it appears that the Visual Studio 2012 RC Integrated Shell installation may have a problem. I found that it reported errors about missing components when I tried to install it. I also found that if I did a reboot between installing the Isolated Shell and the Integrated Shell, the errors went away!

If you do decide to play with Synergy 10 (the beta will be out very soon now) and Visual Studio 2012, and we really hope that you all will, remember that Visual Studio 2012 is currently a release candidate. When the final product ships later in the year there will be new Isolated and Integrated shells to download and install.


Microsoft Removes Installer Project Templates from Visual Studio 2012

By Steve Ives, Posted on June 13, 2012 at 8:09 am


Microsoft has recently announced that the various “Visual Studio Installer” project templates that were included in Visual Studio 2010 will NOT be included in Visual Studio 2012. These project templates are used within Visual Studio 2010 to build Windows Installer setup packages for .NET applications.

I’m writing this post because one of the 2012 tutorials that was published during our recent DevPartner conferences was based on using these project templates; it showed developers how to build an installation package for a simple Windows application. I wanted to let customers know that the specific mechanism taught during the tutorial will not work in future versions of Visual Studio.

Bear in mind, however, that the end result of the tutorial is a “standard” Windows Installer MSI package, and many of the techniques and concepts presented are still very much applicable to building Windows Installer packages using other products.

Unfortunately there will no longer be any free tools for building installations included with Visual Studio, so developers will likely have to select one of several third-party products (e.g. InstallShield) to build their installation packages.


CodeGen 4.1 Released

By Steve Ives, Posted on June 7, 2012 at 12:48 am


Recently I announced that our code generator, CodeGen, had been published on CodePlex for everyone to use. Today I am delighted to announce that we have released a new version of CodeGen which includes some significant new features.

It is now possible to generate code which is based on information drawn from multiple repository structures, which makes it possible to generate many more types of routines and classes than ever before.
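
By way of illustration, a multi-structure generation request might look something like the command below. The structure and template names are made up for the example, and the exact way to pass multiple structures (and how a template addresses each of them) is described in the CodeGen documentation, so treat this as a sketch rather than the definitive syntax.

    codegen -s CUSTOMER ORDER ORDERLINE -t MULTI_STRUCTURE_REPORT -r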

Also we have added the ability to launch code generation based on a repository file definition. CodeGen will make any structures that are assigned to the file available to the template when generating code.

We’re now starting to plan the next release. CodeGen can already be used to generate code for Synergy Language, C#, Visual Basic and Objective-C, and one of the features we’ll be adding in the next release is data type mappings and new field loop tokens for the Java language.

For more information about CodeGen refer to the CodeGen site on CodePlex, which you will find at http://codegen.codeplex.com.

