Steve Ives, Author at Synergex | Page 4 of 7

Synergex Blog


CodeGen 5.1.2 Released

By Steve Ives, Posted on January 28, 2016 at 9:55 am

Steve Ives

We have just released a CodeGen update that includes a fix for a recently discovered problem related to the processing of enumerated fields. If your repository includes enumerated fields and you use the field selection loop token <SELECTION_VALUE> (or the Symphony Framework custom token <SYMPHONY_SELECTION_VALUE>), then we recommend that you update to the new version and re-generate your code. As a reminder, CodeGen recently moved to GitHub; you can find the new release at https://github.com/Synergex/CodeGen/releases.


CodeGen Has a New Home

By Steve Ives, Posted on December 9, 2015 at 1:45 pm

Steve Ives

Today we are announcing that we have moved the open source CodeGen project from its former home on CodePlex to a new home on GitHub. We made the decision to do this for several reasons, not least of which is the fact that GitHub has effectively become the de facto standard place for hosting open source projects. Even Microsoft, which built and operates the CodePlex site using its own Team Foundation Server source control technologies, seems to have lost interest in it; over the last 18 months or so Microsoft has moved pretty much all of its own considerable number of open source projects to GitHub as well! Git also has several very nice features over and above what TFS has to offer, and it is considerably faster to use. Related to the move is a new version (CodeGen 5.1.1), but the only changes in the new version relate to the move from CodePlex to GitHub; there is no new functionality over the 5.1.0 version that was released a few days ago.

If you don’t already have one, we encourage you to create a GitHub account and, once logged in, to “watch” CodeGen. If you wish to receive notifications about new CodeGen releases you can also subscribe to the CodeGen Releases Atom feed. CodeGen is still distributed under the terms of the New BSD License. For the time being we plan to leave the CodePlex environment intact, but no new changes will be checked in there and no new releases will be published there.

Here are a few useful GitHub URLs related to our new home:

Project home https://github.com/Synergex/CodeGen
Wiki (information) https://github.com/Synergex/CodeGen/wiki
Download latest version https://github.com/Synergex/CodeGen/releases/latest
Issue tracking https://github.com/Synergex/CodeGen/issues
Releases Atom feed https://github.com/Synergex/CodeGen/releases.atom

CodeGen 5.1 Released

By Steve Ives, Posted on December 4, 2015 at 4:07 pm

Steve Ives

Just a quick note to announce that we have today released CodeGen 5.1. This release has just one new feature, but it allowed me to solve a challenging problem that I faced while working on a customer project recently. I have dubbed the new feature conditional processing blocks. Essentially it is the ability to conditionally include (or exclude) parts of a template file based on the presence or absence of identifiers that can be declared on the command line. It allows you to achieve within template files the same kind of results that you would get by using .DEFINE, .IFDEF and .IFNDEF in DBL source code. For example, a developer could include code like this in a template file:

    open(channel=0,u:i,"<FILE_NAME>")
    <IF DEFINED_ATTACH_IO_HOOKS>
    new <StructureName>Hooks(channel)
    </IF>

The developer can then choose, at the time they generate the code, whether to include or exclude the code that attaches the I/O hooks object to the channel that was just opened. By default the I/O hooks code would not be included; if it is needed, the developer would define the ATTACH_IO_HOOKS identifier when generating the code, using the new -define command line option:

    codegen -s EMPLOYEE -t FILE_IO_CLASS -r -define ATTACH_IO_HOOKS

This may seem like a very simple change, and it is, but my mind is now racing thinking about all of the new possibilities it opens up.


Updated PDF API

By Steve Ives, Posted on October 16, 2015 at 11:24 am

Steve Ives

A few weeks ago I announced that a new API called SynPSG_PDF had been added to the code exchange. Today I am pleased to announce that the API has been updated and, in addition to Windows, is now also supported on systems running Linux (32-bit and 64-bit), OpenVMS (AXP and IA64) and Synergy .NET.

Also, as a direct result of recent customer feedback, I have added a mechanism that allows a PDF file to be easily created from an existing text file with just a few lines of code. This means that existing report programs that already produce plain text output can be easily modified to produce PDF output with a small amount of code like this:

[The original post embedded a screenshot of the example code at this point.]
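Since that screenshot isn’t reproduced here, the following is a minimal sketch (in traditional DBL) of the kind of thing the code does. The method names AttachFile, Save and View, and the file names, are my assumptions based on this post and on the name of the original image; refer to the SynPSG.PDF.chm documentation for the actual API.

    import SynPSG.PDF

    record
        pdf         ,@PdfFile           ;The PdfFile class from the SynPSG.PDF API
    proc
        ;NOTE: the method names below are assumptions, not the documented API
        pdf = new PdfFile()
        pdf.AttachFile("SALESREP.LIS")  ;Import an existing plain text report
        pdf.Save("SALESREP.PDF")        ;Write the resulting PDF file to disk
        pdf.View()                      ;Launch it in the default PDF viewer
        stop
    end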

If you would like to check out the API, you can download the code from https://resourcecenter.synergex.com/devres/code-exchange-details.aspx?id=245.


The Digital World … Going too Far?

By Steve Ives, Posted on October 1, 2015 at 7:32 pm

Steve Ives

So Hilton’s latest thing is the “Digital Key”; while standing within 5 feet of the door to your hotel room it is now possible (in certain locations) to click a virtual button in the Hilton App on your smart phone and have the door to your hotel room unlock, as if by magic. The digital key also knows about other areas of the hotel that you have access to, such as the Executive lounge (I tried it, it works) and gymnasium (apparently) and provides access to those places too.

Last week I used the app to make my reservation. Yesterday I used the app to check in for my stay and also to select my room. And today, having already checked in electronically, I was able to totally bypass the reception desk and proceed directly to my room.

Tomorrow morning the credit card associated with my profile will be automatically charged, and I will walk out of the front door and drive to the airport a few exits down the freeway.

If it hasn’t dawned on you what my point is here, it is that I will have booked and completed an entire stay in a hotel … without ever needing to interact with a single other human being; all of which seems to me to be a pretty sad state of affairs! Maybe we’re taking this whole technology thing a little too far in some areas?


New Tools for Working with PDF Files

By Steve Ives, Posted on September 11, 2015 at 2:44 pm

Steve Ives

For some time now the Synergy/DE Code Exchange has included an item called PDFKIT which essentially contains a set of DBL wrapper code that allows the open source Haru PDF Library to be used from DBL. The work done in PDFKIT was a great start and has been used successfully by several developers, but I don’t think that anyone would disagree with me if I were to suggest that it’s not exactly the most intuitive software to use, and it’s not exactly what you would call well documented either; just like the underlying Haru library!

So, as time has permitted over the last few weeks, I have been working on what I hope is an improved solution. I certainly didn’t want to totally reinvent the wheel by starting from scratch; as I mentioned, PDFKIT was a great start. But I did want to take a slightly different approach that I thought would be more useful to a wider range of developers, and I did want to make sure that complete documentation was included. What I came up with is called SynPSG.PDF, and it is available in Code Exchange now.

When you download and extract the zip file (SynPSG_PDF.zip) you will find that it contains these main elements:

  • pdfdbl.dbl
    • This is the DBL code that wraps the Haru PDF library; it is taken directly from the latest version of PDFKIT.
  • Haru PDF Library DLLs
    • The same DLLs that are distributed with PDFKIT. Refer to the documentation for instructions on where to place these DLLs.
  • SynPSG.PDF.dbl
    • A source file containing the new API that I have created.
  • SynPSG.PDF.vpw
    • A Synergy/DE Workbench workspace that can be used to build the code, as well as build and run several sample programs that are also included (this is a Workbench 10.3.1 workspace and will not work with earlier versions of Workbench).
  • SynPSG.PDF.chm
    • A Windows help file containing documentation for the new API.

You don’t need to use the Workbench configuration that I have provided; if you prefer, you can simply include the pdfdbl.dbl and SynPSG.PDF.dbl files in the build of your own subroutine library. But remember that both of these files contain OO code, so you will need to prototype that code with DBLPROTO.

As you will see when you refer to the documentation, most things in the API revolve around a class called PdfFile. This class lets you basically do four things:

  1. Create a PDF file.
  2. Save the PDF file to disk.
  3. View the PDF file by launching it in a PDF viewer application.
  4. Print the PDF file to a printer.

I’m not going to go into a huge amount of detail about creating PDF documents or using the API here because these topics are discussed in the documentation, but I will mention a couple of basic things.

PDF documents inherently use an X,Y coordinate system that is based on a unit called a device independent pixel. These pixels are square and measure 1/72 of an inch in each direction. The coordinate system used within the pages of a PDF document is rooted in the lower-left corner of the page, which is assigned the X,Y coordinate 0,0. The width and height of the page in pixels depend on the page type as well as the orientation. For example, a standard US Letter page in portrait orientation is 8.5 x 11 inches, so in device independent pixels it has the dimensions 612 x 792.
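In other words, converting a physical page size into device independent pixels is just a matter of multiplying inches by 72, as this trivial DBL fragment illustrates:

    record
        pageWidth   ,d4             ;Page width in device independent pixels
        pageHeight  ,d4             ;Page height in device independent pixels
    proc
        ;US Letter in portrait orientation
        pageWidth = 8.5 * 72        ;612
        pageHeight = 11 * 72        ;792
        stop
    end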

With most PDF APIs you work directly with this coordinate system, and you can do so with this API also, but doing so can require a lot of complex calculations and hence can be a slow process. Often, though, when we’re writing software it is convenient for us to work in simple “rows and columns” of characters, using a fixed-pitch font. The new API makes it very easy to do just that, meaning that results can be produced very quickly, and also that existing report programs (which already work in terms of rows and columns) can be easily modified to produce PDF output.

Here is an example of a simple row / column based report that took only a few minutes to create:

[Screenshot: a simple row / column based report rendered as a PDF]
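The code behind a report like that might look roughly like the sketch below. Once again the method names (SetFont, WriteText, Print and so on) and their parameters are purely illustrative assumptions on my part; see SynPSG.PDF.chm for the actual API.

    import SynPSG.PDF

    record
        pdf     ,@PdfFile
        row     ,d3
    proc
        pdf = new PdfFile()
        ;Work in rows and columns using a fixed-pitch font (illustrative calls)
        pdf.SetFont("Courier", 10)
        pdf.WriteText(1, 1, "CUSTOMER SALES SUMMARY")
        for row from 3 thru 12
            pdf.WriteText(row, 1, "Detail line for a customer goes here")
        pdf.Save("SUMMARY.PDF")     ;Write the PDF file to disk
        pdf.Print()                 ;Or send it straight to a printer
        stop
    end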

Of course there are times when you need to produce more complex output, and the new API lets you do that too. To give you an idea of what it is capable of, here’s a screen shot of a mock up of a delivery ticket document that I created while working on a recent customer project:

[Screenshot: a mock-up of a delivery ticket document]

As you can see, this second example is considerably more complex; it uses multiple fonts and font sizes, line drawing, box drawing, custom line and stroke colors, and so on. And although not shown in these examples, there is of course support for including images as well.

The new API is currently available on Windows under traditional Synergy. It should be possible to make the code portable to other platforms in the near future, and .NET compatibility is definitely in the pipeline. The software requires the latest version of Synergy, which at the time of writing is V10.3.1b. You can download the code from here:

https://resourcecenter.synergex.com/devres/code-exchange-details.aspx?id=245

It is early days for this new API and I have many ideas for how it can be extended and enhanced. I am looking forward to working on it some more soon, and also to receiving any feedback or suggestions that you may have.


CodeGen 5.0.5 Released

By Steve Ives, Posted on August 28, 2015 at 12:59 pm

Steve Ives

Just a quick note to let all of you CodeGen users out there know that a new version (CodeGen 5.0.5) has just been released. You can get more information about the changes and download the release from https://codegen.codeplex.com/releases.

If you would like to receive email or RSS notifications when new CodeGen versions are released then there are links on the above mentioned page to allow you to set that up, and we encourage you to do so.


Why is That First WCF Operation SO Slow?

By Steve Ives, Posted on June 25, 2015 at 12:06 pm

Steve Ives

If you have ever developed and worked with a WCF service you may have noticed that the very first time you connect to a newly started instance of the service there can sometimes be a noticeable delay before the service responds. But invoking subsequent operations often seems almost instantaneous. Usually the delay is relatively short, perhaps even just a fraction of a second, but still noticeable. Well earlier this week I encountered a WCF service that exhibited this behavior, but the delay for the first operation was almost three minutes! Something had to be done.

Some time later, after much debugging, web searching and more than a little head scratching, we realized that the “problem” that we were seeing was actually “by design” in WCF and was related to the generation of metadata for the service. It turns out that if “metadata exchange” is enabled for the service then WCF generates the metadata, regardless of whether anyone is currently requesting it or not, at the time that the first operation is requested by a client. Often the generation of the metadata takes almost no time at all, but as the size and complexity of a service grows (in terms of the number of operations exposed, the number of parameters, the number and nature of complex types exposed, etc.) the time taken to generate the metadata grows. In the case of this particular service there were over 800 individual operations defined, with lots and lots of complex types being exposed, and the service was still growing!

The only time you need metadata exchange enabled is when you need to access the WSDL for the service, so in simple terms whenever you need to do an “Add Service Reference” or “Update Service Reference”. The rest of the time having it enabled is just slowing things down at runtime.

I can’t tell you exactly how to enable and disable metadata exchange with your service, because there are several different ways it can be configured, but it’s likely going to be one of these:

  1. A <serviceMetadata/> element in the <serviceBehaviors> section of a Web.config or App.config file.
  2. An <endpoint/> element that uses the IMetadataExchange contract, defined in a <service/> section of a Web.config or App.config file.
  3. Code that does the equivalent of one of the two options above.
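To give you an idea of what the first of those options looks like, here is a minimal sketch of the relevant section of a Web.config file (using a default, unnamed behavior):

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior>
            <!-- Set httpGetEnabled to false (or remove the element) to disable
                 metadata exchange; only enable it while you are adding or
                 updating service references. -->
            <serviceMetadata httpGetEnabled="true" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>

If your service uses the second option instead, look for an <endpoint/> element with contract="IMetadataExchange" (typically using the mexHttpBinding binding) and remove or comment it out when it is not needed.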

So the lesson learned was to enable metadata exchange only when it is needed, for the purpose of creating or updating client proxy code; the result was an almost instantaneous response from the service once metadata exchange had been disabled. Of course it goes without saying that metadata exchange should NEVER be enabled on production services.


Old Dog … New Tricks … Done!

By Steve Ives, Posted on June 3, 2015 at 3:59 pm

Steve Ives

The old adage tells us that you can’t teach an old dog new tricks. But after the last three days, I beg to differ! It’s been an interesting few days for sure; fun, challenging, rewarding and heated are all words that come to mind. At this point, three days into a four-day engagement, I think that we may just have dispelled that old adage. For one, this “old dog” certainly feels like he has learned several new tricks.

So what was the gig? It was to visit a company that has an extensive application deployed on OpenVMS, and to help them explore possible ways to extend the reach of that application beyond the current OpenVMS platform. Not so hard, I hear you say; there are any number of ways of doing that. xfServerPlus immediately comes to mind, as do xfODBC and the SQL Connection API, and even things like the HTTP API that could be used to allow the OpenVMS application to do things like interacting with web services. All true, but there was one thing that was threatening to throw a “spanner (wrench) in the works”. Did I mention that the application in question was developed in COBOL? That’s right, not a line of DBL code anywhere in sight! Oh and by the way, until about a week ago I’d never even seen a single line of COBOL code.

Now perhaps you understand why challenging was one of the words I mentioned earlier. But I’m up for a challenge, as long as I think I have a fighting chance of coming up with something cool that addresses a customer’s needs. And in this case I did. I didn’t yet know all of the details, but I figured the odds of coming up with something were pretty good.

Why all of this confidence? Well, partly because I’m really good at what I do (can’t believe I just said that), but seriously, it was mainly because of the fact that a lot of the really cool things that we developers just take for granted these days, like the ability to write Synergy .NET code and call it from C#, or write VB.NET code and call it from Synergy .NET, have their roots in innovations that were made 30+ years ago by a company named Digital Equipment Corporation (DEC).

You see, OpenVMS had this little thing called the Common Language Environment. In a nutshell this meant that the operating system provided a core environment in which programming languages could interoperate. Any language that chose to play in that ball park would be compatible with other such languages, and most languages on OpenVMS (including DIBOL and DBL) did just that. This meant that BASIC could call FORTRAN, FORTRAN could call C, C could call PASCAL and … well, you get the idea. And YES, it means that COBOL can call DBL and DBL can call COBOL. OK, now we’re talking!

So why is this such a big deal? Well, it turns out that Digital, later Compaq, and later still HP didn’t do such a great job of protecting their customers’ investments in their COBOL code. It’s been quite a while since there was a new release of COBOL on OpenVMS, so it’s been quite a while since OpenVMS COBOL developers had access to any new features. This means that there isn’t a way to call OpenVMS COBOL routines from .NET or Java, there isn’t a way for OpenVMS COBOL code to interact with SQL Server or Oracle, and there isn’t an HTTP API … so don’t even think about calling web services from COBOL code.

But wait a minute, COBOL can call DBL … and DBL can call COBOL … so YES, COBOL CAN do all of those things … via DBL! And that fact was essentially the basis for my visit to Toronto this week.

I’m not going to get into lots of details about exactly what we did. Suffice it to say that we were able to leverage two core Synergy/DE technologies in order to implement two main things:

  1. A generic mechanism allowing COBOL code executing on OpenVMS to interact with Windows “stuff” on the user’s desktop (the same desktop that their terminal emulator is running on).
  2. A generic mechanism allowing Windows “stuff” executing on the user’s desktop to interact with COBOL code back on the OpenVMS system.

The two core technologies have already been mentioned. Outbound from OpenVMS was achieved by COBOL calling a DBL routine that in turn used the Synergy HTTP API to communicate with a WCF REST web service hosted in a Windows application running in the user’s system tray. Inbound to OpenVMS was of course achieved with a combination of xfNetLink .NET and xfServerPlus.

So just who is the old dog? Well as I mentioned earlier I probably fall into that category at this point, as do several of the other developers that it was my privilege to work with this week. But as I set out to write this article I must admit that the main old dogs in my mind were OpenVMS and COBOL. Whatever, I think that all of the old dogs learned new tricks this week.

It’s been an action-packed three days, but I’m pretty pleased with what has been accomplished, and I think the customer is too. I have one more day on site tomorrow to wrap up the more mundane things like documentation (yawn) and code walkthroughs to ensure that everyone understands what was done and how all the pieces fit together. Then it’s back home on Friday before a well-deserved vacation next week, on a beach, with my wife.

So what did I learn this week?

  1. I really, really, REALLY don’t like COBOL!
  2. OpenVMS was WAY ahead of its time and offered LOTS of really cool features. Actually I didn’t just learn this, I always knew it, but I wanted to recognize it in this list … and it’s MY BLOG so I can 🙂
  3. Synergy/DE is every bit as cool as I always believed; and this week I proved it to a bunch of people that had never even heard of it before.
  4. Newfangled elevators are very confusing for old dogs!

UX Design; Elevator Controls

By Steve Ives, Posted on June 1, 2015 at 3:09 pm

Steve Ives

Those of you who attended the recent DevPartner conference in Philadelphia will no doubt remember the excellent presentation on UX Design that was given by guest speaker, Billy Hollis. During his presentation Billy cited photographs of a couple of elevator control panels. He used one as an example of bad design, the other an example of good.

I won’t show the actual photos that Billy used (sorry, you had to be there for that!) but in a nutshell the layout of the buttons and other information (floor numbers, etc.) on the first panel was at best confusing. There was clear physical evidence that users had been confused by the panel and frequently had not understood how to operate the elevator!

The second example was a much better panel design. The designer had successfully used techniques such as visually grouping related things together in a way that made the correct operation of the elevator a much more obvious task … intuitive even.

Well, upon arriving at a customer’s office building in Toronto, Canada earlier today, I encountered an elevator control panel that, for me at least, took the confusion to a whole new level.

I should make it clear that the elevator in question was one of a cluster of four in the lobby of a shared office building, and that I was arriving at the customer site at about the same time that everyone else was arriving at work. The point is that the lobby was pretty busy at the time; it wasn’t as simple as just walking up and pressing an “I want to go up” button.

No problem I thought, it may be two or three elevator cars before I get to make the final step of my journey up to the 4th floor. I’m a few minutes early and all is good.

[Photo: the control panel inside the elevator car]

Finally my turn came. I waited while a few other people stepped on, then I took my place in the elevator car. Intuitively I spun around to determine whether one of my elevator buddies had already pressed the 4th floor button, and I was ready to press it myself if not. The panel shown above is what I encountered.

Now I like to think of myself as a reasonably bright guy, so I instantly figured it out; the buttons would be on the OTHER SIDE of the door. And I was correct … well … kind of. I glanced to the opposite side of the elevator door … and saw an identical panel on that side too!

Not wanting to appear totally inept I just waited quietly until the other people got off at their (somehow) chosen floors … and no, unfortunately nobody else was going to 4.

The doors swished closed and I was finally alone in the elevator. I don’t remember exactly what my out loud remark to myself was, but I believe it started something along the lines of “WHAT THE ….”. So, patiently I waited and sure enough after a little while the doors once again swished open and I was back where I started from in the lobby!

I’ll be honest with you, I was getting a little “pissed” at this point (excuse my language, but it’s true). But not wanting to appear like a total fool I stepped away as if I had intentionally returned to the lobby, and waited for the crowd to clear … all the time subtly (I thought) observing to see HOW THE HECK THESE FREAKING ELEVATORS WORKED!!! And then … I saw it … everything instantly became clear. The floor selector buttons were indeed on the other side of the elevator door … they were on the OUTSIDE!!!!

[Photo: the touch-screen floor selector in the lobby]

Yep … believe it or not in this building you need to indicate which floor you want to go to BEFORE you step on to the elevator. After you have stepped on it’s too late; way too late!

And further, having selected your intended destination on the small touch-screen display in the lobby, you are then instructed WHICH of the four elevators (conveniently labeled A, B, C and D) you should step onto in order to reach your desired floor!

Actually this is a pretty clever system, but other than the fancy 6” touch screen display there was absolutely nothing to indicate that anything was different here. Brilliant system but totally unintuitive … and so very frustrating for first-time users. Which I guess was one of the points that Billy was making in the first place.


Learning REST Basics

By Steve Ives, Posted on May 23, 2015 at 1:48 pm

Steve Ives

One of the sessions that I presented at the recent DevPartner Conference was on the subject of building RESTful web services using ServiceStack. The basic concepts of REST are often difficult to grasp when you’re first getting started, but while browsing this morning I came across a web site that I thought was a great REST resource; in particular it includes a video that does a really nice job of explaining those basic concepts.

The site is http://www.restapitutorial.com/ and the video can be found at http://www.restapitutorial.com/lessons/whatisrest.html


Synergy .NET Platform Targeting Options

By Steve Ives, Posted on May 21, 2015 at 10:25 am

Steve Ives

One of the many options available to developers each time they create a new project in Visual Studio is how to configure the platform targeting options in the project properties dialog’s Build panel. In the latest versions of Synergy there’s probably not much to worry about, because the default values do exactly what you want most of the time; but the defaults were not the same in some older versions of Synergy .NET, so it’s a good idea to understand what your options are and what the implications of choosing each option are.

In Synergy 10.3.1a we’re talking about two options: the Platform target drop-down and the Prefer 32-bit checkbox.

[Screenshot: the platform targeting options on the project properties Build panel]

The Platform target drop-down allows you to select from three different ways that the assembly created by your project (the .DLL or .EXE file) can be built; essentially you are choosing which .NET CLR (and Framework) will be used to execute your code. The options are:

  • Any CPU (default): The assembly can be executed by either the 64-bit or 32-bit CLR, with 64-bit preferred.
  • x86: The assembly can ONLY be executed by the 32-bit CLR on an Intel x86 CPU.
  • x64: The assembly can ONLY be executed by the 64-bit CLR on an Intel x64 CPU.

It is important to understand that we’re not talking about which Synergy runtime components will be used; we’re talking about which .NET CLR will be used, and the matching Synergy runtime components must be present on the system in order for the assemblies to be used.

The Prefer 32-bit checkbox (which was added in Synergy 10.3.1a and is only available when the platform target is set to Any CPU) lets you indicate that even though your assembly will support both 32-bit and 64-bit environments, you would prefer that it execute as 32-bit when a 32-bit environment is available.
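If you prefer to see where these settings end up, they are stored as MSBuild properties in the project file. A minimal sketch of what they typically look like is shown below; the property names are standard MSBuild, and I’m assuming here that your Synergy project file follows the same conventions:

    <PropertyGroup>
      <!-- Platform target drop-down -->
      <PlatformTarget>AnyCPU</PlatformTarget>
      <!-- Prefer 32-bit checkbox (only honored when targeting Any CPU) -->
      <Prefer32Bit>true</Prefer32Bit>
    </PropertyGroup>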

The chart below summarizes which environment (32-bit or 64-bit) will be selected for all possible combinations of platform targeting settings and deployment platform.

[Table: which environment (32-bit or 64-bit) is selected for each combination of platform target settings and deployment platform]

A red n/a entry in the table indicates that an assembly would not be available for use in that particular environment.

So what’s the take-away from all of this? Well, it’s pretty simple: stick with the defaults unless you have a good reason to change them. The current default is Any CPU, Prefer 32-bit, which means that on Intel x86 and x64 systems your apps will run as 32-bit. This in turn means that you only need to install the 32-bit version of Synergy on runtime-only systems, unless you also need to run services such as license server, xfServer, xfServerPlus or SQL OpenNET on the same system. Development systems should ALWAYS have both 32-bit and 64-bit Synergy installed.


New Code Exchange Items

By Steve Ives, Posted on September 23, 2014 at 7:59 pm

Steve Ives

Just a quick post to let you know about a couple of new items that will be added to the Code Exchange in the next day or so. I’m working with one of our customers in Montreal, Canada this week. We’re in the process of moving their extensive suite of Synergy applications from a cluster of OpenVMS Integrity servers to a Windows environment. We migrated most of the code during a previous visit, and now we’re in the process of working through some of the more challenging issues such as how we replace things like background processing runs and various other things that get scheduled on OpenVMS batch queues or run as detached jobs. Most of these will ultimately be implemented as either scheduled tasks or Windows Services, so we started to develop some tools to help us achieve that, and we thought we’d share 🙂

The first is a new submission named ScheduledTask. It’s essentially a class that can be used to submit traditional Synergy applications (dbr’s) for execution as Windows Scheduled Tasks. These tasks can be created and then kicked off manually as required, or scheduled to run every few minutes, hours, days, months, etc. You also get to control the user account that is used to schedule the tasks, as well as the account used to run them. The zip file also contains two sample programs: one that can be run as a scheduled task, and another that creates a scheduled task to run that program.

The second submission is called Clipboard, and as you can probably guess it contains a simple class that gives you the ability to programmatically interact with the Windows Clipboard; you can write code to put text onto the clipboard, or to retrieve text from the clipboard. Again the submission includes a small sample program to get you up and running.

Hopefully someone else will find a use for some of this code.


Breaking News … a New Life for OpenVMS?

By Steve Ives, Posted on July 31, 2014 at 4:15 pm

Steve Ives

It seems like there is new hope for those organizations still running OpenVMS, after HP recently announced a partnership with a company called VMS Software Inc. (or VSI). There is now talk of adding support for Intel’s Itanium “Poulson” chips by early 2015, as well as the upcoming “Kittson” chip. There is talk of new versions of OpenVMS, and even mention of a possible port of OpenVMS to the x86 platform.

More information here:

http://www.computerworld.com/s/article/9250087/HP_gives_OpenVMS_new_life


CodeGen Training Videos

By Steve Ives, Posted on April 28, 2014 at 7:36 pm

Steve Ives

I finally got around to something that I have been meaning to do for a while: creating some short training videos for CodeGen. There are just five videos right now, but I have a growing list of subjects for future videos.

You can view the videos on the Synergex Channel on YouTube.

Please subscribe to the YouTube channel to receive notifications when new videos are added.

