Installing Azure CLI Using CURL

Overview

In preparation for some upcoming work on the Azure platform, I took the Pluralsight course Developing with Node.js on Microsoft Azure - Getting Started by Scott Allen. In the course the Azure portal is used for many of the configuration and administration tasks, but Scott also demonstrates the Azure Command Line Interface. I installed the Azure CLI but ran into a lot of trouble along the way.

The AZ not found problem

The first step is to Install Bash - Ubuntu on Windows, which is fairly easy to accomplish. Next I used the instructions at Install Azure CLI 2.0 for the CLI install. I ran all the scripts in the ‘Windows -> Bash on Ubuntu’ section:

echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | \
    sudo tee /etc/apt/sources.list.d/azure-cli.list
sudo apt-key adv --keyserver packages.microsoft.com --recv-keys 417A0893
sudo apt-get install apt-transport-https
sudo apt-get update && sudo apt-get install azure-cli

There is a warning to run the last command twice, which I did. But when I tried to run “az --version” I got a message that the az command was unknown. I did a lot of searches and tried a lot of fixes from Stack Overflow and other sources, to no avail.

The page Failure during installing the azure-cli 2.0 in Bash on Ubuntu on Windows #1075 instructs you to run the following commands before installing the CLI on Bash for Windows.

sudo apt-get update
sudo apt-get install -y libssl-dev libffi-dev
sudo apt-get install -y python-dev

Not being sure about the last command, I also ran:

sudo apt-get install -y python3-dev

But trying the original install scripts again did not work.

Piping CURL into Bash - A victim-less crime

Then somewhere, and I can’t pin down where, I saw this command for the install:

curl -L https://aka.ms/InstallAzureCli | bash

There is an entry under ‘Errors with curl redirection’ on the Install Troubleshooting page that gives a workaround if the command does not work. Running the curl command did the trick. It was obviously doing something different, because there were now user prompts that were not present in the original install I tried:

===> In what directory would you like to place the install? (leave blank to use '/home/fredwebs/lib/azure-cli'):
===> In what directory would you like to place the 'az' executable? (leave blank to use '/home/fredwebs/bin'):
===> Modify profile to update your $PATH and enable shell/tab completion now? (Y/n): y
===> Enter a path to an rc file to update (leave blank to use '/home/fredwebs/.bashrc'):

In trying to find the source of this command I pulled up a few pages of strong opinions about piping curl into bash. One of these had the title about the victim-less crime. As Ford Prefect once told Arthur Dent, “Don’t knock it, it worked.”

TeamCity Setup for Web API Application

Overview

I was recently tasked with setting up a continuous integration build on a development server for a Visual Studio 2015 Web API solution. I had a little experience using Jenkins but had never attempted to set up such an application. But it was just a single application deploying to the same server; how hard could that be? I quickly found out. In all of my web searches I found maybe a couple of end-to-end articles, but they proved to be incomplete and applied to older versions of TeamCity. The TeamCity documentation is very thorough, but I still struggled with my limited DevOps experience. I can count only a few times I had ever even used MsBuild in my decades of Microsoft experience.

JetBrains describes TeamCity as “Powerful Continuous Integration out of the box”. The latest version is 10.0, and the hefty 900-plus MB Windows installer may be downloaded here. The Professional version is free and offers the use of up to three build agents and twenty build configurations, with runner types such as MsBuild, script, and NuGet.

Installation

The installation is pretty straightforward and there were no problems. Be aware that the installer might put the current Windows user account in a Windows reporting user group. I could not determine whether TeamCity was responsible for this, and since we received a security alert about it we removed the account from the group. TeamCity seems to operate just fine without this group membership.

You will need to create a database for the system to use. We created an empty “TeamCity” database in SQL Server. You will be prompted to enter the connection information along with a SQL account and password. For SQL Server, the JDBC drivers must be installed on the server. The JDBC driver package gives you two versions, and you will have to tell TeamCity which one to use; the 6.0 version gave me no problems. MySQL and other databases are supported as well.

Important File Locations

With any large, complex application, files are created and stored in various places. Here are some of the important locations for our installation. Note that you specify the TeamCity installation location during the install:

  • TeamCity Installation: E:\TeamCity
  • Application Data: C:\ProgramData\JetBrains\TeamCity
  • Repository Location: E:\TeamCity\buildAgent\work\{generated Id}\ (the source, libraries, and compiled files for the project will be here; a NuGet packages folder will be created and loaded at the level of the solution file if you use a NuGet runner in TeamCity)
  • Windows 10 SDK: installer at https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk (this would not install on our server, so I copied the SDK files in the next item from my dev machine to the same folder on the server)
  • SDK Files: for MsBuild version 14 this is “C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6 Tools\”
  • JDBC Files: C:\ProgramData\JetBrains\TeamCity\lib\jdbc (copy the file sqljdbc_6.0\enu\jre8\sqljdbc42.jar to this folder)
  • Microsoft Build Tools 2015: installer at https://www.microsoft.com/en-us/download/details.aspx?id=48159
  • Visual Studio Files: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0 (copy these files from your dev machine to the same location on the server)
  • NuGet Executable: https://dist.nuget.org/index.html

Setting up the root project

The first step is to set up your root project. On this page I only used the General Settings and VCS Roots menu items.

Under VCS Roots you create a connection to source control. This was very straightforward. You might want to create a new login for TeamCity's use; it appears to need only read access. Note that the Template item below is not used in our setup, but it could be used to simplify the setup of multiple similar builds.

The Root Project

On the General tab click the Create subproject button and give it a Name, a Project Id, and a description, as shown in this edit view of my subproject.

After you have created your subproject, click on Create Build Configuration and give it a name; I used “Standard Build” here. When it is created, click the edit link by the Build Configuration (WebSvcBuild here). You then get a menu for the build steps.

Sub-Project Edit View

Build Configuration

General Settings

On this page enter a build name. Here I used “StandardBuild”. TeamCity creates the Build Configuration ID for you, which can be changed. I used the defaults for the rest of the page.

General Settings

VCS Roots

Here you need to click on the Attach VCS root button to create your VCS Connection.

VCS Roots

Edit VCS Roots

Here is the edit view of the VCS root that is shown when you click the Edit link next to the VCS root name. Once again choose a name, then enter the location and credentials for your VCS.

Edit VCS Root

Build Steps

Here is where the build and deploy are defined, and where most of my struggles took place. You may use up to three build agents with the Professional version.

There is a NuGet runner you can use to retrieve your packages. The files will be placed in the packages folder at the same level as the solution. I had problems getting all my packages because our aggressive firewall caused timeouts on various packages, so I reverted to copying the packages from my development machine. This was just as well, since I could find no way to direct the packages to our standard lib folder, which sits a level above the solution. Note that you will have to copy NuGet.exe to the server and tell TeamCity where it is.

For a .Net application there is a Solution runner type. I began by using the Solution runner to compile my project and added a PowerShell script runner to copy the compiled files to the target IIS folder. This seemed to work fine until I got a 404 for every call made to the deployed site.

Another investigation found that two key files were missing:

  • App_global.asax.compiled
  • App_global.asax.dll

These files are created when you build with a Publish profile. I could not make the Solution runner create the Publish profile, so I reverted to using the MsBuild runner. I developed the build parameters and publish profiles on my dev machine before trying them in TeamCity. The publish profile is stored at

  • {solution level}\{WebApiProject}\Properties\PublishProfiles\MyServicesPublish.pubxml.

These are the MsBuild parameters that worked on my dev machine:

  • /p:DeployOnBuild=true
  • /p:PublishProfile=MyServicesPublish
  • /p:AspnetMergePath="C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.6 Tools\"

However, when I used these parameters in TeamCity I got this error: “Can’t find the valid AspnetMergePath”. Thus began another two-hour investigation. The merge path specified was there. After trying many things and making numerous web searches I still got the same error message.

By chance I noticed the error message had a single quote and a double quote at the end, in what I thought was the wrong order. I changed the double quotes to single quotes on the AspnetMergePath parameter: same error. I guess it was 1995 when Windows first allowed you the luxury of spaces in file and folder names. In desperation I copied the contents to a new folder:

  • {TeamCity Install}\bin\NETFX4.6Tools.

Almost any folder will do; just get rid of the spaces in the path. Without the spaces I could remove the quotes from the AspnetMergePath parameter, and after this my project ran without errors.

Build Steps

MsBuild Step

Here is my completed MsBuild page. Note that the Build file path is always relative to your root source control folder. The Targets field has Rebuild and Publish to ensure the App_Global files are created. My publish profile file has the path to my IIS folder. This eliminates having a script runner that copies files from the VCS location and saves you from using one of your three runners.

MsBuild Step

Triggers

You add a trigger for the build here with the Add new trigger button.

Triggers

Edit VCS Trigger

This is the edit view of the trigger configuration which builds the system whenever there is a check-in.

Edit VCS Trigger

Agent Requirements

I did not have to change anything on Agent Requirements, but I wanted to point out that you can check here to see that the MS Build Tools are installed correctly. Here its condition is “exists”.

Agent Requirements

Keeping a Daily Log in Evernote

For the last few years I have been keeping a daily log of events and information in Evernote. In this article I will describe how I use the daily log and the code I wrote to build a journal template.

Using a Daily Log

My daily log is a place where I record notes on programming, applications, machines, and other information. Updates to applications, error reports, meeting notes, code snippets, and links to articles I would like to read later are all kept in my log.

I record events by date, which gives some order to this jumble of information. Also, when tracking down errors it is good to have a date associated with their appearance. Despite the dated entries I don’t really use it as a calendar or planner, though on occasion I will put an entry in a future date about some deadline or task to perform. Mostly it is a record of what happened when, on which project, program, or machine.

I keep one log per year. Although you can search across all your notes in Evernote, I find it useful and faster to search within a note if I know the date or event was in a particular year. Also, when you first bring up a note you are at January 1, so you have to page down to the current date to start a new entry for the current day.

Advantages of Evernote

Evernote is an online service for documents, images, web pages, and voice notes. Its greatest advantage is that your information is available in several ways. The Evernote website has a sophisticated web application to view and edit your notes, and there are applications for Windows, Mac, iOS, and Android. The basic service is free; the paid service adds features like performing OCR on images and adding the words to the search index. There are also Chrome and Firefox add-ins that let you save a partial or entire page.

The majority of my Evernote usage is through the Windows application. The data is stored locally and synced with the website at an adjustable interval. Notes are edited in a rich text editor that supports different fonts and colors, indents, bullet lists, and tables. Formatted data pasted in retains the format of the original document.

Keeping the Log

When I started the log I kept a weekly template with horizontal rules to divide the daily entries and a partial day/date at the top of each section. That meant every Monday morning I would have to copy this template, paste it at the end of the log, and then change seven strings like “2016.mm.dd Monday” to “2016.12.26 Monday”.

I got pretty tired of doing this, and last year I started to investigate how to create a template that would suit my needs. Unfortunately paid programming work got in the way and I had to put the project aside. I did find a 2016 calendar template on the Evernote site and managed to edit it into a usable log template, but it had several drawbacks. The major one was that it was one big table, which limited how you could format things since each day lived in a single table cell. Also, when you did an undo the document would zip back to the top: very frustrating after about April or May!

Creating a Template

This year I was determined to revive my custom template project and have a new template in place on January 1.

My original idea was to have a program create the date text and formatting data and save it to the Windows clipboard for easy pasting into Evernote. There is an Evernote XML export / import format (.enex), but I decided not to dig into that. Besides, how hard could it be to write formatted data to the clipboard? The application source is available here on GitHub.

The Evernote Log Application

Reading the Clipboard

First I created the function GetClipboardInfo to list the formats contained in the current clipboard entry.

public static StringBuilder GetClipboardInfo()
{
    var sb = new StringBuilder();
    var cbDataObj = Clipboard.GetDataObject();
    if (cbDataObj == null)
    {
        return sb;
    }
    var fmts = cbDataObj.GetFormats();
    sb.AppendLine("Data object formats: ");
    foreach (var t in fmts)
    {
        sb.Append("\"");
        sb.Append(t);
        sb.AppendLine("\"");
    }
    // ... the function continues below

After listing the formats I check whether an HTML Format object exists. If it does, we read it with the GetData function. The Replace call changes each newline character to a carriage return / newline pair so the WinForms text control will display the line breaks.

// HTML Object
if (!cbDataObj.GetDataPresent("HTML Format"))
{
    sb.AppendLine("No Html object");
}
else
{
    var doc = cbDataObj.GetData("HTML Format");
    sb.AppendLine(doc.ToString().Replace("\n", "\r\n"));
}

“Version:0.9” marks the beginning of the HTML clipboard object, followed by markers for the positions of the start / end of the actual HTML and of the Fragment, which is the part that was copied. By copying parts of an Evernote note and inspecting them with this tool, I was able to determine what to put in the generated template.
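
For reference, the header portion of a typical HTML clipboard object looks something like this (the offsets here are illustrative, not from a real capture):

Version:0.9
StartHTML:0000000105
EndHTML:0000000861
StartFragment:0000000141
EndFragment:0000000825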

The class GenEverLog has a single function, CreateYear, with a single parameter for the year to create. The HTML strings are divided into the sections that come before and after the date strings to be printed. The format strings are used for date formatting; these could be made variable with control settings added to the form, and the hard-coded colors in the HTML strings could likewise be made selectable.

private const string HtmlStart = "<span><hr/><div><span style=\"color: rgb(54, 101, 238);\"><b>";
private const string HtmlEnd = "</b></span></div><div><br/></div><div><br/></div><div><br/></div></span>";
private const string HtmlWeekStart = "<span><hr/><div><span style=\"color: rgb(123, 0, 61);\">=== <b>Week ";
private const string HtmlWeekEnd = "</b> ===</span></div><div><br/></div></span>";
private const string HtmlStartMonth = "<span><div><span><br/></span></div><hr/><div style=\"text-align: center\"><span><b><span style=\"color: rgb(50, 135, 18);\">";
private const string HtmlEndMonth = "</span></b></span></div></span>";
private const string MonthFormat = "MMM yyyy";
private const string DayFormat = "yyyy.MM.dd ddd";

From this point it is a simple matter to create a DateTime object, increment the day in a while loop, and build up a StringBuilder object with the HTML and the dates, months, and weeks. Note that a companion text-only string is also built up and sent to the clipboard; it will be retrieved by applications that do not support HTML, such as Notepad.

public static Tuple<StringBuilder, StringBuilder> CreateYear(int calendarYear)
{
    var dateIncr = new DateTime(calendarYear - 1, 12, 31);
    var year = calendarYear;
    var lastMonth = 0;
    var week = 0;
    var sbHtml = new StringBuilder();
    var sbText = new StringBuilder();
    while (year == calendarYear)
    {
        // Increment date
        dateIncr = dateIncr.AddDays(1);
        year = dateIncr.Year;
        if (year != calendarYear)
        {
            break;
        }
        var month = dateIncr.Month;
        if (month != lastMonth)
        {
            lastMonth = month;
            // Do month header
            sbHtml.AppendLine($"{HtmlStartMonth}{dateIncr.ToString(MonthFormat)}{HtmlEndMonth}");
            sbText.AppendLine($"{dateIncr.ToString(MonthFormat)}");
        }
        // Week entry
        if (dateIncr.DayOfWeek.Equals(DayOfWeek.Monday))
        {
            sbHtml.AppendLine($"{HtmlWeekStart}{++week}{HtmlWeekEnd}");
        }
        // Do date entry
        sbHtml.AppendLine($"{HtmlStart}{dateIncr.ToString(DayFormat)} - ({dateIncr.DayOfYear.ToString("D3")}){HtmlEnd}");
        sbText.AppendLine($"{dateIncr.ToString(DayFormat)} - ({dateIncr.DayOfYear.ToString("D3")})");
    }
    sbHtml.AppendLine($"{HtmlStartMonth}End of {calendarYear}{HtmlEndMonth}");
    sbText.AppendLine($"End of {calendarYear}");
    ClipboardHelper.CopyToClipboard(sbHtml.ToString(), sbText.ToString());
    return new Tuple<StringBuilder, StringBuilder>(sbHtml, sbText);
}

To write the HTML to a new clipboard entry I used the ClipboardHelper code from this article by Arthur Teplitzki. I had to clean up the code copied from the web page to change Word-style double quotes to regular quotes, and while I was at it I changed the style to use newer C# conventions. The only functional change I made was to remove the insertion of the Doctype line.
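
To give an idea of what such a helper does, here is a minimal sketch of my own (a simplification, not Teplitzki's actual code; it assumes System.Text and System.Windows.Forms). It builds the header shown earlier, using fixed-width offsets so the header length stays constant while the positions are computed:

public static void CopyToClipboard(string htmlFragment, string plainText)
{
    // Fixed-width (D10) offsets keep the header length constant
    const string header = "Version:0.9\r\nStartHTML:{0:D10}\r\nEndHTML:{1:D10}\r\n" +
                          "StartFragment:{2:D10}\r\nEndFragment:{3:D10}\r\n";
    const string pre = "<html><body><!--StartFragment-->";
    const string post = "<!--EndFragment--></body></html>";
    // Header, pre, and post are pure ASCII, so char counts equal byte counts
    var startHtml = string.Format(header, 0, 0, 0, 0).Length;
    var startFragment = startHtml + pre.Length;
    var endFragment = startFragment + Encoding.UTF8.GetByteCount(htmlFragment);
    var endHtml = endFragment + post.Length;
    var cfHtml = string.Format(header, startHtml, endHtml, startFragment, endFragment)
                 + pre + htmlFragment + post;
    var dataObject = new DataObject();
    dataObject.SetData(DataFormats.Html, cfHtml); // the "HTML Format" entry
    dataObject.SetText(plainText);                // plain-text fallback for Notepad etc.
    Clipboard.SetDataObject(dataObject, true);    // true = keep data after the app exits
}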

The “Write Everlog to Clipboard” button calls the CreateYear function and displays “Added HTML to Clipboard”. If you want to see the HTML created, simply click the “Read Clipboard” button again.

Viewing the created clipboard entry

If you are only interested in the template, the Fredwebs2017.enex file can be imported directly into Evernote. The file is included in the GitHub repository, or you can download it directly here.

The top of the 2017 log

InstallShield Limited Edition

A recent client project brought me back into the world of Windows WPF applications. The task was to update some very old third-party controls and to change Bing Maps service calls from the disappearing SOAP API to the Bing Maps REST API.

Updating a six-year-old project

Naturally a new install program would be needed. The code I was given was in a Visual Studio 2010 solution with a Visual Studio Installer project (.vdproj). This installer type is no longer supported in Visual Studio 2015. There is a Visual Studio add-in that supports the old format, but I wanted an up-to-date solution. Since I had already spent significant client funds on software and hardware, I decided to try the InstallShield Limited Edition that is licensed free with Visual Studio.

Free software - you get what you pay for

I was able to get an installer build that did work. However, the interface is extremely clunky and you are never quite sure when the installer build is actually running. Files you add by mistake cannot be removed from the build list; you have to uncheck the check box and hope you don’t recheck it later. A couple of times the InstallShield process killed Visual Studio and required a visit to the Task Mangler to kill the whole thing. The project had some large data files which took InstallShield an incredible amount of time to compress.

Because of the half hour required to build the installer, I unloaded the installer project from the Visual Studio solution until I needed to build a new installer. The very first time I attempted to reload the installer project, the InstallShield installer popped up and informed me that it needed to make changes to continue. I reluctantly gave it the OK. It churned away for a few minutes and then requested a reboot. After the reboot I tried again to reload the installer project; this time Visual Studio informed me that this type of project is no longer supported and would not load it.

Changing to a better installer

I purchased DeployMaster for $99. It is from Just Great Software, whose products I have enjoyed using for years. It took a couple of hours to learn the new program, but it did the job superbly. You can easily make changes, it runs outside of Visual Studio, and the installer builds in about five minutes versus the half hour with InstallShield. As a bonus, the resulting installer file uploads to web storage much more quickly, since it is only 238MB versus 530MB.

Bing Maps Using Web API

Bing Maps recently retired its SOAP web service interface. The new interface is a REST service, and the JSON Data Contracts define the response interface. There is a sample program, Parsing REST Services JSON Responses, that I used as a starting point for my code, which is located here.

Bing Maps Key

To use the Bing Maps interface you will need a key. See Getting a Bing Maps key if you don’t have one already. In my code I read the key from the environment variable named “BingMapsKey”:

_bingMapsKey = Environment.GetEnvironmentVariable("BingMapsKey");

Building and sending the request URI

The base of the URI for all requests is:

private const string BingRestLocation = "http://dev.virtualearth.net/REST/v1/";

To this base we add “Locations”, then the search string, with the key at the end. Be sure to use WebUtility.UrlEncode on your location search.

var urlRequest = $"{BingRestLocation}Locations/{place}?key={_bingMapsKey}";
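
For example, encoding the search text first might look like this (a hypothetical snippet; assumes using System.Net):

// "New York" becomes "New+York", which is safe to embed in the request URI
var place = WebUtility.UrlEncode("New York");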

Next we call the service with Web API using the MakeRequestWebApi function, with “New York” in the place string. The Tuple that is returned holds a Bing Response object and a status string.

public static Tuple<Response, string> MakeRequestWebApi(string requestUrl)
{
    var httpResponseMessage = Client.GetAsync(requestUrl).Result;
    if (!httpResponseMessage.IsSuccessStatusCode)
    {
        return new Tuple<Response, string>(null, $"Response Status: {httpResponseMessage.StatusCode}");
    }
    var jsonString = httpResponseMessage.Content.ReadAsStringAsync().Result;
    using (var ms = new MemoryStream(Encoding.Unicode.GetBytes(jsonString)))
    {
        var deserializer = new DataContractJsonSerializer(typeof(Response));
        return new Tuple<Response, string>((Response)deserializer.ReadObject(ms), "success");
    }
}

Parsing the return

Once we get a Response object back, we have to cast the returned resource to the type we requested, in this case a Location. The other resource types are Route, TrafficIncident, CompressedPointList, ElevationData, and SeaLevelData.

The function ProcessLocationResponse is from the original sample program; it shows the locations found with high confidence, along with their geocode points.
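
As an illustration, digging the first location out of the response might look like this (a sketch assuming the data contract classes from the Microsoft sample, where a Response holds ResourceSets and each set holds Resources):

// Grab the first resource of the first resource set and cast it to the requested type
var location = (Location)response.ResourceSets[0].Resources[0];
// Coordinates[0] is the latitude, Coordinates[1] the longitude
Console.WriteLine($"{location.Name}: {location.Point.Coordinates[0]}, {location.Point.Coordinates[1]}");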

Retrieving a Route

I also needed to get route instructions from the Bing Maps interface. I added a Location search for three more specific addresses to make a cross-country musical journey from the Brill Building in New York to the Whisky a Go Go in LA, with a stop by the Stax Studios in Memphis.

const string brillBuildingAddress = "1619 Broadway New York NY 10019";
const string staxStudiosAddress = "926 E McLemore Ave, Memphis, TN 38126";
const string whiskyaGoGoAddress = "8901 W. Sunset Blvd West Hollywood, CA 90069";

A Route request requires at least two waypoints. The MakeWaypointString function takes a List of Locations and builds the waypoint string.

public static string MakeWaypointString(List<Location> waypoints)
{
    var waypointsSb = new StringBuilder();
    var waypointCntr = 1;
    foreach (var waypoint in waypoints)
    {
        waypointsSb.Append($"wp.{waypointCntr}=");
        waypointsSb.Append($"{waypoint.Point.Coordinates[0]},");
        waypointsSb.Append($"{waypoint.Point.Coordinates[1]}&");
        waypointCntr++;
    }
    return waypointsSb.ToString();
}
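
Because each waypoint pair ends with an “&”, the result can be combined directly with the base URI and the key. Hypothetically, composing the Routes request might look like this (locations is an assumed variable holding the geocoded Location objects):

// Build and send a Routes request from the geocoded waypoints (a sketch, not the exact code)
var routeRequest = $"{BingRestLocation}Routes?{MakeWaypointString(locations)}key={_bingMapsKey}";
var routeResult = MakeRequestWebApi(routeRequest);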

The ProcessRouteResponse function parses the route information and shows the instructions and coordinates of each itinerary item in both legs of the trip.

Moving a Blog to Hexo

I started blogging using a self-hosted version of a platform called Das Blog. It had two major attractions for me: it was an open source .Net application, and it stored everything in XML files. Just ten years ago hosting was more expensive and you were often allowed only one database, so the XML approach had an appeal. It worked well for years, but it took a lot of time and headaches to update the blog whenever a new version of Das Blog was released.

After a few years of Das Blog operation I really didn’t have much on my blog. I incorrectly assumed that I would write more for the blog if I wasn’t spending time updating it, so I moved to a hosted WordPress solution.

A few more years down the line I took a Pluralsight course titled Build a Better Blog with a Static Site Generator by Jeff Ammons. The course description said you could make your blog load faster with a static website generator without resorting to hand-written HTML. I’m bracket-averse, so this sounded good to me. The course is great and inspired me to leave the WordPress world behind.

The course covers two different blogging platforms, Hexo and DocPad. Hexo is specifically designed for blogs and is simpler to implement, so I went with that solution. Hexo is a node.js application that compiles a few configuration and template files, along with your pages written in Markdown. The output is placed in a single folder that can be copied to any web server, since it is only HTML and JavaScript. In the course Jeff shows how to update your site using Git. WordPress has an export function that creates Markdown files, which I easily updated for the new site.

I’ll have to wait and see if I actually blog more with Hexo, but one bonus of the two conversions I have done is that each was a great opportunity to clean out outdated posts.

2015 France Photos

First days in Paris

Tuesday Afternoon

Wednesday on the Left Bank

Friday in Paris

The 4th arrondissement of Paris

Drive to Breuil

Château de Chantilly

Sunday Brocante (flea market) at Neuilly St. Front

La Ferte-Milon

In and near the country house

Sunrise at the Chateau

Reims Cathedral

Reims and the Ruinart Champagne Tour

The canal at La Ferte-Milon

Marolles

Cointicourt and the walk home

The Ruins at Fere-en-Tardenois

The Oise-Aisne American Cemetery and Memorial

The City of Fere-en-Tardenois

Repair of the Breville BOV450XL Mini Smart Oven

We have been enjoying the Breville Mini Oven for about two years. It has not only replaced the toaster on the kitchen counter, it has mostly eliminated the use of the large electric oven. The oven retails for about $150. A few weeks ago it suddenly went dead, without a beep or anything on the digital display. We usually unplugged the device when not using it, to save the vampire power drain and avoid damage from the frequent ten-second power outages in our neighborhood; the handy plug with the hole in it made this an easy task.

Breville Oven Front View

The first thought was: oh, maybe it’s just a fuse. But there is no user-replaceable fuse, and searching the web did not turn up any information on how to fix the oven either. What I did find was replacement parts at a site called eReplacement Parts. They had a fuse assembly for the oven for just $4.57; with shipping it was almost $12. It was not in stock and they gave no estimate as to when they might ship one, but it only took a couple of weeks before the UPS guy left it on the front porch. They also gave email updates on the order.

Then the fun begins with the disassembly of the oven. First remove all the exterior screws that hold the single piece of stainless steel that makes up the top and sides of the oven. This just loosens it and allows you to bend out the sides for access. There are at least two screws that keep this wrapping attached to the chassis up near the front where the controls are; I figured if I took those two out I would never get them back in, since there is so little room to work with. Remove the rear cover completely with the external screws. Luckily the fuse assembly is attached to the back left of the oven. Right below it is the temperature sensor. The fuse is temperature sensitive; that is why it is attached to the inner frame that gets hot.

The eReplacement page has a picture of the fuse assembly. Just two screws hold it to the frame, plus two wire connections. The first is directly connected to the black power cord with a crimp connector; connect the new fuse wire with a wire nut. The other end has a slip-off terminal connector that attaches to a connector at the bottom front of the control assembly. The old one comes off easily with a tug from a pair of needle-nose pliers. Getting the new one onto that connector is the hardest part of the job: there is a connector on each side of your target connector, and the connectors are recessed.

Breville Oven Disassembled Rear View

Using Webstorm with Udemy AngularJS course by Dan Wahlin

I recently took the course AngularJS Jumpstart with Dan Wahlin on Udemy. Dan does a great job and I highly recommend it. In the course Dan uses the Brackets editor, which has the advantage of being able to serve your project as the root of the server. This results in addresses such as:

“localhost:8080/index.html”

I use Webstorm for Angular and JavaScript development. I could not find any way to get it to do this and ended up with addresses like:

“localhost:63342/Angular/index.html”

The problem comes in when you use the node.js server for the course application. Changing the server port to 63342 causes a cross-domain load error. To get the application to work with Webstorm, first modify the server.js node/Express file:

var allowedOrigin = 'http://localhost:63342';

app.get('/customers', function(req, res) {
    // Add the following line to fix cross domain errors
    res.header('Access-Control-Allow-Origin', allowedOrigin);
    res.json(customers);
});

Then add the res.header line to each app.get in the file. There are other ways to do this, such as using a router or a specialized middleware routine, but this is the most straightforward way.
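
For instance, the middleware alternative mentioned above might look like this (a sketch for the same Express app, registered before the routes so every response gets the header):

// Apply the CORS header to all routes instead of repeating it in each handler
app.use(function(req, res, next) {
    res.header('Access-Control-Allow-Origin', allowedOrigin);
    next();
});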

Next modify the customerFactory.js file to retrieve data from port 8080 instead of defaulting to the application’s port 63342:

(function() {
    var customersFactory = function($http) {
        var urlBase = 'http://localhost:8080';
        var factory = {};
        factory.getCustomers = function() {
            return $http.get(urlBase + '/customers');
        };
        factory.getCustomer = function(customerId) {
            return $http.get(urlBase + '/customers/' + customerId);
        };
        factory.getOrders = function() {
            return $http.get(urlBase + '/orders');
        };
        factory.deleteCustomer = function(customerId) {
            return $http.delete(urlBase + '/customers/' + customerId);
        };
        return factory;
    };
    customersFactory.$inject = ['$http'];
    angular.module('customersApp').factory('customersFactory', customersFactory);
}());

A Website Health Check Page

Over the last two years I have supported a multi-function web site with many application settings, three databases, and several external web services. The need to support this site on many different servers led to the development of a detailed diagnostic health check page for the application. Some of the servers are controlled by the customer, which makes debugging problems more difficult since you have to coordinate changes with remotely located support staff in different time zones.

The health check we developed is written in ASP.Net web forms but could easily be converted to an MVC format. The main purpose here is not to focus on the presentation layer, but to identify useful information to report and to present some of the code used to gather it.

The page supports three options:

  1. Simple Health Check – A fast check that does a quick database connection test and returns a code of 200 if successful, else a 500. This is performed if there are no valid query string values.
  2. Get Version – Returns a simple one-line version of the application version. This is called by using a query string containing a “GetVersion” key with any value.
  3. Detailed Health Check – Does a detailed database check and web services check, and displays some non-sensitive application settings. Called by a query string containing a “details” key with any value (see the dispatch sketch after this list).
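
The dispatch between the three options might look something like this (a hypothetical WebForms sketch; the helper names here are illustrative, and the real page is in the GitHub repo linked at the end):

protected void Page_Load(object sender, EventArgs e)
{
    if (Request.QueryString["GetVersion"] != null)
    {
        WriteVersionLine();          // illustrative helper: one-line version output
    }
    else if (Request.QueryString["details"] != null)
    {
        RenderDetailedHealthCheck(); // illustrative helper: full diagnostic page
    }
    else
    {
        // Simple check: 200 if the databases answer, 500 otherwise
        Response.StatusCode = CheckDbConnections() ? 200 : 500;
    }
}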

The Database Section

Database section

The header of the page is set to the site name constant, “Mighty Site” in this example, followed by the current URL. This is helpful if you have one or more instances open for different locations.

In this example we check three databases. The first check, “Database Connections”, is the same check called in the simple health check. The function CheckDbConnection is called for each database and attempts to open a new connection. Before trying, the connection timeout is set to 5 seconds; this keeps you from waiting the standard 30 seconds if the connection string is bad or the database is down.
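
A minimal sketch of such a check, assuming System.Data.SqlClient (the actual implementation is in the repo):

public static bool CheckDbConnection(string connectionString)
{
    // Override the timeout so a dead server fails in about 5 seconds, not 30
    var builder = new SqlConnectionStringBuilder(connectionString) { ConnectTimeout = 5 };
    try
    {
        using (var conn = new SqlConnection(builder.ConnectionString))
        {
            conn.Open();
            return true;
        }
    }
    catch (SqlException)
    {
        return false;
    }
}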

Next a random stored procedure is executed against each database. This tells you that the database login you are using has execute permissions on stored procedures.

Our second database (DB2) needs full text search installed, which is not the default when you install SQL Server. The GetServerProperty function in the Helper class is called to retrieve the server property “IsFullTextInstalled”.
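
SQL Server exposes these values through its SERVERPROPERTY function, so a sketch of the helper might look like this (parameter names are illustrative, not the exact code):

public static object GetServerProperty(string connectionString, string propertyName)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand($"SELECT SERVERPROPERTY('{propertyName}')", conn))
    {
        conn.Open();
        return cmd.ExecuteScalar(); // e.g. 1 when "IsFullTextInstalled" is true
    }
}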

The next two lines show the date and time of the SQL Server instance and of the IIS server. In the example above, both are on the same localhost machine and the times match. But we did have one occasion when the database was on a separate machine, and a difference of just a few seconds caused one pop-up routine to go into an infinite loop.

Web Services

Web Service Section - Passing

The Mighty Site relies on a Single Sign On service to log users in and a Data Service to get and update information about the users. The ServiceHealthCheck function uses the Web API client to call the health check on these services and looks for a 200 OK return code. If a service cannot be reached or returns a non-200 code, the row background turns pink and a DNS lookup is attempted on the URL.
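
A sketch of that check, assuming System.Net.Http and a shared HttpClient (names are illustrative, not the exact code):

public static bool ServiceHealthCheck(HttpClient client, string healthUrl)
{
    try
    {
        var response = client.GetAsync(healthUrl).Result;
        return response.StatusCode == HttpStatusCode.OK;
    }
    catch (AggregateException)
    {
        // Unreachable host: the page then tries a DNS lookup on the URL
        return false;
    }
}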

Web Service Section - Failing

Application Settings

The application has many settings in the appSettings.config file, and it is useful to see some non-sensitive ones on the health check page. You could also display database-resident settings here.

Application Settings Section

Version Information

The website maintains version information in the App_GlobalResources\About.resx file.

Version Info Section

Time Zone Information

Unfortunately the Mighty Site was written years ago by some folks who didn’t believe in using UTC date times. The system is based on the time zone where the server is located, which causes a lot of headaches since different customers are in different time zones.

Time Zone Section

Conclusion

The ASPX file contains only a single line, the usual first line with the Page, CodeFile, and other directives. All of the page content is generated in the code-behind with the help of the functions in the HTML Tables region. The source code is available on GitHub.