TypeScript Module Resolution and RequireJS

I have a web project which uses RequireJS with TypeScript, and the base URL is set to /Scripts. Most NuGet-packaged script libraries (jQuery, etc.) install their scripts into /Scripts, so I try to keep my own code separate in subfolders.

For example, all the scripts used by views are in the /Scripts/Views folder, e.g. /Scripts/Views/Home.ts. Various libraries I have created live in subfolders off the scripts directory: for example, my KnockoutJS-related code lives in /Scripts/ko and Knockout components in /Scripts/components.

When writing a view script that needs a library you can then reference it thus:
import bindingHandlers = require("ko/bindingHandlers");
All good so far. However, I also have code in a different folder which is not a subfolder of /Scripts – and this is where TypeScript breaks down. In the 1.x versions it will only walk up from the current path to look for modules, and therefore cannot find the libraries.

For example, I have a folder /Products/Broadband with a script Order.ts. If I start this with the require statement require("ko/bindingHandlers"); it fails. TypeScript will look in the current directory, then /Products and finally / – and none of these work.

At present there isn't a way to fix this short of moving all the scripts into the /Scripts folder, but there is a proposed change in TypeScript 2.0 that will support baseUrl and paths options in tsconfig.json. This will allow non-relative module references to resolve correctly.
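Under that proposal, the compiler should get the same view of module paths as RequireJS has at runtime, via a tsconfig.json along these lines. This is a sketch only – it assumes the options ship as proposed, and mirrors the /Scripts base URL used above:

{
  "compilerOptions": {
    "module": "amd",
    "baseUrl": "./Scripts",
    "paths": {
      "ko/*": ["ko/*"],
      "components/*": ["components/*"]
    }
  }
}

With baseUrl set, a non-relative import such as "ko/bindingHandlers" would resolve to /Scripts/ko/bindingHandlers.ts regardless of where the importing file lives.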

Step-by-Step Guide to Getting SSL for Free using LetsEncrypt on Windows IIS

SSL should be simpler and cheaper, and Let’s Encrypt is a great project to make this happen, but it’s biased toward *nix systems. So what do you do if you want to install a free Let’s Encrypt SSL certificate on your Microsoft IIS server?

Well I did this today, and even using LEWS (see below) there does not seem to be a clear step-by-step guide. So having stumbled and googled my way through the process, I thought I'd document it for all.

This approach uses the letsencrypt-win-simple tool (LEWS) to validate and set up the SSL certificates.

A couple of prerequisites. First, you're going to need admin console access to the hosting server – if you don't have this you won't be able to follow this simple guide. Second, the DNS names must already be configured externally, e.g. mysite.example.org should resolve to your server and be reachable over HTTP for this to work.

Assuming you do, here is what you need to do:

  1. Log onto the console of the Windows Server that is hosting the site you want to add SSL to
  2. Run Internet Information Services (IIS) Manager (I’ll call it IISM for short)
  3. Select the site you want to add SSL to in the list of Sites
  4. Click the Bindings button in the menu on the right
  5. Ensure your site has a named http binding in the host name section, even if you only have one site and the hostname is blank (to accept all requests). This is required so that LEWS knows what certificate name to create
  6. Close the bindings dialog box
  7. Download the LEWS client from https://github.com/Lone-Coder/letsencrypt-win-simple/releases – the ZIP file contains the client. This guide was written using version 1.7
  8. Unzip the contents to a folder on the server, e.g. C:\SSL
  9. Start a command prompt in Administrator mode (right click the Command Prompt on Start and select Run as Administrator)
  10. Navigate to the folder you unzipped LEWS to, e.g. cd \ssl
  11. Type the command letsencrypt and press enter
  12. If this is your first run you'll be prompted to enter your email address to register and to accept the EULA – do this to continue (subsequent runs will skip this step)
  13. You should now have a text menu, with a list of numbered Site Bindings and options for M, A and Q (manual, all and quit)
  14. Select the site (1-9) you want to add SSL to, or use A if you want to do all. If your site does not appear it may be because you forgot to add a host-name binding.
  15. The process will now create a verification file in a subfolder of the site, and use this to try to authorize the SSL certificate (see Note A below).
  16. If this is successful, you should see a confirmation. If you get a block of red text, this indicates an error.
  17. Assuming it worked, go back to IISM and select “Bindings” on the site – you should now have an SSL (https) binding with a dated certificate name: note that LetsEncrypt certificates are limited to 90 days for security reasons, but of course renewals are free.

Possible problems you might encounter:

Authorization Failures

If you've recently added a DNS entry, or recently changed your DNS configuration (e.g. an IP address change), these changes can take time to become effective (lots of systems cache or only update infrequently). This can cause the authorization step to fail until the records have propagated.

Note A

The authorization process creates a subfolder .well-known/acme-challenge and a verification file on your site, and the Let's Encrypt service then tries to fetch this static text file over HTTP using the public DNS name.

For example, if I want to set up SSL for mydomain.example.org it might create a file called 7sYBFMggYCsR3roQ2SqpNkgwXCs8aD1NoXaUnnZDdQ0 and then attempt to access http://mydomain.example.org/.well-known/acme-challenge/7sYBFMggYCsR3roQ2SqpNkgwXCs8aD1NoXaUnnZDdQ0 

Because the default behaviour of IIS is to block extensionless URLs, this would normally cause an error, so LEWS also adds a simple web.config file in the acme-challenge folder.
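For reference, a minimal web.config that lets IIS serve those extensionless files as static text looks something like this – I haven't captured the exact file LEWS generates, so treat it as a sketch:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <staticContent>
      <!-- serve the extensionless challenge files as plain text -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>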

If you still run into problems, check the issues list on the LEWS GitHub project.

RIP WD20EADS

I have an HP MicroServer N36L (one of the first models) with a couple of 2TB disks mirrored, used as a backup server. Yesterday one of the drives started to make noises, so after running a backup I checked out what was up.

According to the BIOS RAID utility the first disk was the failing one, so I removed it. Oddly, the AMD RAID controller BIOS utility had no way to repair the array (I could only create or delete). HP's MicroServer support and downloads page (when it actually worked at all) listed a RAID management utility called RAIDXpert, but it only linked to a page on AMD's site, which turned out to be a broken link with no downloads. Googling the AMD site for RAIDXpert led to a page with only a PDF instruction manual – thanks a lot, AMD.

Fortunately more Googling and Softpedia came to the rescue with old copies of the relevant software. Installed and ran; nominated the replacement disk as a spare and the rebuild started. Yay.

The failed disk was a WD20EADS Caviar Green, with a May 2009 date on the label. So it lasted six years – that's pretty impressive for spinning rust. Checking the SMART data reveals a lot of reallocated sectors, which is pretty much what I expected. Interestingly it had a power-on-hours count of 49,120. That's 5.6 years of use – almost continuous usage since I purchased it.

So a nod to those unsung heroes in the engineering departments of hard disk manufacturers who have created incredibly reliable devices that we so often take for granted.

RequireJS, TypeScript and Knockout Components

Despite the teething problems I had getting my head around RequireJS, I had another go this week at sorting it out. The motivation for this was Knockout Components – re-usable asynchronous web components that work with Knockout. So this article details the steps required to make this work on a vanilla ASP.NET MVC website.

Components

If you’re not familiar with components, they are similar to ASP.NET Web Controls – re-usable, modular components that are loosely coupled – but all using client-side JS and HTML.

Although you can use and implement components without an asynchronous module loader, it makes much more sense to use one and to modularise your code and templates. This means the JS code and/or HTML template is only loaded at runtime, asynchronously, and on demand.

Demo Project

To show how to apply this to a project, I’ve created a GitHub repository with a web app. Step one was to create the ASP.NET MVC application. I used .NET 4.5 and MVC version 5 for this but older versions would work just as well.

Next I upgraded all the default NuGet packages, including jQuery, so that we have the latest code, and then amended the home page to remove the standard ASP.NET project content. So far, nothing new.

RequireJS

The next step is to add RequireJS support. At present our app loads its JavaScript synchronously from the HTML pages, using the ASP.NET bundler:

        public static void RegisterBundles(BundleCollection bundles)
        {
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                        "~/Scripts/jquery-{version}.js"));
            // ... further bundles elided ...
        }

These are inserted via the _Layout.cshtml template:

    @Scripts.Render("~/bundles/jquery")
    @Scripts.Render("~/bundles/bootstrap")
    @RenderSection("scripts", required: false)

First we add RequireJS to the web app from NuGet, together with the Text plugin for RequireJS. The Text plugin is required to allow us to load non-JavaScript items (e.g. CSS and HTML) using RequireJS, and we'll need it to load HTML templates. Both packages install into the /Scripts folder.

Configuring RequireJS

Next, we create a configuration script file for RequireJS. Mine looks like this; yours may differ depending on how your scripts are structured:

require.config({
    baseUrl: "/Scripts/",
    paths: {
        jquery: "jquery-2.1.1.min",
        bootstrap: "bootstrap.min"
    },
    shim: {
        "bootstrap": ["jquery"]
    }
});

This configuration uses the default Scripts folder, and maps the module 'jquery' to the version of jQuery we have updated to. We've also mapped 'bootstrap' to the minified version of the Bootstrap file, and added a shim that tells RequireJS that we need to load jQuery first if we use Bootstrap.

I created this file in /Scripts/require/config.ts using TypeScript. At this point TypeScript flags an error saying that require is not defined, so we need to add some TypeScript definition files. The best resource for these is the GitHub project DefinitelyTyped, and the definitions are all on NuGet to make it even easier. We can install them from the Package Manager console:

    Install-Package jquery.TypeScript.DefinitelyTyped
    Install-Package bootstrap.TypeScript.DefinitelyTyped
    Install-Package requirejs.TypeScript.DefinitelyTyped

Implementing RequireJS

At this point we have added the scripts but not actually changed our application to use RequireJS. To do this, we open the _Layout.cshtml file, and change the script segment to read as follows:

    <script src="~/Scripts/require.js"></script>
    <script src="~/Scripts/require/config.js"></script>
    @* load JQuery and Bootstrap *@
    <script>
        require(["jquery", "bootstrap"]);
    </script>
    @RenderSection("scripts", required: false)

This segment loads require.js first, then runs the config.js file which configures RequireJS.

Important: Do not use the data-main attribute to load the configuration as I originally did – you'll find that you cannot guarantee that RequireJS is properly configured before the require([…]) method is called. If the page loads with errors, e.g. if the browser tries to load /Scripts/jquery.js or /Scripts/bootstrap.js, then you've got it wrong.

Check the network load for 404 errors in the browser developer tools.

Adding Knockout

We add Knockout version 3.2 from the package manager along with the TypeScript definitions as follows:

    Install-Package knockoutjs
    Install-Package knockout.TypeScript.DefinitelyTyped

You need at least version 3.2 to get the Component support.

I then modified the configuration file to add a mapping for knockout:

require.config({
    baseUrl: "/Scripts/",
    paths: {
        jquery: "jquery-2.1.1.min",
        bootstrap: "bootstrap.min",
        knockout: "knockout-3.2.0"
    },
    shim: {
        "bootstrap": ["jquery"]
    }
});

Our last change is to switch the TypeScript compilation settings to generate AMD module output instead of the default. We do this on the web app's properties page:

[screenshot: the web app's TypeScript build properties with the module system set to AMD]

This allows us to make use of RequireJS via the import and export keywords in our TypeScript code.

We are now ready to create some components!

Demo1 – Our First Component: ‘click-to-edit’

I am not going to explain how components work, as the Knockout website does an excellent job of this, as do Steve Sanderson's videos and Ryan Niemeyer's blog.

Instead we will create a simple page with a view model with an editable first and last name. These are the steps we take:

  1. Add a new ActionMethod to the Home controller called Demo1
  2. Add a view with the same name
  3. Create a simple HTML form, and add a script section as follows:
    <p>This demonstrates a simple viewmodel bound with Knockout, and uses a Knockout component to handle editing of the values.</p>
    <form role="form">
        <label>First Name:</label>
        <input type="text" class="form-control" placeholder="" data-bind="value: FirstName" />
        @* echo the value to show databinding working *@
        <p class="help-block">
            Entered: {<span data-bind="text: FirstName"></span>}
        </p>
        <button data-bind="click: Submit">Submit</button>
    </form>
    
    @section scripts {
        <script>
            require(["views/Demo1"]);
        </script>
    }
  4. We then add a folder views to the Scripts folder, and create a Demo1.ts TypeScript file.
// we need knockout
import ko = require("knockout");

export class Demo1ViewModel {
    FirstName = ko.observable<string>();

    Submit() {
        var name = this.FirstName();
        if (name)
            alert("Hello " + this.FirstName());
        else
            alert("Please provide a name");
    }
}

ko.applyBindings(new Demo1ViewModel());

This is a simple viewmodel, but it demonstrates using RequireJS via an import statement. If you watch the network log in the browser developer tools, you'll notice that the Demo1.js script is loaded asynchronously after the page has finished loading.

So far so good, but now to add a component. We’ll create a click-to-edit component where we show a text observable as a label that the user has to click to be able to edit.

Click to Edit

Our component will show the current value as a span control, but if the user clicks this, it changes to show an input box, with Save and Cancel buttons. If the user edits the value then clicks cancel, the changes are not saved.

[screenshots: the component in view mode and in edit mode]

To create this we’ll add three files in a components subfolder of Scripts:

  • click-to-edit.html – the template
  • click-to-edit.ts – the viewmodel
  • click-to-edit-register.ts – registers the component

The template and viewmodel should be simple enough if you're familiar with Knockout: we have a viewMode observable that is true when in view mode and false when editing. Next we have value – an observable string that points back to the referenced value. We'll also have a newValue observable that binds to the input box, so that value is only changed when the user clicks Save.

TypeScript note: I found that TypeScript helpfully removes ‘unused’ imports from the compiled code, so if you don’t use the resulting clickToEdit object in the code, it gets removed from the resulting JavaScript output. I added the line var tmp = clickToEdit; to force it to be included.

Registering and Using the Component

To use the component in our view, we need to register it, then we use it in the HTML.

We register the component by adding import clickToEdit = require("click-to-edit-register"); at the top of our Demo1 view model script. The registration script is pretty short and looks like this:

import ko = require("knockout");

// register the component
ko.components.register("click-to-edit", {
    viewModel: { require: "components/click-to-edit" },
    template: { require: "text!components/click-to-edit.html" }
});

The .register() method can be used in several different ways. This version is the fully modular one, where the script and the HTML template are loaded via the module loader. Note that the viewmodel script has to return a function that Knockout calls to create the viewmodel.

TypeScript note: the viewmodel code for the component defines a class, but to work with Knockout components the module has to return a function that is used to instantiate the viewmodel. To do this, I added the line return ClickToEditViewModel; at the end of the module. This returns the class, which of course is actually a function – the constructor. The constructor takes a params parameter that should have a value property: the observable we want to edit.
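Putting those notes together, a minimal sketch of click-to-edit.ts might look like the following. This is illustrative rather than the exact code in the repository – the method names (Edit, Save, Cancel) are my own, and I've used export = to hand the constructor back to Knockout, which compiles down to the return described above:

import ko = require("knockout");

class ClickToEditViewModel {
    viewMode = ko.observable(true);         // true = showing the label, false = editing
    value: KnockoutObservable<string>;      // the observable passed in from the page
    newValue = ko.observable<string>();     // bound to the input box while editing

    constructor(params: { value: KnockoutObservable<string> }) {
        this.value = params.value;
    }

    Edit() {
        this.newValue(this.value());
        this.viewMode(false);
    }

    Save() {
        this.value(this.newValue());
        this.viewMode(true);
    }

    Cancel() {
        // discard newValue and go back to view mode
        this.viewMode(true);
    }
}

// Knockout components need the module to return the constructor function
export = ClickToEditViewModel;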

 

Using the component is easy: we use the name that we used when it was registered (click-to-edit) as if it were a valid HTML tag.

<label>First Name:</label>
    <div class="form-group">
        <click-to-edit params="value: FirstName"></click-to-edit>
    </div>

We use the params attribute to pass the observable value through to the component. When the component changes the value, you will see this reflected in the page’s model.

Sequence of Events

It’s interesting to follow through what happens in sequence:

  1. We navigate to the web URL /Home/Demo1, which maps to the controller action Demo1
  2. The view is returned, which only references one script “views/Demo1” using require()
  3. RequireJS loads the Demo1.js script after the page has finished loading
  4. This script references knockout and click-to-edit-register, which are both loaded before the rest of the script executes
  5. The viewmodel binds using applyBindings(). Knockout looks for registered components and finds the <click-to-edit> tags in our view, so it makes requests to RequireJS to load the viewModel script and the HTML template.
  6. When both have been loaded, the final binding is completed.

It’s interesting to watch the network graph of the load:

[screenshot: the network graph for the Demo1 page load]

The purple vertical line marks when the page finished loading, after about 170ms – roughly half way through everything being loaded and ready. In this case I had cleared the cache, so everything was loaded fresh. The initial page load is only just over 33KB of data, whereas the total loaded was 306KB. This really helps make sites feel more responsive on first load.

Another feature of Knockout components is that they are dynamically loaded only when they are used. If I had a component used for items in an observable array, and the array was empty, then the component's template and viewmodel would not be loaded at all. This is really great if you're creating Single Page Applications.

Re-using the Component

One of the biggest benefits of components is reuse. We can now extend our viewModel in the page to add a LastName property, and use the click-to-edit component in the form again:

    <label>First Name:</label>
    <div class="form-group">
        <click-to-edit params="value: FirstName"></click-to-edit>
    </div>
    <label>Last Name:</label>
    <div class="form-group">
        <click-to-edit params="value: LastName"></click-to-edit>
    </div>

Now the page has two values using the same control, independently bound.

An IT Perspective on Scottish Independence

Lots of column inches on paper and blogs about what Scottish Independence would mean for the economy, ordinary people, etc. Not much I could see that looked at this from an IT perspective. If you run an IT operation in the UK (or internationally) what are the implications?

With so many unknowns about what an independent Scotland would look like, we cannot be certain about the precise impacts, but we can guess about the probability of some main issues.

Timing

For the purpose of this article we’ll assume that a ‘Yes’ vote happens on the 19th September. Nothing immediately will change since it will take a lot of political horse-trading to settle exactly how independence will happen, and of course when. The SNP is aiming for March 2016, but considering this is just 18 months after the vote, it’s possible that it will be later than this.

Eighteen months might be enough for some companies to adapt. Bear in mind that the negotiations on what independence means in practice will take time, given the size and scope of the issues involved; I would expect this process to take many months, which could mean the time between negotiations completing and the date of independence is as little as six months. You should start your planning on 20th September.

Currency

Leaving the political arguments aside, Scotland has three choices: the Euro, sharing the UK Pound, or creating a new Scottish Pound.

Euro

The Euro is politically unpalatable given the current state of the Eurozone, and the SNP itself says there is “no prospect” of using the Euro. So we can rule this one out, yes?

Maybe not. Scotland will definitely want to (re)join the EU, and EU membership rules for new members require adoption of the Euro as currency, so the circle has to be squared somehow. Scotland negotiating an opt-out as part of the membership negotiations is possible, but unlikely given its small size and therefore weak position. Set against this is the fact that many of the recent members (Romania, Bulgaria, the Czech Republic, Croatia, Poland and Hungary) have not yet joined the Euro. However, these countries are still obliged to adopt the Euro eventually – it's more a question of when than if.

If we look at the Baltic states as an example, Estonia and Latvia joined the EU on 1st May 2004 but only converted to the Euro in 2011 and 2014 respectively.

Interestingly the Convergence Criteria for the Euro require that the country in question would need to have “participated in ERM II for two years without severe tensions”. Assuming Scotland shares the pound it has no way to join ERM II without the UK’s agreement (which we can safely assume it’s not going to get), and therefore is both required to join and prevented from doing so.

So don’t assume this will never happen. As other countries have shown, you don’t have to convert to the Euro immediately and the rules would suggest that Scotland isn’t going to be eligible anyway unless they create their own currency first.

From an IT perspective though, it’s almost certainly four or five years away at best, so we can discount it for now.

Sharing UK Pound

This is the best outcome from an IT viewpoint – it means all systems can operate unchanged in currency terms at least.

But how likely is it? And would it last?

Sharing the UK Pound would mean that Scotland would lose control over its currency and therefore over interest rates, and to a degree over fiscal policy. This is not a political argument, but a logical outcome of simple economics. To explain:

Scotland’s economy is about 9% of the UK’s total GDP. If the pound is shared, the decisions about interest rates and exchange rate control will have to reflect what happens in the 91% of the economy more than what happens in Scotland. Interest rates in the rest of the UK would therefore determine the interest rates in Scotland, since you can’t have different interest rates in the same currency for very long.

A currency union also directly impacts fiscal policy, as the Eurozone has so ably demonstrated, with Greece being told how much they can spend and borrow. Scotland cannot set its spending and debt plans with complete independence. Agreed or informal currency sharing is quite likely to be the case immediately after independence, but I don’t expect it to last in the medium term.

So from an IT perspective, you can probably assume with a high degree of confidence that the pound will remain as Scotland’s currency for the immediate future, whether by agreement or not. How long this would last is an interesting question.

Scottish Pound

The third alternative is for Scotland to create its own currency (we'll call it the Scottish Pound for now). This scenario has a similar level of impact to the Euro from an IT perspective – it's a separate currency with its own exchange rates, symbols etc. The exchange rate might be set to shadow the pound initially, but don't bank on that being the case, and assume that the initial 1:1 exchange rate might change in the future.

Is this a probable outcome? No, I don’t think a “Scottish pound” is very likely. It would be a very weak currency, and the cost of setting it up would be very high and introduce transaction costs on Scottish businesses dealing with the UK. And once Scotland re-joins the EU they will have to dump it and start again with the Euro. It would be the worst case scenario in the short term.

Legal

Scotland has its own legal system already (with some overlap with the UK) and laws are broadly similar to those in England and Wales. So it's likely that not much would change. Cross-border contracts might be impacted if the pound isn't shared post-independence (e.g. pricing), but since I believe that to be unlikely it's not an immediate problem.

Regulation

Data protection might suddenly be an issue for some IT operations. An independent Scotland would not be part of the EU, and therefore EU rules about storing data on EU customers outside the EU would now apply. This might possibly result in data being repatriated to the UK, though I don't believe it's a common scenario.

Other regulatory bodies might also have an impact. Our business is in telecommunications, so we fall under Ofcom’s remit. A Scottish Ofcom (SofCom?) might impose different regulations and require operational changes on customers based in Scotland.

Taxation

An independent Scotland would be free to set its own tax rates and regulations. The VAT rate might alter, and the rules on things like exemptions and applicability would mean that Scottish and UK customers would need to be treated differently.

The rules on employment and employment taxes would also probably change. These would obviously impact on transactional and accounting systems.

Internet

The top-level domain (TLD) .uk is currently used by many UK businesses and organisations. A new TLD for Scotland, .sco, has already been created, but a Yes vote could trigger a rush of requests to grab the good names.

At present the domain is in a “sunrise phase” where trademark holders can pre-register domains, but this expires on 23rd September. So if you want a .sco domain, now is the time to register.

Conclusion

The independence vote has been a long time coming, but it’s only now with opinion polls putting the Yes camp in front that it is being considered as a possible reality.

The fundamental problem is that every key issue, from the currency downwards, is not clearly defined. In the event of a Yes vote, IT departments might want to start the planning process, but you don't need to panic about it.

Disclaimer

I’m not trying to argue a pro- or anti- independence case here, just trying to work out what the likely outcomes would be and the impacts on IT. I welcome any comments that point out any factual errors, or that add any key areas I’ve missed (I’m sure there are lots!).

RequireJS – Shooting Yourself in the Foot Made Simple

I’ve run across a few JavaScript dependency issues when creating pages, where you have to ensure the right libraries are loaded in the right order before the page-specific stuff is loaded.

RequireJS seems like the solution to this problem as you can define requirements in each JavaScript file. It uses the AMD pattern and TypeScript supports AMD, so it looked like the obvious choice.

I'd briefly looked at RequireJS before. A quick look was enough to make me realise this wasn't a simple drop-in, so until this weekend I had not made a serious attempt to learn it. Having now tried, I've begun to realise why it's not more widely used.

Introduction

On the face of it, it seems that it should be simple. You load the require.js script and give it a config file in the data-main attribute, to set it up. When you write JavaScript code, you do so using AMD specifications so it can be loaded asynchronously and in the correct order.

For TypeScript users like myself, this presents a big barrier to entry (not RequireJS's fault), as you need to specify the --module amd flag on the compiler. This means all the TypeScript code in the application is now compiled as AMD modules, so existing non-RequireJS pages won't work. You have to migrate everything in one go.

Setting Up a Test App

I created a simple ASP.NET MVC web application to test RequireJS, and followed the (many) tutorials, which explain how to configure the baseUrl, paths etc. I then added a script tag to the _Layout.cshtml template to load and configure RequireJS:

<script data-main="/Scripts/Config.js" src="~/Scripts/require.js"></script>

Seems simple enough, doesn’t it? In fact I’d just shot myself in the foot: I just didn’t realise it.

Since my original layout loaded jQuery and Bootstrap, I needed to replicate that, so I added the following code:

    <script>
        require(['jquery', 'bootstrap'], function ($, bootstrap) {
        });
    </script>

This tells RequireJS to look for a JavaScript file called /Scripts/jquery.js, but since I loaded jQuery using NuGet the file is actually /Scripts/jquery-2.1.1.min.js.

I obviously don't want to bake that version number into every require() call, so RequireJS supports an 'aliasing' feature: in the config you can specify a path mapping, in this format:

    paths: {
        "jquery": "jquery-2.1.1.min",
        "bootstrap": "bootstrap.min"
    }

Requesting the ‘jquery’ module should now actually ensure that jquery-2.1.1.min.js is loaded.

Except that it doesn’t… most of the time.

Loading the page in both IE and Chrome with the developer tools open, I can see that most of the time require.js is trying to load /Scripts/jquery.js – what gives‽ Changing the settings and trying different combinations seems to have no bearing on what actually happens.

I felt like a kid who’d picked up the wrong remote control for his toy car: it didn’t matter what buttons I pressed, the car seemed to ignore me.

StackOverflow to the Rescue (again)

I don’t like going to StackOverflow too quickly. Often, even just writing the question prompts enough ideas to try other solutions and you find the answer yourself.

In this case I'd spent part of my weekend and a further two days hunting through lots of tutorials and existing StackOverflow answers, trying to figure out WTF was going on. Finally a kind soul pointed me at the right answer:

A config/main file specified in the data-main attribute of the require.js script tag is loaded asynchronously.

To be fair, requireJS does try to make this reasonably clear here http://requirejs.org/docs/api.html#data-main – but this is just one little warning in the sea of information you are trying to learn.

This means that when you call require(…) you can't be certain whether the configuration has actually been loaded. When I called require(…) just after the script tag for RequireJS, the config had not yet run, so RequireJS was using its default configuration.

What RequireJS intends is that you load your modules just after setting the configuration, inside the config.js file itself. However, that approach only works if every page runs the same scripts.

So far the only way to get this to work has been to remove the data-main attribute and load Config.js as a separate script tag. At that point we can be sure the config has been applied and can specify dependencies and use aliases.
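In other words, the working layout ends up looking something like this (the same pattern I ended up using in the Knockout components write-up above):

    <script src="~/Scripts/require.js"></script>
    <script src="~/Scripts/Config.js"></script>
    <script>
        // by this point the configuration has definitely been applied
        require(["jquery", "bootstrap"]);
    </script>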

TypeScript files not compiling in VS2013 project

I created a C# class library project to hold some product-related code, and wanted to emit some JavaScript from this DLL, compiled from a TypeScript file. I added the .ts file and compiled the project, but no .js file was created.

The first thing to check is that the Build Action is set to "TypeScriptCompile", which it was – so why no .js output?

It seems that VS 2013 Update 2, which is supposed to incorporate TypeScript compilation, does not add the required Import element to the project file.

If you add this line

  <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\TypeScript\Microsoft.TypeScript.targets" />

just after the line

  <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

then the TypeScript compiler will be invoked.

My thanks to my colleague Sri for figuring this one out!

Reported as a bug on Microsoft Connect: https://connect.microsoft.com/VisualStudio/feedback/details/934285/adding-typescript-files-to-vs-2013-update-2-library-does-not-compile-typescript

Using Google Drive API with C# – Part 2

Welcome to Part 2 which covers the authorization process. If you have not yet set up your Google API access, please read part 1 first.

OpenAuth

OpenAuth (OAuth) initially seems pretty complicated, but once you get your head around it, it's not that scary, honest!

If you followed the steps in Part 1 you should now have a Client ID and Client Secret, which are the ‘username’ and ‘password’. However, these by themselves are not going to get you access directly.

Hotel OpenAuth

You can think of OpenAuth as being a bit like a hotel, where the room is your Google Drive. To get access you need to check in at the hotel and obtain a key.

When you arrive at reception, they check your identity, and once they know who you are, they issue a card key and a PIN number to renew it. This hotel uses electronic card keys and for security reasons they stop working after an hour.

When the card stops working you have to get it re-enabled. There is a machine in the lobby where you can insert your card, enter the PIN and get the card renewed, so you don’t have to go back to the reception desk and ask for access again.

Back To OpenAuth

In OpenAuth, 'reception' is the authorization request that appears in the browser when you first attempt to use a Client ID and Client Secret. This happens the first time you call GoogleWebAuthorizationBroker.AuthorizeAsync.

This allows the user to validate the access being requested from your application. If the access is approved, the client receives a TokenResponse object.

The OpenAuth ‘key card’ is called an AccessToken, and will work for an hour after being issued. It’s just a string property in the TokenResponse. This is what is used when you try to access files or resources on the API.

When the AccessToken expires you need to request a new one, and the ‘PIN number’ is a RefreshToken (another property in TokenResponse) which also got issued when the service validated you. You can save the refresh token and re-use it as many times as you need. It won’t work without the matching Client ID and Client Secret, but you should still keep it confidential.

With the .NET API this renewal process is automatic – you don’t need to request a new access key if you’ve provided a RefreshToken. If the access is revoked by the Drive’s owner, the RefreshToken will stop working, so you need to handle this situation when you attempt to gain access.

Token Storage

The first call to AuthorizeAsync results in the web authorization screen popping up, but on subsequent calls this doesn't happen, even if you restart the application. How does this work?

The Google .NET client API stores these access requests using an interface called IDataStore. This is an optional parameter of the AuthorizeAsync method, and if you don't provide one, a default FileDataStore (on Windows) is used. This stores the TokenResponse in a file in the folder [userfolders]\[yourname]\AppData\Roaming\Drive.Auth.Store.

When you call AuthorizeAsync a second time, the OpenAuth API uses the key provided to see if there is already a TokenResponse available in the store.

Key, what key? The key is the third parameter of the AuthorizeAsync method, which in most code samples is just “user”.

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                secrets,
                new string[] { DriveService.Scope.Drive },
                "user",
                CancellationToken.None).Result;

It follows that if you run your drive API application on a different PC, or logged in as a different user, the folder is different and the stored TokenResponse isn’t accessible, so the user will get prompted to authorise again.

Creating Your Own IDataStore

Since Google uses an interface, you can create your own implementation of IDataStore. My application only uses a single Client ID and Secret, but I wanted it to work on the live server without popping up a web browser.

I’d already obtained a TokenResponse by calling the method without a store, and authorised the application in the browser. This generated the TokenResponse in the file system as I just described.

I copied the value of just the RefreshToken, and created a MemoryDataStore that stores the TokenResponses in memory, along with a key value to select them. Here’s the sequence of events:

  1. When my application starts and calls AuthorizeAsync the first time, I pass in MemoryDataStore.
  2. The Google API then calls the .GetAsync<T> method in my class, so I hand back a TokenResponse object where I've set only the RefreshToken property.
  3. This prompts the Google OAuth API to go and fetch an AccessToken (no user interaction required) that you can use to access the drive.
  4. Then the API calls StoreAsync<T> with the resulting response. I then replace the original token I created with the fully populated one.
  5. This means the API won't keep requesting new AccessTokens for the next hour, as the next call to GetAsync<T> will return the last token (just like the FileDataStore does).

Note that the TokenResponse we get back has an ExpiresInSeconds value and an Issued date. The OAuth system has auto-renewal (although I've not confirmed this yet), so when your AccessToken expires, it gets a new one without you needing to do anything.

My code for the MemoryDataStore is as follows:

using Google.Apis.Auth.OAuth2.Responses;
using Google.Apis.Util.Store;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Anvil.Services.FileStorageService.GoogleDrive
{
    /// <summary>
    /// Handles internal token storage, bypassing filesystem
    /// </summary>
    internal class MemoryDataStore : IDataStore
    {
        private Dictionary<string, TokenResponse> _store;

        public MemoryDataStore()
        {
            _store = new Dictionary<string, TokenResponse>();
        }

        public MemoryDataStore(string key, string refreshToken)
        {
            if (string.IsNullOrEmpty(key))
                throw new ArgumentNullException("key");
            if (string.IsNullOrEmpty(refreshToken))
                throw new ArgumentNullException("refreshToken");

            _store = new Dictionary<string, TokenResponse>();

            // add new entry
            StoreAsync<TokenResponse>(key,
                new TokenResponse() { RefreshToken = refreshToken, TokenType = "Bearer" }).Wait();
        }

        /// <summary>
        /// Remove all items
        /// </summary>
        /// <returns></returns>
        public async Task ClearAsync()
        {
            await Task.Run(() =>
            {
                _store.Clear();
            });
        }

        /// <summary>
        /// Remove single entry
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <returns></returns>
        public async Task DeleteAsync<T>(string key)
        {
            await Task.Run(() =>
            {
                // check type
                AssertCorrectType<T>();

                if (_store.ContainsKey(key))
                    _store.Remove(key);
            });
        }

        /// <summary>
        /// Obtain object
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <returns></returns>
        public async Task<T> GetAsync<T>(string key)
        {
            // check type
            AssertCorrectType<T>();

            if (_store.ContainsKey(key))
                return await Task.Run(() => { return (T)(object)_store[key]; });

            // key not found
            return default(T);
        }

        /// <summary>
        /// Add/update value for key/value
        /// </summary>
        /// <typeparam name="T"></typeparam>
        /// <param name="key"></param>
        /// <param name="value"></param>
        /// <returns></returns>
        public Task StoreAsync<T>(string key, T value)
        {
            return Task.Run(() =>
            {
                if (_store.ContainsKey(key))
                    _store[key] = (TokenResponse)(object)value;
                else
                    _store.Add(key, (TokenResponse)(object)value);
            });
        }

        /// <summary>
        /// Validate we can store this type
        /// </summary>
        /// <typeparam name="T"></typeparam>
        private void AssertCorrectType<T>()
        {
            if (typeof(T) != typeof(TokenResponse))
                throw new NotImplementedException(typeof(T).ToString());
        }
    }
}
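Wiring this up then looks roughly like the following – a sketch only, where the client ID, secret and refresh token strings are placeholders for wherever you keep yours, and "user" must match the key given to the MemoryDataStore:

var secrets = new ClientSecrets
{
    ClientId = "your-client-id",
    ClientSecret = "your-client-secret"
};

// the refresh token captured earlier via the default FileDataStore
var store = new MemoryDataStore("user", "your-saved-refresh-token");

var credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                secrets,
                new string[] { DriveService.Scope.Drive },
                "user",
                CancellationToken.None,
                store).Result;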

This sample uses the following NuGet package versions:

<package id="Google.Apis" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Auth" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Core" version="1.8.2" targetFramework="net45" />
<package id="Google.Apis.Drive.v2" version="1.8.1.1270" targetFramework="net45" />
<package id="log4net" version="2.0.3" targetFramework="net45" />
<package id="Microsoft.Bcl" version="1.1.9" targetFramework="net45" />
<package id="Microsoft.Bcl.Async" version="1.0.168" targetFramework="net45" />
<package id="Microsoft.Bcl.Build" version="1.0.14" targetFramework="net45" />
<package id="Microsoft.Net.Http" version="2.2.22" targetFramework="net45" />
<package id="Newtonsoft.Json" version="6.0.3" targetFramework="net45" />
<package id="Zlib.Portable" version="1.9.2" targetFramework="net45" />

Using Google Drive API with C# – Part 1

We had a requirement to store a large volume of user files (for call recordings) as part of a new service. Initially it would be a few gigabytes of MP3 files, but if successful would possibly be into the terabyte range. Although our main database server has some space available, we didn’t want to store these on the database.

Storing them as files on the server was an option, but it would then mean we had to put in place a backup strategy. We would also need to spend a lot of money on new disks, or new NAS devices, etc. It started to look complicated and expensive.

It was then that the little "cloud" lightbulb lit up, and we thought about storing the files on a cloud service instead. I have stuff on Dropbox, LiveDrive, SkyDrive (now OneDrive) and Google Drive. However, the recent price drop on Google Drive made it the clear favourite for storing our files. At $120 per year for 1TB of space it's a no-brainer.

Google API

To make this work we'd need the server to list, read and write files directly on the Google Drive, which means using the Google Drive API.

I decided to write this series of articles because I found a lot of the help and examples on the web were confusing and in many cases out-of-date: Google has refactored a lot of the .NET client API and a lot of online sample code (including some in the API documentation) is for older versions and all the namespaces, classes and methods have changed.

API Access

To use the Google APIs you need a Google account, a Google drive (which is created by default for each user, and has 30GB of free storage), and API access.

Since you can get a free account with 30GB, you can have one account for development and testing, kept separate from the live account. You may want to use different browsers to set up the live and development/testing accounts: Google is very slick at auto-logging you in and then joining together multiple Google identities. For regular users this is great, but when trying to keep the environments apart it's a problem.

User or Service Account?

When you use the Google Drive on the web you’re using a normal user account. However, you may spot that there is also a service account option.

It seems logical that you might want to use this for a back-end service, but I’d recommend against using this for two reasons:

Creating a Project

I'll assume you've already got a Google Account: if not, you should set one up. I created a new account with its own empty drive so that I could use it in my unit test code.

Before you can start to write code you need to configure your account for API access. You need to create a project in the developer's console. Click Create Project and give it a name and ID, then click the project link once it's created. You should see a welcome screen.

APIs

On the menu on the left we need to select APIs & Auth – this lets you determine which APIs your project is allowed to use.

A number of these will have been preselected, but you can ignore those if you wish. Scroll down to Drive API and Drive SDK. The library will be using the API, but it seems to also need the SDK (please enlighten me if this is not the case!), so enable both. As I understand it, the SDK is needed if you're going to create online apps (like Docs or Spreadsheets), rather than just accessing a drive. The usual legal popups will need to be agreed to.

The two entries will be at the top of the page, with a green ON button and a configuration icon next to each.


Configuration is a bit… broken at present. Clicking either icon goes to the configuration page for the SDK, on an older page layout. I don't know whether the SDK configuration is required or not – you could try accessing the API without it.

Credentials

The next step is to set up credentials. These are the identity your application logs in with, and there are different Client IDs based on the type of client you want to run.

By default you will have a Client ID set up for Compute Engine and a Client ID for web application. To access the Drive from non-web code you need a native application client, so click Create New Client ID, select the Installed application type and, for C# and other .NET apps, select Other.

When this has completed you’ll have a new ClientID and a ClientSecret. Think of this as a username and password that you use to access the drive API. You should treat the Client Secret in the same way as a password and not disclose it. You might also want to store it in encrypted form in your application.

Next: Part 2 – Authorising Drive API

Getting Rid of k__BackingField in Serialization

We have an application that uses WebApi to send out results for some queries. This has worked well and outputs nice JSON results courtesy of JSON.NET (thanks to James for that great library!).

Today I ran into a problem: the serialized JSON was corrupted with content that looks like this:

{
    "<Data>k__BackingField" : [
        "item1",
        "item2"
    ],
    "<Totals>k__BackingField" : null,
    "_count" : 2,
    "_pageSize" : 10,
    "_page" : 1,
    "<Sort>k__BackingField" : "Date"
}

My reaction was puzzlement: why on earth would a straightforward class with properties like Data and Count suddenly start spitting out weird JSON like this?

SO to the Rescue?

Obviously the first port of call was a search on StackOverflow.

From this article we get some clues: the k__BackingField names come from C# automatic properties, and it's DataContractJsonSerializer that emits them.

But we're not supposed to be using DataContractJsonSerializer – we've got WebApi, which uses JSON.NET?

Solution

It turns out the cause was the SerializableAttribute – because I'd added that to the class, the object result from the WebApi method got passed to DataContractJsonSerializer.

I had not seen this before because most of the results I had output didn’t have this attribute, even though the base class did. I removed this, and bingo, the results were fixed:

{
    "Data" : [
        "item1",
        "item2"
    ],
    "Totals" : null,
    "Count" : 2,
    "PageSize" : 10,
    "Page" : 1,
    "Sort" : "Date"
}
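For reference, the shape of the class implied by the output above is roughly this – the names are reconstructed from the JSON, so treat it as illustrative. The fix was simply deleting the [Serializable] attribute:

// [Serializable]   <-- removing this attribute fixed the JSON output
public class QueryResult
{
    private int _count;
    private int _pageSize;
    private int _page;

    // automatic properties: these produced the <Name>k__BackingField entries
    public string[] Data { get; set; }
    public object Totals { get; set; }
    public string Sort { get; set; }

    // manually backed properties: these produced the _count/_pageSize/_page entries
    public int Count { get { return _count; } set { _count = value; } }
    public int PageSize { get { return _pageSize; } set { _pageSize = value; } }
    public int Page { get { return _page; } set { _page = value; } }
}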