AndroidX, App Bundle and Profiled AOT for Xamarin Android

Recently I tweeted about some early observations on updates I made for an Android project that's currently in Google Play. What was so amazing about this is that the App Bundle was showing a download size of roughly 18% of what the traditional APK per ABI would produce.

The truth is some of these numbers are a little misleading. For instance, while Google shows a download size of around 4.3 - 4.8MB, in practice I'm seeing around a 7MB download from Google Play. While that is about 50-60% larger than what the Release Dashboard in Google Play suggests the download should be, it's still only around 29% of the size of the existing stable release that customers are downloading right now. With unlimited data widely available in major markets like the US, and very large internal storage available on most devices, it's still very attractive to ship smaller app packages to customers. Surprisingly it's really not that difficult to update your project to use all of this, though the CI/CD gets a little trickier.

Android App Bundle

I'm not going to go into detail about the Android App Bundle as I'm sure you can find plenty of other articles that will educate you on what they are. For the purposes of this post I am simply going to define them as an optimization of a single APK that contains the necessary resources for all of your target ABIs. In the case of the app that I referenced in my Twitter post, the generated Android App Bundle was about 10MB, which by itself was less than half the size of the app's existing APK in Google Play. So how do you update your Xamarin.Android project to start generating an .aab? Well for starters your build environment will need to be using Xamarin.Android 9.4 or later. At the moment the hosted agents in Azure DevOps / App Center do not have that out of the box.

It really couldn't be easier to generate the .aab though as you simply need to add a single property to the csproj for your Xamarin.Android project.

<PropertyGroup>
  <AndroidPackageFormat>aab</AndroidPackageFormat>
</PropertyGroup>

Note that while this snippet shows the AndroidPackageFormat property being declared in a root PropertyGroup (without conditions), you could put this in a PropertyGroup that is conditioned for Release or Store builds if you still want to generate an APK for Debug builds and only ship an .aab to Google Play. It's also important to note here that before you can upload an .aab to Google Play you must delegate signing to Google. If you have not previously done so you will need to export your keystore with the private key from Android Studio so that you can upload it to Google and enable Android App Bundles for your app.
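To make the conditional approach concrete, here's a minimal sketch of what that conditioned PropertyGroup could look like (the 'Store' configuration name is just an assumption; use whichever configuration your project actually defines):

<PropertyGroup Condition=" '$(Configuration)' == 'Store' ">
  <AndroidPackageFormat>aab</AndroidPackageFormat>
</PropertyGroup>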

Critical For Distribution

One gotcha worth noting: when you version your builds, the Version Name does not matter to Google. So if you're at version 2.0 and you release 2.1, even after rolling it out, it does not mean that your users will be able to download it. Google Play is entirely dependent on the Version Code. For those who have uploaded multiple APKs to Google Play, you've probably given your build a Version Code like 98 only to see each APK show something like 200098, 300098, 400098, 500098 as shown in the picture I posted on Twitter. This becomes very important because if you look at the build of the Android App Bundle in that same picture, it shows the Version Code as 123. In order for users to download this update we had to offset our builds by 500000 so that the next build was 500124, which was therefore recognized as being newer than the 500098 that was currently available for download.
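In other words, the effective Version Code is just your internal build number plus the offset. As a trimmed illustration (the package name and versionName here are only placeholders), the generated AndroidManifest.xml ends up looking something like this:

<!-- internal build 124 + offset 500000 = 500124, which Google Play treats as newer than 500098 -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.awesomeapp"
          android:versionCode="500124"
          android:versionName="2.1">
  <application android:label="AwesomeApp" />
</manifest>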

AndroidX (Android JetPack)

I'm not even going to pretend I fully understand everything around AndroidX. I do however like some of the promises that it's supposed to simplify the dreaded Android Support libraries. While the process I'm going to outline here is going to show you how to manually migrate to AndroidX I should add that there is hope on the horizon to make this a little easier.

We will be shipping a migration wizard type experience in 16.3 which will do all the dirty work for you
- Jonathan Dick (via Xamarin.Android Gitter)

Before you start, if your project is still using packages.config to manage its NuGet references, be sure to use the migration wizard in Visual Studio to migrate to PackageReference. Projects using PackageReference will have a much easier time migrating as you only need to install the top level dependencies and not the entire dependency chain. To start, open the NuGet Package Manager for your Xamarin.Android project. Be sure to enable preview packages and install the latest Xamarin.AndroidX.Migration package. Once you've installed that and rebuilt, you should start getting build errors. As you scroll through the various build errors that appear, you'll see each error lists a current Android Support package and the corresponding AndroidX counterpart. The important thing to consider here is that you may see some crazy number like 27 packages that need to be installed. This doesn't accurately represent what the top level packages are for your project. As an example, for the Moment app update we only had to install 3 AndroidX packages directly:

  • Xamarin.AndroidX.Legacy.V4
  • Xamarin.Google.Android.Material
  • Xamarin.AndroidX.Browser

If you see the first two listed here, chances are you should install them right away as they'll bring down your dependencies really quickly. Don't be afraid to go to NuGet.org and look at the dependencies of each of the AndroidX packages you need to install. It will help you identify which ones you need to reference specifically and which you'll get transitively.
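For reference, here's roughly what that looks like in the csproj; a sketch only, using the package ids listed above with placeholder versions, so check NuGet.org for the exact package names and current versions:

<ItemGroup>
  <PackageReference Include="Xamarin.AndroidX.Legacy.V4" Version="1.0.0-preview" />
  <PackageReference Include="Xamarin.Google.Android.Material" Version="1.0.0-preview" />
  <PackageReference Include="Xamarin.AndroidX.Browser" Version="1.0.0-preview" />
</ItemGroup>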

You can read more on AndroidX from Jon Douglas on the official Xamarin blog.

Startup Tracing

AOT tends to both solve and cause a lot of problems. One of the problems that many people do not realize they are causing for themselves is app size bloat. While AOT will speed up performance on Android, it is also likely to double the size of your app. Startup Tracing, or Profiled AOT, is a newer feature from the Xamarin team promising to keep the app bloat from AOT down while optimizing the AOT around startup performance, where people tend to be the most frustrated. I should probably start by saying: before you use AOT or Profiled AOT, do not do this for a Debug build. Doing so may encourage behavior that is not overly productive, from day drinking to banging your head on the desk asking where you went wrong in life. The answer of course was using AOT in a Debug build.

To use Profiled AOT (which is a great thing in a Release build) it really couldn't be easier. Similar to the Android App Bundle it simply requires a new property be added. The property should only be added to a Property Group intended for Release or the Store.

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' "> 
    <AndroidEnableProfiledAot>true</AndroidEnableProfiledAot> 
</PropertyGroup>

Again you can read more on Startup Tracing from Jon Douglas on the official Xamarin Blog.

Secret Sauce

Since everyone thinks there is some secret sauce, I'll say there's not exactly any secret sauce... you have what you need already. That is of course until you get to the topic of CI/CD. Right now basically every tool I might rely on is pretty far behind on supporting Android App Bundles.

App Center

At the moment there isn't really anything you can do with App Bundles... in fact App Center has been pretty limiting for Android developers for a while, since it only supports a single artifact for distribution, making it hard to generate a single APK per ABI... fast forward and it still doesn't have Android App Bundle support at all, though the team says it should be available this month.

While you probably could install Boots and generate the .aab the problem here is that App Center wouldn't be able to do anything with the artifact and you'd probably fail the build.

Azure Pipelines

The Xamarin.Android build task is utterly useless. I eventually gave up on it and just wrote a simple PowerShell script to generate my .aab. Luckily with Azure Pipelines you have full control over what you want to have available as an artifact, so this made it really easy to grab the .aab later on so I could do something with it. What I ended up with was something like this in my Android build stage. This downloads my keystore from Secure Files, then does a NuGet restore (via MSBuild), and builds and signs the Android app all in one step. It's worth noting here that since Xamarin.Android already signs the generated package/bundle with a default keystore, and Google Play requires the ability to use your keystore to sign the APKs it generates from the App Bundle, you probably don't need the complexity of signing.

- task: vs-publisher-473885.motz-mobile-buildtasks.android-manifest-version.android-manifest-version@1
  displayName: 'Bump Android Versions in AndroidManifest.xml'
  inputs:
    sourcePath: pathTo/AwesomeApp.Android/Properties/AndroidManifest.xml
    versionName: '1.1.0'
    versionCode: $(Build.BuildId)
    versionCodeOffset: 500000

- script: sudo $AGENT_HOMEDIRECTORY/scripts/select-xamarin-sdk.sh 5_18_1
  displayName: 'Select Xamarin SDK version'

- task: Boots@1
  displayName: Install Latest Android SDK
  inputs:
    uri: https://aka.ms/xamarin-android-commercial-d16-2-macos

- task: DownloadSecureFile@1
  name: androidKeyStore
  inputs:
    secureFile: $(KeystoreFileName)

- powershell: |
   if($env:SYSTEM_DEBUG -eq $true)
   {
     $extraArgs = '/bl:android.binlog'
   }
   $keystorePath = $env:KeystoreFilePath
   $keystoreName = $env:KeystoreName
   $keystorePassword = $env:KeystorePassword
   $project = $env:AndroidProjectPath
   $outputDirectory = "$($env:BUILD_BINARIESDIRECTORY)/$($env:BuildConfiguration)"
   Write-Host "Output Path = $outputDirectory"
   msbuild $project /t:SignAndroidPackage /p:Configuration=$($env:BuildConfiguration) /p:OutputPath=$outputDirectory /restore /p:AndroidKeyStore=true /p:AndroidSigningKeyStore=$keystorePath /p:AndroidSigningStorePass=$keystorePassword /p:AndroidSigningKeyAlias=$keystoreName /p:AndroidSigningKeyPass=$keystorePassword $extraArgs
  displayName: Build & Generate AppBundle
  env:
    AndroidProjectPath: 'pathTo/AwesomeApp.Android.csproj'
    Secret_AppCenterSecret: ${{ parameters.appcenterKey }}
    KeystoreFilePath: $(androidKeyStore.secureFilePath)
    KeystoreName: $(KeystoreName)
    KeystorePassword: $(KeystorePassword)

- task: PublishPipelineArtifact@1
  displayName: 'Publish BinLog'
  inputs:
    targetPath: 'android.binlog'
    artifactName: android-binlog
  condition: and(failed(), eq(variables['system.debug'], true))

- task: PublishPipelineArtifact@1
  displayName: 'Publish Package Artifacts'
  inputs:
    targetPath: '$(Build.BinariesDirectory)'
    artifactName: ${{ parameters.artifactName }}
  condition: eq(variables['system.pullrequest.isfork'], false)

 

Builds though are only the first part of it. This is where we kind of lose out. I'm sure if I spent some time I could write a script to handle this, but for now manual uploads are where we're at. From Azure Pipelines there are two methods I tend to rely on to upload my artifacts to Google Play.

  • App Center Distribution task: Well as we already discussed App Center doesn't support .aab so we're out of luck there at the moment.
  • Google Play task: Since whoever at Microsoft is responsible for it doesn't seem to be responding to the community... I'm just not sure what to say on this front.

The important thing is that we do have a generated artifact though and with that we can at least manually upload the artifact to Google Play.

Getting Started with Azure Pipelines for Xamarin Developers

DevOps for Xamarin apps is a rather large topic. Rather than trying to go A-Z in one bite, I thought it might make more sense to divide this up into bite sized chunks. In this first article we'll take a look at how to get started with Azure DevOps (aka Azure Pipelines... aka I've lost track of what we're supposed to call it anymore). Obviously the first thing you'll need to do is to create a new Azure DevOps organization (assuming you don't already have one). If you don't, head over to https://dev.azure.com and create one. You should see a dialog similar to this.

After creating your new organization you'll need to set up a project. This project could be whatever you need. For the purposes of this article we'll call it the Xamarin Mobile project and make it private.

Once you've created the project you should see something like this:

But we're still not done yet. As you'll notice there's a Repos section in here. We could choose to use the FREE private git hosting in Azure DevOps, but if our code lives somewhere else such as GitHub we can keep using Azure DevOps for our CI/CD pipelines. If that's the case you may want to head to the Project Settings. All of the Azure DevOps services will be on by default, however you can choose what you want to use to narrow it down and focus the view on only what you'll be using.

Remember that we've just set up a brand new Azure DevOps organization so we only have what's available out of the box, and that's really not sufficient for Xamarin Developers.

Marketplace for Azure DevOps

To really light up Azure DevOps and make sure that we have the full power we need for our Xamarin CI builds, you'll want to head over to https://marketplace.visualstudio.com and make sure you're looking at the Azure DevOps tab. You'll find a number of great extensions there.

The first two extensions you'll want to install are ones I honestly cannot figure out why they are not included out of the box with Azure DevOps, particularly since they're released by Microsoft.

  • Apple App Store: As the name implies this will give us an ability to deploy directly to the App Store and makes it very easy to get builds into Test Flight
  • Google Play: Again this extension will make it very easy to integrate and deploy directly to Google Play. We can deploy to an internal testers group, alpha, beta, or production as we see fit. NOTE: this extension does not currently support the new Android App Bundles (.aab)

The next two extensions are from our great friends on the Xamarin team, but they are published individually and not by Microsoft as an organization.

  • Mobile App Tasks for iOS and Android: This great extension from James Montemagno makes it very easy to provide a unique version for each build of our app. This is particularly important for those times where we may need to upload version 1.5 of our app to the app store 10 times to get through QA, and then the app review. By ensuring we have a unique Build number on each build of version 1.5 we can continue to upload new artifacts for testers and eventual release.
  • Boots: Boots is an awesome .NET CLI Tool from Jonathan Peppers. Thanks to a little help from Peter Collins on the Xamarin team, it's been made even easier to use with this great extension. With this simple utility you can provide a download link for any Xamarin SDK such as an Alpha or nightly build of Mono, Xamarin.iOS or Xamarin.Android and it will download and install the SDK so that it's ready to use for your build.

Installing the Extensions

It really couldn't be easier to install the extensions. You can navigate directly to each extension in the marketplace by clicking on the links above. Then you will want to click the Get it free button. When you are getting ready to install the extension you'll see a screen like the one below. Be sure to take note of the organization, as it may not have selected the organization you want to install to.

Secrets and Secure Files

Now that we have the extensions set up, let's head back into Azure DevOps, click on Pipelines, and then Library. The Library gives us an easy to manage location for our resources.

To get started, round up the Android keystore, and any provisioning profile and signing certificate you will need for your Android and iOS apps. Select the Secure Files tab and then start uploading your secure files. Note that there is a certain amount of insanity in this step: the files you just uploaded are not yet available to be used. You will need to open each file that you have just uploaded and you will see something like the following. Notice that the toggle switch allowing all pipelines to use this file is currently off; you will need to toggle it to the on position and then hit Save.

Now we'll want to head back to the first tab we landed on and start adding some variable groups. How this looks for you may differ slightly based on your needs. In general though I tend to end up with 3 Variable Groups like the following:

  • Android-Signing
    • AndroidKeystoreFileName
    • AndroidKeystorePassword
    • AndroidKeystoreAlias
  • iOS-Signing
    • iOSDevelopmentCertificate
    • iOSDevelopmentPassword
    • iOSDevelopmentProvisioningProfile
    • iOSDistributionCertificate
    • iOSDistributionPassword
    • iOSDistributionProvisioningProfile
  • MyAppSecrets
    • AppCenterKey_Android_QA
    • AppCenterKey_Android_Store
    • AppCenterKey_iOS_QA
    • AppCenterKey_iOS_Store

Again you will want to be sure that each variable group is allowing access to all pipelines, otherwise the variables will not be available for the build you will set up next.
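Referencing these groups from a YAML pipeline is then a one liner per group; a minimal sketch using the group names above:

variables:
- group: Android-Signing
- group: MyAppSecrets
# Individual variables are then available as $(AndroidKeystorePassword), $(AppCenterKey_Android_QA), etc.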

Variable Group Takeaways

  • Android app signing is very different from iOS. There is no need to track separate keystores for QA and Production.
  • iOS app signing is very dependent on how you intend to consume the app. If you plan on side loading the app outside the App Store/Test Flight you will need a Development certificate. If you're ok with doing QA through Test Flight it may be worth it to simplify and use a single production certificate. That said, your QA builds should still change variables such as where analytics are tracked, which backend is used, etc., so be careful not to release QA builds into the wild where a customer could potentially use them.
  • You should have processes in place that ensure there is some manual validation that can occur before your app ends up on the Store.
  • You should also separate where Analytics/Crash Diagnostics are going between Stage and Production. While App Center is rather horrible at this currently, creating separate apps in App Center to track different environments can be a helpful technique for easily identifying development noise versus what your customers are really doing and experiencing.
  • You are building the iOS and Android apps in more than one step. There is no reason to leak the Android app secret into the iOS build or leak the iOS app secret into the Android build... just provide the one you need at build time.

Next Steps

In the next post we'll look at how we integrate all of this into a build using YAML and how we organize it.

Using Dependency Injection Everywhere

Recently I started putting together some extensions to make my life even easier with Dependency Injection. I really enjoy being able to use Prism's abstractions. This means I can write code today without any regard for which actual container I may choose 6 months from now. If you've been following my Twitch streams you may have seen me demo the Prism Container extensions. In talking with developers about them I realized it was about high time I wrote a blog post about what they are, and why you might want to use them.

Advanced Service Registrations

For 95%+ of your service registrations you're probably fine with registering a Service as a Transient (you get a new instance every time) or a Singleton (you get the same instance wherever you need it). 

containerRegistry.Register<IFoo, Foo>();
containerRegistry.RegisterSingleton<IBar, Bar>();

The truth though is that for real, complex applications it isn't always that cut and dried. Sometimes you might want to have a single type that implements several services, while other times you might need some sort of factory method to construct a new instance of your service, or still other times you may require some sort of scoped service. The Container Extensions provide a way to take these much more advanced concepts and utilize them as an addition to Prism without interfering with anything from the main Prism Library.

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
    // Registers IFoo & IBar
    containerRegistry.RegisterMany<FooBar>();

    containerRegistry.RegisterSingleton<IFooBuilder, FooBuilder>();
    containerRegistry.RegisterDelegate<IFooBar>(BuildFooBar);
}

private static IFooBar BuildFooBar(IContainerProvider containerProvider)
{
    var foo = containerProvider.Resolve<IFooBuilder>();
    return foo.Build("Some value");
}

You may be wondering why not add this to Prism proper... who knows, if there is enough support maybe it will be. But in the meantime this keeps the Prism codebase lightweight, with a more heavy duty API in a separate codebase.

Using it Outside of Prism

So one of the neat things about this is that the only real dependency on Prism is Prism.Core. This has a few side effects, such as making the Container implementation for DryIoc completely platform agnostic and making it easy to use outside of Prism. The PrismContainerExtension has a few other benefits:

  • It supports Splat
  • It implements IServiceProvider
  • It supports Microsoft.Extensions.DependencyInjection with an ability to create IServiceProvider from an IServiceCollection
    •  There's an additional support package for Shiny to make this even easier
  •  PrismContainerExtension implements a Singleton pattern meaning you can initialize it in native code and continue to access the same container later from shared code

Prism Forms Extended

Debugging can (though shouldn't) be hard. So how exactly does PrismApplication from the extended package make your life even easier? Well for starters we get global exception handling for:

  • AppDomain
  • TaskScheduler
  • ObjCRuntime
  • AndroidEnvironment

That's all pretty cool but that doesn't cover all of the errors we might encounter so we also get global handling for:

  • Module Load Errors
  • All Navigation Errors with the Navigation Uri, or the invoking method name (i.e. GoBackAsync, GoBackToRootAsync)

Now for as awesome as all of that is, it still doesn't cover one insanely important area... XAML! So for all of those times you've ever had an issue where a Binding wasn't working and you're looking at the UI with no clue where to begin, you'll get logging for FREE for Xamarin.Forms binding errors and more.

Platform Specifics & Uri Navigation

Ok so Uri based navigation is nothing new in Prism, but perhaps one of the last problem children has been how to use a Xamarin.Forms.TabbedPage dynamically from Prism AND set the title for when you need the TabbedPage in a NavigationPage. For those times you run into this situation, you can now simply pass a title parameter to the TabbedPage and have the Title bound, as sketched below.
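A rough sketch of what that navigation could look like; the page and view names are purely hypothetical, createTab is Prism's standard parameter for building tabs dynamically, and title is the parameter described above:

// In a ViewModel: dynamically create a TabbedPage inside a NavigationPage and bind its Title
private Task GoToDashboardAsync() =>
    _navigationService.NavigateAsync("NavigationPage/MyTabbedPage?title=Dashboard&createTab=ViewA&createTab=ViewB");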

What's more is the addition of IPageBehaviorFactoryOptions. With these options you have the ability to control several Platform Specifics globally.

internal class DefaultPageBehaviorFactoryOptions : IPageBehaviorFactoryOptions
{
    public bool UseBottomTabs => true;

    public bool UseSafeArea => true;

    public bool UseChildTitle => true;

    public bool PreferLargeTitles => true;
}
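How these options get wired up isn't shown above, but presumably you register your implementation with the container so the extended PrismApplication can resolve it; a sketch based on that assumption, not documented API:

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
    // Assumption: the extended PrismApplication resolves IPageBehaviorFactoryOptions from the container
    containerRegistry.RegisterSingleton<IPageBehaviorFactoryOptions, DefaultPageBehaviorFactoryOptions>();
}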

Shiny

I have to admit, among my favorite new libraries of 2019 is Shiny from Allan Ritchie. It makes a lot of complex tasks stupidly simple and reliable. Interestingly, about the time that Allan first told me about his new project coming into beta, I had also just started to stabilize the Container Extensions. It was such a perfect fit that it took almost no work to integrate the two. For a traditional Prism Forms application this wouldn't be the case, as you wouldn't easily be able to initialize your container, register what you need with the Microsoft.Extensions.DependencyInjection extensions, and then get that container ready to go inside of PrismApplication. However, because of the design of the Container Extension you can now simply base your ShinyStartup on the PrismStartup from the Shiny.Prism.DryIoc package, use the PrismApplication from the Prism.DryIoc.Forms.Extended package, and you're done. It requires zero code changes to your existing Prism Application and your startup is simply:

public class MyStartup : PrismStartup
{
    public override void ConfigureServices(IServiceCollection services)
    {
        // Register services with Shiny like: 
        services.UseGps<MyDelegate>();
    }
}

Next Steps

Be sure to try out the Prism.DryIoc.Forms.Extended package for your Xamarin.Forms app or Prism.DryIoc.Extensions in your Prism.Wpf app. Follow me on Twitch to see when I go live or have new videos available on more great Prism and Xamarin.Forms development topics. If you try it out, tweet at me @DanJSiegel and let me know how you like it or if there's something you'd like to see.

Using "Unsupported" DI Containers with Prism

Developers around the world rely on Prism to build some pretty amazing apps. When I first saw Prism I was amazed at how quickly and easily I could develop an application with complex needs, but with easy to follow, testable, and maintainable code. As is so often the case, developers tend to have very strong opinions, and which Dependency Injection container to use is certainly no exception. To some extent developers' choices come from what they have experience with.

For a variety of reasons the Prism team cannot support every container that developers may want to use. Prism 7 however made some major changes that make it easier than ever to use a container that isn't officially supported or shipped by the Prism team. Prism imposes very few requirements in order to use a container.

  1. The container must support Transient and Singleton registrations
  2. The container must support registering a specified instance
  3. The container must support keyed registrations / resolving by name
  4. The container must be mutable to support Prism Modularity

In the past when implementing support for your own container, you would still need a fair amount of knowledge of the container, and how Prism is supposed to work. Because of the container abstraction, this requirement has been reduced to only needing to understand the container you want to use.

Amazingly you can introduce support for your container by overriding one additional method from PrismApplicationBase in either Prism.Forms or Prism.WPF, and implementing a single class that handles the mapping between Prism's container abstraction and the container you want to use.

There are some extremely performant containers available such as Grace. As it turns out Grace is a fantastic example as it is virtually on par with or slightly more performant than my favorite, DryIoc. It's also a mature codebase with releases going back to 2013. It meets all of our "must support" items, and it's mutable so it even works with Prism Modularity. Unfortunately for Grace, over its 5 year history it has only accumulated around 76,000 downloads. Due to this low user adoption, no matter how performant it may be, it isn't a popular enough container to justify adding to Prism as a supported container.

Adding a Container Extension

Prism 7's IOC Abstraction simply provides a mapping for the most common Registration and Resolution methods. In the case of the Grace DI Container we simply need to add this single class:

public class GraceContainerExtension : IContainerExtension<IInjectionScope>
{
    public GraceContainerExtension()
        : this(new DependencyInjectionContainer())
    {
    }

    public GraceContainerExtension(IInjectionScope injectionScope)
    {
        Instance = injectionScope;
    }

    public IInjectionScope Instance { get; }

    public bool SupportsModules => true;

    public void FinalizeExtension() { }

    public void Register(Type from, Type to) =>
        Instance.Configure(c => c.Export(to).As(from));

    public void Register(Type from, Type to, string name) =>
        Instance.Configure(c => c.Export(to).AsKeyed(from, name));

    public void RegisterInstance(Type type, object instance) =>
        Instance.Configure(c => c.ExportInstance(instance).As(type));

    public void RegisterSingleton(Type from, Type to) =>
        Instance.Configure(c => c.Export(to).As(from).Lifestyle.Singleton());

    public object Resolve(Type type) =>
        Instance.Locate(type);

    public object Resolve(Type type, string name) =>
        Instance.Locate(type, withKey: name);

    public object ResolveViewModelForView(object view, Type viewModelType)
    {
        Page page = null;

        switch(view)
        {
            case Page viewAsPage:
                page = viewAsPage;
                break;
            case BindableObject bindable:
                page = bindable.GetValue(ViewModelLocator.AutowirePartialViewProperty) as Page;
                break;
            default:
                return Instance.Locate(viewModelType);
        }

        var navService = Instance.Locate<INavigationService>(withKey: PrismApplicationBase.NavigationServiceName);
        ((IPageAware)navService).Page = page;
        return Instance.Locate(viewModelType, new[] { navService });
    }
}

Once we've added this single class we only need to update our App as follows:

public partial class App : PrismApplicationBase
{
    protected override IContainerExtension CreateContainerExtension() =>
        new GraceContainerExtension();
}

As I mentioned, Prism's IOC abstraction only provides the most common functionality. This means that you could find an advanced scenario where you need direct access to the underlying container. To achieve a more complex registration, you can add an extension method like we provide in the Container specific packages:

public static class ContainerExtensions
{
    public static IInjectionScope GetContainer(this IContainerRegistry containerRegistry) =>
        ((IContainerExtension<IInjectionScope>)containerRegistry).Instance;
}
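With that extension method in place, you can drop down to the Grace API anywhere you have an IContainerRegistry; a small sketch reusing the same Grace Export/Lifestyle calls shown in the extension above (IFooService and FooService are hypothetical types):

protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
    // Grab the raw Grace IInjectionScope for anything Prism's abstraction doesn't cover
    var scope = containerRegistry.GetContainer();
    scope.Configure(c => c.Export<FooService>().As<IFooService>().Lifestyle.Singleton());
}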

You can find a full working sample app on GitHub.

Demystifying the SDK Project

I am often, and rightfully, accused of living on the bleeding edge. It can be quite painful being there as ideas are not always fully fleshed out, and tooling is often just not there yet. When Microsoft began the push towards .NET Core and .NET Standard, I knew this was an area that I needed to be in. It was clear to me that this was a major shift that was going to make .NET development more appealing to a variety of developers and businesses. As I set out to learn this new paradigm I both struggled with and enjoyed the massive project system simplification that was introduced in the xproj format with a json configuration. For a variety of reasons though, at the 11th hour Microsoft completely changed direction, going back to the csproj and ditching the whole concept of a json configuration altogether. For months I struggled to understand what was going on.

Why I struggled

There were a lot of reasons I struggled. The project system has a lot of loose but very important couplings with MSBuild. Frankly I had heard of MSBuild, but I knew so little about it that I simply called it "The Compiler" (which is very inaccurate). Another reason that I struggled is that there isn't exactly a lot of documentation to explain how the project system works, or what elements mean. Then of course, have you ever looked at the older style of csproj? There is a lot of nonsense xml going on there. You can kind of figure out some stuff. You can for instance figure out that any of your code files that need to be compiled need a Compile tag to include them in the compilation, but what on earth is all of the other crazy stuff going on there?

Breaking Through

The new SDK Style projects really help make what's going on in the project system easier to understand and customize because it's not polluted by a lot of insanity. You don't need to add a bunch of duplicate settings for Debug vs Release since it's already assumed these build configurations exist and we have some standard assumptions about them, like Release builds need to be optimized, while Debug configurations need all of our debug symbols to be able to step into them. Then of course we make some standard assumptions like all of your code files should be compiled (known as File Globbing). What's left over is often a file that has a PropertyGroup with a single TargetFramework.
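In other words, a complete SDK Style class library can be as small as this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>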

PropertyGroup vs ItemGroup

While this can get a little crazy when we start looking at creating custom build targets, we'll keep this simple for now. A PropertyGroup is exactly what it sounds like. It's an area where you can declare Properties (think variable declarations) that will be used in the build process. There are a number of built in Properties (Well-Known & Common properties) that really come from MSBuild; these include things like specifying the Assembly Name, where the build output should go, and some specialty variables that can be used to get things like the path to the Project File or the current directory. While these properties can help us, there are a number of other properties that can come into play from all over, and we can frankly make up properties as we see fit (more on that later).
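As a quick sketch, a PropertyGroup mixing a well known property with one we invented ourselves might look like this (the values are placeholders):

<PropertyGroup>
  <!-- Well-known property understood by MSBuild -->
  <AssemblyName>AwesomeApp.Core</AssemblyName>
  <!-- A property we made up, to be consumed by our own logic later -->
  <IsAwesome>true</IsAwesome>
</PropertyGroup>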

Ok so now we have an idea about the PropertyGroup, so what about ItemGroups? Well, ItemGroups are all about grouping Items we need to do SOMETHING with. I admit that probably doesn't clarify what I mean. So let's look at the Xamarin.Essentials csproj. It's a good use case where the decision was made to turn off the default file globbing.

  <ItemGroup Condition=" $(TargetFramework.StartsWith('netstandard')) ">
    <Compile Include="**\*.netstandard.cs" />
    <Compile Include="**\*.netstandard.*.cs" />
  </ItemGroup>
  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="6.1.5" />
    <SDKReference Include="WindowsMobile, Version=10.0.16299.0">
      <Name>Windows Mobile Extensions for the UWP</Name>
    </SDKReference>
    <Compile Include="**\*.uwp.cs" />
    <Compile Include="**\*.uwp.*.cs" />
  </ItemGroup>

There is a lot going on in this snippet so let's break it up. First you'll notice some conditions on these ItemGroups. You never have to use a Condition, but you can put a Condition on any element. As mentioned before, the default file globbing was turned off (by setting the EnableDefaultCompileItems property to false), meaning that when this project is built it will not compile ANY of the code unless we do something to include it. What you see here is that they have adopted a practice in which each file contains a platform identifier. This then allows them to have a condition in which the TargetFramework is evaluated to determine which C# files should be included in the compilation. Oftentimes you may see multiple ItemGroups in a csproj, with each group containing a single set of Items, for instance only Embedded items, or Compile items, or Project References. You'll notice here though that an ItemGroup can contain any set of Items, as the UWP ItemGroup contains a PackageReference, an SDKReference for Windows Mobile, and adds the UWP C# code.
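For completeness, turning off the default globbing the way Xamarin.Essentials does is just one more property:

<PropertyGroup>
  <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
</PropertyGroup>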

Multi-Targeting

Perhaps one of my favorite features of the SDK Style Project is that it makes Multi-Targeting that much easier. As you may have noticed in the snippet above from the Xamarin.Essentials csproj, they have a single Project that targets both UWP and netstandard. Honestly, Microsoft only gets partial credit here. The new Project system introduces the ability to specify TargetFrameworks rather than a single TargetFramework if we so choose. Unfortunately the team only thought about Full Framework targets like net45 and netcore/netstandard targets, which is why I say they get partial credit. For the Xamarin Developer (or even the 3 UWP developers out there), this gets really frustrating. Luckily the community has Microsoft MVP/RD Oren Novotny, who developed a completely custom SDK that ships via NuGet and introduces support for all kinds of new targets including UWP, Xamarin iOS, Android, Mac, and even Tizen and WPF.

<Project Sdk="Microsoft.NET.Sdk" ToolsVersion="15.0">
    <!-- Standard SDK Sytle Project that doesn't support cool targets -->
    <<TargetFramework>netstandard2.0</TargetFramework>
</Project>

So what do we have to do to start Multi-Targeting more fun targets as a Xamarin Developer? Well it's actually pretty simple. Again thanks to Oren, the Microsoft team added support so that all we need to do is replace the value in the Sdk attribute of the Project. To start, let's look at how you might do this if you only care about a single project.

<Project Sdk="MSBuild.Sdk.Extras/1.6.47" ToolsVersion="15.0">
    <!-- Single Multi-Targeting Project... You control the version here as part of the Sdk string -->
    <TargetFrameworks>netstandard2.0;xamarin.ios;xamarin.android;uwp10.0.16299</TargetFrameworks>
</Project>

Suddenly you have the ability to create a single project that targets all of the platforms you want. But what about those cases where you still need to break code up into multiple projects? Well again it's very simple. Simply add MSBuild.Sdk.Extras as the Sdk value and then drop in a file called global.json next to your solution. (NOTE: You'll notice that this is what I've done for Prism)

{
    "msbuild-sdks": {
        "MSBuild.Sdk.Extras": "1.6.47"
    }
}

Of course you could ask why should you care about Multi-Targeting? Well have you ever noticed that you have to do some crazy thing like:

global::SomeProject.Platform.CoolRenderer.Init();

Suddenly you're referencing a bunch of Init methods that look like this:

public static void Init()
{
    // The Linker Sucks
}

public static void Init()
{
    // Watch the build is going to warn me about a variable I'm not actually using for anything....
    var a = DateTime.Now;
}

To me this has always been a code smell. Ultimately the real reason you're having to do this so often is to ensure that the Linker sees an actual reference in code that goes into the platform specific binary. This was a problem with the old project system since we had to have a sharable project (PCL) and then platform projects, each of which needed to be bundled into a single NuGet. By Multi-Targeting you've already made references into the assembly to keep it from being stripped out, reducing the legwork you need to do to tell the Linker to pay attention to something else.

Multi-Targeting Snafu

Multi-Targeting is a fantastic tool. Unfortunately for those working on a Mac there is a little bit of legwork you need to do. Visual Studio for Mac does not currently support Multi-Targeting projects. It really should, and is very overdue in my opinion. If you agree, I suggest pinging Jordan Matthiesen (@JMatthiesen) to let him know this needs to be a top priority for the team (and in the next alpha release)... I did say there is a little bit of legwork you need to do though; I never said it doesn't work. MSBuild LOVES Multi-Targeting so you can build these projects from the command line all day long. In fact, as one of the nice things that Mac developers get for FREE, both MSBuild and NuGet are added to your PATH when you install Visual Studio for Mac, making building from the command line very easy. Generally for these Multi-Targeting projects I simply move my workflow into Visual Studio Code where I can easily write code and build from the integrated terminal.

I should probably admit the pain doesn't entirely stop there. On a Mac, UWP is simply an unresolvable target. The solution? Earlier I mentioned Conditions can be applied to any element, which includes the TargetFrameworks element. If you look up the well known MSBuild variables there is an OS variable. Unfortunately it's a bit simplistic, meaning you're not going to figure out if you're on a Mac or Ubuntu or CentOS, or Windows 7 or Windows 10... but you can at least figure out one thing... are you on Windows or not. So what does that look like:

<Project Sdk="MSBuild.Sdk.Extras">
    <PropertyGroup> 
        <TargetFrameworks>netstandard1.0;netstandard2.0;Xamarin.iOS10;MonoAndroid71;MonoAndroid80;MonoAndroid81;uap10.0.16299</TargetFrameworks>
        <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard1.0;netstandard2.0;Xamarin.iOS10;MonoAndroid71;MonoAndroid80;MonoAndroid81;</TargetFrameworks>
    </PropertyGroup>
</Project>

Packaging

Some of you may be wondering why you should be packaging your code. I've talked with a number of developers over the years who are engaged in a process in which, for each release, the entire code base is pulled from source control, built, and released in one go. I've heard some interesting arguments for the practice, though I completely disagree with them. To be clear, obviously something has to be built, but all of your common support libraries should be built and packaged as they are updated. There are actually a few benefits to this:

  1. This reduces build times... Imagine you have a single support library that's used in 5 applications that are released across your organization. This literally eliminates 4 completely unnecessary builds of that single project. Of course the reality is that you probably have a bunch of support libraries making the results that much greater.
  2. Versioning... It is a little scary when you think about it, but so many companies NEVER version their code. I have literally seen projects that started 15 years ago that are still on version 1.0.0.0 (from the template). In my experience these are companies that are probably storing your passwords in clear text, prefer http over https, and think a 56k modem is high speed internet instead of a painful memory of the 1990s. If you aren't versioning your code you really have no idea when a problem was introduced, whether a problem has been fixed, or whether a regression has been made... you only have guesses.
  3. Garbage In -> Garbage Out... because you've built and shipped that project independent of the rest of your monolithic applications, it means that you have had a chance to validate the code base before it finds its way into use by others on your development team or in a production environment. For many (realistically) this means that you are protecting yourself from that developer who checked in code that doesn't build. For others it means that you have ensured that all of your unit tests for that project have both run and passed.
  4. Testability... I know what you're thinking, you're perfect, and so are the rest of the developers on your team. I totally understand, that's why I like to wear the shirt declaring "I don't always test my code, but when I do, I do it in Production". But consider that time when maybe you forgot to update that one repo that you aren't responsible for, yet is required to build the project you are responsible for. The simple truth is as a .NET developer you're used to looking at the package manager for updates. When your support libraries are packaged and available to your team via a private or public NuGet feed, it becomes easy to discover that an update is available. Because the discoverability is actually going up, it means that the entire team really has a better opportunity to test the code in development before it ever sees production.

How do you get started?

Maybe you didn't need convincing, maybe you just need to know how to get started. Well for starters, let's completely toss out the idea of using a nuspec. They're annoying and frankly if you're multi-targeting... they are error prone. There are still a few monolithic projects out there like Xamarin.Forms that require the use of a nuspec (largely due to the issues around Packing that the NuGet team needs to fix/implement), but the reality is that if you're using an Sdk Style project you probably don't need it.

Earlier I mentioned that you could completely make up properties to put in the PropertyGroup of your csproj. Well when the Sdk Style projects were created, the folks at Microsoft decided to make up some new Properties to help with the very common task of packaging your projects. Many of these properties can be found at the links below. Some are a little harder to discover, such as the GeneratePackageOnBuild property. By default this is false, however when you set this to true, all you need to do is build your project and each build will generate a new NuGet package for any packable projects in your solution. You'll find there is no right or wrong way as much as there is a way that makes sense for the project you are working on. Many of my projects, including Prism, include a Directory.build.props in the solution directory, which allows me to set this value in a single place. Since IsPackable is true by default, this means that you need to have some logic to set IsPackable to false on projects that should not be packed, such as Tests and Samples.
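If you'd rather not rely on naming conventions in a Directory.build.props, a test or sample project can simply opt out in its own csproj:

<PropertyGroup>
  <IsPackable>false</IsPackable>
</PropertyGroup>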

Project References... Package References Oh My

You may know a little bit about a Project Reference and a Package Reference. The difference here being that a Package Reference comes in from a NuGet feed, while a Project Reference is a reference to a local project in the file system. So what happens then when you want to build and package a project? What happens to those Project References? A friend of mine recently asked if I could come take a look at a project he had been working on, and it had conditional includes much like the following. The truth is every Project Reference is assumed to be a reference that will be needed by the generated package. This means that there is no need to have some crazy conditional includes for a Package configuration.

<Project Sdk="MSBuild.Sdk.Extras">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
    </PropertyGroup>

    <ItemGroup Condition=" $(Configuration) != 'NuGetRelease' ">
        <ProjectReference Include="../AnotherProject/AnotherProject.csproj" />
    </ItemGroup>

    <ItemGroup Condition=" $(Configuration) == 'NuGetRelease' ">
        <PackageReference Include="AnotherProject" Version="$(Version)" />
    </ItemGroup>
</Project>

All you actually need is just your ProjectReference. When the project is built and packed, it will pick up whatever version number that specific project was packed with.

<Project Sdk="MSBuild.Sdk.Extras">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
    </PropertyGroup>
    <ItemGroup>
        <ProjectReference Include="../Foo/Foo.csproj" />
    </ItemGroup>
</Project>

Because it will pick up the other project automatically it means that you just need to ensure that the Pack target is invoked.

> dotnet pack MyProject.csproj -c Release
> msbuild MyProject.csproj /p:Configuration=Release /t:pack

Earlier I mentioned the GeneratePackageOnBuild element which is false by default. All we need to do is set this to true like in the following example or add it to our Directory.build.props file and EVERY build will now generate a package automatically.

<Project Sdk="MSBuild.Sdk.Extras">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
    </PropertyGroup>
</Project>

Directory.build.props

For those paying attention, you've heard me mention the Directory.build.props... this is one of my favorite files, and in some ways a replacement for the nuspec*; in other ways it's just something I use to make my DevOps processes smoother. This is a slightly refined version of what I have published previously.

<Project>
  <PropertyGroup>
    <Product>$(AssemblyName) ($(TargetFramework))</Product>
    <DefaultLanguage>en-US</DefaultLanguage>
    <Authors>Your Name Here</Authors>
    <Copyright>© $([System.DateTime]::Now.Year) Your Name Here</Copyright>
    <PackageIconUrl>Uri to an icon image (png)</PackageIconUrl>
    <PackageLicenseUrl>Uri to the license</PackageLicenseUrl>
    <PackageProjectUrl>Uri to the project</PackageProjectUrl>
    <RepositoryUrl>Uri to clone the project</RepositoryUrl>
    <PackageRequireLicenseAcceptance>false</PackageRequireLicenseAcceptance>
    <RepositoryType>git</RepositoryType>
    <!-- Root control Version Prefix -->
    <VersionPrefix>1.0.0</VersionPrefix>
  </PropertyGroup>

  <!-- CI Helpers -->
  <PropertyGroup>
    <PackageOutputPath>$(MSBuildThisFileDirectory)/Artifacts</PackageOutputPath>
    <PackageOutputPath Condition=" $(BUILD_ARTIFACTSTAGINGDIRECTORY) != '' ">$(BUILD_ARTIFACTSTAGINGDIRECTORY)</PackageOutputPath>
    <IsPackable Condition=" $(MSBuildProjectName.Contains('Sample')) ">false</IsPackable>
    <IsPackable Condition=" $(MSBuildProjectName.Contains('Test')) ">false</IsPackable>
    <GeneratePackageOnBuild>$(IsPackable)</GeneratePackageOnBuild>
    <IS_PREVIEW Condition=" $(IS_PREVIEW) == '' ">false</IS_PREVIEW>
    <IS_RELEASE Condition=" $(IS_RELEASE) == '' ">false</IS_RELEASE>
    <VersionPrefix Condition=" $(BUILD_BUILDNUMBER) != '' ">$(VersionPrefix).$(BUILD_BUILDNUMBER)</VersionPrefix>
    <VersionSuffix>ci</VersionSuffix>
    <VersionSuffix Condition=" $(IS_PREVIEW) ">pre</VersionSuffix>
    <VersionSuffix Condition=" $(IS_RELEASE) "></VersionSuffix>
  </PropertyGroup>
</Project>

You'll notice that in this Directory.build.props file I have split it into two PropertyGroups to make it a little easier to read. So let's take a look at the first PropertyGroup.

  • Product: The Product line here gets updated to include both the Target Framework and Assembly Name instead of just the Assembly Name. This is particularly helpful for multi-targeting projects as it can help identify which framework specifically was being used when an error occurred.
  • You'll notice several elements here that contain placeholders for Uris specific to your project, and your name. These are all elements that came from the nuspec, which are now taken care of and will be uniform across your entire solution, helping to ensure that you don't have to duplicate values all over.
  • VersionPrefix: This is the root version number that I want to control. Every single build will start with this version string.

Ok great, now let's take a little closer look at what's going on in the CI Group.

  • PackageOutputPath: Maybe you have just one project, or maybe you have 10 that are built and packaged for your solution. Even with one, it can get a little tedious to have to drill down into the Project's output folder {Path To Project}/bin/{Build Configuration} each time you want to get the generated NuGet. When you have multiple projects though this gets really annoying. By setting this value we ensure that all of the packages are created in a common location making it easier to find. By default we are creating an Artifacts folder under the current directory (where the Directory.build.props is located). On VSTS however we are defaulting that location to be in the Artifact Staging Directory defined by VSTS.
  • IsPackable: By default this is true, so we have a check to see if the Project Name contains either the word Test or Sample. If it contains either one we mark the project as NOT packable.
  • GeneratePackageOnBuild: By default this is false, meaning you need to explicitly invoke the Pack target. By setting this to true we will generate the packages on each build. If this is too much, or you don't want to accidentally ship a Debug build, you could add a condition to only set it to true when the build Configuration is Release.

*NOTE:
Just to prevent some confusion here, there is still a nuspec in the process, only it is automatically generated by the build task rather than you having to maintain it as part of your project.

Helpful Links

For more information and to see how I build many of my tools and support libraries, see this post I wrote on the new Project Format. Still have questions? Feel free to leave a comment or reach out on Twitter.

Azure Active Directory B2C for Xamarin Applications

You may have heard about Azure Active Directory B2C before. There have been a number of posts on the topic previously, including an episode with Matthew Soucoup on the Xamarin Show. So why yet another blog post? Well to be honest the documentation can be a little confusing, and there is more to the setup of a tenant than you may have read about. There is absolutely nothing difficult about it in any way. However if you miss some critical configuration steps you'll struggle to ever authenticate with Azure Active Directory.

Why Azure Active Directory B2C

Well for starters most of our apps today need some sort of authentication. There is a huge liability with storing user credentials, so while you might be able to use an OSS solution to implement your own OAuth flow, you're now taking direct responsibility for properly maintaining the security of your users. If you're a large Enterprise that may not be a huge problem for you. For the rest of us (and even those large Enterprises), there is a lot of benefit in offloading these tasks to 3rd parties like Microsoft. Not to mention that with the B2C offering we can further push off the responsibility to a number of common OAuth providers like Facebook, Twitter, LinkedIn, Google, etc... and by checking a box you can enable 2-Factor authentication.

Ultimately what this means is that you can remove identifying user information from your own database, storing only the Active Directory ObjectId for the user, making breaches inherently less damaging as there is nothing more than a Guid in your database.

Beyond the security topic, cost is also important, particularly to a small business. While some of my larger clients have had projected user bases that may be in the hundreds of thousands or millions, for the vast majority of my clients Azure Active Directory B2C represents an Enterprise Grade OAuth service that will cost them absolutely NOTHING as their realistic projected user base ranges from a couple of users to less than 5,000. Since Azure Active Directory B2C gives you 50,000 users and 50,000 authentications per month for free, this results in the service being 100% free for them to use.

Basic Concepts

It is important to remember that Azure Active Directory B2C is built on top of Azure Active Directory. This means that you do not have some magical new offering from Microsoft, but an existing, trusted, enterprise grade offering with some extensions that make a Business offering suitable to use directly with your Customers. This also means that IAM for your staff is handled through standard Active Directory user groups.

Azure Active Directory and B2C both follow some basic OAuth concepts. Among these concepts is that you may have one or more (1..*) Client Applications that authenticate with the service. It can be a little confusing, and this is probably where you're likely to go wrong in the configuration (more on that in a minute).

Working with Azure Active Directory B2C might be a little confusing for Xamarin developers who are looking for that fully native approach. Since we are working with an OAuth service we are forced to use a web view to actually authenticate. This means you cannot create a fully native view that makes a REST call, as the user will have to Register or Login using the web view from the MSAL library.

Configuring the Application in Azure

When you first open your B2C tenant you should see something like the following (be sure to grab the tenant name circled as you will need it later for your Xamarin app):

You'll need to begin by setting up an application. While there is no specific requirement that you set up more than one, my personal preference is to secure each application with its own Application Id and limit what it has access to. For the purposes of this post I'll be setting up two applications, one for the Web API, and one for the Mobile App. It's worth noting here that this configuration is going to involve a lot of back and forth as we set things up.

We'll begin by adding a new Web API application. To start we'll give this the name Awesome API, because let's face it... it's awesome. Then be sure to add a Reply URL. For now we'll add a localhost URL; this can be updated later. Next be sure to completely ignore the horribly wrong hint that the App ID is optional... it's not... For this we'll give it the App ID api.

Now let's add an application for our Mobile App. Since this is for our mobile app, we just want the Native Client.....

Ok, I lied... so this is one of the things that isn't very obvious, but in order to be able to add scopes (which we'll need in our app), we actually have to enable the Web App / Web API section. Remember I said earlier to ignore the complete lie that App ID is optional. If we don't set the App Id we won't be able to add a scope, and we can't actually do that unless Web App / Web API is enabled.

IMPORTANT: After you've created the application for the Mobile app, be sure to open it back up, copy the Application Id, and set the Custom Redirect URI, under the Native Client. The Redirect URI will always be:

msal{Your Application Id}://auth

Now that both of our applications have a basic configuration, we can go into each application and set up the scopes. You'll see by default we have the user_impersonation scope. This is apparently used for the Microsoft Graph, and we'll want to set up our own scope. I personally haven't seen any documentation on right or wrong scopes, but as I understand it we will want to at least add a read scope here for our applications.

With our scopes set in both applications, we can now take a look at the API Access, and here is one of those areas where again things aren't really obvious.

You might be thinking as I did, that your application has permission to talk to itself... because why wouldn't it, right? Well it actually doesn't unless you add permission for each application to have access to itself under API Access. What happens if you skip this step? Well for starters you won't get an Access Token, and then what is the point of using Azure Active Directory B2C because you went through a bunch of setup to not have a token...

Finally for the Mobile App, be sure to add both the Mobile App itself with all permitted scopes, and the Web Api with all permitted scopes. When you're done, it should look something like this.

Adding a Policy

Earlier I mentioned that you will be using a Web View from the MSAL library to work with your B2C tenant. The way that this is exposed is with Policies. At the time of this article there are 6 policies that you can configure. These are fairly straightforward. To start, you'll likely want to add a policy to allow users to either Sign Up or Sign In. You can alternatively separate these tasks so that in your underlying UI you have separate Login and Register buttons, with each one bringing you to a UI from B2C that makes sense for that work flow. If you have configured any Identity Providers (not covered by this article), you will be able to choose whether to use those and/or the local account.

Setting up the Xamarin App

To use Azure Active Directory or Azure Active Directory B2C in your Xamarin or Xamarin Forms application you will need to install Microsoft.Identity.Client (which is still only available as a preview package on NuGet). There are actually fairly decent setup instructions for considerations you'll want to have setting up the client in your Xamarin iOS and Android applications in the official sample app on GitHub. While I would hardly consider the sample anywhere close to demonstrating best practices, as it fully exposes the Tenant and Application Id in source control, not to mention I am very much against static references, I will say it is a great sample for validating that you have correctly configured your new B2C Tenant.

Going back into the Azure portal, grab the Application Id for the Mobile App, along with the tenant name from the first step. Finally grab the read scope from the mobile app. If you're using the Azure sample app, you just need to update these couple of fields and run the app. If you put a breakpoint in the MainPage.xaml.cs after the call to AcquireTokenAsync you should be able to evaluate the AuthenticationResult and see both an IdentityToken and AccessToken along with the expiration. If you see both, your tenant is correctly configured and you can now use Azure Active Directory B2C to authenticate.
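If you would rather wire this up yourself instead of using the sample, the setup boils down to something like the sketch below. The tenant name, policy name, Application Id, and scope are placeholders for the values gathered above, and the exact overloads can differ slightly between MSAL preview builds:

using System.Threading.Tasks;
using Microsoft.Identity.Client;

public class B2CAuthenticator
{
    public async Task<AuthenticationResult> SignInAsync()
    {
        // Placeholders gathered from the Azure portal
        var tenant = "yourtenant.onmicrosoft.com";
        var applicationId = "{Mobile App Application Id}";
        var policy = "B2C_1_SignUpSignIn";
        var scopes = new[] { $"https://{tenant}/api/read" };

        // B2C authorities use the /tfp/{tenant}/{policy} format
        var authority = $"https://login.microsoftonline.com/tfp/{tenant}/{policy}";

        var pca = new PublicClientApplication(applicationId, authority)
        {
            // Matches the Custom Redirect URI configured under the Native Client
            RedirectUri = $"msal{applicationId}://auth"
        };

        // On success the AuthenticationResult contains both the IdToken and the AccessToken
        return await pca.AcquireTokenAsync(scopes);
    }
}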

Developer Toolkit for Visual Studio Mac

Several years ago when I was still just a web developer wanting to break into mobile development, I asked myself how does anybody do this? You have to learn Java for Android, Objective-C or Swift for iOS.... of course then I learned about Xamarin. Without a doubt Xamarin makes the tedious tasks of mobile app development far easier by centralizing your code in one common language, and even further with Xamarin Forms by abstracting the UI into reusable code. That doesn't mean that creating a new app is by any means easy. In fact setting up a new app can take a lot of time.

Nearly a year and a half ago I introduced the Prism QuickStart Templates, which were the first .NET Standard templates using the new project format available for Xamarin apps. The project took on a life of its own and was loved by many despite its limited availability in the CLI. As I set out to bring the templates into Visual Studio for Mac, it again took on a life of its own. A number of developers and MVP's were gracious enough to give me their feedback on things they would like to see, and while it may have delayed my ability to release, what we have today is simply stunning.

Prism Template Studio and Developer Toolkit

Ok I admit it, it's a mouthful, and if you have a better name feel free to tweet it to @DanJSiegel. Why the mouthful? Because it is absolutely jam-packed with so many tools, so many helpers, and so many templates that every time I explain it someone asks, "well what about...", and I keep responding either yeah it does that too... or yeah we could add that. It's probably that second one that has admittedly generated the most delay in getting this out. Whether you use vanilla Xamarin Forms or Prism you'll want to install the Prism Template Studio and Developer Toolkit.

Templates

As the name suggests it contains a Template Pack. This Template Pack isn't quite like anything you've seen before. There are 14 new project templates that ship in this Template Pack, including 7 projects for Unit and UI Testing, 3 more for building Prism Modules, and another 3 for Prism Applications, plus a new basic Xamarin Forms project template.

Each of the templates brings something special for different developers. You can still go with the traditional flat "Official" template, or one that includes PropertyChanged.Fody with projects and tests separated into src and tests folders. You can also take advantage of the powerhouse QuickStart Template or the App Center Connected App. Both of these provide the ability to set up a project in VSTS and automatically configure a Build in App Center.

Tools

To start, there is some integrated tooling for all of your Xamarin projects to enable support for the Mobile.BuildTools; you can also connect an existing project to an app in App Center, and even get some quick links to the Prism Docs, GitHub issues, and StackOverflow. Over time you can expect to see additional tooling for App Center, and refinements to do more with VSTS and better expand on your ability to get started with Unit and UI Tests.

Get started today by making sure Visual Studio Mac is up to date, and then simply install the Prism Template Studio and Developer Toolkit from the Extension Manager.

Prism 7.1 Preview 3

Today we released the 3rd Preview for Prism 7.1. This is a very significant release for us and contains some very exciting changes.

Forms Dependency Resolver

At Build we released a special preview for Xamarin Forms. In that preview we released the much awaited ability to use your Application's DI Container to resolve types inside of Xamarin Forms such as your Renderers or Platform Effects. It was a great feature but there were some issues caused on Android by the transition from a default constructor to one that requires the Android Context. Preview 3 fixes this by adding a specific Android target to each of the DI packages to handle passing the Context to the Container while resolving your Renderers.

We asked the community, and while people were very torn on the subject, the overwhelming consensus was that using the DependencyResolver should be configurable. As a result we've updated PrismApplication's constructor to accommodate this. First we removed the optional parameter for IPlatformInitializer and simply provided both a default constructor and one that takes IPlatformInitializer. Second we added a new constructor that takes both IPlatformInitializer and a boolean to control whether we should set the DependencyResolver. As you may have guessed, both of the backwards-compatible constructors call the new constructor, and by default will pass false to prevent PrismApplication from setting the DependencyResolver.
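In practice, opting in looks roughly like the sketch below. The second constructor argument is the boolean described above; I'm passing it positionally and describing it in a comment rather than guessing at the exact parameter name, so check the 7.1 source or IntelliSense for the real signature:

public partial class App : PrismApplication
{
    // Passing true as the second argument asks Prism to set the Forms DependencyResolver;
    // the default constructors pass false and leave the resolver untouched.
    public App(IPlatformInitializer initializer)
        : base(initializer, true)
    {
    }

    protected override void OnInitialized() { /* navigate to your first page */ }

    protected override void RegisterTypes(IContainerRegistry containerRegistry) { /* registrations */ }
}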

Just as with Preview 2, you can still override SetDependencyResolver in order to provide your own logic.

Known Issues

FFImageLoading is a very popular image handling library. Unfortunately the CachedImageRenderer has 2 completely unnecessary constructors; they will never be used by Xamarin Forms and they only serve to confuse a DI Container. By default most DI containers will attempt to resolve a type based on the constructor with the most arguments. In the event that more than one constructor exists with the same "highest" argument count, it will select the first one. This is the case with the CachedImageRenderer, which results in the container attempting to resolve the Renderer with CachedImageRenderer(IntPtr, JniHandleOwnership).

In order to handle this you will need to add a specific registration for the Renderer in your Android Initializer. This will look different based on which container you are using. If you're using DryIoc, you can easily add either of the following examples to fix the issue:

public class AndroidInitializer : IPlatformInitializer
{
    public void RegisterTypes(IContainerRegistry containerRegistry)
    {
        containerRegistry.GetContainer().Register<CachedImageRenderer>(made: Made.Of(() => new CachedImageRenderer(Arg.Of<Android.Content.Context>())));
        // OR with Reflection
        containerRegistry.GetContainer().Register<CachedImageRenderer>(made: Made.Of(typeof(CachedImageRenderer).GetConstructor(new[] { typeof(Android.Content.Context) })));
    }
}

Modularity

Modularity has been on my personal hit list for a while now, and Preview 3 makes some major changes to help accomplish the final goal of aligning the Modularity API between WPF and Xamarin.Forms as well as making it available for UWP. To start with we've moved the Modularity Exceptions from WPF to Prism.Core, along with ModuleInfo, IModuleInitializer, and IModuleManager. Among the changes, this means that Xamarin.Forms apps will be able to listen for the ModuleLoaded event and capture any exceptions that may occur during the Module's Initialization. 
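As a hedged sketch of what that enables (the event and argument names here come from the WPF API that was moved, so verify them against your Prism 7.1 build), a Forms app could watch module initialization like this:

using System;
using Prism.Modularity;

public class ModuleLoadWatcher
{
    public ModuleLoadWatcher(IModuleManager moduleManager)
    {
        // Raised once per module; Error is populated if the module's initialization threw
        moduleManager.LoadModuleCompleted += (sender, e) =>
        {
            if (e.Error != null)
                Console.WriteLine($"Module '{e.ModuleInfo.ModuleName}' failed: {e.Error.Message}");
        };
    }
}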

For WPF developers defining their ModuleCatalog in XAML this means a break, as ModuleInfo is now in Prism.Core. Rest assured though that some care has been taken to ensure that any API changes are additive and beneficial. An example of this is that the constructors from Prism.Forms' version of ModuleInfo were added to ModuleInfo in Prism.Core. For Prism.Forms developers a bigger break in ModuleInfo was introduced by changing ModuleType from Type to string. While I say bigger break, this should only affect developers who are directly referencing ModuleInfo's ModuleType property. Since the Modularity API in Prism.Forms favors generics, this should have a minimal impact for most developers.

XAML Navigation

Yep, I said it... This was an amazing idea and PR that came from the community. Over the past week I've had a chance to dogfood this one, and I have high hopes for what this will empower developers to do. Up until now you would need to have a ViewModel, inject the NavigationService, create a command with an action that uses the NavigationService to navigate to some other page, and then bind that command to some Button. Now with XAML Navigation you can simply have:

<Button Text="Continue"
        Command="{prism:NavigateTo ViewB}" />

Just as an example, this could be used to completely eliminate the ViewModel on a landing page that simply prompts the user to continue. This could be very useful for pages used in OnBoarding new users where you are explaining how to do something in your app. There is full documentation on this feature in the Prism Docs. Be sure to read more there on how to use NavigationParameters, which again makes it easier to handle scenarios where you are trying to Navigate based on a Cell in a ListView or a RepeaterView that so many of us have implemented.
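For contrast, here's a rough sketch of the ViewModel plumbing described above that XAML Navigation now lets you skip entirely (the class and command names are purely illustrative):

using Prism.Commands;
using Prism.Navigation;

public class LandingPageViewModel
{
    private readonly INavigationService _navigationService;

    public LandingPageViewModel(INavigationService navigationService)
    {
        _navigationService = navigationService;
        ContinueCommand = new DelegateCommand(async () => await _navigationService.NavigateAsync("ViewB"));
    }

    // Bound to the Button's Command in XAML
    public DelegateCommand ContinueCommand { get; }
}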

Additional Notes

Perhaps one of the first things that developers have had to learn with Prism.Forms is that you must name the INavigationService parameter 'navigationService' in your constructor, otherwise it won't be injected into your ViewModel. While that hasn't been the case in DryIoc for a while, this is now a thing of the past for Autofac and Unity as well.
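In other words, a constructor like the sketch below, which previously only resolved correctly with DryIoc, now works with Autofac and Unity too:

using Prism.Navigation;

public class MainPageViewModel
{
    // The parameter no longer has to be named 'navigationService' to be injected
    public MainPageViewModel(INavigationService navService)
    {
        // ...
    }
}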

While you may not have heard of this one before, SourceLink is an amazing new tool for OSS libraries to improve developers' debugging experience. Sadly this only works in Visual Studio on Windows and is only in the Backlog for Mac for now, but for those working in Visual Studio 2017 this will let you step into Prism's code while you debug.

Last but certainly not least. A few months ago Oren Novotny began talking to me about signing NuGet packages. After hearing what he had to say, I had to agree wholeheartedly that it was something we should do for our user base. Beginning with Preview 3, we are now signing each NuGet package as part of our release pipeline. This means that when you consume our packages you will be able to verify that the package actually comes from the Prism team and has not been altered between us and you.

Be sure to try out the new Prism 7.1 preview today and let us know what you think. Rest assured there is more great stuff to come.

Prism 7.1 Preview 1

Maintaining a library can be exceptionally difficult. As time progresses new demands arise that weren't there when the API was first created. Sometimes simple workarounds can be found to prevent breaking library consumers when they upgrade. Sometimes the changes are no brainers that have no negative effects. Sometimes changes simply aren't made because the potential breaks are too risky. Other times the benefits simply outweigh the break and changes are made.

Prism 7.1 is largely the result of changes that the Prism Team has come to realize had to be made. As part of the overall Prism 7.X effort, the team has been working on bringing the API closer together across each platform target where possible. Currently this is perhaps most evident with the introduction of the Prism.Ioc namespace allowing developers to more easily port from one DI Container to another, and even create Prism Modules that are sharable across projects with different DI Containers.

In this release we have made some major changes to better unify the API between Xamarin Forms and our ongoing work with Jerry Nixon to bring Template 10 to Prism for UWP. This effort represented the need to create a binary incompatibility, a need to create some breaking changes, and an opportunity to greatly improve the API for Xamarin Forms developers. So what are the changes? For starters we've migrated most of the Prism.Navigation namespace from Prism.Forms to Prism.Core. After a lot of deliberation we ultimately decided that these changes should not be available to WPF developers, as it just doesn't make sense for WPF applications.

In addition to the binary incompatibility caused by moving the classes from one binary to another, this creates a secondary break in that, even though these types now live in Prism.Core, the navigation API will NOT be supported for WPF developers.

I mentioned that there are breaking changes and some opportunities for improvements as well. The break that you will encounter should be fixable with a simple Find/Replace in your IDE or text editor, as NavigationParameters is now INavigationParameters, which changes the method signatures for INavigatingAware, INavigatedAware, INavigationAware, IConfirmNavigation, IConfirmNavigationAsync, and INavigationService. While that may provide you with some unique opportunities for testing, that isn't the exciting change. The exciting change is that the return type of INavigationService has gone from a simple Task to Task<INavigationResult>. Why is that so great? Well for one thing, if the Navigation failed for some reason you'll have access to a Boolean to more easily execute that logic. It's also great because until now the NavigationService could make it hard to determine what type of exception may have been thrown. INavigationResult fixes this by returning the actual exception that was thrown, allowing you greater control over what to do with it.

var result = await _navigationService.NavigateAsync("BadPage");

if(!result.Success)
{
    await _pageDialogService.DisplayAlertAsync(result.Exception.GetType().Name, result.Exception.Message, "Ok");
}


Perhaps my single favorite improvement in Prism 7.1 for Xamarin Forms developers is the inclusion of the ContainerProvider. This was born out of a desire to be able to declare types, such as a TypeConverter, that may rely on some service in your application. The ContainerProvider allows you to declare, in XAML, types that do not have a default constructor and have any of your application's services injected into them.

<?xml version="1.0" encoding="UTF-8" ?>
<ContentPage
    xmlns="http://xamarin.com/schemas/2014/forms"
    xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
    xmlns:prism="clr-namespace:Prism.Ioc;assembly=Prism.Forms"
    xmlns:converters="using:Prism.Forms.Tests.Mocks.Converters"
    Title="{Binding Title}"
    x:Class="Prism.DI.Forms.Tests.Mocks.Views.XamlViewMock">
    <ContentPage.Resources>
        <ResourceDictionary>
            <prism:ContainerProvider x:TypeArguments="converters:MockValueConverter" x:Key="mockValueConverter" />
        </ResourceDictionary>
    </ContentPage.Resources>
    <Entry x:Name="testEntry"
           Text="{Binding Test,Converter={StaticResource mockValueConverter}}" />
</ContentPage>
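For reference, a converter like the MockValueConverter referenced above might look something like this sketch; ISomeService is a hypothetical application service standing in for whatever your converter actually needs:

using System;
using System.Globalization;
using Xamarin.Forms;

// Hypothetical application service the converter depends on
public interface ISomeService
{
    object Transform(object value);
}

public class MockValueConverter : IValueConverter
{
    private readonly ISomeService _service;

    // No default constructor; ContainerProvider resolves this through your DI container
    public MockValueConverter(ISomeService service)
    {
        _service = service;
    }

    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        => _service.Transform(value);

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => value;
}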

Unity

Unity has been one of the most popular containers for Prism developers. I have no doubt that this has a lot to do with the fact that it is the container Brian has used for years, both in his demos and as the container most widely available in the Prism Template Pack. The Unity team has made some major design changes in Unity 5.X. For Prism developers using Unity, we have long had a dependency on the Unity NuGet package. In its current state, this actually broke Prism.Unity.Forms for netstandard1.0.

The Unity team has redefined the Unity NuGet package to be an all inclusive package, which presents several problems. For Xamarin Forms developers, it introduces references to 6 more assemblies than what you actually need or would use. For WPF developers it creates a secondary, hidden reference to CommonServiceLocator, as well as the inclusion of 5 more assemblies than what you need or Prism uses. Continuing to depend on this NuGet package presents an additional issue: it could continue to break Prism developers. To resolve this, Prism 7.1 has changed its target from Unity to Unity.Container. This change will be unnoticeable to anyone who uses the new PackageReference to include NuGet packages in their projects, particularly when you have your dependency on Prism.Unity or Prism.Unity.Forms and not Unity itself. For all other Unity developers, you should uninstall Unity from your projects before upgrading to Prism 7.1.

.NET Standard & the New Project Format for Xamarin Developers

.NET Standard has really changed the ballgame for .NET developers, in large part because the entire project system has experienced a revamp. Lately I've found myself really encouraging developers to update their PCL libraries to .NET Standard 2.0. For developers who haven't made the jump it's easy to find yourself saying "no we can't do it". In reality it doesn't take as much effort as you think to update your projects. Why should you update your projects though? Well for starters PCL is painful: you look up how to do something only to find out that it's not supported, and sometimes there's no workaround. With .NET Standard the missing APIs that led to weird workarounds are a thing of the past.

Upgrading

Upgrading really isn't as hard as you may think. For starters your csproj is going to start out about as simple as:

<Project Sdk="Microsoft.NET.Sdk" ToolsVersion="15.0">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

Then of course we need to start adding in your dependencies. Now this is where it gets "Hard". It's hard because it means you need some familiarity with your project. You need to know what the top level dependencies of your project are. For example, if you're using Prism there are generally 3 Prism packages you're referencing: Prism.Core, Prism.Forms, and Prism.{Some Container}.Forms. It's only the last one that you actually need to reference in the new project format. You can of course add this from either Visual Studio or Visual Studio Mac, or update it manually so that your project file now looks like:

<Project Sdk="Microsoft.NET.Sdk" ToolsVersion="15.0">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Prism.DryIoc.Forms" Version="7.0.0.396" />
  </ItemGroup>
</Project>

Assuming you wanted to get started with Prism for Xamarin Forms this would be all you would actually need as all three Prism packages are automatically brought in along with Xamarin Forms. Now let's say that you wanted to target a newer version of Xamarin Forms than 2.5.0.122203, such as the 3.0 preview that's now available. You simply need to add a new PackageReference for that version of Xamarin Forms or install it in the IDE's Package Manager.

That may seem too easy, and it is. Of course you need to make some more changes. You'll need to find the packages.config or project.json and delete those files... If you have a standard Properties/AssemblyInfo.cs you'll need to go ahead and send that one to the trash as well. With that your project is upgraded, and you're wondering why you didn't do this sooner....

Multi Targeting

Around 5 years ago I first started trying to Multi-Target. My earliest attempts were pretty bad, with a csproj file for each framework I wanted to target, all part of the same solution, and it generally resulted in build errors due to file locks as I had no clue how the build system worked back then. Honestly I've never found much documentation that made it very easy, and while I eventually figured out I could do lots of MSBuild trickery to make it work, and then manually develop a nuspec to pack my library, it was always really painful. The new project system gives us some real advantages for Multi-Targeting that make it a real breeze.

I suppose though I should start with why on earth you should multi-target... and when would you want to? If you're a Xamarin developer chances are you want to Multi-Target. Internally and for all of my clients I generally start off with a common library. This is something that is really helpful to give me extensions, and custom controls that I may want to use across all of my apps, or components like a Prism Module. A lot of that code is truly portable and I could easily handle it with a simple netstandard2.0 class library. However sometimes I'm implementing Platform Effects and Renderers for my controls that instantly require that I have a native binary for my iOS and Android projects. This is where multi-targeting really becomes very powerful. By Multi-Targeting I maintain a single project which generates a single binary, native to the platform I need to target. If we expand on the basic project structure we saw above and update our csproj to look like the following, we can target both .NET Standard 1.3 & 2.0, along with Android, iOS, Mac, and UWP. It's worth noting that the non .NET Standard targets are really getting a lot of help due to the MSBuild.Sdk.Extras package from Oren Novotny.

<Project Sdk="MSBuild.Sdk.Extras/1.5.4" ToolsVersion="15.0">
  <PropertyGroup>
    <TargetFrameworks></TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' == 'Windows_NT' ">netstandard1.3;netstandard2.0;Xamarin.iOS10;Xamarin.Mac2.0;MonoAndroid80;uap10.0.16299</TargetFrameworks>
    <TargetFrameworks Condition=" '$(OS)' != 'Windows_NT' ">netstandard1.3;netstandard2.0;Xamarin.iOS10;Xamarin.Mac2.0;MonoAndroid80</TargetFrameworks>
  </PropertyGroup>
  <ItemGroup>
    <Compile Remove="**/Platform/**/*.cs" />
    <None Include="**/Platform/**/*.cs" />
  </ItemGroup>
  <ItemGroup Condition=" $(TargetFramework.StartsWith('MonoAndroid')) ">
    <None Remove="**/Platform/Droid/**/*.cs" />
    <Compile Include="**/Platform/Droid/**/*.cs" />
  </ItemGroup>
  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.iOS')) ">
    <None Remove="**/Platform/iOS/**/*.cs" />
    <Compile Include="**/Platform/iOS/**/*.cs" />
  </ItemGroup>
  <ItemGroup Condition=" $(TargetFramework.StartsWith('Xamarin.Mac')) ">
    <None Remove="**/Platform/macOS/**/*.cs" />
    <Compile Include="**/Platform/macOS/**/*.cs" />
  </ItemGroup>
  <ItemGroup Condition=" $(TargetFramework.StartsWith('uap10.0')) ">
    <None Remove="**/Platform/UWP/**/*.cs" />
    <Compile Include="**/Platform/UWP/**/*.cs" />
  </ItemGroup>
</Project>

So what's going on here anyway? Well for starters we're establishing some conventions for our code. We are saying that anywhere in our project that we have a folder named Platform we are going to change the inclusion of those files from Compile to None. This means that the IDE will display our code while MSBuild will ignore our code. Then, we start conditionally adding code back in so that when MSBuild is compiling for iOS and it encounters code that has a path that includes Platform/iOS, that code will be added back in for compilation. 

Generating a NuGet

If you're trying to generate a library that you can easily consume in your projects, or if you're trying to make it available for the community at large, these new SDK style projects make generating a NuGet easier than ever. You just need to worry about what targets you want to compile for, and the NuGet largely takes care of itself with very little that we actually need to add. It's really just a few properties that we need to add to our project. Of course, if you take a look at any of my projects you'll notice a recurring theme: most of my NuGet configurations aren't even in the project file at all. Along the way I've come to realize the power of a file called Directory.build.props. This is a little bit of a magic file. If it exists anywhere from the solution folder down to your project folder it will automatically be picked up by MSBuild.

Looking at a real world example

Prism has more than 15 NuGet packages that have to be generated on every build. Honestly for WPF we still use the older style projects, which is a painful process, but the rest of the projects all share a lot of common logic.

  • If a project is a test project we don't want it to generate a NuGet.
  • The package authors are always going to be the members of the Prism Team.
  • The source is always located on GitHub in the same repository.
  • We always want to provide symbols packages.

Without using the Directory.build.props in our solution directory we would have to replicate this information in every single project file. 

Setting your project up for NuGet Packaging

If you want to pack your project all you really need to do is to add the following Directory.build.props to your project:

<Project>
  <PropertyGroup>
    <Product>$(AssemblyName) ($(TargetFramework))</Product>
    <NeutralLanguage>en</NeutralLanguage>
    <Authors>Your Name Here</Authors>
    <VersionPrefix>1.0.0</VersionPrefix>
    <VersionPrefix Condition=" '$(BUILD_BUILDID)' != '' ">$(VersionPrefix).$(BUILD_BUILDID)</VersionPrefix>
    <IS_PREVIEW Condition=" '$(IS_PREVIEW)' == '' ">false</IS_PREVIEW>
    <IS_RELEASE Condition=" '$(IS_RELEASE)' == '' ">false</IS_RELEASE>
    <VersionSuffix>ci</VersionSuffix>
    <VersionSuffix Condition=" $(IS_PREVIEW) ">pre</VersionSuffix>
    <VersionSuffix Condition=" $(IS_RELEASE) "></VersionSuffix>
    <PackageProjectUrl>https://github.com/USER/PROJECT_NAME</PackageProjectUrl>
    <PackageLicenseUrl>https://github.com/USER/PROJECT_NAME/blob/master/LICENSE</PackageLicenseUrl>
    <RepositoryType>git</RepositoryType>
    <RepositoryUrl>https://github.com/USER/PROJECT_NAME</RepositoryUrl>
    <IncludeSymbols>True</IncludeSymbols>
    <IncludeSource>True</IncludeSource>
    <PackageOutputPath>$(MSBuildThisFileDirectory)Artifacts</PackageOutputPath>
    <PackageOutputPath Condition=" '$(BUILD_ARTIFACTSTAGINGDIRECTORY)' != '' ">$(BUILD_ARTIFACTSTAGINGDIRECTORY)</PackageOutputPath>
    <IsTestProject>$(MSBuildProjectName.Contains('Test'))</IsTestProject>
    <GenerateDocumentationFile Condition=" !$(IsTestProject) ">true</GenerateDocumentationFile>
    <GeneratePackageOnBuild Condition=" !$(IsTestProject) ">true</GeneratePackageOnBuild>
  </PropertyGroup>
</Project>