
Image of an escalator going towards the light

This post is part of the Xamarin Month, which is about community and love. Caring about a nice UI and user experience is one way a product team or developer can show love to its users. So let's focus on a small detail which always makes me smile when done right 🙂

Xamarin.Forms apps have a reputation for taking their time to load. While quicker loading times are always preferable and an excellent place to start, sometimes there is no way around letting the user wait while a background process does its work. However, there is an alternative to speed: distraction. Distraction is what airlines do with their onboard entertainment, and it is what some apps like Twitter do on startup with an animated logo. Since Xamarin apps fall into the latter category, let's see how we can improve our startup experience with a fancy animated Xamarin hexagon.

However, before we get started with the animation part, I'm afraid we have to take a quick look into one of our platform projects - into the Android project that is.

The empty feeling when starting Xamarin.Forms on Android

Have you ever wondered why the startup experience of your Xamarin app on Android differs from iOS or UWP? While we are instantly greeted with a logo when starting our Xamarin.iOS app, starting the same app on Android leaves us staring at a blank screen. Why is that?

Screenshot of the blank startup screen on Android

Just to point it out: this is not the fault of Xamarin.Forms; it is a difference between the two platforms. While iOS forces you to provide a startup storyboard (a single view), there is no such thing on Android - at least, so it may seem at first. So where does this blank screen come from? You probably already know that the starting point of a Xamarin.Forms app on Android is MainActivity.cs, or to be more precise, the one activity which has the following attribute set:

[Activity( ... Theme = "@style/MainTheme", MainLauncher = true, ... )]
public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
{
	// ...
}

One attribute that is being set is the theme. This theme is where Android "draws its inspiration" for the splash screen. We can find it defined under Resources\values\styles.xml. Now to replicate the startup image, we first have to define a drawable in Resources\drawable\splash_screen.xml along the following lines:

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
  <item>
    <color android:color="@color/colorPrimary"/>
  </item>
  <item>
    <bitmap
        android:src="@drawable/SplashScreen"
        android:tileMode="disabled"
        android:gravity="center" />
  </item>
</layer-list>

Now we can modify styles.xml by adding a new style with the following lines:

<?xml version="1.0" encoding="utf-8" ?>
<resources>

  <!-- ... -->

  <style name="SplashTheme" parent ="Theme.AppCompat.Light.NoActionBar">
    <item name="android:windowBackground">@drawable/splash_screen</item>
    <item name="android:windowNoTitle">true</item>
    <item name="android:windowFullscreen">true</item>
  </style>
</resources>

If we start the app now, we see the Xamarin logo during startup. Unfortunately, it does not go away when we get to our Hello World page in Xamarin.Forms. The reason is that we have overridden the default style, which is also used by our Xamarin.Forms app. However, we can fix this by adding an activity solely to display the new splash style; once the new SplashActivity.cs is rendered, we switch over to the existing MainActivity.cs. The MainActivity.cs uses the original style and starts the Xamarin.Forms part of our app.
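
A minimal sketch of such a splash activity could look like the following (the class name and the NoHistory flag are assumptions based on the description above; the MainLauncher flag moves from MainActivity to this new activity):

[Activity(Theme = "@style/SplashTheme", MainLauncher = true, NoHistory = true)]
public class SplashActivity : global::Android.Support.V7.App.AppCompatActivity
{
    protected override void OnResume()
    {
        base.OnResume();

        // Hand over to the Xamarin.Forms activity as soon as the splash activity is shown.
        StartActivity(new Android.Content.Intent(this, typeof(MainActivity)));
    }
}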

Screenshot of the native splash screen on Android

If we run the app now, we see a splash screen which disappears once the app has started up. So now that we have Android on par with iOS and UWP, let's shift gears and implement that bouncy startup animation.

Bouncy startup animation

Drawing some inspiration from the Twitter app, let's make our logo bounce similarly. We implement the animation of the hexagon in Xamarin.Forms. In a real app, the animation could buy us some time while we are starting up. So what we need is again a splash screen, but this time as a Xamarin.Forms view. The XAML displays an image in the centre:

<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="CustomSplash.SplashPage"
             BackgroundColor="#2196F3">
    <ContentPage.Content>
        <Grid>
            <Image x:Name="SplashIcon"
                   HorizontalOptions="Center"
                   VerticalOptions="Center"
                   Source="SplashScreen.png" />
        </Grid>
    </ContentPage.Content>
</ContentPage>

The XAML is ready. On its own, however, this would merely extend the static native splash screens. The animation takes place in the code-behind. We can override the OnAppearing method and add some animation logic to it:

await SplashIcon.ScaleTo(0.5, 500, Easing.CubicInOut);
var animationTasks = new[]{
    SplashIcon.ScaleTo(100.0, 1000, Easing.CubicInOut),
    SplashIcon.FadeTo(0, 700, Easing.CubicInOut)
};
await Task.WhenAll(animationTasks);

First, we shrink the image, then we expand it while simultaneously letting it fade to transparent. Combining the animations gives our app a nice fluid effect. While we could now put this puppy in a loop and let it run forever and ever and ever and... well, most probably we only want to show it once and then move on to our main page. The following lines achieve this:

Navigation.InsertPageBefore(new MainPage(), Navigation.NavigationStack[0]);
await Navigation.PopToRootAsync(false);

The above lines insert the main page as the first page in the navigation stack; in other words, we insert the main page before the splash screen. Then we pop to root so the splash screen is no longer present in the navigation stack. So while the lines might look a bit odd at first, they prevent the user from navigating back to the splash page and allow the splash page to be garbage collected - bottom line, all the things we want to do with a splash screen once it has served its purpose.
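
Putting it all together, the code-behind of the splash page might look something like this minimal sketch (names taken from the XAML above; MainPage is assumed to be the app's first real page and the splash page is assumed to be wrapped in a NavigationPage):

public partial class SplashPage : ContentPage
{
    public SplashPage()
    {
        InitializeComponent();
    }

    protected override async void OnAppearing()
    {
        base.OnAppearing();

        // Shrink the logo, then blow it up while fading it out.
        await SplashIcon.ScaleTo(0.5, 500, Easing.CubicInOut);
        var animationTasks = new[]
        {
            SplashIcon.ScaleTo(100.0, 1000, Easing.CubicInOut),
            SplashIcon.FadeTo(0, 700, Easing.CubicInOut)
        };
        await Task.WhenAll(animationTasks);

        // Swap in the main page and pop back to it so the splash page can be collected.
        Navigation.InsertPageBefore(new MainPage(), Navigation.NavigationStack[0]);
        await Navigation.PopToRootAsync(false);
    }
}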

The resulting app looks something like this:

Animated splash screen on iOS

I am a firm believer that these little things can go a long way and show your user right from the get-go that you care about your app. While the native splash screen is a good start, the animated load screen can buy you a bit of extra time to start up your app while distracting the user. You can find the entire demo app on GitHub.

Be sure to check out the other blog posts in the Xamarin Universe and happy coding!


Image showing Wall-E in front of a yellow VW bus with taxi stripes

I have taken quite a liking to Fabulous - a wrapper around Xamarin.Forms that lets you write functional UIs with F#. When first looking at the project, I noticed that it was being built on AppVeyor and Travis. I asked myself: why use two CI systems for compiling one project? After some further digging, I found out that there are no hosted macOS agents on AppVeyor. Travis, on the other hand, did come with agents for Windows and macOS but did not have the Xamarin toolchain installed on them. Installing the Xamarin toolchain on every run led to a build time of over 30 minutes. Since Azure DevOps supports building on Windows and macOS, I thought I would give it a go and set up a pipeline to build Fabulous - I mean, how hard can it be? Well, hard enough to write a blog post to sum up the steps to get over the pitfalls.

TLDR: How to run your FAKE scripts on Azure DevOps

Fabulous uses FAKE to execute the build, run the tests and create the NuGet packages. FAKE is a powerful tool for writing build scripts. It is also a .Net Core CLI tool which is designed to be installed and executed from the command line, so it should be a great fit for running on any build server.

Installing FAKE

Azure DevOps build agents do not come with FAKE preinstalled. Since FAKE is a .Net Core CLI tool this is no problem. The following command should solve this issue:

dotnet tool install fake-cli -g

Unfortunately, executing FAKE after installation fails. This is because the installation directory on the Azure DevOps build agents differs from the standard installation location of .Net Core - why, you ask? Well, the answer given is security. On Windows we can circumvent this by installing FAKE into the workspace directory:

dotnet tool install fake-cli --tool-path .

Under macOS (and Linux) this approach still fails. The suggested solution is to set DOTNET_ROOT. I ended up with the following lines to be executed on the macOS agent:

export DOTNET_ROOT=$HOME/.dotnet/
export PATH=$PATH:$HOME/.dotnet/tools:/Library/Frameworks/Mono.framework/Versions/Current/Commands
dotnet tool install fake-cli -g

On Linux the approach had to be adapted again - go figure. I ended up with these lines:

export PATH=$PATH:$HOME/.dotnet/tools:/Library/Frameworks/Mono.framework/Versions/Current/Commands
dotnet tool install fake-cli -g

Now you should be able to run your FAKE script on Azure DevOps.

Using NuGetFallbackDirectory

This part is not directly related to FAKE but is something I stumbled over while running on Azure DevOps. One test script was referencing the NuGet packages via the global NuGetFallbackDirectory and was looking for them under the default location. Under macOS the location is in the user's home directory, so adapting the path as follows did the trick:

let tfsEnvironment = Environment.GetEnvironmentVariable("TF_BUILD")
if (String.IsNullOrEmpty(tfsEnvironment)) then
    "/usr/local/share/dotnet/sdk/NuGetFallbackFolder"
else
    let homepath = Environment.GetEnvironmentVariable("HOME")
    Path.Combine(homepath, ".dotnet/sdk/NuGetFallbackFolder")

Note that the variable TF_BUILD is expected to be set only on TFS/VSTS/Azure DevOps. This allows the script to fall back to the default location should it be executed on a developer's machine.

But why even bother?

What is the motivation for migrating from a working CI setup to another? Are you doing this because you are a Microsoft MVP?

These were questions I got when talking with colleagues about my endeavours to build Fabulous on Azure DevOps. I think AppVeyor and Travis are great tools, and they have shown that they are up to the task of building and testing Fabulous. Apart from being curious how hard it could be, there were two reasons why I wanted to try to migrate the build to Azure DevOps:

  1. Merging the builds - having two places doing one thing always comes with overhead.
  2. Seeing how much the build time would be reduced by not having to install Xamarin.

So here is a comparison between the build times before and after:

CI Platform  | Agent OS | Build & Test Time (minutes)
Travis       | macOS    | ~30-32
Azure DevOps | macOS    | ~13-14
AppVeyor     | Windows  | ~4
Azure DevOps | Windows  | ~6

Keep in mind that the macOS and Windows builds run in parallel. So in the case of AppVeyor and Travis, the resulting build time would be 30-32 minutes. With Azure DevOps this can be brought down to 13-14 minutes.

I would argue that merging the two builds into one and cutting the build time roughly in half are good arguments for why Azure DevOps seems to be a better fit for Fabulous. Then again, there was some pain in getting the .Net CLI tools running, which I hope the Azure DevOps team will solve in the future - being products from the same company and all, cough.

Another aspect was having a build on Linux in the future; since Fabulous supports GTK as of 0.30, it would be nice to also compile it on Linux. At the time of writing there were still a few kinks in the build process of Fabulous, but nothing that can't be solved in the future.

Thanks

Thank you, Timothé Larivière and Stuart Lang, for all the tips and hints along the way.


Title Image showing a factory

Azure DevOps, formerly known as Visual Studio Team Services or VSTS for short, allows you to create automated release pipelines for all different kinds of projects. One of the nice things is that you get free build time for open-source projects. So why not give it a spin and see if I can set up the build pipeline for my open-source project PureLayout.Net? The PureLayout.Net library is a wrapper around the PureLayout iOS library, written in Objective-C, which allows you to quickly lay out your UI in code. So it differs a bit from your standard Xamarin project, as it involves building the native project, creating the bindings to C# and then packaging it all up in a NuGet package. Since this is an iOS-only project, we will, of course, have to build it on a Mac.

Choosing a build agent

Good thing then that Azure DevOps (could we all agree on ADO for this in the future?) provides a Mac build agent hosted on Azure. Now the question left is: will the hosted Mac provide all the tooling that we need? If not, we would have to fall back on creating our own Mac build agent, i.e. renting one from a third party. For PureLayout.Net we require the Xamarin toolchain, Xcode and Objective Sharpie to be installed.

Luckily, the agent's OS and installed tools are all listed on GitHub. So we see that the Xamarin toolchain and Xcode are there. Unfortunately, Objective Sharpie is not installed, and while this is a bit of a setback, what we do see installed on the hosted macOS agent is Homebrew.

Homebrew

Homebrew is THE package manager for macOS, and it allows open-source projects to provide their tools as packages. A quick search on the interwebs shows there is a cask for Objective Sharpie - yes, they are going all the way with that brewing analogy. So we can install the tool before we run the build. Let's go ahead and set up the build with the hosted macOS agent on ADO.

Hosted vs setting up your own build agent: There are a few things to consider when choosing between setting up an agent on your own and registering it with ADO, or opting for a hosted build agent. While it always depends on your situation, generally speaking, setting up your own build agent gives you more control over the setup and the installed tools. You decide when updates happen, and you have the option of having databases, files and so forth pre-set up and ready for reuse. On the other hand, you must maintain your agent, and while this might work okay at first, consider having to maintain multiple agents. How do you ensure that the agents are all set up identically? Minor differences in the setup could lead to an unstable build infrastructure, which is something no one wants. If you do not have any requirement that prevents you from using hosted agents, I would recommend going the easy route and using them. Hosted agents come with the added benefit of being able to adjust the number of agents according to your workload - if you are a consultancy, this can be a huge bonus since project/build load might vary from time to time.

Configuring the build

The browser is all you need for setting up your build configurations, or pipelines as ADO calls them these days. While ADO does offer templates for specific builds such as Xamarin.Android, there is no template for Xamarin.iOS wrapper projects - other than the blank template, that is. The first step is to connect the repository, which in the case of PureLayout.Net is on GitHub. The first time around, one has to link GitHub with the ADO account by following the instructions.

When building PureLayout.Net manually, the following steps create a new version of PureLayout.Net:

  1. Execute make
  2. Build the solution
  3. Pack the artefact into a NuGet package
  4. Enjoy the new NuGet package

The Makefile creates the native binary (and generates the required wrapping code). To create the wrapper, we require Objective Sharpie, which we can install via Homebrew. To execute all the required commands in ADO, a Command Line build step with the following instructions is used:

echo install objective sharpie
brew cask install objectivesharpie
echo Performing make
make

If you ever want to go down the rabbit hole of creating your own wrapper project, be sure to check out this article by Sam Debruyn - after reading this post, of course.

Next up is building the solution of the wrapper project, which can be done with an MSBuild step by defining the path to the csproj file: PureLayout.Binding/PureLayout.Binding/PureLayout.Net.csproj - there is even a handy repo browser. Under Configuration, you can set the build configuration to Release, but instead of hard-coding it, consider using the environment variable $(BuildConfiguration). More about environment variables in a bit.

Screenshot of the MSBuild step configuration in the ADO web UI

Next up is packing the compiled output into a NuGet package. This is done by adding a NuGet build step, setting the command to pack, the path to the csproj file, and the output folder to the ADO environment variable $(Build.ArtifactStagingDirectory).

The final step is to publish the artefacts, i.e. the NuGet package. There is again a standard build step to use here called Publish Build Artifacts.

Are you still wondering about those environment variables above and how they come together? Environment variables are a great way to reduce duplicated, hard-coded configuration in build steps. There are two kinds of environment variables: those that you can define yourself and those that are predefined.

For getting started, or if you are exploring what ADO has to offer, the web UI is a great choice. However, if you talk with the grown-up DevOps engineers, they usually voice some concerns over maintainability or the lack of shareable definitions. One way to overcome these issues is to define the entire build in a YAML file.

Configuring the build with YAML

YAML Ain't Markup Language, or YAML for short, is the file format supported by ADO to store the build configuration alongside your code. While ADO also versions the build definition whenever you change the build steps in the UI, YAML definitions further allow you to create templates for other, similar projects - which can be real time savers.

When selecting the build agent, we can view the YAML generated out of the build steps defined via the web UI.

Screenshot of the YAML view of the build steps defined in the web UI

In the project's root folder, we can now create a file, e.g. builddefinition.yaml, which we can then check in to Git. Once the Git repo contains the YAML build configuration, we can create a new pipeline based on it. Unfortunately, there is currently no way to use the visual designer and the YAML configuration in the same pipeline.

Though not automatically exported, the build trigger can also be set in the YAML file. According to the docs - which include some inspiring samples - this is the resulting block for PureLayout.Net:

trigger:
  batch: true
  branches:
    include:
    - master
  paths:
    exclude:
    - README.md
pr:
  branches:
    include:
    - master
  paths:
    exclude:
    - README.md

The above configuration ensures that all pushes to master trigger a build. The pr section applies to pull requests. Note the neat trick you can do with paths, which allows you to prevent triggering a build should only the README.md or similar files change.

Lesson learnt: any configuration changes regarding Git still have to be done in the visual designer. In the case of PureLayout.Net, that meant ensuring that Git submodules are checked out.

Is YAML better than the visual ADO web-based editor? Well, it depends. If you are just getting started with setting up automated pipelines, the visual editor allows for faster results. It also provides a better experience when discovering what is available on ADO. However, if you are looking for a way to share configurations, store your build definition alongside your code and already know what you want to get done, YAML is probably the better solution for you.

You can see the full YAML configuration for PureLayout.Net here.

Done?

When I initially set out to automate the build for PureLayout.Net, it was because I wanted to reduce the hassle of ensuring that commits or PRs would not break anything. With the steps above, the manual steps left are testing the NuGet package, adjusting some metadata if needed and then deploying it to NuGet.org. Since it is a visual library, I feel comfortable manually "testing" the result - which means as much as looking at a screen and checking the layout with my own eyes. The manual release steps are also totally fine with me.

Nevertheless, you could automate a lot more. For instance, push the artefacts automatically to Azure Artifacts or other places, adjust the release metadata depending on various parameters or set the version number on the fly. Another possible area to automate would be the testing side, and, and, and... For me, I left it here - well, not entirely: I still published the build status to GitHub - and this blog:

Build Status



I recently upgraded my developer machine from a Surface Pro 3 to a Surface Book 2. I have been using it for a good month now and wanted to write down my experiences and how it holds up to the daily tasks I throw at it as a mainly mobile developer - at the office and during leisure time.

Disclaimer: I did not receive any money from Microsoft for writing this article, nor was I asked by them to do so. I am writing it for the sole reason that, at the time I was looking for a new dev device, I could not find an article on the SB2 from a mobile developer's perspective.

Now, when looking for a new computer, it always depends on what field you are active in. In my role as a (mobile) developer, team lead, occasional speaker and human being, I tend to use my device in multiple different ways. Further, I have always fancied the idea of owning one device that can handle all kinds of workloads, so while the Surface Book might not be the ideal tablet, it can be a tablet, which should eliminate the need for owning an additional one, e.g. an iPad or the new Surface Go. But more on that later.

The version I am using is the 13.5" model sporting the i7, 16 GB RAM, a 512 GB SSD and an NVIDIA GeForce GTX 1050. While not on the cheap side, it comes with the marketing statement of being the ultimate laptop.

Overall impressions

The 13.5" model comes with a beautiful screen. The resolution of 3000 x 2000 (267 PPI) is great, and the display is bright enough that you can also work outside if you want to. What I especially like is the 3:2 aspect ratio, which is great for developing, writing and being productive.

The keyboard is excellent to type on. It has a nice feel, the spacing of the keys is good, the key travel is pleasant, and I do not get the stares some MacBook Pro owners get for their clackidiclack keyboards. The only thing bothering me here is the backlighting of the keys - my biggest personal dislike of the machine. For one, the keys are unevenly lit, which is just odd for a device at this price. Further, there is no automatic sensor for turning the backlight on or off. You can set it manually to three brightness levels (or off). So far okay, I can live with that, but what I found irritating was that in some lighting environments it was quite hard to read the keys with the backlighting on, as it seemed to start matching the magnesium-silver colour of the keys. I have never had this issue with other laptops, but that might be because I have only had black keys so far. Anyway, if the keyboard backlighting is the worst thing about your machine, you know you are starting to get old, I guess.

The trackpad is okay. It is a glass trackpad and lets you get where you want to on the screen. Overall I still think that the MacBook Pro has a better trackpad, but I never feel the need to get a mouse when working with the machine in laptop mode. So nothing to worry about there.

The 16-hour battery life and the dedicated GPU do make the Surface Book 2 a touch heavier than other devices at 1.642 kilograms (for you imperialists, that's 3.62 lbs), especially compared to a Surface Pro. But the 13.5" model is still a portable machine which you can carry around in your bag, and the large battery means you can leave that bulky charger at home. The charger does come with a handy USB-A port which allows you to charge your phone/smartwatch/Kindle etc. without bringing additional chargers along.

Image of the Surface Book 2 charger with its USB-A port

Using it as my daily development machine

Being a 2-in-1 device is great, but at the end of the day, I primarily use it as a laptop and dev machine. During my day job at Noser Engineering, I work mainly on mobile solutions (mostly front-end) and other projects in the .Net space. So the daily load usually involves Visual Studio, a browser (Firefox or Edge), PowerShell (doing Git via a UI just feels wrong - that is, after forcing your brain to learn the commands), Outlook, Teams and, last but not least, Spotify or Groove Music (the latter when I feel in the mood for some Music to Code By).

I use the Android emulator quite a lot when working on mobile projects. The emulator tends to use quite a bit of RAM, and on my old device I was often struggling to get decent performance. But on the Surface Book 2 all is well. Having two USB-A ports means I do not have to worry when plugging in any dongle or accessory such as a LAN adapter, presenter stick, etc.

Having a touchscreen is a bonus when using emulators/simulators as you can use the app as intended with your finger. Plus making gestures with a mouse is cumbersome.

In some projects, I also have to fire up a VM for various tasks. While the 16 GB of RAM holds up to a certain point, I know many backend and C++ developers who need more. So keep that in mind, since you can't upgrade any of the hardware of the Surface Book 2 - at least iFixit convinced me of that with this clip.

Having a dedicated GPU is a great thing if you are into games or machine learning. Since I started to play around with machine learning, this allows me to experiment with ML on the go, which is nice and something a Surface Pro does not offer.

Being at your desk

While I do have to travel from time to time, I also do quite a bit of work at the office. At work, I have a 27" and a 24" Dell monitor (daisy-chained) and only have to connect the dock to my Surface. The dock works excellently with the Surface Book 2. I have also used it with a Dell docking station over the USB-C port, which also worked as expected. But since I already have a Surface Dock, I rarely use any USB-C dock, which would also mean slower charging.

I do not experience any problems with the displays (I used to have weird blackouts on the monitors with my Surface Pro 3 at times), and I can wholeheartedly recommend it as a productive setup.

Doing the meeting kabuki

No office life would be complete without the occasional meeting, right? So how does the Surface Book 2 fare when it comes to this office discipline? Really well: the keyboard is excellent for taking notes, and if you decide to add a Surface Pen, you will be able to doodle - erm, take notes - in meetings. And have I mentioned Face ID yet? Face ID lets you unlock your Surface Book 2 with your face - bet you did not see that one coming! No, but seriously, Face ID works pretty fast, with a delay of 1-2 seconds. When my device nods off during a meeting after a couple of minutes of inactivity, I used to have to type in my password. The added delay took so long that I gave up taking notes with my Surface Pen. But with Face ID it almost feels like the instant-on of a tablet, which makes the pen and OneNote the ideal combo for taking notes during a meeting.

These days a lot of meetings are remote - period. I am a huge proponent of using the camera during conf-calls since non-verbal communication is a thing. I was delighted to see that the Surface Book 2 sports a 5 MP front facing camera which is capable of streaming 1080p HD video.

Hooking up the Surface Book 2 to an external monitor works great. You will most likely require an adapter; I use a USB-C to VGA, HDMI and DVI adapter, which should solve most problems - fingers crossed. The only thing to keep in mind is the 3:2 aspect ratio. Since most projectors and monitors are 16:9, you will have a lot of blacked-out space when duplicating your display. Extending solves the problem but might be an issue if you are facing the people and not the secondary screen. I can live just fine with black bars.

On the Surface Pro 3, the fan was always on when connected to a second monitor. As soon as the camera was on and the screen was being shared, you could hear the fan go from silent to ready-for-takeoff. The Surface Book 2 stays nice and quiet - that is, if you are not training any neural net. In that case, you will get that what-are-you-doing-again glance from your spouse.

On the go

The Surface Book 2 has been a great travel companion. I tend to travel mostly by train and very rarely by plane. In either case, I like to use the idle time to get some stuff done, including writing emails, blog posts, presentations and often also code. The Surface Book 2 has never let me down on any of those tasks. The only thing to remember is that it might be a bit heavier than other laptops, but for me this is not an issue, and the fulcrum hinge allows you to carry it comfortably in your hand.

The 16-hour battery life means you will stay productive for at least half a day, even when doing some heavy workloads.

Downtime

During downtime, I tend to use the Surface Book 2 more in tablet mode or in view mode, which is excellent for streaming a movie while doing some household chores. The tablet is light but still gives you everything apart from the NVIDIA GPU, so you are holding a full-blown PC in your hands. The battery in the tablet is not as big as the one in the base; you can, however, connect the tablet part directly to a charger/dock connector if you want to.

Image of the Surface Book 2 in view mode

You quickly notice that there are not many well-designed apps on Windows 10 for tablet use. There are good (PDF) readers, Netflix, OneNote etc., but compared to iOS and Android there is still room for some well-thought-out and well-implemented apps. Which is a shame, since it would make a great tablet, and with Face ID it comes close to that instant-on feel.

Bottom line

I am happy with the Surface Book 2. It gets all my work stuff done and works well enough as a tablet that I do not feel the need to get another device for couch surfing and watching a series.

I would recommend it to friends and would buy it again. The most significant negative points from my end are the backlighting of the keyboard and the price tag. Then again, you do get great value for your money, in my opinion.


Title Image showing a library

When it comes to file handling and Xamarin.Forms, you can find all you need in the official documentation. However, when it comes to where the data should be stored, the documentation leaves some points open. Moreover, it might even lead to, dare I say it, your rejection from the App Store...

Also, even when writing cross-platform code with .Net Standard, where to store your data depends on the operating system (OS) the app is being executed on. So let's dive into the platforms which are primarily supported by Xamarin.Forms.

Xamarin.Essentials

The most comfortable way is using Xamarin.Essentials, which is currently still in preview and requires Android 7.1 or higher. However, if that does not make you blink, these are the options:

  • Local Storage which is also backed up.
  • Cache Storage which is, well, for caching files that are more on the non-permanent side.
  • Files bundled with the app, i.e. read-only files.

You can access the folder paths as follows:

var rootDirectory = FileSystem.AppDataDirectory;

Currently, Xamarin.Essentials supports Android, iOS and UWP. So if those are all the platforms your app requires, and you do not need any other location, be sure to check out Xamarin.Essentials and its documentation for more details.
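
As an illustration, here is a small sketch of how the three locations might be used (inside an async method; the file names are just placeholders):

// Local storage for application data which is backed up.
var settingsPath = Path.Combine(FileSystem.AppDataDirectory, "settings.json");
File.WriteAllText(settingsPath, "{ \"theme\": \"dark\" }");

// Cache storage for files which can be re-created and may be cleaned up by the OS.
var cachePath = Path.Combine(FileSystem.CacheDirectory, "preview.png");

// Read-only files bundled with the app.
using (var stream = await FileSystem.OpenAppPackageFileAsync("about.txt"))
using (var reader = new StreamReader(stream))
{
    var bundledText = await reader.ReadToEndAsync();
}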

Do it yourself

In most cases the folders offered by Xamarin.Essentials will suffice, but if you require a different folder, e.g. the Documents folder under iOS, you can always set the path to the location on your own. So let's have a look at how you can achieve this.

Android

Now when it comes to Android, you can follow the documentation from Microsoft and use the following path for storing your files:

_rootDirectory = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);

Looking to cache some files? Then we can take that path and, using Path.Combine, point it to the cache folder:

_cacheDirectory = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "..", "cache");

If you want to store files on the SD card, i.e. external storage under Android, you have to pass the path from the platform project to the .Net Standard project (a sketch of this follows below) or use multi-targeting to achieve this. You can get the path as follows:

var sdCardPath = Environment.ExternalStorageDirectory.AbsolutePath;

Note that the Environment here is Android.OS.Environment and not System.Environment. So if Android is this easy, how hard can iOS be?
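
One way to pass that path from the Android project into the shared .Net Standard code is the Xamarin.Forms DependencyService - a rough sketch with made-up interface and class names:

// In the .Net Standard project
public interface IExternalStoragePathProvider
{
    string GetExternalStoragePath();
}

// In the Android project (registered with the Xamarin.Forms DependencyService)
[assembly: Xamarin.Forms.Dependency(typeof(ExternalStoragePathProvider))]
public class ExternalStoragePathProvider : IExternalStoragePathProvider
{
    public string GetExternalStoragePath() =>
        Android.OS.Environment.ExternalStorageDirectory.AbsolutePath;
}

// Back in the shared code
var sdCardPath = Xamarin.Forms.DependencyService.Get<IExternalStoragePathProvider>().GetExternalStoragePath();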

iOS

When storing files under iOS, there is a bit more documentation to read. The reason is that Apple uses multiple subfolders within the app's sandbox container. The ones that are important for file storage are the following:

  • Documents: In this folder, only user-created files should be stored. No application data, which includes that JSON file of your app, should be stored here. This folder is backed up automatically.

  • Library: The ideal spot for any application data you do not want the user to have direct access to. This folder is backed up automatically.

    • Library/Preferences: A subdirectory which you should not directly access. Better use the Xamarin.Essentials library for storing any key/value data. This data is backed up automatically.
    • Library/Caches: This is an excellent place to store data which can easily be re-created. This data is not backed up.
  • tmp: Good for temporary files, which you should delete when no longer used.

For a more detailed listing, check out the docs. So when storing files under iOS, this code will point to the Documents folder:

_rootDirectory = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);

If you intend to store information in the Library folder, you can change the directory as follows:

_rootDirectory = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), "..", "Library");

Equally, you can change your path to one of the other locations described above.
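
For example, a path pointing to the Library/Caches folder mentioned above can be built the same way - a sketch following the pattern of the snippets above:

// Library/Caches inside the iOS sandbox - suitable for re-creatable data, not backed up.
_cacheDirectory = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
    "..", "Library", "Caches");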

UWP

UWP apps also store their data in a sandbox. Only if the permission is set in the app's metadata and a good reason is given, which Microsoft validates on submission to the store, can the app access file locations outside of the sandbox. ApplicationData offers to store data locally, roaming or in a temporary location. The local folder can be accessed as follows:

_rootDirectory = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);

Similarly, the other locations can be accessed, for example the roaming folder (which is synced automatically across all of your devices):

_rootDirectory = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), "..", "RoamingState");

The following directories are present in a UWP app's sandbox:

UWP Container folders: AC, AppData, LocalCache, LocalState, RoamingState, Settings, SystemAppData, TempState

However, what if I am targeting other platforms?

Since Xamarin.Forms is no longer limited to Android, iOS and UWP, you might find yourself wanting to write files on another system such as Tizen. The best solution is to check the documentation of the given platform for where data should be stored and then run the following code on that platform:

using System;
using System.Collections.Generic;
using System.Linq;

class Gnabber
{
    // ...
    // Lists every System.Environment.SpecialFolder that resolves to a path on the current platform.
    private IEnumerable<DirectoryDesc> DirectoryDescriptions()
    {
        var specialFolders = Enum.GetValues(typeof(Environment.SpecialFolder)).Cast<Environment.SpecialFolder>();
        return specialFolders
            .Select(s => new DirectoryDesc(s.ToString(), Environment.GetFolderPath(s)))
            .Where(d => !string.IsNullOrEmpty(d.Path));
    }
    // ...
}

class DirectoryDesc
{
    public DirectoryDesc(string key, string path)
    {
        Key = key;

        // Shorten long path segments so the output stays readable.
        Path = path == null
            ? ""
            : string.Join(System.IO.Path.DirectorySeparatorChar.ToString(),
                path.Split(System.IO.Path.DirectorySeparatorChar)
                    .Select(s => s.Length > 18 ? s.Substring(0, 5) + "..." + s.Substring(s.Length - 8, 8) : s));
    }

    public string Key { get; set; }
    public string Path { get; set; }
}

The code above lists all the folders from System.Environment.SpecialFolder that are populated in the given environment and also provides you with the absolute path. If none of the special folders is the target location you desire, try using Path.Combine and a nearby path to get to your desired location.
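
A simple way to inspect the output is to dump the list to the debug console on startup - a sketch assuming DirectoryDescriptions is made accessible (e.g. public):

foreach (var desc in new Gnabber().DirectoryDescriptions())
{
    System.Diagnostics.Debug.WriteLine($"{desc.Key}: {desc.Path}");
}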

Conclusion

Storing files is not tricky, but putting some thought into where to store your application's data can go a long way. Xamarin.Essentials may provide all the functionality you need, but if not, you can usually use System.Environment.SpecialFolder and System.Environment.GetFolderPath to get access to the different folders offered by the platform you are on.