

A while back, I used F# type providers to create a conversion table. That post got me wondering whether it would be possible to write an app that gets its data straight from a website. Perhaps you have also received such a request: nothing expensive, and all the data that should be displayed is already on the web, i.e. this website right over here. So the question is: could such an app be created without having to write a single line of backend code?

In this blog post, I will try to create an app for one of my favourite Xamarin conferences - the Xamarin Experts Day. So let's see if we can create our Fabulous Xamarin Experts Day app.

Getting the data

Type providers are a delightful feature of F#. During compilation, type providers generate types that model the data in a data source. I have written a blog post before on the topic of parsing HTML. This time we will use another toolset that comes with the F# Data NuGet package.

Since we want to get the information about the Xamarin Experts Day conference, we can try parsing the website directly with the following line:

HtmlDocument.Load "https://expertday.forxamarin.com/"

Unfortunately, we live in the modern age of JavaScript. I don't want to go on a tangent here, but just state the fact that the Xamarin Experts Day website seems to load the information about the talks and the tracks after the initial HTML has been loaded. Luckily, when loading the page in a browser, we get an HTML version that contains all of the information we are looking for. So instead of loading the data directly from the website, we can save the rendered page and load the data from a file. Budget projects have their limitations... 🙃

HtmlDocument.Parse("ExpertXamarin.html")

When we look at the HTML of the website (in the browser) we can see that the speakers are listed under the following HTML structure:

<li data-speakerid="df2bc5ca-5a6b-48a9-87ac-71c817d7b240" class="sz-speaker sz-speaker--compact ">
<div class="sz-speaker__photo">
  <a href="#" onclick="return ...');">
    <img src="...894db1.jpg">
  </a>
</div>
<h3 class="sz-speaker__name">
  <a href="#" onclick="return ...');">Mark Allibone</a>
</h3>
<h4 class="sz-speaker__tagline">Lead Mobile Developer Rey Automation, Microsoft MVP</h4>
</li>

So we know that we can get the image, name, tagline and id of every speaker. Let's create a record to store that information:

type Speaker = {Id:string; Name:string; Photo:string; Tagline:string}


Creating the record is not strictly necessary, but it does make working with the data a bit easier later on. Another plus is that we could encapsulate the type provider code in a .Net Standard library and then share it with non-F# .Net code. No, you can't access type provider-generated types directly from C#, and while some C# features are inspired by F#, from what I have heard I would not hold my breath hoping to see type providers in C# anytime soon...

With the record in place, all that is left to be done is extracting the data from the HTML. Good thing that F# Data comes with an HTML CSS selector. The CSS selector allows filtering by id, class and tag type. So if we want to get the speaker's name, we can select the element and then extract the value as follows:

// .. other parsing methods
let private getName (htmlNode:HtmlNode) =
    htmlNode.CssSelect("h3.sz-speaker__name > a") |> Seq.map (fun h -> h.DirectInnerText()) |> Seq.head

let getSpeakers (html:string) =
    HtmlDocument.Parse(html)
        .CssSelect("li.sz-speaker")
        |> Seq.map (fun s -> {Id = (getId s); Name = (getName s); Photo = (getPhoto s); Tagline = (getTagline s)})
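
The remaining helper functions follow the same pattern. Here is a minimal sketch of how they could look, based purely on the HTML structure shown above - selectors and names are assumptions, not necessarily the exact code of the app:

let private getId (htmlNode:HtmlNode) =
    // the speaker id sits in the data-speakerid attribute of the li element
    htmlNode.AttributeValue("data-speakerid")

let private getPhoto (htmlNode:HtmlNode) =
    // grab the src of the image inside the photo div
    htmlNode.CssSelect("div.sz-speaker__photo img") |> Seq.map (fun img -> img.AttributeValue("src")) |> Seq.head

let private getTagline (htmlNode:HtmlNode) =
    htmlNode.CssSelect("h4.sz-speaker__tagline") |> Seq.map (fun h -> h.DirectInnerText()) |> Seq.head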

With helpers like these, all the fields of our record can be filled. The same goes for the tracks: again, we first create a record storing the title, time, room and speaker id of every track. The speaker id lets us link a track back to a speaker:

type Track = {Room:string; Time:string; Title:string; SpeakerId:string option}

let getTracks (html:string) =
    HtmlDocument.Parse(html)
        .CssSelect("div.sz-session__card")
        |> Seq.map(fun s -> {Room = (getRoom s); Time = (getTime s); Title = (getTitle s); SpeakerId = (getSpeakerId s) })

Now that we have all the data in place, it is time to get cracking on the app.

Fabulous Xamarin Experts App

Before we start writing our UI code, there is still that shortcut we took above by loading the information from a file. While this works great in a script, in the mobile world it means we have to pack that HTML doc into the app. There are two approaches: either put it into the Assets folder on Android and the Resources folder on iOS (you can also use XCAssets...), or make it an Embedded Resource in the .Net Standard library. While the first option is what Apple and Google intended for documents you want to ship with your app, you will have to jump through some platform-specific hoops to access the document. So let's again save some time and just pack the file as an Embedded Resource in our .Net Standard project. Embedded Resources are packed into your app's binary, which makes accessing the data a bit awkward. The process is described in the official docs here; this is how it is implemented in the Xamarin Experts Day Conference App (we need a shorter name...):

let loadFile filename =
    let assembly = IntrospectionExtensions.GetTypeInfo(typedefof<Model>).Assembly;
    let stream = assembly.GetManifestResourceStream(filename);
    use streamReader = new StreamReader(stream)
    streamReader.ReadToEnd()
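
One small stumbling block: the manifest resource name is not just the file name but is prefixed with the project's default namespace. So the call ends up looking roughly like this (the namespace used here is just an assumption for illustration):

// "XamarinExpertDayApp" stands for your project's default namespace
let html = loadFile "XamarinExpertDayApp.ExpertXamarin.html"
let speakers = getSpeakers html |> List.ofSeq
let tracks = getTracks html |> List.ofSeq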

With that out of the way, let's create a list of all the talks showing the title, time and room. When selecting a track, we will display the information about the presentation along with the speaker info.

The list can be put together like this:

let showTrackCell track =
    View.ViewCell( view =
        View.StackLayout(children = [
            View.Label (text = track.Title, 
                        fontSize = FontSize 22.)
            View.Label (text = track.Time + " in " + track.Room, 
                        fontSize = FontSize 14.,
                        fontAttributes = FontAttributes.Italic)
            ]))

let view (model: Model) dispatch =

    View.ContentPage(
        content = match model.SelectedTrack with 
                    | Some track -> showTrackInfo track model dispatch
                    | None -> View.ListView(
                                    rowHeight = 80,
                                    hasUnevenRows = true,
                                    margin = Thickness(8.,0.,0.,0.),
                                    items = (model.Tracks |> List.map showTrackCell),
                                    selectionMode = ListViewSelectionMode.Single,
                                    itemSelected = (fun args -> dispatch (TrackSelected args))
                                    )
        )

And the "detail view" would be done like this:

let showTrackInfo track (model:Model) dispatch =
    let speaker = match track.SpeakerId with
                  | Some speakerId -> model.Speakers |> Seq.tryPick(fun s -> if s.Id = speakerId then Some s else None)
                  | None -> None

    let addSpeakerInfo (speaker:Speaker) =
        View.StackLayout(margin = Thickness(0.,32.,0.,0.), children = [
                View.Label (text = "Speaker", fontSize = FontSize 22. )
                View.Image (source = (Image.Path speaker.Photo))
                View.Label (text = "Presenter: " + speaker.Name)
                View.Label (text = "Tagline: " + speaker.Tagline)
            ])
        
    let speakerViewElements = match (speaker |> Option.map addSpeakerInfo) with
                              | Some speakerInfo -> speakerInfo
                              | None -> View.Label(text = "Brought to you by the Organizers");

    View.Grid (margin = Thickness(8.,8.,8.,16.),
                rowdefs = [Star; Auto],
                children = [
                    View.StackLayout(children = [
                        View.Label (text = track.Title, fontSize = FontSize 22.)
                        View.Label (text = "In: " + track.Room, fontSize = FontSize 14.)
                        View.Label (text = "At: " + track.Time, fontSize = FontSize 14., margin = Thickness(0.,-4.,0.,0.))
                        speakerViewElements
                        ])
                    (View.Button (text = "Back", command = (fun () -> dispatch (TrackSelected None)))).Row(1)
                ])
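
In case you are wondering what the Model and Msg types behind these views look like: here is a rough sketch of what the snippets above assume, in Fabulous' Program.mkSimple style. The names are inferred from the view code, so the real app may differ slightly:

type Model = { Tracks: Track list; Speakers: Speaker list; SelectedTrack: Track option }

type Msg = TrackSelected of int option

let update msg model =
    match msg with
    | TrackSelected index ->
        // map the selected list index back to a track (None clears the selection)
        { model with SelectedTrack = index |> Option.bind (fun i -> model.Tracks |> List.tryItem i) }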

You might have noticed that the talk description is missing. The website has a JavaScript function that retrieves that additional information. I think it would be possible to replicate that call to the backend and then parse through the JSON/HTML answer. But INSERT-LAME-STATEMENT-WHY-I-AM-NOT-LAZY-HERE.

The app still feels a bit rough; I think I might have to follow up in another blog post and make it pretty 😎

Conclusion

In this little experiment, we set out to see if it would be possible to write a mobile app that displays the same information as is already present on a website. And while there were some bumps in the road - I am looking at you, JavaScript - it was indeed possible to write an app for the Xamarin Experts Day that runs on Android and iOS.

Though I really should get started on my next post and make the app pretty 😇

You can check out the entire app on GitHub.

This post is part of the F# advent calendar. Be sure to check out the other posts.

HTH


So a while back, I posted about a little pet project I am working on, along the lines of "how hard can it be?". To see the big picture, you can read the overview here. The first step is writing an Internet of Things (IoT) client app that runs on a device and sends sensor readings to the cloud. In the cloud, the Azure IoT Hub manages the client and also receives its data.


As the name implies, IoT devices connect to the internet. So not so different from your typical client app, you might think at first. But when you start thinking about how IoT devices are deployed and run in the wild, there are quite a few differences setting them apart. For one, the devices are usually not operated by a person. It is not a bring your own device (BYOD) scenario; the creator of the IoT device is generally in control of the software running on the device. Being in control also means that the device gets set up by the manufacturer and connects to the backend with little or no human interaction. And since the device is out in the open and connected to the internet, it is generally a good idea, if not mandatory, to have a plan to update the device should a security breach such as Heartbleed ever occur - though, generally speaking, the problems are often a lot more homegrown, as this post from Troy Hunt nicely summarises. Another aspect that often arises with IoT solutions is the volume of data that has to be processed. While you can process data on the edge (on-site), the desire is usually to aggregate the data at a central point and either act on the live data or analyse it in hindsight to find insights. It is often the data processing scenarios that bring the most significant business value, and they are therefore an essential part of the solution one tries to create - leaving developers with the challenge of building a system that scales to meet the high data volumes running through it. In short, it is a different world from your traditional Xamarin, WPF, WinForms or web client app.

The backend plays a significant role in IoT scenarios. Therefore it comes as no surprise that you will find a lot of companies wanting to help you with your IoT endeavours. One of the solutions is the Azure IoT Hub, which was created with these IoT challenges in mind. It provides many great features, such as the scalability to receive data from many million devices simultaneously and different messaging patterns to accommodate always-connected devices as well as devices that only connect now and then. You can not only receive data from your devices with Azure IoT Hub but also send information to the device. It furthermore provides the means to inform your IoT devices that a new software/firmware update is available. And since it is running in the cloud, scalability is baked into the product.

For setting up the IoT Hub, I recommend you follow these instructions.

You can create one free IoT Hub instance per account, which is the one I will be using for this blog post.

With the IoT Hub set up, let's set up our local environment. You can manage your IoT Hub all from within PowerShell - oh yes, feel that power of the shell (trying to improve my godfather jokes here...). All you have to do is install the Azure CLI and then the IoT extension. With the tools installed, be sure to first log in to Azure on PowerShell with the following command:

az login
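
The IoT commands themselves live in a separate CLI extension, so you will have to add that as well. Depending on your Azure CLI version the extension is called azure-iot (or azure-cli-iot-ext on older versions):

az extension add --name azure-iot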

After logging in and installing the extension, you can do all sorts of stuff right from within PowerShell. For instance, we can create a new device like this:

az iot hub device-identity create --hub-name IoTEndpoint --device-id test-device-01

Note that the --hub-name must equal the name you gave your IoT Hub on Azure. You are free to choose a different name for your device after --device-id. Once you have a device registered, you can send messages as that device:

az iot device send-d2c-message -n IoTEndpoint -d test-device-01 --data 'Hello IoTHub'

No error means success, but another handy command lets you see which messages the IoT Hub is receiving:

az iot hub monitor-events -n IoTEndpoint

If you now open a second PowerShell to send a message as a registered device, you will see IoT Hub receiving the message.

Showing received message

With all the tools installed and a device registered, we are ready to implement our client. There are more commands you can use with the Azure CLI; you can find the full list here.

Implementing an IoT Client

The Azure IoT Hub provides an SDK which clients can use to communicate. The SDK is available for .Net, Node.js, Python, C and Java. However, if you can't or don't want to use the SDK, you can manually connect to the IoT Hub over HTTP, MQTT or AMQP.

For my first endeavour, I used the Azure IoT Dev Kit. It is fantastic value for money - perhaps you have even received one as a gift at an event you attended? The dev kit comes with many sensors, an RGB LED, a display, an AUX port, and USB and WiFi connectivity. That being said, if you do not own a device, you can still get started with the SDK for .Net. The SDK targets .Net Standard, so you can write a .Net Core client using the NuGet package.

So we can implement our solution as a .Net Core console app. Note that since the library can be installed in a .Net Standard project, you could extract all the IoT logic into a .Net Standard library. Being lazy - aka keeping the solution of this blog simple - I will implement all of the code directly in the .Net Core project. The first thing we want our app to do is connect to the backend. There are different ways to achieve this with the IoT Hub. To get started, we will take the easiest route, which is usually not what you want to deploy to production - i.e. using a connection string with a shared access key as the authentication mechanism:

var device = DeviceClient.CreateFromConnectionString("HostName=IoTEndpoint.azure-devices.net;DeviceId=test-device-01;SharedAccessKey=THIS-IS-WHERE-THE-SHARED-KEY-WOULD-BE-DISPLAYED");

You can retrieve this connection string from the Azure Portal under your IoT Hub > IoT devices > (in my case) test-device-01, where you can copy either the Primary or Secondary Connection String.

Once connected, we can start to send sensor readings serialized to JSON:

var json = JsonConvert.SerializeObject(measurement);
var message = new Message(Encoding.UTF8.GetBytes(json));

await device.SendEventAsync(message);
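
In the snippet above, measurement is whatever object holds your sensor readings. Just to make the example self-contained, here is a minimal sketch of what that could look like - the Measurement class, its properties and the 10-second interval are assumptions for illustration, not the actual telemetry format used later on the dev kit:

// requires the Microsoft.Azure.Devices.Client and Newtonsoft.Json NuGet packages
class Measurement
{
    public double Temperature { get; set; }
    public double Humidity { get; set; }
    public DateTime Timestamp { get; set; }
}

// send a (fake) reading every 10 seconds
while (true)
{
    var measurement = new Measurement
    {
        Temperature = 22.5,
        Humidity = 48.0,
        Timestamp = DateTime.UtcNow
    };

    var json = JsonConvert.SerializeObject(measurement);
    await device.SendEventAsync(new Message(Encoding.UTF8.GetBytes(json)));

    await Task.Delay(TimeSpan.FromSeconds(10));
}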

Since .Net Standard runs on nearly every OS you can think of, you could extract the code above to run on a Raspberry Pi, Android or INSERT-YOUR-TARGET-HERE using actual sensors. And since the SDK is also available for other languages such as C, we can use it for writing apps for the Azure Dev Kit.

Can I have the rest of the client code? Yes - if you stick around until the end you will find a link to the full source code of the client on GitHub.

Azure IoT Dev Kit Client

I have to recommend following the official docs by Microsoft on how to set up the Azure IoT Dev Kit with Visual Studio Code, which will show you how to initially set up the device and connect it to the IoT Hub. Word of advice: make sure you follow all the steps of the documentation and do not leave out any parts, such as installing the USB driver on your Windows machine. Speaking from a friend's experience here, of course.

Microsoft offers quite a few samples for the Dev Kit. The samples can be loaded directly onto the device using Visual Studio Code. I took some inspiration for the .Net Core client above from the remote monitoring tutorial - I skipped the Azure stuff since I will want to process the data differently. The only changes I made to the C code were changing the unit of the atmospheric pressure from atm to hPa and, of course, removing the conversion from Celsius to Fahrenheit - my brain will not compute the imperial system.

Note that a lot of IoT devices do not come with an operating system installed due to the restricted resources available on the device. These restrictions are often the reason why many programs are written in C. If you are a .Net developer like myself, you will probably miss a lot of conveniences. Then again, you might find some consolation in the knowledge that your code will be very efficient.

Another device that I am looking forward to checking out soon is the Meadow by Wilderness Labs. The great thing about the Meadow is that you can write your IoT client code for the device in C#. The good news is they are open for pre-orders, but you might have to wait in line until all of the Kickstarter campaign supporters have received theirs.

Once you have installed the app on the dev kit and connected it to WiFi, you will be able to see the messages arrive on the IoT Hub dashboard.

Graph showing the messages being received by the Azure IoT Hub.

And by using the CLI tools command from before we can start a listener that will receive every message sent to the Azure IoT Hub:

Screenshot of the listener receiving the device messages.

You will be able to see the raw JSON coming in on the IoT hub.

Conclusion

Setting up your first IoT device can be a bit of a daunting task since there are a few moving parts: first getting to know the device - or even just knowing how to implement a client in .Net Core - and then the Azure IoT Hub part. For me, it was good to notice that it is okay to start small and gradually learn the possibilities provided by Azure IoT Hub. But there is still so much more packed into the Azure IoT Hub. For instance updating a device, device twins or communicating from the cloud to a device - if you are interested in a detailed list, be sure to check out the official page here.

In the next blog post of the series, we will look at how to process the data in the cloud and how to forward it to clients, where we will visualise the data for the user. So stay tuned, and check out all of the client code on GitHub.

HTH



Update: So after posting this, my colleague and friend Daniel approached me and showed me the Azure Artifacts Credentials Provider by Microsoft, which automates the steps below. Be sure to check it out. Thanks, Daniel, for showing me this and making my life easier 😃

So lately I was playing around with one of Azure DevOps' many features, namely pushing freshly created NuGet packages to a private feed. Which brings up the question: how can I access the feed and authenticate during a NuGet restore via dotnet restore?

While this blog post shows the steps for Azure DevOps, the same changes to the NuGet.config are required for other sources.

While I knew how to click my way around Visual Studio to do this, under my Ubuntu shell that was not an option. Luckily, adding a NuGet feed is quite common knowledge. The paths differ between Windows and Unix systems; on Unix you will find the config in your home directory under:

~/.nuget/NuGet/NuGet.config

Or for Windows that would be:

%appdata%\NuGet\NuGet.config

You can add the feed to your NuGet.config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
        <add key="NameOfYourFeed" value="path to your nuget/index.json" />
    </packageSources>
</configuration>

Now for accessing a private NuGet feed, you will have to provide a username and password. You can add them to the config file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <!-- packageSources -->
    <packageSourceCredentials>
        <NameOfYourFeed>
            <add key="Username" value="gnabber"/>
            <add key="ClearTextPassword" value="YourPassword"/>
        </NameOfYourFeed>
    </packageSourceCredentials>
</configuration>

The only issue is that you probably do not want to store your Azure DevOps password in plain text on a computer - and you shouldn't, either. So let's head back over to Azure DevOps, click on your profile picture and select "Security". Now generate a new token. Be sure to select "Show all scopes", then under Packaging choose the "Read" permission.

Image showing the dialog to generate a token for readonly access to your package feed

Copy the generated token and store it in the NuGet.config within the ClearTextPassword field. You can now dotnet restore your packages from the private package feed.
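
If you would rather not manage tokens by hand at all, the Azure Artifacts Credentials Provider mentioned in the update at the top acquires them for you. Once the provider is installed, a single interactive restore takes care of the authentication:

dotnet restore --interactive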

HTH



Whether you are writing a mobile app with Xamarin Forms or native Xamarin.iOS/Xamarin.Android, sometimes the requirements demand that your app updates as quickly as possible when something changes on the server. This might be a chat application, a stock ticker or any monitoring app showing a user live data from a process currently running in the backend.

Let's say we wanted to create a simple chat app. If we went down the path of a traditional Create Read Update Delete (CRUD) app over HTTP, we might choose to poll the server every few seconds or so to read the latest value. While this approach delivers the results, it comes with some drawbacks. To name a few: making requests even though nothing has changed, climbing the leader board in the battery-usage category, and hammering your server with requests only for it to answer: "Sorry, no news yet..." - so if not poll, then push. And this is where WebSockets come in.

The only problem with WebSockets is that the implementation in .Net is close to the metal. This results in additional implementation effort for the developers. Luckily for .Net developers, there is SignalR, which comes with all the boilerplate code you want around WebSockets. A web developer will also tell you about the fallback options baked into SignalR. As a Xamarin developer, you will most probably never use those features. But the chances are good that you will be delighted by the ease of handling connections, channels or writing to specific connected clients.

SignalR has been around for quite a while; it has been ported over to .Net Core and is .Net Standard compatible. This is excellent news since we can add the SignalR client directly to our Xamarin Forms app - no platform-specific/wrapper code required. But before we can start implementing our client, we will first have to create a SignalR-enabled backend.

If you are new to SignalR, be aware that there is quite a big difference between SignalR and SignalR Core. If you are writing a new app today using ASP.NET Core or Azure Functions, you will want to use SignalR Core, or else you will go on a nasty error hunt ending with your palm slapping against your forehead 🤦‍♂️

The Server

When implementing the backend, we can choose between two options: either we implement SignalR in ASP.NET Core, or we go with Azure's SignalR Service. The latter can be integrated into an ASP.NET Core or an Azure Functions app. The Azure option also comes with scaling capabilities - in other words, you get up to 1 million simultaneous connections using SignalR out of the box.

For more detail on how to set up the Azure Functions and SignalR combo, you will find instructions in the official documentation.

For our simple chat application, we will want a way for our clients to send and receive messages. To achieve this, we will have to create a SignalR Hub that provides a method for sending messages:

[FunctionName("SendMessage")]
public static Task SendMessage(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")]string message,
    [SignalR(HubName = "SignalRDemo")]IAsyncCollector<SignalRMessage> signalRMessages)
{
    return signalRMessages.AddAsync(
        new SignalRMessage
        {
            Target = "NewMessage",
            Arguments = new[] { message }
        });
}

On invocation, the method will send the message to all connected clients - which brings us to our next point: every chat participant first has to connect to the hub so that messages can be received. So let's implement that negotiation method:

[FunctionName("Negotiate")]
public static SignalRConnectionInfo Negotiate(
[HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
[SignalRConnectionInfo(HubName = "SignalRDemo")] SignalRConnectionInfo connectionInfo)
{
    // connectionInfo contains an access key token with a name identifier claim set to the authenticated user
    return connectionInfo;
}

The naming of the method is a convention from SignalR. In other words, you must name the method Negotiate, or your code will not work. No, I do not want to elaborate on how I found this one out the hard way 😉

With the function and SignalR Service in place, we can now turn our focus to the client.

The mobile client

On the mobile client, we want to be able to receive messages and type responses to the group. Our simple app will have to live with the limitation of only receiving messages while being connected - at least for the moment. But here is the chat running in all of its glory:

Screenshot of the chat app in action.

Now let's have a look at the ChatService which connects us to the backend and receives messages:

public async Task Connect()
{
    if (_connection.State == HubConnectionState.Connected) return;

    _connection.On<string>("NewMessage", (messageString) =>
    {
        var message = JsonConvert.DeserializeObject<Message>(messageString);
        _newMessage.OnNext(message);
        Debug.WriteLine(messageString);
    });

    await _connection.StartAsync();
}

Note that we register the receiver method before we connect to the backend. This way, we start receiving updates as soon as being connected to the SignalR Service. Now when implementing a receiver method, you must ensure that the type signature matches the method we defined earlier on the server. If the types or the name do not match, you will never receive any messages.
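
In case you are wondering where _connection comes from: it is built once in the ChatService using the SignalR Core client package (Microsoft.AspNetCore.SignalR.Client). A minimal sketch - the exact URL is an assumption based on the function app used further down:

// backendUrl points at the Azure Functions app hosting the Negotiate function
private readonly string backendUrl = "https://gnabbersignalr.azurewebsites.net/api";
private HubConnection _connection;

public ChatService()
{
    _connection = new HubConnectionBuilder()
        .WithUrl(backendUrl)
        .Build();
}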

Since reading is only half the fun, let's implement sending a message:

public async Task Send(Message message)
{
    var serializedPayload = JsonConvert.SerializeObject(message);

    var response = await _httpClient.PostAsync("https://gnabbersignalr.azurewebsites.net/api/SendMessage", new StringContent(serializedPayload));
    Debug.WriteLine(await response.Content.ReadAsStringAsync());
}

And if you have ever had enough of the stream of messages and want to give your eyes the joy of staring at a bare app, here is the disconnect method:

public async Task Disconnect()
{
    await _connection.DisposeAsync();

    _connection = new HubConnectionBuilder()
        .WithUrl(backendUrl)
        .Build();
}

I hope you can see that SignalR makes it a breeze to implement a bidirectional communication layer to your server, which allows your (mobile) clients to send and receive data in near real-time. Another side effect of using SignalR is that you could easily extend the app with a web client, since your favourite JavaScript framework will let you use the SignalR client. If you are ready to get started with SignalR, be sure to check out the docs.

You can find the entire sample, including all the UI code on GitHub.

This blog post is part of the October Xamarin Challenge, so be sure to check out the other posts for more best practices when writing Xamarin apps.

HTH


A picture containing colour crayons

I recently was faced with the task of rendering RAL Colours in an app that I was developing. RAL Colours are mainly used in industrial colour applications - i.e. powder coating. So while they are well known in the powder coating industry, it was quite an exciting read on Wikipedia to find out how RAL colours came to life. The good thing is that there is a finite set of classic RAL colours. Even better, there is a table on Wikipedia which contains a good enough approximation of the classic RAL colours.

So instead of copy & pasting the table into an editor and slashing away at the data. I was wondering if there would be a better way to extract the information from the website. And store the data in a more handy form such as a JSON file.

In the last couple of months, I have been dabbling with F# in my free time. Type providers are a powerful tool available in the F# language. In short, you can point a type provider at a data source, and during compilation the types used in that source get generated for you. So for your typical JSON response from a website, you can use the JSON type provider to create a type based on that response, which you can then use throughout your F# program. Now, this is more than the "create a C# POCO from JSON" Visual Studio feature; you also get methods to slice and dice through your data. In other words, it is an excellent tool for exploring and parsing new data sources and processing that data.

As with LINQ extensions, it is possible to write your own type providers, but for most general use cases the type provider already exists and can be added to your project as a NuGet package. The package we will be using is FSharp.Data, which provides type providers for JSON, XML, CSV and HTML (plus the World Bank 🤷‍♂️).

Using an F# script, we will first have to reference the type provider:

#I "./packages"
#r "FSharp.Data/lib/netstandard2.0/FSharp.Data.dll"

open FSharp.Data


Side note: I am using paket for my dependency management because it installs the package right into the project folder. You do not need to use paket, but you will have to make sure that the #r ... line points to the dll.
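
For reference, the paket.dependencies for this script boils down to something like this - a minimal sketch, your file may of course list more packages:

source https://api.nuget.org/v3/index.json

nuget FSharp.Data
nuget Newtonsoft.Json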

Now, type providers create a type based on a data source. In our case, we can point it at the Wikipedia page listing all the RAL colours:

type wikipedia = HtmlProvider<"https://en.wikipedia.org/wiki/List_of_RAL_colors">

We could have also provided a local file:

type wikipedia = HtmlProvider<"list_of_ral_colors.html">

The local file is excellent if you do not always want to hit the remote site, but you run the risk of the local version being older than the one on the server, which can lead to ugly problems. As far as this script is concerned, I will point it at Wikipedia - and be sure to make my annual donation again at the end of the year.

In the JSON file, we will want to store the RAL number, the HEX value and the colour name. So let's create a record type for that quickly:

type ralColor = { ral: string; hex: string; name: string}

Now that we have our types, we are all set to extract that data. By looking at the website, we can see in which section the table is located:

Picture showing part of the Wikipedia RalColor List

Knowing this location, we can scan the site and hone in on the data we are looking for:

let ralColorSection = wikipedia.Load("https://en.wikipedia.org/wiki/List_of_RAL_colors")
                                .Tables
                                .``All RAL Colours in a single listing``

If you have never used type providers, you will probably be reading the lines above and going: "Okay... I guess..." - so let's just quickly look at what happened underneath those lines of code. As the name suggests, type providers provide a type based on a data source. This is where we nod. So what do we get with the lines above? We get a type which represents the table of RAL colours. We can access all of the rows via ralColorSection.Rows. When iterating over each row, we can read the value in a column by using its name. So we could print out all colour names as follows:

ralColorSection.Rows |> Seq.iter (fun r -> printfn "%s" r.``Colour name``)

I know, this is freaking cool, right?! So if we want to extract the RAL number, HEX value and colour name from the table, we can use our previously defined record type as follows:

let ralColors = ralColorSection.Rows
                    |> Seq.map((fun r -> 
                        {ral = r.``RAL Number``; 
                            hex = r.``HEX Triplet``; 
                            name = r.``Colour name``}))

Note the double backticks are how F# identifiers containing spaces are accessed. Now we have all the data we wanted, so all that is left to do is the boring bit of storing it in a JSON file:

#I "./packages"
#r "Newtonsoft.Json/lib/netstandard2.0/Newtonsoft.Json.dll"

// ...

open Newtonsoft.Json

// ...

let writeToJsonFile ralColors =
    let filePath = Path.Combine(__SOURCE_DIRECTORY__, "ral_colour_map.json")
    let jsonString = JsonConvert.SerializeObject(ralColors)
    File.WriteAllText(filePath, jsonString, Encoding.UTF8) |> ignore

And that wraps up this blog post. I hope you have seen that F#'s type providers can be a great way to scan through data sources and extract the information you need. One thing to be aware of when using type providers: you can't directly share the generated types with other .Net languages such as C#; you would have to wrap the data in a record type (by the way, the type we created to hold the subset of data is exactly such a record type). So while there might be some additional effort ahead when writing fully-fledged enterprise applications, type providers are a no-brainer for scripting and will provide you with a significant productivity boost when exploring new datasets.

Be sure to check out the official documentation on the HTML provider used in this post. As always you can find the entire code on GitHub.

HTH