September 29, 2016 9:13 GMT

Adding Bindable Native Views Directly to XAML

Xamarin.Forms ships with over 40 pages, layouts, and controls for you to mix-and-match to build great native cross-platform user interfaces. One of my favorite features of Xamarin.Forms is that we have access to 100% of the controls and APIs for iOS, Android, and Windows. For implementing platform-specific features such as geolocation or Bluetooth, we can take advantage of the DependencyService. For adding platform-specific controls such as the FloatingActionButton on Android, we have many different options at our disposal, from custom renderers to effects to native embedding.

To help you easily build beautiful user interfaces with platform-specific controls, we’re proud to introduce native view declaration, allowing you to add bindable iOS, Android, and Windows views directly to XAML. Rather than having to write a custom renderer, you can simply add the control directly to XAML with no additional configuration. In this blog post, you’ll learn how to add bindable iOS, Android, and Windows views directly to Xamarin.Forms XAML with native view declaration.

An example of using native view declaration with two-way data binding to create a color picker.

Introducing Native View Declaration

Getting Started

Native view declaration requires that you have the latest Stable channel release, Service Release 0, which also allows you to take advantage of the new iOS 10 and Android Nougat APIs in your mobile apps. Xamarin.Forms 2.3.3-pre introduced support for native view declaration and bindings, so you must be using at least that version of Xamarin.Forms to use this feature. As with all versions of Xamarin.Forms, don’t update any of the Xamarin.Android.Support packages; Xamarin.Forms will automatically update these packages if a newer compatible version is available.

Adding Native Views to XAML

Adding native views couldn’t be easier! To make native views consumable via XAML, we must first add an XML namespace (xmlns) for each platform we’ll be embedding views from. This helps Xamarin.Forms find the native controls added to XAML; controls defined for a XAML namespace other than the target platform are ignored, and the target platform is selected automatically based on the platform the app is running on. Next, we can add native views directly to our XAML:

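Here’s a minimal sketch of what that markup can look like (the page class and control properties are illustrative; Windows views work the same way):

<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:ios="clr-namespace:UIKit;assembly=Xamarin.iOS;targetPlatform=iOS"
             xmlns:androidWidget="clr-namespace:Android.Widget;assembly=Mono.Android;targetPlatform=Android"
             xmlns:formsandroid="clr-namespace:Xamarin.Forms;assembly=Xamarin.Forms.Platform.Android;targetPlatform=Android"
             x:Class="NativeViews.NativeViewPage">
    <StackLayout Margin="20">
        <!-- Only the views matching the runtime platform are created; the others are ignored -->
        <ios:UILabel Text="Hello from UIKit!" View.HorizontalOptions="Start" />
        <androidWidget:TextView Text="Hello from Android.Widget!"
                                x:Arguments="{x:Static formsandroid:Forms.Context}" />
    </StackLayout>
</ContentPage>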
Native View Declaration

We can access all of a control’s native properties directly in our XAML as XML attributes, as well as view these properties via XAML IntelliSense. If the native control requires constructor arguments, we can pass them using the x:Arguments attribute.

IntelliSense for Xamarin.Forms XAML with native view declaration.

Data Binding Native Views

The ability to add native views directly to XAML is great, but many controls also require user interaction, such as entering text. Data bindings allow properties of two objects to be linked so that a change in one causes a change in the other. Native view declaration supports data binding out of the box, so you can bind to properties of native views from within XAML.

Native view declaration not only supports OneWay data bindings, where changes are propagated from source to target object, but also TwoWay data bindings. Two-way data binding propagates changes in both directions, allowing us to ensure that two views are always synchronized. This allows us to build very complex views, such as this color picker, with native view declaration in Xamarin.Forms.
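For instance, a two-way binding to a native iOS slider can look like this sketch (the Red property on the binding context is illustrative); note UpdateSourceEventName, which tells Xamarin.Forms which native event signals that the value changed:

<ios:UISlider MaximumValue="255"
              Value="{Binding Red, Mode=TwoWay, UpdateSourceEventName=ValueChanged}" />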

An example of using native view declaration with two-way data binding to create a color picker.

Unlike native embedding, native view declaration works in Portable Class Libraries (PCLs) as well. Native data binding will also work in PCLs, though you will need to ensure you have the XAML compiler (XAMLC) turned off for pages that use native view declaration.
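For example, a page can opt out of XAMLC with an attribute; a minimal sketch (the page name is illustrative):

using Xamarin.Forms;
using Xamarin.Forms.Xaml;

// Skip XAML compilation for this page, since it declares native views
[XamlCompilation(XamlCompilationOptions.Skip)]
public partial class ColorPickerPage : ContentPage
{
    public ColorPickerPage()
    {
        InitializeComponent();
    }
}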

Built on Open Source

In addition to the Xamarin SDKs being open-sourced on GitHub, Xamarin.Forms is also open source! We want it to be as easy as possible for developers to contribute to Xamarin.Forms. Being part of an open source project means much more than just writing code—there are many ways you can contribute. We would love to hear your feedback on native view declaration and bindings on the Xamarin Forums and Bugzilla if you experience issues.

Wrapping Up

In this blog post, we learned how to use the new native view declaration and native bindings features in Xamarin.Forms 2.3.3 and above to add platform-specific controls directly to XAML without requiring a custom renderer. For more information about native view declaration, visit the Xamarin Forums. For samples of native view declaration, visit GitHub.

The post Adding Bindable Native Views Directly to XAML appeared first on Xamarin Blog.

September 28, 2016 6:56 GMT

A Step-by-Step Guide to Building a Profitable Mobile Services Business Through Mobile DevOps

Mobile is unlocking new strategic competitive advantages and revenue streams for businesses, which in turn is driving businesses to spend billions on mobile investments. This creates tremendous opportunity for Systems Integrators, Consulting Partners, and Digital Agencies as clients turn to outside experts for strategic guidance on how to execute their mobile initiatives.

According to Gartner, the market demand for mobile app development services will grow at least five times faster than internal IT organizations’ capacity to deliver them through 2017.

The Enterprise App Explosion: Scaling One to 100 Mobile Apps, Gartner, May 7, 2015

Xamarin Partners are uniquely positioned to help clients spend these investments wisely and achieve mobile success. In this white paper, you’ll learn:

  • What the three core service opportunities are for technology partners today
  • How these three service lines align with the unique DevOps approach that mobile development requires
  • How you can start implementing these practices today to help grow your and your clients’ mobile businesses

 

Get the white paper
 

The post A Step-by-Step Guide to Building a Profitable Mobile Services Business Through Mobile DevOps appeared first on Xamarin Blog.

September 28, 2016 12:18 GMT

Back It On Up! Android and Xamarin and Backups!

Oh Android – you never make life easy, do you? Recently I needed to add the ability to back up and restore Android shared preferences. And as any good Xamarin developer would do, I was using the Settings Plugin from the prolific James Montemagno.

That library abstracts each platform’s complexities of saving preferences into an easy-to-use set of APIs – and it’s awesome. On the Android side, it uses the system’s default shared preferences for the given context. Sounds perfect… and it is, until you try to back up some data.

Wait! Stop the presses! Originally this blog was to be about how to get Android to back up the default system shared preferences the Settings Plugin uses, because one has to do some workarounds. HOWEVER… between the time I started it and the time I finished it, James went and updated the Settings Plugin to work in a way that makes the original point of this blog unnecessary. Instead of scrapping this post, I’m going to use it as an opportunity to go through setting up Android backups with Xamarin – and point out the changes James made to the Settings Plugin along the way.

Android and User Preferences

Let’s back up a bit (get it, “back up a bit”?) and explore how Android manages preferences. It uses XML files to store key/value pairs of preferences. You’ll notice I used the word “files” instead of “file”… Android gives you the ability to specify multiple files to hold preference settings, so you can slice and dice the user’s preferences by whatever means makes sense for your app.

Because multiple files can be used, you need to specify names when creating those files. Makes sense. You do not, and cannot, specify the name of the default shared preferences file – the file that’s used by the Settings Plugin. (However… in version 2.6 of the Settings Plugin you will be able to specify the preference file name if you choose… FYI.)

No big deal, right? Well, let’s take a look at how the backup system works on Android.

The Android BackupAgentHelper

There are two parts to doing backups in Android: 1) creating the code that will run in response to backup and restore operations and 2) setting up the administrative overhead of registering everything. Let’s take a look at the code portion first.

We can go one of two ways to back up files on Android. We can either roll everything ourselves, handling all the backup and restore events manually and verifying everything gets to where it needs to go… the Hard Way. Or we can extend the BackupAgentHelper class, which takes care of a lot of the plumbing for us… the Easy Way. We’re developers; by nature we’re lazy, so let’s take a look at the Easy Way (and really, for shared preferences there’s no compelling reason to do it any other way).

To do it the easy way, one extends the BackupAgentHelper class. When doing that, we need to override one function: OnCreate(). In there we create instances of BackupHelper subclasses, then add those helpers to the BackupAgentHelper class we just extended.

Those are a lot of words – let’s take a look at an example; it’ll make things clearer.

public class PrefsBackupAgent : BackupAgentHelper
{
    public override void OnCreate()
    {
        // Create a new subclass of BackupHelper meant for shared preferences - this one looking for one called test
        var helper = new SharedPreferencesBackupHelper(this, "test");

        // Add that class so BackupAgentHelper knows to back it up (and restore it)!
        AddHelper("prefs", helper);

        base.OnCreate();
    }
}

What this class does: first it creates a new SharedPreferencesBackupHelper (a subclass of BackupHelper), telling it to look for a preferences file named “test” (that we created elsewhere in our code). It then registers that helper with the overall backup agent, so the system’s backup manager knows to back up and restore that file.

The key to the above is the fact that SharedPreferencesBackupHelper takes the name of the shared preferences file it needs to back up… and the system’s default shared preferences (the one the Settings Plugin uses) doesn’t have a name!

The fix for it is actually kind of easy… the file name for the default settings file is the application’s package name plus “_preferences”. So if we take the code from above and modify it to work with the system’s default shared preferences, we get:

public class PrefsBackupAgent : BackupAgentHelper
{
    public override void OnCreate()
    {
        // Create a new subclass of BackupHelper meant for shared preferences
        var helper = new SharedPreferencesBackupHelper(this, ApplicationContext.PackageName + "_preferences");

        // Add that class so BackupAgentHelper knows to back it up (and restore it)!
        AddHelper("prefs", helper);

        base.OnCreate();
    }
}

Wow – that was pretty easy… actually super duper easy! Take a look at the other functions the BackupAgentHelper class provides as well; you can hook into other callbacks (such as RestoreFinished) as needed to do whatever customizations your app may need.

I should also note that we’re not limited to backing up shared preferences this way… we can also back up any file our app created, using the FileBackupHelper class. It takes more or less the same parameters as SharedPreferencesBackupHelper: the context and the names of the files to back up, resolved relative to the app’s files directory.
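A quick sketch of what that looks like, assuming our app writes a hypothetical file named "notes.txt" to its files directory:

public class FilesBackupAgent : BackupAgentHelper
{
    public override void OnCreate()
    {
        // File names passed to FileBackupHelper are relative to the app's files directory
        var fileHelper = new FileBackupHelper(this, "notes.txt");
        AddHelper("files", fileHelper);

        base.OnCreate();
    }
}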

Now let’s take a look at how we can test it (and take care of the administrative overhead of hooking our BackupAgentHelper subclass up so the system knows to use it to perform backups and restores).

First The Admin Stuff

There are a couple of steps we need to take in order to register our BackupAgentHelper subclass with the OS, so it gets invoked during backup operations, and also to register our app with the Android Backup Service.

First, the easy one: registering our app so the data can go to our users’ accounts in the “Google Cloud”, a.k.a. the Android Backup Service. To do so, go to the Android Backup Service registration page, enter your app’s package name, agree to the terms and conditions, and out pops an ID that we’ll need in a bit.

Then, go into your app’s AndroidManifest.xml and add the following node within the <application> node.

<meta-data android:name="com.google.android.backup.api_key" android:value="THE KEY FROM THE LAST STEP GOES HERE" />

Finally, and this is the part that trips people up sometimes, open up AssemblyInfo.cs and add the following line:

[assembly: Application(BackupAgent=typeof(PrefsBackupAgent), RestoreAnyVersion=true)]

What we’re doing there is programmatically adding more info to the AndroidManifest.xml’s <application> node – namely the BackupAgent attribute. However, since Xamarin puts all generated classes into a top-level package with an MD5-hashed name, there’s no way to know the agent’s full name at design time… so it’s easier to use the typeof() operator to specify it.

The BackupAgent attribute tells the OS that the PrefsBackupAgent class (that we created) should be invoked during backup and restore operations.

Speaking of backup and restore operations … let’s check out how to invoke those!

The Android Debug Bridge

In order to test backups and restores, we’re going to need a way to force those operations to occur manually, rather than wait on the normal schedule generated by the OS. To invoke them manually, we use the Android Debug Bridge – or ADB.

The easiest way to start the ADB up, if you’re using Xamarin Studio on the Mac, is to go to Tools->SDK Command Prompt. If you’re using Visual Studio, Tools -> Android -> Android Adb Command Prompt.

The adb tool we’ll be using is bmgr, the backup manager.

Here are the steps we need to follow in order to force a backup operation.

  • Start debugging your app
  • Start the adb session by issuing the adb start-server command
  • adb shell bmgr enable true
  • adb shell bmgr backup 'your.package.name'
  • adb shell bmgr run

The first three steps you only need to do once; they get your app, adb, and the backup manager up and running. The fourth step queues any changes you made for backup. The fifth runs the backup operation. If you had a breakpoint set in the BackupAgentHelper subclass, it would get hit at this point.

Make some changes to your app that will get persisted to the shared preferences, but don’t run a backup. Let’s restore them to where they were before. First stop debugging your app, then issue adb shell bmgr restore 'your.package.name'. Run your app again, and you should see that the preferences are back in the state they were in when the last backup was taken.

Back Up To The Top

We ran through a lot here, especially since I only wanted to talk about how to specify Android’s default shared preferences file name for backup purposes. But along the way we saw how Android can use many different files to store user preferences, and how we can specify which of those files get backed up. We also saw how easy it is to extend the basic BackupAgentHelper class… and with hardly any work on our part have it take part in backup and restore operations. There was a little bit of a workaround needed to get Android to recognize that class as the one to invoke during backup operations, due to Xamarin putting our classes into a top-level MD5-hashed package name. But invoking the backup and restore operations was no trouble at all using the Android Debug Bridge.

September 27, 2016 7:03 GMT

Android Archiving and Publishing Made Easy

With the release of Xamarin for Visual Studio 4.2 and this week’s Service Release 0, archiving and publishing your Android applications directly from Visual Studio just got a whole lot easier and more streamlined. The new Archive Manager inside of Visual Studio enables you to easily package, sign, and directly ship your Android apps for Ad-Hoc and Google Play distribution.

Archiving and Packaging

Creating your first archive for distribution is as easy as right-clicking on your Android project and selecting Archive:


This will automatically build your Android application, create an APK using the version name and code from your Android Manifest, and create the first Archive. This Archive is in a pre-release state that allows you to write release notes, check app size, browse app icons, and distribute your application.


Distributing the App Ad-Hoc

Clicking on the Distribute… button will open the new Distribute workflow automatically in Ad-Hoc mode and will enable us to create, import, and store a keystore that will be used for signing the package.


Since this is our first project, we can create a new keystore and fill in the required fields. Once this is done, or if we’re importing an existing keystore, it will be saved in secure storage so we can easily sign applications in the future without having to search the machine for it.


Now we can use the keystore by tapping on it and then clicking Save As, which will sign the app and let us save it to disk; we can then send the APK to a distribution service such as HockeyApp.

Distributing to Google Play

While we are often creating development and test builds, there are times we may want to publish directly to Google Play for production, which the Archive Manager also enables us to do during the distribution flow. Assuming that we have already created our app inside of the Google Play developer console and that we have turned on Alpha or Beta testing and published at minimum one release, back in the Archive Manager, select an archive to distribute and then click on the Distribute… button. This brings up an Ad-Hoc distribution flow, but we can click the back button and will then see an option for Google Play distribution:


Selecting Google Play will bring us back to our keystore selection to sign the app, but this time we’ll see that there is a new Continue button that will allow us to add our Google Play account when clicked.


Setting up Google Play API access is as easy as signing into our Google Play developer account, going to API Access in settings, and creating a new OAuth Client. This gives us the Client Id and Client Secret to enter into the dialog.


Click Register to finish registration, which will launch a web browser to finalize the OAuth flow and add your account.

Once the account is registered, we can select it and continue on to choosing the distribution channel in which to publish our application:

There you have it: now you can create a keystore, package an Android app for Ad-Hoc distribution, and take it all the way to production on Google Play without ever leaving Visual Studio!

Learn More

To learn more about preparing an Android application for release, be sure to read through our full documentation. You can find an in-depth overview of each step of the archiving and publishing process for both Visual Studio and Xamarin Studio in our documentation for Ad-Hoc and Google Play distribution.

The post Android Archiving and Publishing Made Easy appeared first on Xamarin Blog.

September 26, 2016 9:16 GMT

Speech Recognition in iOS 10

Speech is increasingly becoming a big part of building modern mobile applications. Users expect to be able to interact with apps through speech, so much so that speech is developing into a user interface itself. iOS contains multiple ways for users to interact with their mobile device through speech, mainly via Siri and Keyboard Dictation. iOS 10 vastly improves developers’ ability to build intelligent apps that can be controlled not only via a typical user interface, but by speech as well through the new SiriKit and Speech Recognition APIs.

Prior to iOS 10, Keyboard Dictation was the only way for developers to enable users to interact with their apps through speech. It comes with many limitations: it only works through user interface elements that support TextKit, it is limited to live audio, and it doesn’t support attributes such as timing and confidence. Speech Recognition in iOS 10 doesn’t require any particular user interface elements, supports both prerecorded and live speech, and provides lots of additional context for transcriptions, such as multiple interpretations, confidence levels, and timing information. In this blog post, you will learn how to use the new iOS 10 Speech Recognition API to perform speech-to-text in a mobile app.

Introduction to Speech Recognition

The Speech Recognition API is available as part of the iOS 10 release from Apple. To ensure that you can build apps using the new iOS 10 APIs, confirm that you are running the latest Stable build from Xamarin in the updater channel in Visual Studio or Xamarin Studio. Speech recognition can be added to our iOS applications in just a few steps:

  1. Provide a usage description in the app’s Info.plist file for the NSSpeechRecognitionUsageDescription key.
  2. Request authorization to use speech recognition by calling SFSpeechRecognizer.RequestAuthorization.
  3. Create a speech recognition request and pass it to an SFSpeechRecognizer to begin recognition.

Providing a Usage Description

Privacy is a big part of building mobile applications; both iOS and Android have recently revamped the way apps can request user permissions such as the ability to use the camera or microphone. Because the audio is temporarily transmitted to and stored on Apple servers to perform translation, user permission is required. Be sure to take into account various other privacy considerations when deciding to use the Speech Recognition API.

To use the Speech Recognition API, open Info.plist and add an entry with NSSpeechRecognitionUsageDescription as the Property, String as the Type, and, as the Value, the message you would like to display to the user when requesting permission to use speech recognition.

Info.plist for requesting user permissions.

Note: If the app will be performing live speech recognition, you will need to add an additional entry with the key NSMicrophoneUsageDescription.
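In the raw Info.plist XML, those entries look something like this (the description strings are just examples):

<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition is used to transcribe your audio into text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used to capture live speech for recognition.</string>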

Request Authorization for Speech Recognition

Now that we have added our key(s) to Info.plist, it’s time to request permission from the user by using the SFSpeechRecognizer.RequestAuthorization method. This method has one parameter, an Action<SFSpeechRecognizerAuthorizationStatus>, that allows us to handle the various scenarios that could occur when we ask the user for permission:

  • SFSpeechRecognizerAuthorizationStatus.Authorized: Permission granted from the user.
  • SFSpeechRecognizerAuthorizationStatus.Denied: Permission denied from the user.
  • SFSpeechRecognizerAuthorizationStatus.NotDetermined: Awaiting Permission approval from user.
  • SFSpeechRecognizerAuthorizationStatus.Restricted: Device does not allow usage of SFSpeechRecognizer.
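A minimal sketch of the authorization request, assuming it runs inside a view controller (the status handling shown is illustrative):

SFSpeechRecognizer.RequestAuthorization(status =>
{
    // The callback is not guaranteed to arrive on the UI thread
    InvokeOnMainThread(() =>
    {
        if (status == SFSpeechRecognizerAuthorizationStatus.Authorized)
        {
            // Safe to create and issue recognition requests from here on
        }
    });
});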

Recognizing Speech

Now that we have permission, let’s write some code to use the new Speech Recognition API! Create a new method named RecognizeSpeech that takes in an NSUrl as a parameter. This is where we will perform all of our speech-to-text logic.

public void RecognizeSpeech(NSUrl url)
{
    var recognizer = new SFSpeechRecognizer();
    // Is the default language supported?
    if (recognizer == null)
        return;
    // Is recognition available?
    if (!recognizer.Available)
        return;
}

SFSpeechRecognizer is the main class for speech recognition in iOS 10. In the code above, we “new up” an instance of this class. If speech recognition is not available in the current device language, the recognizer will be null. We can then check if speech recognition is available and authorized before using it.

Next, we’ll create and issue a new SFSpeechUrlRecognitionRequest with a local or remote NSUrl to select which prerecorded audio to recognize. Finally, we can use the SFSpeechRecognizer.GetRecognitionTask method to issue the speech recognition call to the server. Because recognition is performed incrementally, we can use the callback to update our user interface as results return. When speech recognition is completed, SFSpeechRecognitionResult.Final will be set to true, and we can use SFSpeechRecognitionResult.BestTranscription.FormattedString to access the final transcription.

// Create recognition task and start recognition
var request = new SFSpeechUrlRecognitionRequest(url);
recognizer.GetRecognitionTask(request, (SFSpeechRecognitionResult result, NSError err) =>
{
    // Was there an error?
    if (err != null)
    {
        var alertViewController = UIAlertController.Create("Error", $"An error recognizing speech occurred: {err.LocalizedDescription}", UIAlertControllerStyle.Alert);
        PresentViewController(alertViewController, true, null);
    }
    else
    {
        // Update the user interface with the speech-to-text result.
        if (result.Final)
            SpeechToTextView.Text = result.BestTranscription.FormattedString;
    }
});

That’s it! Now we can run our app and perform speech-to-text using the new Speech Recognition APIs as part of iOS 10.

Performing More Complex Speech & Language Operations

The Speech Recognition APIs from iOS 10 are great, but what if we need something a bit more complex? Microsoft Cognitive Services has a great set of language APIs for handling speech and natural language, from speaker recognition to understanding speaker intent. For more information about Microsoft Cognitive Services language and speech APIs, check out the Microsoft Cognitive Services webpage.

Wrapping Up

In this blog post, we took a look at the new Speech Recognition APIs that are available to developers as part of iOS 10. For more information on the Speech Recognition APIs, visit our documentation. Mobile applications that want to build conversational user interfaces should also check out the documentation on iOS 10’s SiriKit. To download the sample from this blog post, visit my GitHub.

The post Speech Recognition in iOS 10 appeared first on Xamarin Blog.

September 25, 2016 6:56 GMT

Improving layout performance on Android

I've been working on improving the performance of some of my Xamarin Android apps recently. One of the things I've been hunting down and improving is GPU overdraw: how many times the same pixels on the screen are drawn per frame. Minimising this improves drawing performance on Android, which in the end means smoother scrolling, faster drawing of views, and a generally smoother app. The other thing is to hunt down nested layouts and flatten them, to improve the performance of views laying themselves out on the screen.

There are actually quite a lot of things you can do in your app to reduce GPU overdraw and the time it takes to lay out your views. I will try to cover some of them in this blog post.

Flattening your Layouts

In order to improve how fast your views are laid out on the screen, you can do one very important thing: flatten your layout. What does this mean? Let me show you an example!

Consider the following page layout, which is very much made out of nested LinearLayouts.

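Something along these lines – an illustrative reconstruction, not the exact markup:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"
        android:orientation="horizontal">
        <Button
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="One" />
        <Button
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="Two" />
    </LinearLayout>
    <!-- ...more nested LinearLayouts with weights... -->
</LinearLayout>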


The problem with the nested layout above is the number of measure passes that have to be done in order to lay it out. Each child with a layout_weight tag needs to measure itself twice, and the other layouts need to measure themselves too. A more complex layout with a lot of nesting would result in excessive measure passes and make displaying your layout slow, especially in cases where the layout is used as a row layout. This would hit the app's performance quite a lot, and there would be noticeable slowdowns in your app.
The layout above can be flattened using RelativeLayout. You will notice that layout_weight is quite powerful for laying out equally sized views, so some tricks have to be used to achieve the same with a RelativeLayout. The performance in the end is much better, though.


As you can see, the flattened layout employs two additional layouts, which are used to center the other views. An alternative to RelativeLayout for percentage-sized views is PercentRelativeLayout from the Android.Support.Percent package. With it you can set aspect ratios, use percentages for widths and heights, and more. I would recommend keeping your layouts as simple as possible.

If you simply wish to stack views on top of each other, you can use FrameLayout, which is a really good performer as well.

You can read more about optimizing layouts in the official Android documentation, which also shows how to use the Hierarchy viewer to inspect slow layouts.

GPU Overdraw

GPU overdraw is another problem you may encounter. What is overdraw? It tells us how many times the same pixel gets drawn during a frame. Why is this bad? The more times the same pixels get drawn, the more time we waste. For fluid animations, scrolling, and so on, you need as high a frame rate as possible. A good goal is to hit 60 FPS (frames per second) all the time, which means we need to spend as little time as possible drawing a frame – below 16 ms. That is not a long time! Let's explore some things you can do as a developer to improve on this.

Enable showing GPU overdraw

You can enable an overlay on your device or emulator, which will show you the GPU overdraw done by your app. You can find it in developer settings.

 This will give you a funny looking screen with colors laid on top of it. Now these colors actually mean something.
Overdraw chart from Android Documentation
The purple-ish blue means pixels have been overdrawn once, green means twice, light red means three times, and dark red means four times or more. You will also see content in its original color, which means those pixels have not been overdrawn. The aim is to have no overdraw at all. However, this can be very hard to accomplish unless you just have a background drawn on the screen.

Removing backgrounds from views

 A simple thing to reduce overdraw is to just remove backgrounds from views. Let us consider the first layout I showed you, now with everything having a background.

This gives us the following when shown with GPU overdraw debugging enabled.

Our layout is RED just because we added backgrounds to our layouts. Removing these backgrounds reduces overdraw significantly and in turn improves the performance of your app.

Just removing the outermost background reduces overdraw, and in this layout the change won't be visible anyway.
The two nested LinearLayouts use the same color; what if we use that as our theme background and remove the color from the layouts?
Again, less overdraw. Here is the view without GPU overdraw enabled.
In this case, since the buttons themselves have a background, there will be some overdraw, and sometimes we cannot do anything to prevent that. However, simply reducing a layout from being red all over to green or light blue means a lot for performance, especially where the layout is used in a ListView, RecyclerView, or similar adapter type of view, as less time is spent drawing each row.

Hence, try to avoid using backgrounds, especially if you can't see them at all. Also, instead of adding a background to each layout, it's a good idea to add that background to your theme, which is pretty simple.
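A sketch of what that looks like in values/styles.xml (the theme and color names are illustrative):

<style name="AppTheme" parent="Theme.AppCompat.Light">
    <!-- Drawn once by the window, so child layouts don't need their own backgrounds -->
    <item name="android:windowBackground">@color/window_background</item>
</style>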

You can also opt to actively remove backgrounds from views with android:background="@null" for views whose background you don't really care about.

As for shadows, borders, and the like, which you might implement as a background: if you really must have them, use 9-patches with transparency in the areas you don't show anyway. Android will optimize the drawing of these for you and will not overdraw there.

Reducing overdraw in custom views

You might have views that override the OnDraw method, where you draw to the Canvas it provides. Overdraw matters here as well; OnDraw is essentially what normal views use when they draw themselves on the screen, so you have to be careful here too.

One way to eliminate all overdraw is to draw to a Bitmap first and then draw that to the canvas. This is normally known as double buffering. Be careful with it, though: it adds the overhead of first drawing to the Bitmap, then to the canvas, which draws it to the screen.


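Here's a minimal sketch of that approach (illustrative; it assumes a custom view that owns its buffer):

public class BufferedView : View
{
    Bitmap buffer;

    public BufferedView(Context context) : base(context) { }

    protected override void OnDraw(Canvas canvas)
    {
        if (buffer == null)
        {
            buffer = Bitmap.CreateBitmap(Width, Height, Bitmap.Config.Argb8888);
            var bufferCanvas = new Canvas(buffer);
            // ... do all the expensive drawing against bufferCanvas ...
        }

        // A single blit to the screen: no overdraw beyond the background
        canvas.DrawBitmap(buffer, 0, 0, null);
    }
}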
The above code shows a simplified version of it. It does indeed result in just 1x overdraw when drawn on a background. However, if your draw code is slow, you may encounter flickering if you need to redraw your view a lot.
SurfaceView in Android does this a bit differently: it does all the buffered drawing on a separate thread. You could do that as well, then call Invalidate() or PostInvalidate() when you need the buffer to be shown on the screen.

The technique I ended up using is a modified version, where I delay the drawing until everything is drawn on the Bitmap, then signal with PostInvalidate(). It looks something like the code below.


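A sketch of that variation (illustrative; Refresh() is the manual signal described below):

public class GraphView : View
{
    Bitmap buffer;
    CancellationTokenSource cts = new CancellationTokenSource();
    bool needsRedraw;

    public GraphView(Context context) : base(context) { }

    public void Refresh()
    {
        // Any in-flight drawing would produce stale data; cancel it
        cts.Cancel();
        cts = new CancellationTokenSource();
        needsRedraw = true;
        PostInvalidate();
    }

    protected override void OnDraw(Canvas canvas)
    {
        // Always show the last finished buffer, if any
        if (buffer != null)
            canvas.DrawBitmap(buffer, 0, 0, null);

        if (!needsRedraw)
            return;
        needsRedraw = false;

        var token = cts.Token;
        Task.Run(() =>
        {
            var newBuffer = Bitmap.CreateBitmap(Width, Height, Bitmap.Config.Argb8888);
            var bufferCanvas = new Canvas(newBuffer);
            // ... long-running drawing against bufferCanvas ...
            if (!token.IsCancellationRequested)
            {
                buffer = newBuffer;
                PostInvalidate(); // the buffer changed; draw it
            }
        }, token);
    }
}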
Now, this could probably be made a bit simpler. However, what I achieve with this is: whenever I manually signal with Refresh(), I cancel any current drawing operations to the buffer, as I am not interested in what they produce – it's old data. PostInvalidate() triggers the OnDraw method whenever the GPU is ready. In there I kick off a new Task which draws to the buffer; when that Task is done, it calls PostInvalidate() to signal that the buffer has changed and should be drawn. It is a variation of double buffering which allows draw operations to take a long time. This has resulted in smooth scrolling of the RecyclerView I am using, with these graphs as rows, and no overdraw.

Maybe you can use some of these techniques in your app; let me know what you find out in your application.
In general, you want to:
  • Flatten your layouts to reduce measure calls
    • Use RelativeLayout instead of LinearLayout with weights
    • Even better, use FrameLayout for stacked views
  • Remove backgrounds that are not shown anyway
    • Use the theme background where applicable
    • Set android:background="@null"
  • Use 9-patches for borders and shadows
  • Reduce overdraw in your own OnDraw calls

Resources

https://medium.com/@elifbon/android-application-performance-step-1-rendering-ba820653ad3
https://www.hackerearth.com/practice/notes/rendering-performance-in-android-overdraw/
http://www.xenomachina.com/2011/05/androids-2d-canvas-rendering-pipeline.html
http://developer.android.com/tools/performance/debug-gpu-overdraw/index.html
http://developer.android.com/reference/android/graphics/Canvas.html
https://www.udacity.com/course/android-performance--ud825


September 25, 2016 6:49 GMT

Creating Slack Slash Commands With Azure Functions

One of the nice features of Slack is how easy they make it to add custom slash commands simply by providing an HTTP endpoint for them to call. This is the kind of scenario where a "serverless" architecture shines, allowing you to create these HTTP endpoints without having to maintain any actual infrastructure for it. In this post I'll show how easy it is to write a custom Slack slash command that is backed by Azure Functions. The command will allow the user to say /logo {query} to look up the logo for a given company.

Create The Function

To start off we'll implement this function in C#. There's not much code, so I'll just include it all at once:

#r "Newtonsoft.Json"
using System.Net;  
using System.Net.Http.Formatting;  
using System.Collections.Generic;  
using Newtonsoft.Json;

public class Company  
{
    public string Logo { get; set; }
    public string Name { get; set; }
}

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)  
{
    var text = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "text", true) == 0)
        .Value;

    using (var client = new HttpClient()) 
    {
        var json = await client.GetStringAsync($"https://autocomplete.clearbit.com/v1/companies/suggest?query={Uri.EscapeUriString(text)}");
        var companies = JsonConvert.DeserializeObject<IList<Company>>(json);
        var company = companies.First();

        var output = new 
        {
            text = $"Here's the logo for *{company.Name}*:",
            attachments = new[] 
            {
                new { image_url = company.Logo, text = company.Logo }
            }
        };

        return req.CreateResponse(HttpStatusCode.OK, output, JsonMediaTypeFormatter.DefaultMediaType);
    }
}

Using the string passed through from Slack, we query Clearbit's Autocomplete API, pick the first company returned, construct a message in the format Slack expects, and return that as a JSON response. We also get to use all the nice things we're used to in C# development, like async/await, HttpClient, and Json.NET.

Finally, we'll need to update function.json to set the HTTP input and output:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "authLevel": "anonymous",
      "direction": "in"
    },
    {
      "type": "http",
      "name": "res",
      "direction": "out"
    }
  ],
  "disabled": false
}

That's everything needed for the function, which should now be fully operational.

Create the Slash Command

In your Slack settings, choose to create a new slash command, set it to use a GET, and point it at your Azure Function:

Slash Command Config

Save that and you should be good to go! Let's try it out:

/logo olo

Olo Logo

/logo microsoft

Microsoft Logo

Easy!

Make That Function More Functional

For extra credit, let's recreate this function in F# because F# is awesome. First we'll need a project.json file to bring in some dependencies:

{
    "frameworks": {
        "net46": {
            "dependencies": {
                "FSharp.Data": "2.3.2",
                "Newtonsoft.Json": "9.0.1"
            }
        }
    }
}

This function can use the same function.json settings as the C# version. Finally, the code:

open System.Net  
open System.Net.Http.Formatting  
open System.Text  
open FSharp.Data  
open Newtonsoft.Json  

type Company = { Name: string; Logo: string; }  
type Attachment = { image_url: string; text: string; }  
type Message = { text: string; attachments: Attachment[]; }

let getSearchQuery (req: HttpRequestMessage) =  
    req.GetQueryNameValuePairs() 
    |> Seq.find (fun pair -> pair.Key.ToLowerInvariant() = "text")
    |> fun pair -> pair.Value

let getCompany query =  
    Uri.EscapeUriString query
    |> sprintf "https://autocomplete.clearbit.com/v1/companies/suggest?query=%s"
    |> Http.RequestString
    |> JsonConvert.DeserializeObject<Company[]>
    |> Array.head

let Run (req: HttpRequestMessage) =  
    getSearchQuery req
    |> getCompany
    |> fun company -> { text = (sprintf "Here's the logo for *%s*:" company.Name)
                        attachments = [| { image_url = company.Logo; text = company.Logo } |] }
    |> JsonConvert.SerializeObject
    |> fun json -> new HttpResponseMessage(HttpStatusCode.OK, Content = new StringContent(json, Encoding.UTF8, "application/json"))    

Now our function is nice and functional.

September 23, 2016 12:17 GMT

Pro-tip: copy/paste URL to the iOS simulator

Just a quick tip if you need to copy and paste text to your iOS simulator.

For an app we are developing, we send out verification links through e-mail with a hash. After finding out that cmd + C and cmd + V weren’t working, I actually did type it over more than once…

For whatever reason I decided to try to drag-and-drop the text onto the Simulator, and what do you know?! It worked! For URLs, that is… plain text doesn’t seem to be picked up, unfortunately.


September 23, 2016 12:17 GMT

Realities of Cross-Platform Development: How Platform-Specific Can You Go? - Part 1

My personal beliefs on cross-platform development were formed in November 1993. I worked at The Coca-Cola Company at the time, and a few colleagues and I were discussing how to provide Mac users with the same set of applications that we were building on Windows 3.1 with PowerBuilder.

The discussion centered around the UI. The concern was with providing a Windows-centric interface to Mac users. I remember one of the great lines from the meeting: "If you provide Windows help to a user expecting Mac balloon help, you are going to have users that hate you." After this and some personal experiments, I came to agree with this viewpoint. If developers want to make users as happy as possible, they need to call the native APIs of the platform.

Fast-forward to 2009, when I fell in love with the Xamarin iOS product from the first announcement. Xamarin had created a way for C# developers to use the Microsoft .NET Framework and to call into the native platform APIs. Xamarin Android operated the same way. While I would often discuss the need for a cross-platform UI with Xamarin management, I never forgot the lessons of cross-platform from many years ago. At the same time, I knew that there was a need for a cross-platform toolset to create an application. I had talked to enough people to understand the pain and agony. Quite honestly, I was a fence sitter in this area. I liked the idea of Xamarin.Forms (XF), but I was never quite ready to make the jump and, honestly, try it for anything more than some personal experiments and helping a couple of customers.

Url: Platform Specific with Xamarin-Forms

September 22, 2016 8:19 GMT

Debugging provisioning profiles on the command line

Raise your hand if you’ve ever struggled with getting your app’s bundle identifier, info.plist, and entitlements.plist to match up with your provisioning profile.

I tried to explain provisioning profiles using the ten-hundred most common words, but in slightly-less-common words, a development prov-pro associates: a team, a developer, an application identifier, privacy and security entitlements, and development devices.

While there’s no silver bullet, there is a way to dump the contents of a provisioning profile into a readable plist format. From the command-line, run:

security cms -D -i some.mobileprovision

Here, for instance, is the output of a provisioning profile for an app that uses SiriKit to trigger a workout:


As you can see, this is a convenient way to confirm the associations in the prov-pro, particularly entitlements, the app ID, and provisioned devices.

September 22, 2016 5:41 GMT

Iowa Caucuses Launch Inaugural Polling Apps with Xamarin

As the 2016 election continues to heat up, we’re putting a spotlight on where it all began: the Iowa Caucuses. The February 1, 2016 Iowa Caucus kicked off the US Presidential nominations, and early poll results traditionally play a huge role in the Republican and Democratic Parties’ candidate selection. This year, both parties partnered with Microsoft and InterKnowlogy, a Microsoft Gold Partner, to create Xamarin-based mobile apps, boosting the accuracy and security of the Caucus, as well as making it easier for precinct voters to cast their ballots.

During the 2012 Iowa Caucuses, the Republican Party incorrectly reported its winning candidate, and the complex caucus voting rules and reporting process made the true outcome almost impossible to determine. The touchtone-phone based system was prone to error, most notably precincts submitting duplicate entries that skewed results.

Determined to avoid issues and increase public confidence in election results, both Parties realized mobile technologies offered the best solution, but delivering apps that met the standards required for such an important event wasn’t without challenges.

The Iowa Caucus Apps’ criteria, at a glance:

  • As consumer-facing apps, both Parties needed phone and tablet versions to distribute via all major public app stores, resulting in 12 apps across Android, iOS, and Windows.
  • Security and fidelity were a must, especially user authentication. While the app was publicly available, only registered Caucus Precinct Chairs were granted access to the reporting functionality. Timing was also important: Precinct Chairs needed to access reporting immediately when voting opened, but not a moment beforehand. To validate user identity, InterKnowlogy incorporated two-factor authentication.
  • Since Iowa Caucus participants cover all demographics, including less tech-savvy citizens, the apps needed to be highly intuitive and responsive, requiring little training and eliminating the ability to mistakenly report information.
  • The apps needed to handle complex logic, calculate and validate results according to party rules, catch invalid entries, and include prompts for conditional voting processes. Before results were submitted and announced to the public, they needed to be validated with any anomalies flagged for analysis.

After a diligent requirements gathering and user experience design process, the InterKnowlogy team faced an extremely aggressive four month timeline. However, using Xamarin, Microsoft Azure, and their deep Microsoft expertise, they successfully delivered apps across all platforms with just five .NET developers dedicated to the project. On Caucus day in Des Moines, the final apps captured 90% of caucus results within three hours in a secure, accurate, and trusted manner.

 

View the Case Study

 
Start building your own native Android, iOS, and Windows apps with Xamarin today at xamarin.com/download.

The post Iowa Caucuses Launch Inaugural Polling Apps with Xamarin appeared first on Xamarin Blog.

September 22, 2016 1:55 GMT

iPhone 7+ – First Look

What can I say that you don’t already know?

[Updated Sept 23, 09:19:

To reset the iPhone 7, hold the power button and the volume down button until you see the Apple logo.

The battery does seem to last much longer, but beware, it takes much longer to charge ]

The packaging is Apple standard (read beautiful), upgrading from my 6S+ was a piece of cake, activating the phone could not have been easier, and all my stuff was restored without a hitch.

Verizon is buying my old phone for $300, which offsets the new price by quite a bit.

One feature Apple is highlighting is that you can get the phone wet (and it is dust resistant as well). Not swimming wet, not salt-water wet, but you can be out in the rain or drop it in the toilet and it keeps on ticking. That actually matters a lot.

For me, however, the real killer feature is the camera. This is not just a convenient camera in a phone, this is a beauty. If you’re serious about the phone as a camera, you really want the Plus (get bigger pockets). Not only do you get a 2x optical zoom and 10x digital zoom, but, more important, you get a better lens and spectacular images. Both the 7 and the 7+ have a greatly enhanced ability to take great pictures in darker places.

They did change the earbuds to use the Lightning connector, which helps make the phone water resistant but is also a real pain. The phone comes with one set of the new earbuds and one converter to allow you to use your old buds. You can get more converters for $9 each, which isn’t too bad. The biggest problem is that I sometimes like to charge my phone while listening to it, and you can’t do that.

Note, however, that the 7+ is not only bigger than the 7, it is considerably heavier (6.63 oz vs. 4.87 oz). Two ounces is noticeable, but not significant for what you get. I don’t think I could go back to the 7 size… reading and images on the Plus are just too superior.

The phone now offers 32G, 128G and 256G storage. I bought the 128.

The retina display is beautiful.  The button is a bit funky, but you get used to it pretty quickly.

Oh, yeah, it is faster, using an A10 chip rather than an A9.

Watch out though: even though it is the same size, you cannot use your iPhone 6 protective case. The camera is wider now and the old cases block it.

I bought an “Extra Tough” case on Amazon for $14… I’ll let you know.  (I dropped the phone literally five minutes after putting the case on.  It landed face down but was fine)

The new phone every year program is terrific.  You pay a bit more each month, but then again AppleCare+ is included, so it is close to a wash.  Your payments are spaced out over 24 months, but after 12 payments you are eligible to upgrade (trading in your old phone to offset the cost).  The phones are unlocked, so feel free to pick your carrier.  The iPhone 7+ with 128GB is $41.58/month.  Verizon charges a $20 line fee.  Not cheap but not insane.

My phone came with iOS 10, so I had to immediately upgrade to 10.0.1; not a big deal.

This post will be updated as I learn more.  Please leave your experiences in the comments section.

Thanks.

 

 

September 21, 2016 6:38 GMT

Xamarin at Microsoft Ignite

Xamarin will be in full force at Microsoft Ignite September 26–30!

If you’re heading to Georgia, you can find us at the “Mobile Development & Xamarin” totem in the Developer Tools section of the Cloud + Enterprise area of the expo floor.

You’ll also have the opportunity to attend Pierce Boggan’s Pre-Day Training session, “Build Cross-platform Enterprise Mobile Apps with Visual Studio and Xamarin” on Sunday, September 25. Additionally, Xamarin’s Dan Waters will present a theater session on how to “Ship Better Mobile Apps Faster with Continuous Delivery”. Other sessions include:

Visit Microsoft Ignite to view the full agenda and add a calendar reminder to join the event online if you won’t be attending in person.

The post Xamarin at Microsoft Ignite appeared first on Xamarin Blog.

September 21, 2016 1:24 GMT

MacOs Sierra – first look

[Updated 11:26 EDT]

macOS Sierra is here (goodbye Mac OS X).

It has only a few new features, most notable of which is Siri on the Mac.

Siri works OK, not great. And even when she fully understands my requests, there is only so much she can do: great for looking up faces in your photos, good for looking up things on the web, not so great for making phone calls. And why would I want her to open Word when I can do so myself in half the time?

A perhaps more important feature is the ability to copy and paste from the Mac to the phone.  That is very cool — not needed often, but nice when you do need it.

There is better file syncing through iCloud (for files on the desktop or in the Documents folder). There are some file space optimization features (it can be set to delete videos you’ve watched) and everyone’s favorite useless feature: bigger emojis!

A really nice, if small, feature is that clicking on the speaker in the menu bar allows you not only to set the volume but also to switch among your output options. Excellent.

So far, no negative effects on Xamarin Studio or other programming tools.

If you are being too productive, you’ll love the picture in picture feature, which lets you watch a pop out video in a small window while trying to debug your program.

Oh yes, I almost forgot: you can unlock your Mac from your watch. Why would you want to? I have no idea. And it works from 5–9 feet according to CNET, which means you can “lock” your Mac, but if you’re 5 feet away, it won’t be locked at all.

Bad News

It broke my Evernote scanner, so I had to plug the scanner into a non-upgraded laptop (grrr). This apparently is a known problem with this particular scanner, however, so you should be fine with printers.

It does not support my Bose Companion 5 speakers and there is no word from Bose on this.  Very annoying.

On the other hand, the upgrade is easy and free, but I’m not sure I’d rush.

September 20, 2016 9:00 GMT

Enhanced Notifications in Android N with Direct Reply

One of my favorite parts of Android has to be its notification system, which enables developers to directly connect with their users outside of the main application. With the launch of Android N, notifications are getting a visual makeover, including a new material design with rearranged and resized content to make them easier to digest, plus some new details specific to Android N such as the app name and an expander. Here is a nice visual overview of the change from Android M to N:

Android N Notification

Visuals aren’t the only thing getting updated in Android N; there are a bunch of great new features for developers to take advantage of. Bundled notifications allow developers to group notifications together by using the Builder.SetGroup() method. Custom views have been enhanced, and it is now possible to use the system notification headers, actions, and expanded layouts with a custom view. Finally, my favorite new feature has to be Direct Reply, which allows users to reply to a message within a notification so they don’t even have to open the application. This is similar to how Android Wear applications can send text back to the main application.


Getting Started

In previous versions of Android, all developers could handle were notification and action tap events, which would launch an Activity or a service/broadcast receiver when using an action. The idea of Direct Reply is to extend an action with a RemoteInput to enable users to reply to a message without having to launch the application. It’s still best practice to handle responding to messages inside an Activity, as the user may decide to tap on the notification or may be on an older operating system.

A prerequisite to implementing Direct Reply is having a broadcast receiver or service implemented that can receive and process the incoming reply from the user. For this example, we’ll be launching a notification from our MainActivity that will send an Intent with a value of “com.xamarin.directreply.REPLY” that our broadcast receiver will filter on.

First, ensure that the latest Android Support Library v4 NuGet is installed in the Android application to use the compatibility mode for notifications.

In our MainActivity, we’ll create a few constant strings that can be referenced later in the code:

int requestCode = 0;
public const string REPLY_ACTION = "com.xamarin.directreply.REPLY";
public const string KEY_TEXT_REPLY = "key_text_reply";
public const string REQUEST_CODE = "request_code";

Create a Pending Intent

An Android PendingIntent is a description of an Intent and the target action to perform with it. In this case, we want to create one that will trigger our reply action if the user is on Android N, or launch the MainActivity if the user is on an older device.

Intent intent = null;
PendingIntent pendingIntent= null;
//If Android N then enable direct reply, else launch main activity.
if ((int)Build.VERSION.SdkInt >= 24)
{
    intent = new Intent(REPLY_ACTION)
			.AddFlags(ActivityFlags.IncludeStoppedPackages)
			.SetAction(REPLY_ACTION)
			.PutExtra(REQUEST_CODE, requestCode);
    pendingIntent = PendingIntent.GetBroadcast(this, requestCode, intent, PendingIntentFlags.UpdateCurrent);
}
else
{
    intent = new Intent(this, typeof(MainActivity));
    intent.AddFlags(ActivityFlags.ClearTop | ActivityFlags.NewTask);
    pendingIntent = PendingIntent.GetActivity(this, requestCode, intent, PendingIntentFlags.UpdateCurrent);
}

Create and Attach RemoteInput

The key to direct reply is to create and attach a RemoteInput, which will tell Android that this action that we’re adding is a direct reply and thus should allow the user to enter text.

var replyText = "Reply to message...";
//create remote input that will read text
var remoteInput = new Android.Support.V4.App.RemoteInput.Builder(KEY_TEXT_REPLY)
						        .SetLabel(replyText)
                                                        .Build();

After we have the RemoteInput, we can create a new action and attach the remote input to it:

var action = new NotificationCompat.Action.Builder(Resource.Drawable.action_reply,
                                                   replyText,
                                                   pendingIntent)
                                                  .AddRemoteInput(remoteInput)
                                                  .Build();

Build and Send Notification

With our action with remote input created, it’s finally time to send the notification.

var notification = new NotificationCompat.Builder(this)
    .SetSmallIcon(Resource.Drawable.reply)
    .SetLargeIcon(BitmapFactory.DecodeResource(Resources, Resource.Drawable.avatar))
    .SetContentText("Hey, it is James! What's up?")
    .SetContentTitle("Message")
    .SetAutoCancel(true)
    .AddAction(action)
    .Build();

using (var notificationManager = NotificationManagerCompat.From(this))
{
    notificationManager.Notify(requestCode, notification);
}

Now our notification is live with the remote input visible.

Processing Input

When the user inputs text into the direct reply, we’re able to retrieve the text from the Intent that is passed in with just a few lines of code:

var remoteInput = RemoteInput.GetResultsFromIntent(Intent);
var reply = remoteInput?.GetCharSequence(MainActivity.KEY_TEXT_REPLY) ?? string.Empty;

This should be done in a background service or broadcast receiver with the “com.xamarin.directreply.REPLY” Intent Filter specified.

Here’s our final BroadcastReceiver, which will pop up a toast message and update the notification to stop its progress indicator:

/// <summary>
/// A receiver that gets called when a reply is sent.
/// </summary>
[BroadcastReceiver(Enabled = true)]
[Android.App.IntentFilter(new[] { MainActivity.REPLY_ACTION })]
public class MessageReplyReceiver : BroadcastReceiver
{
	public override void OnReceive(Context context, Intent intent)
	{
		if (!MainActivity.REPLY_ACTION.Equals(intent.Action))
			return;

		int requestId = intent.GetIntExtra(MainActivity.REQUEST_CODE, -1);
		if (requestId == -1)
			return;

		var reply = GetMessageText(intent);

		using (var notificationManager = NotificationManagerCompat.From(context))
		{
			// Create a new notification to display, or re-build the existing
			// conversation to update it with the new response.
			var notificationBuilder = new NotificationCompat.Builder(context);
			notificationBuilder.SetSmallIcon(Resource.Drawable.reply);
			notificationBuilder.SetContentText("Replied");
			var repliedNotification = notificationBuilder.Build();

			// Calling Notify stops the progress spinner on the original notification.
			notificationManager.Notify(requestId, repliedNotification);
		}

		Toast.MakeText(context, $"Message sent: {reply}", ToastLength.Long).Show();
	}

	/// <summary>
	/// Get the message text from the intent.
	/// Note that you should call RemoteInput.GetResultsFromIntent
	/// to process the RemoteInput.
	/// </summary>
	/// <returns>The message text.</returns>
	/// <param name="intent">Intent.</param>
	static string GetMessageText(Intent intent)
	{
		var remoteInput = RemoteInput.GetResultsFromIntent(intent);
		return remoteInput?.GetCharSequence(MainActivity.KEY_TEXT_REPLY) ?? string.Empty;
	}
}

Learn More

To learn more about the great new features in Android N, including Notification enhancements, be sure to read our full Android N Getting Started Guide. You can find a full example of Direct Reply and other notification enhancements in our Samples Gallery.

The post Enhanced Notifications in Android N with Direct Reply appeared first on Xamarin Blog.

September 19, 2016 6:49 GMT

New iOS 10 Privacy Permission Settings

If you’ve ever built an iOS application, you’ll already be familiar with requesting app permissions (and most likely are familiar with Android, too, since the Marshmallow release). Prior to iOS 10, if an app wanted access to a user’s location or to use push notifications, it would prompt the user to grant permission.

In iOS 10, Apple has changed how most permissions are controlled by requiring developers to declare ahead of time any access to a user’s private data in their Info.plist. In this blog post, you’ll learn how to ensure your existing Xamarin apps continue to work flawlessly with iOS 10’s new permissions policy.

Example iOS 9 Permissions Request

For instance, if we wanted to integrate photos into our application, we would want to request permission with the following code:

PHPhotoLibrary.RequestAuthorization(status =>
{
  switch (status)
  {
    case PHAuthorizationStatus.Authorized:
      // Access granted: safe to read from the photo library.
      break;
    case PHAuthorizationStatus.Denied:
      // The user explicitly denied access.
      break;
    case PHAuthorizationStatus.Restricted:
      // Access is restricted, e.g. by parental controls.
      break;
    default:
      break;
  }
});

The above code would bring up a dialog box requesting permission that we could handle, with the message supplied directly by the system.

What’s New in iOS 10

Starting in iOS 10, nearly all APIs that require authorization, such as opening the camera or photo gallery, require a new key/value pair describing their usage in the Info.plist. This is very similar to the existing requirement to put NSLocationWhenInUseUsageDescription or NSLocationAlwaysUsageDescription into the Info.plist when using the Geolocation and iBeacon APIs. The difference now is that the application will crash when it attempts authorization without these keys set. These include use of:

  • Bluetooth Sharing
  • Calendar
  • CallKit/VoIP
  • Camera
  • Contacts
  • Health
  • HomeKit
  • Location
  • Media Library
  • Microphone
  • Motion
  • Photos
  • Reminders
  • Speech Recognition
  • SiriKit
  • TV Provider

These new attributes only take effect when we start compiling against the iOS 10 SDK, which means we must provide the keys when using these APIs. If we want to use the Media Plugin for Xamarin and Windows, for example, to take or browse for a photo, we must add camera and photo privacy settings to the Info.plist file. When we attempt to pick a photo, our message will then be shown to the user.

Each of the privacy keys maps to a specific value that is set in the Info.plist. Opening it in a text editor, we’ll see the following:

<key>NSCameraUsageDescription</key>
<string>This app needs access to the camera to take photos.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs access to photos.</string>

Here’s a mapping of each of the values in case you need to manually add them to the Info.plist:

  • Bluetooth Sharing – NSBluetoothPeripheralUsageDescription
  • Calendar – NSCalendarsUsageDescription
  • CallKit – NSVoIPUsageDescription
  • Camera – NSCameraUsageDescription
  • Contacts – NSContactsUsageDescription
  • Health – NSHealthShareUsageDescription & NSHealthUpdateUsageDescription
  • HomeKit – NSHomeKitUsageDescription
  • Location – NSLocationUsageDescription, NSLocationAlwaysUsageDescription, NSLocationWhenInUseUsageDescription
  • Media Library – NSAppleMusicUsageDescription
  • Microphone – NSMicrophoneUsageDescription
  • Motion – NSMotionUsageDescription
  • Photos – NSPhotoLibraryUsageDescription
  • Reminders – NSRemindersUsageDescription
  • Speech Recognition – NSSpeechRecognitionUsageDescription
  • SiriKit – NSSiriUsageDescription
  • TV Provider – NSVideoSubscriberAccountUsageDescription

Learn More

To learn more about these keys, be sure to read through Apple’s Cocoa Keys documentation. To learn more about the new APIs and changes in iOS 10, be sure to read through our Introduction to iOS 10 guide and our new iOS Security and Privacy Enhancements documentation.

The post New iOS 10 Privacy Permission Settings appeared first on Xamarin Blog.

September 19, 2016 4:38 GMT

Yet Another Podcast #164 – Azure Mobile Apps with Chris Risner

Chris Risner is a Principal Software Development Engineer at Microsoft, where he works within the Developer Experience team and leads a team focusing on making non-traditional Microsoft technology work well with Microsoft technology.


September 17, 2016 11:00 GMT

NuGet Support in Xamarin Studio 6.1

Xamarin Studio 6.1 was released last week as part of the latest stable Xamarin Platform release and it includes changes made to the NuGet support.

Changes

  • NuGet 3.4.3 support
  • Support for project.json files
  • A specific NuGet package version can now be installed from a list shown in the Add Packages dialog
  • NuGet operations can now be cancelled from the status bar or Package Console
  • Support browsing for a local directory when creating a package source
  • Support forcefully removing a NuGet package when it is missing from all package sources
  • Packages installed in the solution are no longer shown in the Add Packages dialog
  • Only global package sources are now shown in Preferences
  • NuGet version supported is now displayed in the About dialog

More information on all the new features and changes in Xamarin Studio 6.1 can be found in the release notes.

NuGet 3.4.3 support

Xamarin Studio now includes NuGet 3.4.3 which means project.json files are now supported and NuGet packages that only support NuGet 3 or above can now be installed.

Support for project.json files

The project.json file is a new package file format introduced with NuGet 3 which supports transitive restore. More detailed information on project.json can be found in the NuGet documentation.

A project.json file replaces the packages.config file and holds the NuGet packages being used by the project. One difference you will notice is that the project.json file may not show the same list of NuGet packages that a packages.config file would show. This is because the project.json file only lists the NuGet packages you explicitly install into your project. So if you install, say, bootstrap, you will only see bootstrap in the project.json file even though it depends on jQuery; if you did the same with a packages.config file, you would see both bootstrap and jQuery saved in the file. Another difference is that references are not added to your project file (.csproj) when using a project.json file.

In order to use a project.json file with Xamarin Studio you will need to create the file yourself in the project directory and close and re-open the solution. The project.json file needs to be available when you open the project otherwise Xamarin Studio will default to using a packages.config file.

An example project.json file for a .NET 4.5 library project is shown below:

{
  "frameworks": {
    "net45": {}
  }
}

When you add a NuGet package to a project that uses a project.json file the NuGet package information will be added into a dependencies section:

 "dependencies": {
   "NUnit": "3.2.1"
 }
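
Put together, a complete project.json for a .NET 4.5 library that pulls in NUnit would look something like this (simply combining the two fragments above):

{
  "dependencies": {
    "NUnit": "3.2.1"
  },
  "frameworks": {
    "net45": {}
  }
}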

Please note that when using a project.json file the project will not display a From Packages directory inside the References folder. This is because the project file does not have any references added to it when using a project.json and the reference information is currently not available from the project system.

Please note that there are future plans to move the information stored in a project.json file into the project file.

NuGet 3 package source

Xamarin Studio now supports using the NuGet 3 package source:

https://api.nuget.org/v3/index.json

This can be added into your package sources in Preferences. It is also the package source that will be created by default if your global NuGet.Config file is missing.
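
If you prefer to add it by hand, an entry along these lines in the global NuGet.Config (standard NuGet.Config schema; the key name is your choice) achieves the same thing:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- protocolVersion="3" marks this as a NuGet 3 (v3 API) source -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
  </packageSources>
</configuration>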

Installing a specific NuGet package version from the Add Packages dialog

Older versions of Xamarin Studio supported being able to install specific package versions by using a package version search in the Add Packages dialog as shown below:

NUnit version:*

This package version search was not easy to discover and so it has been removed and replaced in Xamarin Studio 6.1 with a combo box that allows a particular version to be selected. The Version combo box is in the bottom right hand corner of the Add Packages dialog as shown in the screenshots below.

Add Packages dialog

Add Packages dialog with version combo box selected

Note that in order to populate the version combo box a second request is sent to the package source so it may not show all the versions immediately.

Also note that for package sources which are local directories only the latest version will be displayed in the version combo box.

Cancelling a NuGet operation

With Xamarin Studio you can now cancel the currently running NuGet package operation. This can be done by clicking the red Stop button in the Status Bar or in the Package Console.

Status bar stop button

Adding local package sources

When adding a package source in Preferences it is now easier to create a package source for a directory on your local machine. There is now a browse button which will allow you to browse to a directory and add it rather than having to type the full path into the text box.

Add Package Source dialog

The Add Package Source dialog has also been changed to make it more obvious that either a URL or a folder can be used as a package source. The URL label has been changed to Location and the placeholder text now specifies that a URL or a folder can be used.

Forced NuGet package removal

A NuGet package can now be removed when it is not restored and is unavailable from all package sources.

With older versions of Xamarin Studio, a NuGet package had to be restored before it could be removed. This is a requirement of NuGet, since it needs the original NuGet package to work out what was installed so it can determine what needs to be uninstalled. NuGet can do more than just update the project file with references and MSBuild .targets files: it may add new files to the project, or it may run app.config or web.config transforms.

When the NuGet package removal fails because the NuGet package cannot be restored, a dialog will be displayed asking whether you want to try to remove the NuGet package anyway. If the OK button is selected, Xamarin Studio will:

  1. Remove the NuGet package from the packages.config file.
  2. Remove any assembly references for the NuGet package from the project file (.csproj).
  3. Remove any Imports that refer to .targets or .props files that were included with that NuGet package.

This process may miss files that were added to the project by NuGet but in the majority of cases it should remove the NuGet package successfully without having to manually remove the NuGet package information from the project file.

Packages installed in solution are no longer shown in Add Packages dialog

With previous versions of Xamarin Studio all packages installed in the solution were shown first in the list of packages in the Add Packages dialog. Packages installed in the solution are now no longer shown in the Add Packages dialog.

Only global package sources shown in Preferences

The package sources shown in the Preferences dialog are now only read from the global NuGet.Config file. Per-solution NuGet.Config files located in individual solution directories are no longer read when showing the package sources in Preferences. This is because changes made in Preferences only modify the global NuGet.Config file.

The package sources shown in the Add Packages dialog will still include package sources defined in a solution’s NuGet.Config file and are unaffected by this change.

NuGet version displayed in About dialog

The version of NuGet supported by Xamarin Studio is now displayed in the About dialog when the Show Details button is selected.

About dialog

Bug Fixes

Custom MSBuild .targets files were not always added to the end of the project

When installing a NuGet package that has a .targets file, the Import element created was grouped with the existing Import elements. This is fine most of the time; however, if there are other items added to the project after the import, any build targets may fail, since those items are included after the import. One example is the netfx-System.StringResources NuGet package, which may not find any resource files that occur in the project after its Import element.

Now .targets files are added as the last element in the project file. This also makes the behaviour consistent with how NuGet works in Visual Studio.

Custom MSBuild .props files were not added to the start of the project

Installing a NuGet package that included an MSBuild .props file would add an Import element for the .props file at the end of the project file which is incorrect. Now .props files are added to the project file as the first child element inside the Project’s root element.

Known Issues

Offline package restore

Package restore may not work when you are offline even though the NuGet packages may be available in the local NuGet cache on your machine.

The current workaround is to create a package source that points to a local directory containing all the required NuGet packages and disable all online NuGet package sources. With just the local package source enabled you can then restore the NuGet packages when you are offline. Note that this problem also affects Visual Studio 2015.

September 16, 2016 8:21 GMT

Xamarin Around the World with Xamarin Dev Days

Xamarin Dev Days are the place to find free hands-on Xamarin training, live demos, and a fun environment to build your very own cloud-based Xamarin.Forms application with Azure. User groups around the world are working to provide events in their cities, offering developers the opportunity to learn native mobile development for iOS, Android, and Windows from the ground up. Xamarin Dev Days have been so popular that we’re announcing another set of brand new cities all across the globe.

What are Xamarin Dev Days?

They are community-run, comprehensive introductions to building mobile apps with Xamarin, Xamarin.Forms, and creating cloud-connected mobile apps with Microsoft Azure. After lunch, there will be an opportunity to put new skills into practice with a hands-on workshop. Whether you are a brand new or experienced C#/.NET developer, every attendee will walk away with a better understanding of how to build, test, and monitor native iOS, Android, and Windows apps.


MORE Xamarin Dev Days!

9/23: Abuja, Nigeria
10/1: Bogotá, Colombia
10/1: Bangalore, India
10/8: Hanoi, Vietnam
10/8: Monterrey, Mexico
10/8: London, United Kingdom
10/8: Dakar, Senegal
10/8: Dallas, TX
10/15: Jaipur, India
10/15: Cádiz, Spain
10/15: Ankara, Turkey
10/22: Toronto, Canada
10/29: Chiapas, Mexico
10/29: Gliwice, Poland
10/29: Moka, Mauritius
11/05: Sousse, Tunisia
11/05: Kernersville, NC
11/12: Cleveland, OH
11/18: Berlin, Germany
11/19: Cranbury, NJ
11/19: Bournemouth, United Kingdom
11/25: Bari, Italy
11/26: Paris, France
12/10: Dubai, UAE

If you’re looking for an event in your area, visit the Xamarin Dev Days website for a full list of all of the Xamarin Dev Days. You can also use the interactive map there to help find a Xamarin Dev Days event near you.

Want a Xamarin Dev Days in Your City?

Apply as a Xamarin Dev Days host! We’ll provide you with everything you need for a fantastic Xamarin Dev Days event, including all of the speaker content and lab walkthrough, a hosting guideline to help organize your event, and assistance with registration and promotion. Hurry and apply for your city now—the deadline for events in 2016 closes soon!

Sponsoring Xamarin Dev Days

We’re working with tons of Xamarin Partners and community members to help facilitate the Xamarin Dev Days series. If your company is interested in participating in these awesome events, apply as a sponsor and get global recognition and access to our worldwide developer community!

The post Xamarin Around the World with Xamarin Dev Days appeared first on Xamarin Blog.

September 16, 2016 6:12 GMT

Scaling from Side Project to 200,000+ Downloads with Xamarin and Microsoft Azure

As mobile technology evolves, developers everywhere are building new, innovative apps that capture our interest and improve our lives, from creating unique social media communities to developing digital assistants and bots.

With over 200,000 downloads and 4+ stars, Foundbite strikes the right balance of practical and engaging, with apps that allow users to add sound to static images, creating “foundbites” that bring experiences, events, and places to life for their friends, followers, and fans.

James Mundy, Foundbite Founder and Lead Developer, shares how he got started with mobile development and how he was able to use his C# skills to get Foundbite into the hands of Android, iOS, and Windows users everywhere.

Tell us a little bit about your company and role. Have you always been a developer?

I started developing Foundbite while studying Physics at university in 2012. Now, we’re a London-based team of three building an app that allows you to share and explore sounds from around the world.

I started building the app as a side project. I was able to secure some funding from Microsoft and Nokia to bring it to Windows Phone first, so the very first version of our app was built in C#. Since I’d written several Windows Phone apps before this, it was a good fit.

Tell us about your app / what prompted you to build it.

The idea behind Foundbite is to allow people to share and explore the sounds of the world around them from their phone. With Foundbite, users record five seconds to five minutes of sound, add photos to give the sound context, and tag it with a location.

Users can share their creations with friends (through Facebook and Twitter) and the public Foundbite community. We also have an interactive global map that allows users to search, find, and listen to sounds from places all over the world, getting a real feeling for what it’s like to be there.

What is the most compelling or exciting aspect of your app?

The feature that resonates most with our users is its truly global nature—we’ve had uploads from the UK, US, Taiwan, Iran, China, and more—and the ability to explore a map, find a place you’re interested in or haven’t heard of before, and then listen to the sounds that another user has recorded. Recording the sound of a place really does ignite your imagination and give you a feel for what it’s like to be there.

Some Foundbite examples include: the Tennis World Tour Finals at O2 Arena, a bullet train passing in Taiwan, and the crowd cheering at the Seattle Seahawks’ stadium, plus many more on the website.

How long did it take to ship your app, from design to deploy?

Thanks to Xamarin, our whole code base is shared at around 60% across Windows, iOS, and Android platforms. This makes maintaining code and diagnosing bugs far easier, but the main advantage is that we’ve been able to deploy three highly rated apps to three different platforms with a team of just two full time developers.

We use Microsoft Azure for our backend, so we have a full Microsoft and .NET Stack. We use Azure Notification Hubs, Azure Search, Redis, Azure SQL, Azure App Service, so we also have code shared between our app client projects and our server side code, which is ideal!

How long would it have taken you without Xamarin?

It would have taken us significantly longer to develop the apps. We already had experience with C#; without Xamarin, we would have had to learn Objective-C/Swift and Java, replicating in those languages a lot of code that we had already written in C# for the Windows app.

Even though we were building the apps in C#, there was still a lot of learning to do regarding how to use the iOS and Android platform APIs and getting to grips with the nuances of each platform. Overall, the APIs were well documented, and there are very active Xamarin Forums and StackOverflow communities to turn to for help. Even without that, it’s very easy to adapt samples written in Swift/Objective-C to C#.

Are you using mobile DevOps / CI?

We’re starting to use Xamarin Test Cloud and TFS build server to improve our internal processes and improve the quality and reliability of the builds we push out to our users.

What’s your team planning to build next?

We’ve got lots more features planned, like the ability to combine several Foundbites into a collection to document a trip or event even better. Thanks (again) to Xamarin, we hope to roll this out to our users nearly simultaneously across all platforms.

What advice do you have for developers who are just starting out or investigating mobile development? Any best resources?

I’d recommend starting simple and using GitHub to find other mobile (Xamarin or otherwise) projects that developers have done and open sourced. I found this to be particularly useful in working out how apps were built and how to solve problems as I built my own app.

What would you say to a developer or enterprise just starting mobile development?

I’d definitely advise starting off with Xamarin—there’s less repeated code, you can have a more versatile, smaller team with the potential for everyone to be able to work on each platform, and a quicker development cycle, which are all advantageous for any company, whether big or small.

Using Xamarin as an early stage company has enabled us to write less, better code with a smaller team to reach more customers quicker.

To learn how our customers around the world are building amazing apps, visit xamarin.com/customers, and start building your own today at xamarin.com/download.

The post Scaling from Side Project to 200,000+ Downloads with Xamarin and Microsoft Azure appeared first on Xamarin Blog.

September 14, 2016 1:00 GMT

Gone Mobile 39: Serverless and Azure Functions with Donna Malayeri and Fabio Cavalcante

Want to know what this whole serverless thing is about? Learn all about it and what you can do with Azure Functions from Donna Malayeri and Fabio Cavalcante!

Hosts: Greg Shackles, Jon Dick

Guests: Donna Malayeri, Fabio Cavalcante

Links:

Thanks to our Sponsors!

http://raygun.com

Raygun provides error and crash reporting software for all programming languages and platforms including iOS, Android, Xamarin, Javascript and more. Don’t just log errors and crashes, solve them with Raygun!

September 13, 2016 8:36 GMT

Mysterious crashes in your iOS 10 program? Check your info.plist

If you’re developing for iOS 10 and your app “silently” crashes (especially if it’s an older app), the culprit could well be the increased privacy requirements in iOS 10. Namespaces such as HomeKit now require specific privacy-related keys to be in your Info.plist (for instance, NSHomeKitUsageDescription). If you don’t have them, the system automatically closes your application without an exception or console log message (if you run in the simulator, you may see a PRIVACY_VIOLATION notice in the stack trace).
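
For example, for the HomeKit case mentioned above, an entry along these lines in the Info.plist (the description string here is just an illustration) keeps the system from shutting your app down:

<key>NSHomeKitUsageDescription</key>
<string>This app uses HomeKit to control your accessories.</string>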

September 13, 2016 1:57 GMT

Yet Another Podcast #163: James Montemagno and Xamarin Cycle 8

James Montemagno is a Principal Program Manager on the Xamarin team at Microsoft. He has been a .NET developer since 2005, working in a wide range of industries including game development, printer software, and web services.

Prior to becoming a Principal Program Manager, James was a professional mobile developer and has now been crafting apps since 2011 with Xamarin.

He blogs code regularly on his personal blog and on the weekly development podcast Merge Conflict. You can also find him on his Channel 9 TV show or on Twitter.

September 10, 2016 11:57 GMT

Review: The Imposter’s Handbook by Rob Conery

This review is based on the initial preview version of The Imposter’s Handbook that Rob Conery made available for purchase. When the book is released, I will update this review should the Imposter’s Handbook’s content warrant it.

Instead of shooting off fireworks on July 4, I was browsing Twitter and I came across this:

And after reading the description of the book on his website I thought to myself:

Holy crap! Rob Conery is writing a book just for me!

I never set out to find a book that covered the topics that this one does… things that self-taught developers may have missed out on from CompSci classes… but when I saw this one, I knew instantly I would purchase it.

With that said, I will try not to let my initial positive reaction influence my review of this book. But I do want you to know that I came in wanting to like it … and I left liking it, maybe a bit more than I thought I would.

So, since I spoiled the ending already … the TL;DR of it all is that I recommend The Imposter’s Handbook without hesitation. Even if you write compilers for fun before going to bed – I’m confident this book will teach you something new. As for me, I learned both completely new topics and some new things about topics I already knew … and what more could one ask for out of a book?

Now let’s back up to the beginning…

The Imposter’s Handbook aims to fill in the blanks that self-taught developers may have from not completing a formal Computer Science degree. In other words, it covers a TON of topics. It’s broken down into six high-level sections: Linux, Computer Science Theory, Data Structures, Database Theory, Programming Languages, and Software Design. Each of those sections is then broken down into chapters, each covering a discrete topic. As you can imagine, each one of those chapters, if expanded upon in full, would be enough to fill an entire book. So in that respect, don’t expect to emerge from any of the high-level sections of this book, much less the individual chapters, proficient in the topic, let alone an expert.

However, that’s not the point of this book. As Rob states in the introduction:

The first thing: this book is a compendium in summary form. I cannot possibly dive into things in as much detail as they deserve.

I’m going to give you enough detail so that you can have an intelligent conversation about a topic.

If you want to know more: that’s up to you.

And in this regard Rob succeeds.

Each chapter has enough information to provide context around why the subject matter is important and/or how it fits in to the topic of the section. The chapters also provide enough info to get started on the topic. At the very least, you’ll have a basic understanding … and know enough to start asking questions to further your education on it.

The book is peppered with links to more in-depth information throughout. Several times I found myself no longer reading the book, rather deep down the rabbit hole of exploration on one of the topics presented. Lambda Calculus in particular kept me away from the book for several days.

Everything in the book is well researched … and presented with enthusiasm (more on that below). The hand-drawn illustrations help greatly in making key points, and are entertaining. Overall, the way the content of the book is presented, it makes you want to go and find out more.

Each of the chapters is presented with the same diligence as the others. Which is great, because although I feel pretty solid on relational database theory, that doesn’t mean everybody will … so the enthusiasm apparent in the chapters that interested me and inspired me to dig deeper is also apparent in the ones that I’m … not so interested in.

All of this is a long way of saying … The Imposter’s Handbook succeeded in giving me a head start on, and making me want to find out more about, topics that I didn’t really know anything about.

In the days after finishing the book I was reading an article on machine learning, and I recognized the underpinnings of Lambda Calculus in the topic being discussed, though Lambda Calculus was never explicitly called out. At best I would consider myself some dude who knows what Lambda Calculus is and could explain its premise. But with only that basic understanding … I was able to divine deeper insight into the machine learning article and really have it click. And now I’m starting to recognize it in more and more places, all because The Imposter’s Handbook gave me the curiosity to dig into Lambda Calculus.

Same thing with data mart schemas. Though data marts don’t interest me enough to explore them further, I did find myself in a meeting where star vs. snowflake schemas were being discussed. And because of the corresponding chapter, I was able to participate rather than quietly look at the floor.

Seems like this review is heading towards fan boy territory … so let me tell you about some things that didn’t particularly work.

The strength of containing a ton of information is also one of its larger weaknesses. I read the book in a linear fashion and found myself wanting there to be an overarching narrative. For example, how or why does knowing Linux lead in to needing to know Comp Sci Theory?

To be completely fair, Rob does state in the introduction that there is no overarching strategy on his part in presenting the content; he just wants to fill the holes. Even so, a book is naturally looked upon as the ultimate authority on the content presented within. More of an attempt to weave together how and why the high-level sections play into each other would be appreciated.

The content within the high-level sections flows together fine, but the sections themselves don’t need to be read in sequential order (with the exception that the Comp Sci Theory section should probably be read before Data Structures).

In my opinion there were too many walk-throughs in the Linux section. If somebody is only reading the book, not working the samples at the same time, the section gets a bit long. The walk-throughs seem to break the flow and would maybe be better served as a separate top level section.

Finally, there are some chapters that I still don’t understand, the chapter discussing P and NP in the Comp Sci section being one. I can’t put all the blame on Rob for that, heck, not even the majority … but I did have to work through some of the examples and analogies explaining a topic several times. So some better, clearer examples would help.

Fortunately, at least during the preview period, Rob is taking and responding to issues on GitHub for the book. So if you see something wrong, want something added, or just think something should be clearer … open an issue. That’s pretty slick. So in the case of the P and NP chapter I mentioned above, all I need is to come up with some feedback more helpful than “I don’t get it”.

You’ve probably noticed by now that I’m referring to Rob Conery as Rob … it’s not because we hang out together, but rather because he writes in such an informal style that when reading the book, it seems like you’re sitting around with him talking development. That’s a great attribute for a book that dives into some heavy content. His enthusiasm for the multitude of subjects covered also comes through. I found myself reading fast, because that’s how I imagined it would be spoken: fast and with enthusiasm.

In other words – I’m saying a book that gets into some deep Comp Sci theory is an easy read!

Summing It Up

If you made it this far, you already know that I’m recommending the book. There’s a ton of information in it, and it’s presented in such an informal and enthusiastic manner that it’s difficult not to want to explore more.

As for me … I have a ton of reading to do on Lambda Calculus now …

You can find more info on The Imposter’s Handbook here.

September 07, 2016 3:52 GMT

My Favorite Mac Utilities

I have experimented with a number of different utilities.  I’m going to list some of them here, and then invite you to please add your preferred utilities that I haven’t tried yet, and why you like them.

The following are not in any particular order.

My favorite go-to browser is Chrome, though I do keep a number of others around for testing Web applications

Speaking of Google, I love Google Photos.  Its ability to search without tagging is amazing.

My favorite calendar these days is BusyCal, though I flirt with Fantastical, which I know many people like a lot. And some of the time I just bring up calendar.gmail.com, though Kiwi (see below) will bring up its equivalent.

That brings me to BusyContacts, which is by far the best contact manager I’ve seen.

For Mail I use gmail, but lately I’ve been using Kiwi, which is essentially a desktop version of gmail.

Possibly my most important utility is Evernote.  I have over 3,000 notes, and I use it every day, many times a day.

A good contender for second most important is DropBox.  I keep 2/3 of my files on DropBox so that I can access them from any machine.  It also makes yet another automatic, offsite backup.

Speaking of backups, after some experimentation, I’ve settled on Backblaze.  Unlimited storage for $5/month, and it is wicked fast.

I struggle with To-Do lists and have not loved any of them.  When the music stopped I ended up with 2Do, but I can’t say I love it.  It does coordinate with ToodleDo which is very good.    Still looking for a killer, simple todo list.

I live on Skype and Slack

I’ve not settled on one program for interactive conference calls, so I generally go with whatever the other person wants.  GoToMeeting and BlueJeans are good, but there are many contenders.

For web development, PostMan is indispensable

I’ve been struggling with a good web page highlighter, so far I’m using Yellow Highlighter for Chrome though I confess I’m still not happy.

This brings me to Chrome Extensions.  The ones I actually use are 1Password Manager (the best, in my opinion), AdBlock, Augury (for Angular), DropBox, Evernote clipper, Google Dictionary, Google Docs, HootSuite Hootlet, InstaPaper, and Zoom.

To switch among speakers and microphones I use AudioSwitcher (Mac App Store).  Not perfect, but very good.  Be sure to set the preferences.

For capturing images, I use SnagIt.  And for screen captures / video I use Camtasia.  Both great.  (caveat, I get them free through the MVP program)

I won’t go into more developer tools here (Xamarin Studio, etc.) but I do need to mention Reflector, which lets you show your live phone on your Mac.

I’ve become a fan of Lightning PDF editor.

My go to program for mind-maps is SimpleMind.

I have not yet settled on an RSS reader, though I’m leaning towards Feedly.

For word processing and slides, I still use Microsoft Office.  So sue me.

I’ve been experimenting with Commander One, but haven’t decided yet.

I have Alfred, but I don’t use it, though I keep thinking I will.  Same for TextExpander.

Oh, stop the presses.  For scanning, run out and buy the ScanSnap Evernote scanner.  Oops, they’re not making the Evernote edition anymore, but you can still buy the ScanSnap which is the same thing without the Evernote connection.  Great scanner.

I keep my computer “clean” with Clean My Mac.

I almost forgot f.lux.  Wonderful program that sets the color of your monitor when evening falls.

OK, that’s my list, at least for now.  Again, please post the great utilities that I’m missing.


September 07, 2016 2:13 GMT

Be ahead! Test your apps with the latest iOS (beta) version

After installing the latest Xamarin.iOS beta build I got an error message while building.

Xamarin.iOS 10.0 SDK error message

‘This version of Xamarin.iOS requires the iOS 10.0 SDK (shipped with Xcode 8.0) when the managed linker is disabled. Either upgrade Xcode, or enable the managed linker.’

That’s pretty self-explanatory, right? So if you want to go quick and dirty, just go into the project properties and enable the managed linker. But I thought to myself: ‘Why not take this opportunity to test my app with the new iOS 10 as well and make sure it’s still working OK?’ So that’s what I did. And it wasn’t even that hard!

The big objection I had was that I did not want my production development environment to stop working. But as it turns out, you can leave it intact! Yay!

The first thing you need to do is download the new Xcode version. You can do that from the Apple Developer portal. Just log in, and in the lower left-hand side go to Downloads.

Click the nice blue Download button behind Xcode 8 beta x (6 in my case) and wait for the approximately 4 GB to come in.

Download Xcode 8 beta version

After it has downloaded, unzip it. You’ll see that the app is called ‘Xcode-beta’, so by default Xcode won’t be overridden. Nice!

Just place the Xcode-beta in your Applications folder and start it. You’ll have to agree to some updated EULA’s and some stuff needs to be verified. While that is going on start Xamarin Studio on your Mac and go into the Preferences.

Xamarin Studio iOS SDK configuration

Find the SDK Locations node and click Apple. You’ll see the current location is the default one which points to the stable Xcode.app.

Replace this with the freshly installed Xcode-beta.app (so just add ‘-beta’) and that’s it! Don’t forget to save the new preferences, wait for the Xcode beta to be up and running, restart Visual Studio if you’re working with that, and try to build again. You’ll see it now works!

Also, in the devices list you will now find the iOS 10 simulator images, so you can start testing and developing for that.
If you have some work to do against the stable Xcode and iOS SDKs, just go back into Xamarin Studio and reset the Apple SDK to ‘Xcode.app’, restart Visual Studio if you use it, and you can work with that again!

Pretty easy right?!

Please note that you cannot submit builds of your iOS app which use the beta iOS SDK to the App Store! This can only be done after it has been released officially and you rebuild your app with the stable SDK.

September 05, 2016 3:15 GMT

Getting Started with Azure Functions and F#

While it's been possible to use F# in Azure Functions for some time now, it wasn't until this week that it really became a first class citizen. Previously it would execute your F# scripts by calling out to fsi, but now the runtime is fully available, including input and output bindings, making it a far more compelling option.

I recently built a somewhat complex "serverless" application using AWS Lambda and JavaScript, thinking to myself the entire time that I wished I could have been writing it in F#. In this world of event-driven functions a language like F# really shines, so I'm excited to see Microsoft embrace supporting it in Azure Functions. In this post I'll walk through creating a simple Azure Functions application in F# that takes in a URL for an image, runs it through Microsoft's Cognitive Services Emotion API, and overlays each face with an emoji that matches the detected emotion. This started out as an attempt to replicate Scott Hanselman's demo in F#, but then I figured I may as well take it a step further while I was in there.

Initial Setup

While you can do a lot through the editor inside the Azure portal, for this demo I'm going to walk through creating an application that uses source control to handle deployments, since this is closer to what you'd be doing for any real application.

If you haven't installed it already, you'll want to install the azurefunctions npm package:

npm i -g azurefunctions  

This is a nice CLI tool the Azure Functions team maintains to help build and manage functions. I will also note that as of right now these things are all in a preview state and a bit of a moving target, so the experience isn't without a few rough edges. I have no doubts these will be smoothed out over time.

With that installed, run func init to create a new Git repository with some initial files:

C:\code\github\gshackles\facemoji> func init  
Writing .gitignore  
Writing host.json  
Writing .secrets  
Initialized empty Git repository in C:/code/github/gshackles/facemoji/.git/


Tip: run func new to create your first function.  

Next, commit that to your repository and push that out somewhere. In my case, I'm using GitHub.

In the Azure portal, go ahead and create a new Function App, and then under its settings choose to configure continuous integration. Connect the app to the Git repository you just created, which will allow Azure to automatically deploy your functions anytime you push.

Create The Function

Now we can actually start creating our function! From the command line, run func new:

C:\code\github\gshackles\facemoji [master +3 ~0 -0 !]> func new

     _-----_
    |       |    ╭──────────────────────────╮
    |--(o)--|    │   Welcome to the Azure   │
   `---------´   │   Functions generator!   │
    ( _´U`_ )    ╰──────────────────────────╯
    /___A___\   /
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `

? Select an option... List all templates
There are 50 templates available  
? Select from one of the available templates... QueueTrigger-FSharp
? Enter a name for your function... facemoji
Creating your function facemoji...  
Location for your function...  
C:\code\github\gshackles\facemoji\facemoji


Tip: run `func run <functionName>` to run the function.  

This is one of those rough edges I mentioned: as of right now the only F# template in this tool is QueueTrigger-FSharp, so we'll choose that even though it doesn't match what we're actually going to do. I'm sure this will be updated very soon with more up-to-date options.

In our case we're going to use HTTP input and output instead of being driven by a queue, so update the contents of function.json to:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "name": "req",
      "authLevel": "anonymous",
      "direction": "in"
    },
    {
      "type": "http",
      "name": "res",
      "direction": "out"
    }
  ],
  "disabled": false
}

We can also go ahead and add a project.json file to declare some NuGet dependencies:

{
    "frameworks": {
        "net46": {
            "dependencies": {
                "FSharp.Data": "2.3.2",
                "Newtonsoft.Json": "9.0.1"
            }
        }
    }
}

You'll also want to copy in the PNG files found in my GitHub repository. Finally, go into your app settings and add a setting named EmotionApiKey with a value of the key you get from Cognitive Services.

Implement the Function

Okay, with all that out of the way, let's actually implement this thing! The implementation of the function will go in run.fsx. Since this is F# we will build things out from top to bottom as small functions we can compose together. First we can pull in some references we'll need:

#r "System.Drawing"

open System  
open System.IO  
open System.Net  
open System.Net.Http.Headers  
open System.Drawing  
open System.Drawing.Imaging  
open FSharp.Data  
open Newtonsoft.Json  

Next, create a few types to match the Cognitive Services API models and pull in some environment variables:

type FaceRectangle = { Height: int; Width: int; Top: int; Left: int; }  
type Scores = { Anger: float; Contempt: float; Disgust: float; Fear: float;  
                Happiness: float; Neutral: float; Sadness: float; Surprise: float; }
type Face = { FaceRectangle: FaceRectangle; Scores: Scores }

let apiKey = Environment.GetEnvironmentVariable("EmotionApiKey")  
let appPath = Path.Combine(Environment.GetEnvironmentVariable("HOME"), "site", "wwwroot", "facemoji")  

Originally I had wanted to use the JSON type provider to avoid needing Json.NET and these models, but I ran into some issues there; another rough edge I suspect will be ironed out.

Next, we'll need to parse the query string of the request sent to us, grab the image URL from it, and download the image into a byte array:

let getImageUrl (req: HttpRequestMessage) =  
    req.GetQueryNameValuePairs()
    |> Seq.find(fun pair -> pair.Key.ToLowerInvariant() = "url")
    |> fun pair -> pair.Value

let getImage url =  
    Http.Request(url, httpMethod = "GET")
    |> fun (imageResponse) -> 
        match imageResponse.Body with
        | Binary bytes -> bytes
        | _ -> failwith "expected binary response but received text"

With the image downloaded, we can send it to Cognitive Services to have it analyzed:

let getFaces bytes =  
    Http.RequestString("https://api.projectoxford.ai/emotion/v1.0/recognize",
        httpMethod = "POST",
        headers = [ "Ocp-Apim-Subscription-Key", apiKey ],
        body = BinaryUpload bytes)
    |> fun (json) -> JsonConvert.DeserializeObject<Face[]>(json)

Now that we have a list of faces in the image, we need to determine which emoji to show for each one:

let getEmoji face =  
    match face.Scores with
        | scores when scores.Anger > 0.1 -> "angry.png"
        | scores when scores.Fear > 0.1 -> "afraid.png"
        | scores when scores.Sadness > 0.1 -> "sad.png"
        | scores when scores.Happiness > 0.5 -> "happy.png"
        | _ -> "neutral.png"
    |> fun filename -> Path.Combine(appPath, filename)
    |> Image.FromFile

So now we have an image, a list of faces, and an accurate emoji to use for each. Let's tie those together and draw the emoji on the image, returning a new image byte array:

let drawImage (bytes: byte[]) faces =  
    use inputStream = new MemoryStream(bytes)
    use image = Image.FromStream(inputStream)
    use graphics = Graphics.FromImage(image)

    faces |> Array.iter(fun face ->
        let rect = face.FaceRectangle
        let emoji = getEmoji face
        graphics.DrawImage(emoji, rect.Left, rect.Top, rect.Width, rect.Height)
    )

    use outputStream = new MemoryStream();
    image.Save(outputStream, ImageFormat.Jpeg)
    outputStream.ToArray()

Now we just need to return that image as an HTTP response:

let createResponse bytes =  
    let response = new HttpResponseMessage()
    response.Content <- new ByteArrayContent(bytes)
    response.StatusCode <- HttpStatusCode.OK
    response.Content.Headers.ContentType <- MediaTypeHeaderValue("image/jpeg")

    response

That's all the plumbing we need here for our function, so all that's left is to define the Run method that Azure Functions will actually invoke:

let Run (req: HttpRequestMessage) =  
    let bytes = getImage <| getImageUrl req

    getFaces bytes
    |> drawImage bytes
    |> createResponse

In less than 80 lines of code we're taking a URL input, downloading an image, detecting faces and emotions, drawing emoji over each face, and returning the new image as an HTTP response. Let's try it out!
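
If you'd rather exercise the deployed function from code than from a browser, a quick C# client along these lines would do it; note that the host name and image URL here are placeholders rather than values from this post:

using System;
using System.IO;
using System.Net.Http;

class FacemojiClient
{
    static void Main()
    {
        // Placeholder values: substitute your own Function App URL and image.
        var imageUrl = Uri.EscapeDataString("https://example.com/photo.jpg");
        var functionUrl = $"https://myfunctionapp.azurewebsites.net/api/facemoji?url={imageUrl}";

        using (var client = new HttpClient())
        {
            // The function returns the processed JPEG bytes directly.
            var bytes = client.GetByteArrayAsync(functionUrl).GetAwaiter().GetResult();
            File.WriteAllBytes("result.jpg", bytes);
        }
    }
}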

Results

Let's start out with an image that's clearly full of anger:

Anger

Okay, let's counter that with a nice happy train:

Happy

Nobody has ever known sadness quite like Jon Snow:

Sadness

And finally, Kevin McCallister to test out fear:

Fear

Not bad!

Not bad

All of the code for this app is available on GitHub.

September 01, 2016 1:16 GMT

Exposing ADO.NET Performance Counters through Datadog

There are a number of useful performance counters exposed for System.Data.SqlClient that can provide some nice insight into what's going on under the hood of your applications. Today I found myself monitoring the NumberOfReclaimedConnections counter to track down some connections that weren't being properly disposed.
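
Before wiring anything up to Datadog, it can be handy to sanity-check the counter values locally. Here's a minimal C# sketch using System.Diagnostics; the category name is the provider's standard one, but which instances show up (typically the process name plus its process ID) depends on what's running, so treat the instance handling as an assumption:

using System;
using System.Diagnostics;

class ReclaimedConnectionsCheck
{
    static void Main()
    {
        // Standard category name for the ADO.NET SQL Server provider counters.
        var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");

        // Each instance corresponds to a process using the provider;
        // list them all and read the counter for each.
        foreach (var instance in category.GetInstanceNames())
        {
            using (var counter = new PerformanceCounter(
                ".NET Data Provider for SqlServer",
                "NumberOfReclaimedConnections",
                instance,
                readOnly: true))
            {
                Console.WriteLine($"{instance}: {counter.RawValue}");
            }
        }
    }
}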

It's certainly not ideal to need to log into the servers to monitor this, so I went off looking for how to expose this through our Datadog dashboards. On top of just being able to query it, this would also mean I could create some monitors and alerts based on the counters as well. Datadog provides support for Windows Management Instrumentation (WMI) out of the box, so it was just a matter of figuring out exactly how to query these ADO.NET counters.

Figuring out exactly what the class name was to get access to these counters required a bit of digging that brought me to some parts of the internet that time seems to have forgotten, so I thought I'd just document it here to hopefully save someone else (or future me) the same digging.

In your wmi_check.yaml file (see the WMI integration instructions for how to set this up), add the following:

# ADO.NET performance counters
  - class: Win32_PerfFormattedData_NETDataProviderforSqlServer_NETDataProviderforSqlServer
    metrics:
      - [NumberOfActiveConnectionPools, adonet.activeconnectionpools.count, gauge]
      - [NumberOfReclaimedConnections, adonet.reclaimedconnections.count, gauge]
      - [HardConnectsPerSecond, adonet.hardconnects.rate, rate]
      - [HardDisconnectsPerSecond, adonet.harddisconnects.rate, rate]
      - [NumberOfActiveConnectionPoolGroups, adonet.activeconnectionpoolgroups.count, gauge]
      - [NumberOfInactiveConnectionPoolGroups, adonet.inactiveconnectionpoolgroups.count, gauge]
      - [NumberOfInactiveConnectionPools, adonet.inactiveconnectionpools.count, gauge]
      - [NumberOfNonPooledConnections, adonet.nonpooledconnections.count, gauge]
      - [NumberOfPooledConnections, adonet.pooledconnections.count, gauge]
      - [NumberOfStasisConnections, adonet.stasisconnections.count, gauge]
    tag_by: Name

After restarting your Datadog agent, it will start reporting the ADO.NET counters using the names provided here with the adonet. prefix. The instance name for each counter will be included as a tag, allowing you to associate all of the incoming data with the process it came from.

September 01, 2016 10:21 GMT

Bots, They Talk Amongst Us – Microsoft Bot Framework Explained

Bots! They’re everywhere! And on March 30, 2016, Microsoft introduced its Bot Framework – a bet that bots will succeed apps and websites as the next “big thing”. This series of posts will explore the Bot Framework in depth: looking at the framework in general, creating bots, adding intelligence to a bot, and adding a bot into a custom mobile app. But before we dig into what exactly the Bot Framework is, let’s first take a look at what exactly makes a bot … a bot.


What Are Bots?

Bots have been with us for quite a while. In fact every time you call a large company and get stuck in a labyrinth of voice prompts saying things like “press 1 for yes, 2 for no” – that could be considered a bot. But bots are more than that … quite a bit more. Another example would be Slackbot. It’s always there listening and can provide in-context help for using Slack. For example, if you reply to a message with an emoji, it will helpfully suggest, and show you how, to add a direct reaction to it, instead of a full on new message.

Then there are the more fantastical versions of bots … imagine a Slack team at a large company – where people may or may not know each other. You and another developer are talking back and forth about what a certain requirement really means. A bot notices the back and forth with all the questions around a single business area – it also knows who the business owner is for that particular area. The bot then can loop the functionality’s owner into the conversation – getting the question answered quicker.

Or maybe even better … a bot could order a pizza anytime somebody tells it to.

You can think of bots as micro-applications running within another, larger application. That larger application will have something of a chat interface in order to invoke the bot. Since people are spending more and more time inside applications such as Slack, Facebook Messenger, Telegram, Kik and any number of other apps and websites that have a chat-like interface – it only makes sense that we, as developers, would want to get our applications … or bots … in front of them.

And this is where the Microsoft Bot Framework comes in.

Microsoft Bot Framework

At the heart of it, the services above, Slack, Kik, Facebook Messenger, all do the same thing … they send messages back and forth. They provide additional functionality that distinguish them from one another, but they all enable communication via messaging. Unfortunately, they also all expose a different API for developers to tie bots into. The Microsoft Bot Framework aims to solve this problem by giving developers a single API to develop against, and then it takes care of integrating into the various services. This abstraction frees the developer to concentrate on creating a great experience without having to bog down in service specific implementation details.

However, because each service provides unique features that a developer may want to take advantage of – the Bot Framework also provides a means to send “raw” code to the service. For example, this would be code only intended to work on Facebook Messenger and nothing else.

We can use one of two APIs to write bots with the MS Bot Framework. As you can imagine – since Microsoft maintains it – there is a .Net API to develop against. However, since JavaScript is taking over the world, there is a node.js API as well. Once node.js gets involved, it’s usually a pretty safe guess that the entry point to the framework will be a REST service – and that indeed is the case.

So let’s now take a look at some of the key concepts of the Bot Framework.

Connector Service

Everything in the Microsoft Bot Framework revolves around the connector service. It provides the abstraction that allows developers to use a single API to develop a bot and then it connects it to various channels or services such as Slack, Facebook Messenger, Skype or even SMS messages.

The connector service provides a connection between the single API and various services.


So we know on one end all of the various channels plug into the connector service. But let’s explore what it provides developers on the other end.

Activities

Activities model the actual communications between the bot and its human counterpart. They contain properties for the sender, the recipient, and the conversation they belong to. Activities can also contain rich content: images, or these things called cards. Think of a card as something of a pretty, formatted message. Another attribute of a card is that it provides an action; for example, a card could launch a website when one of its buttons is clicked.

Activities can also maintain state … although one should design bots to be stateless by default. This means that in order to maintain state, the class that models the state will need to be serializable. Finally, the activity contains functionality to invoke service-specific features.

All in all, the activity is the “message”.

Bot Builder

The Bot Builder is not a single thing; rather, it is composed of many parts. These parts provide a means to easily create an entire conversation flow. One could model and create a conversation from Activities alone, but the Bot Builder makes the process easier and more robust. The most significant concept within Bot Builder is the Dialog.

We can use dialogs to model an entire conversation. Dialogs have the ability to be composed together fluently – and that means we can use LINQ to put together a conversation flow.

We create dialog classes by implementing the IDialog interface. That interface allows us to hook into different lifetime events to provide context appropriate responses for the bot.
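
To make that concrete, here’s a minimal sketch of an echo dialog in the spirit of the Bot Builder samples; the EchoDialog class and its reply text are my own invention, while IDialog, IDialogContext, and IAwaitable are the .Net SDK’s types:

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

// Dialogs are serialized between messages, hence [Serializable].
[Serializable]
public class EchoDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        // Wait for the first message from the user.
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
    {
        var message = await argument;

        // Echo the user's text back, then wait for the next message.
        await context.PostAsync($"You said: {message.Text}");
        context.Wait(MessageReceivedAsync);
    }
}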

There are also a couple of framework-provided dialogs, including one based on LUIS – or Language Understanding Intelligent Service. Ahh… now our bot is able to understand natural language – out of the box!

And More…

We covered the core concepts that apply to both the .Net and node.js APIs. The .Net API gets some additional love in the form of FormFlow (see what I did there … form of FormFlow … hilarious). FormFlow will take a normal C# class with properties and create a bot whose purpose in life is to get those properties filled in. So: a fully functional bot from a single class.
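
As a rough sketch of the idea, something like the following has FormFlow prompt the user until every field is filled in; the PizzaOrder type here is hypothetical, while FormBuilder and IForm are the SDK’s FormFlow types:

using System;
using Microsoft.Bot.Builder.FormFlow;

public enum PizzaSize { Small, Medium, Large }

// A hypothetical order class: FormFlow generates the conversation
// needed to fill in each public field.
[Serializable]
public class PizzaOrder
{
    public PizzaSize? Size;
    public string DeliveryAddress;

    public static IForm<PizzaOrder> BuildForm()
    {
        return new FormBuilder<PizzaOrder>()
            .Message("Welcome to the pizza order bot!")
            .Build();
    }
}

The resulting form can then be wrapped in a dialog (FormDialog.FromForm is the usual route) and plugged into a conversation like any other dialog.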

Then there’s Direct Line. Technically this isn’t a way to create a bot, but rather a way for an app to consume a bot created with the Bot Framework. So if you’ve created your own app that needs a bot, this is the way to integrate it.

Summary

Microsoft’s Bot Framework is a new framework aimed at developers to help them create bots for multiple services using a single API. It provides a rich feature set that most services have in common, and a means to invoke service specific functionality when needed. Of all the terms thrown at you – remember that the Connector Service connects your code to the services where the bots will live. Activities are the raw messages. And Dialogs abstract activities away and provide a means to model an entire conversation experience.

In the next post we’ll take an in-depth dive with Dialogs and see how you can use them to create a full bot and integrate it into several different services.

August 31, 2016 1:00 GMT

Gone Mobile 38: Microsoft Graph API with Simon Jager

In this episode we learn all about the Microsoft Graph API from Simon Jager, and how you can build your mobile apps on top of its offerings.

Hosts: Greg Shackles, Jon Dick

Guests: Simon Jager

Links:

Thanks to our Sponsors!

http://raygun.com

Raygun provides error and crash reporting software for all programming languages and platforms including iOS, Android, Xamarin, Javascript and more. Don’t just log errors and crashes, solve them with Raygun!