

Try out WebView2 with the new interactive API sample


Over the past few years, we have seen increased demand for applications that leverage both web and native technologies, whether to modernize existing native applications, to iterate faster using web technologies, or to more easily develop cross-platform.

At this year’s Build conference in May, we introduced the Win32 preview of the WebView2 control, powered by the new Chromium-based Microsoft Edge browser. A WebView is a control embedded within a native application that renders web content (HTML/CSS/JavaScript) using the browser’s engine. Since launching our Win32 WebView2 preview, we have been engaging with the community and partners, collecting a great deal of feedback, and delivering SDK updates every six weeks.

To learn more about WebViews, how they work, and about options like Evergreen (WebView content is rendered by the Microsoft Edge browser instance on the user’s computer) vs. Bring Your Own (WebView content is rendered by a separate instance of the Microsoft Edge browser downloaded with the application), check out our developer documentation.

WebView2 API Sample

Recently, we built and launched a sample application (we call it WebView2 API Sample) using the WebView2 APIs to create an interactive application that demonstrates WebView2’s functionalities. The WebView2 API Sample is intended to be the most comprehensive guide available and will be updated regularly as we add more features to our SDK.

Notable features in our WebView2 API Sample are Navigation, Web Messaging (communication between the Win32 Host and the WebView), and Native Object Injection (accessing Win32 Objects directly from JavaScript).

Screen capture showing a WebView2 sample browser
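For a flavor of what the Web Messaging feature looks like, here’s a minimal, hypothetical sketch. The API sample itself is Win32/C++, so for brevity this sketch uses the WebView2 .NET bindings instead, and it assumes webView is an already-created WebView2 control:

// Hedged sketch of Web Messaging using the WebView2 .NET bindings; the
// Win32 API sample performs the same steps through the COM interfaces.
await webView.EnsureCoreWebView2Async(null);

// Receive messages that page script posts via
// window.chrome.webview.postMessage(...).
webView.CoreWebView2.WebMessageReceived += (sender, args) =>
{
    string message = args.TryGetWebMessageAsString();
    Console.WriteLine($"From web content: {message}");
};

// Send a message to the page; script can observe it via
// window.chrome.webview.addEventListener("message", ...).
webView.CoreWebView2.PostWebMessageAsString("Hello from the host!");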

You can build and play around with the WebView2 API Sample by downloading or cloning it from our WebView2 Samples repository. To learn more about the sample’s source code and functionality, read our WebView2 API Sample guide. As you develop your own applications, we recommend referencing the source code for suggested API patterns for WebView2 workflows.

Build your own WebView2 application

You can learn more about WebView2 through our documentation, get started with our getting-started guide, and check out more examples in our samples repository.

Tell us what you plan to build with WebView2 and please reach out with any thoughts or feedback through our feedback repo.

– Palak Goel, Program Manager, WebView

The post Try out WebView2 with the new interactive API sample appeared first on Microsoft Edge Blog.

Visual Studio 2019 for Mac version 8.4 Preview 4 is now available


Today, we released Visual Studio 2019 for Mac version 8.4 Preview 4. This preview version of Visual Studio for Mac brings support for the latest stable version of .NET Core, Scaffolding support for ASP.NET Core projects, and additional improvements to overall product accessibility. Developers using Xamarin Pair to Mac should also look at the additional information in this blog post related to our release schedule.

To try out the preview, you’ll need to download and install the latest version of Visual Studio 2019 for Mac, then switch to the Preview channel in the IDE.

For more information on the other changes in this release, look at our release notes.

Stay on the latest and greatest with support for .NET Core 3.1

With this release, Visual Studio for Mac adds official support for the newly released .NET Core 3.1. While this release of .NET Core brings a small set of improvements over .NET Core 3.0, it’s important to note that .NET Core 3.1 is a long-term supported (LTS) release. This means it will be supported for three years.

Updating to Preview 4 will install the .NET Core 3.1 SDK. If you previously installed Visual Studio for Mac without selecting the .NET Core target in the installer, you’ll need to re-run the installer and check that target, as shown below, to get started developing .NET Core in Visual Studio for Mac:

Demonstration of the .NET Core target being checked in the Visual Studio for Mac installer

The .NET Core 3.1 release notes contain a full list of changes introduced by this update.

Use assistive technology more reliably

We’re committed to empowering all Mac developers with the ability to bring their thoughts to life using Visual Studio for Mac. In order to do so, we realize the need to support various assistive technologies. We’ve continued to make improvements to accessibility over the entire surface area of the IDE. Some of these efforts include:

  • Refining focus order when navigating with assistive technologies
  • Increasing color contrast ratios for text and icons
  • Eliminating keyboard traps that hinder navigation of the IDE
  • Improving the accuracy of VoiceOver reading and navigation
  • Rewriting inaccessible components of the IDE with accessibility in mind

Despite the work we’re doing to make Visual Studio for Mac accessible to all, we know there’s still a long journey ahead of us when it comes to making the IDE a delightful experience for everyone. This has been and will continue to be a top priority for our team, and we welcome any and all feedback from our users to help guide this work. Please reach out to me directly via dominicn@microsoft.com if you’d like to engage with us on our accessibility work. I look forward to learning from those of you who reach out.

Speaking about feedback from our community, let’s move on to ASP.NET Core Scaffolding…

Speed up your web app development with ASP.NET Core Scaffolding

A top ask from our community has been to add ASP.NET Core Scaffolding to Visual Studio for Mac. We’ve taken that feedback and have now enabled Scaffolding for ASP.NET Core projects in Visual Studio for Mac. Scaffolding makes ASP.NET Core app development easier and faster by generating boilerplate code for common scenarios.

To use the new Scaffolding feature in Visual Studio for Mac, click on the New Scaffolding entry in the Add flyout of the project context menu. The node on which you opened the right-click context menu will be the location where the generated files will be placed.

You’ll then see a Scaffolding wizard to help you generate code into your project. In the image below, I’m using one of our ASP.NET Core sample projects – a movie database app – to demonstrate scaffolding in action. I’ve used the tool to make pages for Create, Read, Update, and Delete operations (CRUD) and a Details page for the movie model.

Scaffolding wizard for ASP.NET Core project in Visual Studio for Mac

Once the wizard closes, it will add required NuGet packages to your project and create additional pages, based on the scaffolder you chose.

If you’re new to Scaffolding ASP.NET Core projects, take a look at our documentation for more information.

Xamarin Pair to Mac considerations

Developers using Visual Studio 2019 for Mac version 8.3 with Visual Studio 2019 version 16.4 for iOS development with Xamarin will see the following warnings in Windows:

Xamarin Pair to Mac warning messages

If you agree to continue, the Mono and Xamarin.iOS SDKs on your Mac will be updated to the latest versions. While we recommend updating to Visual Studio 2019 for Mac 8.4 Preview 4 to avoid version mismatches when working with Xamarin on Windows, updating by clicking through the warnings shown above will allow you to continue to work without moving from the Stable channel on Mac.

We plan to release Visual Studio for Mac version 8.4 to Stable in early January and appreciate your patience with this experience and the workaround until then.

Give it a try today!

Now that we’ve discussed the major additions to Visual Studio for Mac version 8.4 Preview 4, it’s time to download and install the release! To do so, make sure you’ve downloaded Visual Studio 2019 for Mac, then switch to the Preview channel.

As always, if you have any feedback on this or any version of Visual Studio for Mac, we invite you to leave it in the comments below this post or to reach out to us on Twitter at @VisualStudioMac. If you run into issues while using Visual Studio for Mac, you can use Report a Problem to notify the team. In addition to product issues, we also welcome your feature suggestions on the Visual Studio Developer Community website.

The post Visual Studio 2019 for Mac version 8.4 Preview 4 is now available appeared first on Visual Studio Blog.

An Introduction to System.Threading.Channels


“Producer/consumer” problems are everywhere, in all facets of our lives. A line cook at a fast food restaurant, slicing tomatoes that are handed off to another cook to assemble a burger, which is handed off to a register worker to fulfill your order, which you happily gobble down. Postal drivers delivering mail all along their routes, and you either seeing a truck arrive and going out to the mailbox to retrieve your deliveries or just checking later in the day when you get home from work. An airline employee offloading suitcases from a cargo hold of a jetliner, placing them onto a conveyer belt, where they’re shuttled down to another employee who transfers bags to a van and drives them to yet another conveyer that will take them to you. And a happy engaged couple preparing to send out invites to their wedding, with one partner addressing an envelope and handing it off to the other who stuffs and licks it.

As software developers, we routinely see happenings from our everyday lives make their way into our software, and “producer/consumer” problems are no exception. Anyone who’s piped together commands at a command-line has utilized producer/consumer, with the stdout from one program being fed as the stdin to another. Anyone who’s launched multiple workers to compute discrete values or to download data from multiple sites has utilized producer/consumer, with a consumer aggregating results for display or further processing. Anyone who’s tried to parallelize a pipeline has very explicitly employed producer/consumer. And so on.

All of these scenarios, whether in our real-world or software lives, have something in common: there is some vehicle for handing off the results from the producer to the consumer. The fast food employee places the completed burgers in a stand that the register worker pulls from to fill the customer’s bag. The postal worker places mail into a mailbox. The engaged couple’s hands meet to transfer the materials from one to the other. In software, such a hand-off requires a data structure of some kind to facilitate the transaction: storage that can be used by the producer to transfer a result and potentially buffer more, while also enabling the consumer to be notified that one or more results are available. Enter System.Threading.Channels.

What is a Channel?

I often find it easiest to understand some technology by implementing a simple version myself. In doing so, I learn about various problems implementers of that technology may have had to overcome, trade-offs they may have had to make, and the best way to utilize the functionality. To that end, let’s start learning about System.Threading.Channels by implementing a “channel” from scratch.

A channel is simply a data structure that’s used to store produced data for a consumer to retrieve, and an appropriate synchronization to enable that to happen safely, while also enabling appropriate notifications in both directions. There is a multitude of possible design decisions involved. Should a channel be able to hold an unbounded number of items? If not, what should happen when it fills up? How critical is performance? Do we need to try to minimize synchronization? Can we make any assumptions about how many producers and consumers are allowed concurrently? For the purposes of quickly writing a simple channel, let’s make simplifying assumptions that we don’t need to enforce any particular bound and that we don’t need to be overly concerned about overheads. We’ll also make up a simple API.

To start, we need our type, to which we’ll add a few simple methods:

public sealed class Channel<T>
{
    public void Write(T value);
    public ValueTask<T> ReadAsync(CancellationToken cancellationToken = default);
}

Our Write method gives us a method we can use to produce data into the channel, and our ReadAsync method gives us a method to consume from it. Since we decided our channel is unbounded, producing data into it will always complete successfully and synchronously, just as does calling Add on a List<T>, hence we’ve made it non-asynchronous and void-returning. In contrast, our method for consuming is ReadAsync, which is asynchronous because the data we want to consume may not be available yet, and thus we’ll need to wait for it to arrive if nothing is available to consume at the time we try. And while in our getting-started design we’re not overly concerned with performance, we also don’t want to have lots of unnecessary overheads. Since we expect to be reading frequently, and for us to often be reading when data is already available to be consumed, our ReadAsync method returns a ValueTask<T> rather than a Task<T>, so that we can make it allocation-free when it completes synchronously.

Now we just need to implement these two methods. To start, we’ll add two fields to our type: one to serve as the storage mechanism, and one to coordinate between the producers and consumers:

private readonly ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();
private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(0);

We use a ConcurrentQueue<T> to store the data, freeing us from needing to do our own locking to protect the buffering data structure, as ConcurrentQueue<T> is already thread-safe for any number of producers and any number of consumers to access concurrently. And we use a SemaphoreSlim to help coordinate between producers and consumers and to notify consumers that might be waiting for additional data to arrive.

Our Write method is simple. It just needs to store the data into the queue and increment the SemaphoreSlim‘s count by “release”ing it:

public void Write(T value)
{
    _queue.Enqueue(value); // store the data
    _semaphore.Release(); // notify any consumers that more data is available
}

And our ReadAsync method is almost just as simple. It needs to wait for data to be available and then take it out.

public async ValueTask<T> ReadAsync(CancellationToken cancellationToken = default)
{
    await _semaphore.WaitAsync(cancellationToken).ConfigureAwait(false); // wait until data is available
    bool gotOne = _queue.TryDequeue(out T item); // retrieve the data
    Debug.Assert(gotOne);
    return item;
}

Note that because no other code could be manipulating the semaphore or the queue, we know that once we’ve successfully waited on the semaphore, the queue will have data to give us, which is why we can just assert that the TryDequeue method successfully returned one. If those assumptions ever changed, this implementation would need to become more complicated.

And that’s it: we have our basic channel. If all you need are the basic features assumed here, such an implementation is perfectly reasonable. Of course, the requirements are often more significant, both on performance and on APIs necessary to enable more scenarios.
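To see the hand-rolled channel in action, here’s a minimal usage sketch built only on the Channel<T> type we just wrote, with one producer task and one consumer task:

var channel = new Channel<int>();

// Producer: synchronously writes ten values into the channel.
Task producer = Task.Run(() =>
{
    for (int i = 0; i < 10; i++)
        channel.Write(i);
});

// Consumer: asynchronously reads ten values, waiting whenever
// nothing is available yet.
Task consumer = Task.Run(async () =>
{
    for (int i = 0; i < 10; i++)
        Console.WriteLine(await channel.ReadAsync());
});

await Task.WhenAll(producer, consumer);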

Now that we understand the basics of what a channel provides, we can switch to looking at the actual System.Threading.Channels APIs.

Introducing System.Threading.Channels

The core abstractions exposed from the System.Threading.Channels library are a writer:

public abstract class ChannelWriter<T>
{
    public abstract bool TryWrite(T item);
    public virtual ValueTask WriteAsync(T item, CancellationToken cancellationToken = default);
    public abstract ValueTask<bool> WaitToWriteAsync(CancellationToken cancellationToken = default);
    public void Complete(Exception error = null);
    public virtual bool TryComplete(Exception error = null);
}

and a reader:

public abstract class ChannelReader<T>
{
    public abstract bool TryRead(out T item);
    public virtual ValueTask<T> ReadAsync(CancellationToken cancellationToken = default);
    public abstract ValueTask<bool> WaitToReadAsync(CancellationToken cancellationToken = default);
    public virtual IAsyncEnumerable<T> ReadAllAsync([EnumeratorCancellation] CancellationToken cancellationToken = default);
    public virtual Task Completion { get; }
}

Having just completed our own simple channel design and implementation, most of this API surface area should feel familiar. ChannelWriter<T> provides a TryWrite method that’s very similar to our Write method; however, it’s abstract, and it’s a Try method that returns a Boolean, to account for the fact that some implementations may be bounded in how many items they can physically store; if the channel were full such that writing couldn’t complete synchronously, TryWrite would need to return false to indicate that writing was unsuccessful. ChannelWriter<T> also provides the WriteAsync method for exactly that case: when the channel is full and writing would need to wait (often referred to as “back pressure”), WriteAsync can be used, with the producer awaiting the result of WriteAsync and only being allowed to continue when room becomes available.

Of course, there are situations where code may not want to produce a value immediately; if producing a value is expensive or if a value represents an expensive resource (maybe it’s a big object that would take up a lot of memory, or maybe it stores a bunch of open files) and if there’s a reasonable chance the producer is running faster than the consumer, the producer may want to delay producing a value until it knows a write will be immediately successful. For that, and related scenarios, there’s WaitToWriteAsync. A producer can await for WaitToWriteAsync to return true, and only then choose to produce a value that it then TryWrites or WriteAsyncs to the channel.
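As a sketch of that delayed-production pattern (the method name and the byte array here are just illustrative stand-ins for an expensive value):

static async ValueTask ProduceWhenReadyAsync(ChannelWriter<byte[]> writer)
{
    while (await writer.WaitToWriteAsync())
    {
        byte[] expensive = new byte[1_000_000]; // stand-in for a costly value
        if (writer.TryWrite(expensive))
            return; // written successfully

        // Lost a race with another producer; wait for space again.
    }

    // WaitToWriteAsync returned false: the channel was marked complete.
}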

Note that WriteAsync is virtual. Some implementations may choose to provide a more optimized implementation, but with abstract TryWrite and WaitToWriteAsync, the base type can provide a reasonable implementation, which is only slightly more sophisticated than this:

public async ValueTask WriteAsync(T item, CancellationToken cancellationToken)
{
    while (await WaitToWriteAsync(cancellationToken).ConfigureAwait(false))
        if (TryWrite(item))
            return;

    throw new ChannelClosedException();
}

In addition to showing how WaitToWriteAsync and TryWrite can be used, this highlights a few additional interesting things. First, the while loop is present here because channels by default can be used by any number of producers and any number of consumers concurrently. If a channel has an upper bound on how many items it can store, and if multiple threads race to write to the buffer, it’s possible for two threads to be told “yes, there’s space” via WaitToWriteAsync, but then for one of them to lose the race and have TryWrite return false, hence the need to loop around and try again.

This example also highlights why WaitToWriteAsync returns a ValueTask<bool> instead of just ValueTask, as well as situations beyond a full buffer in which TryWrite may return false. Channels support the notion of completion, where a producer can signal to a consumer that there won’t be any further items produced, enabling the consumer to gracefully stop trying to consume. This is done via the Complete or TryComplete methods previously shown on ChannelWriter<T> (Complete is just implemented to call TryComplete and throw if it returns false). But if one producer marks the channel as complete, other producers need to know they’re no longer welcome to write into the channel; in that case, TryWrite returns false, WaitToWriteAsync also returns false, and WriteAsync throws a ChannelClosedException.

Most of the members on ChannelReader<T> are likely self-explanatory as well. TryRead will try to synchronously extract the next element from the channel, returning whether it was successful in doing so. ReadAsync will also extract the next element from the channel, but if an element can’t be retrieved synchronously, it will return a task for that element. And WaitToReadAsync returns a ValueTask<bool> that serves as a notification for when an element is available to be consumed. Just as with ChannelWriter<T>‘s WriteAsync, ReadAsync is virtual, with the base implementation implementable in terms of the abstract TryRead and WaitToReadAsync; this isn’t the exact implementation in the base class, but it’s close:

public async ValueTask<T> ReadAsync(CancellationToken cancellationToken)
{
    while (true)
    {
        if (!await WaitToReadAsync(cancellationToken).ConfigureAwait(false))
            throw new ChannelClosedException();

        if (TryRead(out T item))
            return item;
    }
}

There are a variety of typical patterns for how one consumes from a ChannelReader<T>. If a channel represents an unending stream of values, one approach is simply to sit in an infinite loop consuming via ReadAsync:

while (true)
{
    T item = await channelReader.ReadAsync();
    Use(item);
}

Of course, if the stream of values isn’t infinite and the channel will be marked completed at some point, once consumers have emptied the channel of all its data, subsequent attempts to ReadAsync from it will throw. In contrast, TryRead will return false, as will WaitToReadAsync. So, a more common consumption pattern is via a nested loop:

while (await channelReader.WaitToReadAsync())
    while (channelReader.TryRead(out T item))
        Use(item);

The inner “while” could have instead been a simple “if”, but having the tight inner loop enables a cost-conscious developer to avoid the small additional overheads of WaitToReadAsync when an item is already available such that TryRead will successfully consume an item. In fact, this is the exact pattern employed by the ReadAllAsync method. ReadAllAsync was introduced in .NET Core 3.0, and returns an IAsyncEnumerable<T>. It enables all of the data to be read from a channel using familiar language constructs:

await foreach (T item in channelReader.ReadAllAsync())
    Use(item);

And the base implementation of the virtual method employs the exact nested-loop pattern shown previously with WaitToReadAsync and TryRead:

public virtual async IAsyncEnumerable<T> ReadAllAsync(
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    while (await WaitToReadAsync(cancellationToken).ConfigureAwait(false))
        while (TryRead(out T item))
            yield return item;
}

The final member of ChannelReader<T> is Completion. This simply returns a Task that will complete when the channel reader is completed, meaning the channel was marked for completion by a writer and all data has been consumed.
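Jumping slightly ahead to the factory methods covered in the next section, here’s a small sketch of how completion plays out end to end:

Channel<int> channel = Channel.CreateUnbounded<int>();

// Producer writes a few items, then signals that no more are coming.
for (int i = 0; i < 3; i++)
    channel.Writer.TryWrite(i);
channel.Writer.Complete();

// Consumer drains everything; ReadAllAsync ends once the channel is
// both completed and empty, at which point Completion also completes.
await foreach (int item in channel.Reader.ReadAllAsync())
    Console.WriteLine(item);
await channel.Reader.Completion; // already complete by this point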

Built-In Channel Implementations

Ok, so we know how to write to writers and read from readers… but from where do we get those writers and readers?

The Channel<TWrite, TRead> type exposes a Writer property and a Reader property that return a ChannelWriter<TWrite> and a ChannelReader<TRead>, respectively:

public abstract class Channel<TWrite, TRead>
{
    public ChannelReader<TRead> Reader { get; }
    public ChannelWriter<TWrite> Writer { get; }
}

This base abstract class is available for the niche use cases where a channel may itself transform written data into a different type for consumption, but in the vast majority of use cases TWrite and TRead are the same, which is why most usage happens via the derived Channel<T> type, which is nothing more than:

public abstract class Channel<T> : Channel<T, T> { }

The non-generic Channel type then provides factories for several implementations of Channel<T>:

public static class Channel
{
    public static Channel<T> CreateUnbounded<T>();
    public static Channel<T> CreateUnbounded<T>(UnboundedChannelOptions options);

    public static Channel<T> CreateBounded<T>(int capacity);
    public static Channel<T> CreateBounded<T>(BoundedChannelOptions options);
}

The CreateUnbounded method creates a channel with no imposed limit on the number of items that can be stored (of course, at some point it might hit the limits of memory, just as with List<T> and any other collection), very much like the simple Channel-like type we implemented at the beginning of this post. Its TryWrite will always return true, and both its WriteAsync and its WaitToWriteAsync will always complete synchronously.

In contrast, the CreateBounded method creates a channel with an explicit limit maintained by the implementation. Prior to reaching this capacity, just as with CreateUnbounded, TryWrite will return true and both WriteAsync and WaitToWriteAsync will complete synchronously. But once the channel fills up, TryWrite will return false, and both WriteAsync and WaitToWriteAsync will complete asynchronously, only completing their returned tasks when space is available, or another producer signals the channel’s completion. (It should go without saying that all of these APIs that accept a CancellationToken can also be interrupted by cancellation being requested).
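Here’s a small sketch of that back pressure in action, using a bounded channel with a capacity of one:

Channel<int> channel = Channel.CreateBounded<int>(1);

channel.Writer.TryWrite(1);                       // fills the channel
ValueTask pending = channel.Writer.WriteAsync(2); // can't complete yet
Console.WriteLine(pending.IsCompleted);           // False: back pressure

channel.Reader.TryRead(out int item);             // frees up space...
await pending;                                    // ...letting the write finish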

Both CreateUnbounded and CreateBounded have overloads that accept a ChannelOptions-derived type. This base ChannelOptions provides options that can control any channel’s behavior. For example, it exposes SingleWriter and SingleReader properties, which allow the creator to indicate constraints they’re willing to accept; a creator sets SingleWriter to true to indicate that at most one producer will be accessing the writer at a time, and similarly sets SingleReader to true to indicate that at most one consumer will be accessing the reader at a time. This allows the factory methods to specialize the implementation that’s created, optimizing it based on the supplied options; for example, if the options passed to CreateUnbounded specify SingleReader as true, it returns an implementation that not only avoids locks when reading, it also avoids interlocked operations when reading, significantly reducing the overheads involved in consuming from the channel.

The base ChannelOptions also exposes an AllowSynchronousContinuations property. As with SingleReader and SingleWriter, this defaults to false, and a creator setting it to true means signing up for some optimizations that also have strong implications for how producing and consuming code is written. Specifically, AllowSynchronousContinuations in a sense allows a producer to temporarily become a consumer. Let’s say there’s no data in a channel and a consumer comes along and calls ReadAsync. By awaiting the task returned from ReadAsync, that consumer is effectively hooking up a callback to be invoked when data is written to the channel. By default, that callback will be invoked asynchronously, with the producer writing the data to the channel and then queueing the invocation of that callback, which allows the producer to concurrently go on its merry way while the consumer is processed by some other thread. However, in some situations it may be advantageous for performance to allow the producer writing the data to also itself process the callback, e.g. rather than TryWrite queueing the invocation of the callback, it simply invokes the callback itself. This can significantly cut down on overheads, but it also requires great understanding of the environment: for example, if you were holding a lock while calling TryWrite, with AllowSynchronousContinuations set to true, you might end up invoking the callback while holding your lock, which (depending on what the callback tried to do) could end up observing some broken invariants your lock was trying to maintain.

The BoundedChannelOptions passed to CreateBounded layers on additional options specific to bounding. In addition to the maximum capacity supported by the channel, it also exposes a BoundedChannelFullMode enum that indicates the behavior writes should experience when the channel is full:

public enum BoundedChannelFullMode
{
    Wait,
    DropNewest,
    DropOldest,
    DropWrite
}

The default is Wait, which has the semantics already discussed: TryWrite on a full channel returns false, WriteAsync will return a task that will only complete when space became available and the write could complete successfully, and similarly WaitToWriteAsync will only complete when space becomes available. The other three modes instead enable writes to always complete synchronously, dropping an element if the channel is full rather than introducing back pressure. DropOldest will remove the “oldest” item (wall-clock time) from the queue, meaning whichever element would next be dequeued by a consumer. Conversely, DropNewest will remove the newest item, whichever element was most recently written to the channel. And DropWrite drops the item currently being written, meaning for example TryWrite will return true but the item it added will immediately be removed.
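Putting the options together, here’s a sketch that creates a bounded channel configured for a single producer and single consumer, dropping the oldest element instead of blocking writers when full:

Channel<int> channel = Channel.CreateBounded<int>(new BoundedChannelOptions(capacity: 2)
{
    SingleReader = true,
    SingleWriter = true,
    FullMode = BoundedChannelFullMode.DropOldest
});

channel.Writer.TryWrite(1);
channel.Writer.TryWrite(2);
channel.Writer.TryWrite(3); // full: 1 (the oldest) is dropped to make room

channel.Reader.TryRead(out int first);
Console.WriteLine(first); // 2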

Performance

From an API perspective, that’s pretty much it. The abstractions exposed are relatively simple, which is a large part of where the power of the library comes from: simple abstractions and a few concrete implementations that should meet 99.9% of developers’ use cases. Of course, the surface area of the library might suggest that the implementation is also simple. In truth, there’s a decent amount of complexity in the implementation, mostly focused on enabling great throughput while supporting simple consumption patterns easily used in consuming code. The implementation, for example, goes to great pains to minimize allocations. You may have noticed that many of the methods in the surface area return ValueTask and ValueTask<T> rather than Task and Task<T>. As we saw in our trivial example implementation at the beginning of this article, we can utilize ValueTask<T> to avoid allocations when methods complete synchronously, but the System.Threading.Channels implementation also takes advantage of the advanced IValueTaskSource and IValueTaskSource<T> interfaces to avoid allocations even when the various methods complete asynchronously and need to return tasks.

Consider this benchmark:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Threading.Channels;
using System.Threading.Tasks;

[MemoryDiagnoser]
public class Program
{
    static void Main() => BenchmarkRunner.Run<Program>();

    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();

    [Benchmark]
    public async Task WriteThenRead()
    {
        ChannelWriter<int> writer = _channel.Writer;
        ChannelReader<int> reader = _channel.Reader;
        for (int i = 0; i < 10_000_000; i++)
        {
            writer.TryWrite(i);
            await reader.ReadAsync();
        }
    }
}

Here we’re just testing the throughput and memory allocation on an unbounded channel when writing an element and then reading out that element 10 million times, which means an element will always be available for the read to consume and thus the read will always complete synchronously, yielding the following results on my machine (the 72 bytes shown in the Allocated column is for the single Task returned from WriteThenRead):

Method        | Mean     | Error   | StdDev  | Gen 0 | Gen 1 | Gen 2 | Allocated
------------- | -------- | ------- | ------- | ----- | ----- | ----- | ---------
WriteThenRead | 527.8 ms | 2.03 ms | 1.90 ms | -     | -     | -     | 72 B

But now let’s change it slightly, first issuing the read and only then writing the element that will satisfy it. In this case, reads will always complete asynchronously because the data to complete them will never be available:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Threading.Channels;
using System.Threading.Tasks;

[MemoryDiagnoser]
public class Program
{
    static void Main() => BenchmarkRunner.Run<Program>();

    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();

    [Benchmark]
    public async Task ReadThenWrite()
    {
        ChannelWriter<int> writer = _channel.Writer;
        ChannelReader<int> reader = _channel.Reader;
        for (int i = 0; i < 10_000_000; i++)
        {
            ValueTask<int> vt = reader.ReadAsync();
            writer.TryWrite(i);
            await vt;
        }
    }
}

which on my machine for 10 million writes and reads yields results like this:

Method        | Mean     | Error   | StdDev  | Gen 0 | Gen 1 | Gen 2 | Allocated
------------- | -------- | ------- | ------- | ----- | ----- | ----- | ---------
ReadThenWrite | 881.2 ms | 4.60 ms | 4.30 ms | -     | -     | -     | 72 B

So, there’s some more overhead when every read completes asynchronously, but even here we see zero allocations for the 10 million asynchronously-completing reads (again, the 72 bytes shown in the Allocated column is for the Task returned from ReadThenWrite)!

Combinators

Generally consumption of channels is simple, using one of the approaches shown earlier. But as with IEnumerables, it’s also possible to implement various kinds of operations over channels to accomplish a specific purpose. For example, let’s say I want to wait for the first element to arrive from either of two supplied readers; I could write something like this:

public static async ValueTask<ChannelReader<T>> WhenAny<T>(
    ChannelReader<T> reader1, ChannelReader<T> reader2)
{
    var cts = new CancellationTokenSource();
    Task<bool> t1 = reader1.WaitToReadAsync(cts.Token).AsTask();
    Task<bool> t2 = reader2.WaitToReadAsync(cts.Token).AsTask();
    Task<bool> completed = await Task.WhenAny(t1, t2);
    cts.Cancel();
    return completed == t1 ? reader1 : reader2;
}

Here we’re just calling WaitToReadAsync on both channels, and returning the reader for whichever one completes first. One of the interesting things to note about this example is that, while ChannelReader<T> bears many similarities to IEnumerator<T>, this example can’t be implemented well on top of IEnumerator<T> (or IAsyncEnumerator<T>). I{Async}Enumerator<T> exposes a MoveNext{Async} method, which moves the cursor ahead to the next item, which is then exposed from Current. If we tried to implement such a WhenAny on top of IAsyncEnumerator<T>, we would need to invoke MoveNextAsync on each. In doing so, we would potentially move both ahead to their next item. If we then used that method in a loop, we would likely end up missing items from one or both enumerators, because we would potentially have advanced the enumerator that we didn’t return from the method.
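As a hypothetical use of this WhenAny (assuming it’s defined in the same class), a consumer could repeatedly drain whichever reader has data first:

static async Task ConsumeEitherAsync(ChannelReader<int> reader1, ChannelReader<int> reader2)
{
    while (true) // assumes both channels represent unending streams
    {
        ChannelReader<int> ready = await WhenAny(reader1, reader2);
        while (ready.TryRead(out int item))
            Console.WriteLine(item);
    }
}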

Relationship to the rest of .NET Core

System.Threading.Channels is part of the .NET Core shared framework, meaning a .NET Core app can start using it without installing anything additional. It’s also available as a separate NuGet package, though the separate implementation doesn’t have all of the optimizations that the built-in implementation has, in large part because the built-in implementation is able to take advantage of additional runtime and library support in .NET Core.

It’s also used by a variety of other systems in .NET. For example, ASP.NET uses channels as part of SignalR as well as in its Libuv-based Kestrel transport. Channels are also used by the upcoming QUIC implementation currently being developed for .NET 5.

If you squint, the System.Threading.Channels library also looks a bit similar to the System.Threading.Tasks.Dataflow library that’s been available with .NET for years. In some ways, the dataflow library is a superset of the channels library; in particular, the BufferBlock<T> type from the dataflow library exposes much of the same functionality. However, the dataflow library is also focused on a different programming model, one where blocks are linked together such that data flows automatically from one to the next. It also includes advanced functionality that supports, for example, a form of two-phase commit, with multiple blocks linked to the same consumers, and those consumers able to atomically take from multiple blocks without deadlocking. The mechanisms required to enable that are much more involved, and while more powerful, they’re also more expensive. This is evident just by writing the same benchmark for BufferBlock<T> as we did earlier for Channels:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using System.Threading.Channels;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

[MemoryDiagnoser]
public class Program
{
    static void Main() => BenchmarkRunner.Run<Program>();

    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();
    private readonly BufferBlock<int> _bufferBlock = new BufferBlock<int>();

    [Benchmark]
    public async Task Channel_ReadThenWrite()
    {
        ChannelWriter<int> writer = _channel.Writer;
        ChannelReader<int> reader = _channel.Reader;
        for (int i = 0; i < 10_000_000; i++)
        {
            ValueTask<int> vt = reader.ReadAsync();
            writer.TryWrite(i);
            await vt;
        }
    }

    [Benchmark]
    public async Task BufferBlock_ReadThenWrite()
    {
        for (int i = 0; i < 10_000_000; i++)
        {
            Task<int> t = _bufferBlock.ReceiveAsync();
            _bufferBlock.Post(i);
            await t;
        }
    }
}

Method                    | Mean        | Error     | StdDev    | Gen 0        | Gen 1     | Gen 2 | Allocated
------------------------- | ----------- | --------- | --------- | ------------ | --------- | ----- | ------------
Channel_ReadThenWrite     | 878.9 ms    | 0.68 ms   | 0.60 ms   | -            | -         | -     | 72 B
BufferBlock_ReadThenWrite | 20,116.4 ms | 192.82 ms | 180.37 ms | 1184000.0000 | 2000.0000 | -     | 7360000232 B

This is in no way meant to suggest that the System.Threading.Tasks.Dataflow library shouldn’t be used. It enables developers to express succinctly a large number of concepts, and it can exhibit very good performance when applied to the problems it suits best. However, when all one needs is a hand-off data structure between one or more producers and one or more consumers, System.Threading.Channels is a much simpler, leaner bet.

What’s Next?

Hopefully at this point you have a better understanding of the System.Threading.Channels library, enough to see how it might fit into and help improve your applications. Give it a try, and we’d love your feedback, suggestions, issues, and PRs to improve it further at https://github.com/dotnet/runtime. Thanks!

The post An Introduction to System.Threading.Channels appeared first on .NET Blog.

ConfigureAwait FAQ


.NET added async/await to the languages and libraries over seven years ago. In that time, it’s caught on like wildfire, not only across the .NET ecosystem, but also being replicated in a myriad of other languages and frameworks. It’s also seen a ton of improvements in .NET, in terms of additional language constructs that utilize asynchrony, APIs offering async support, and fundamental improvements in the infrastructure that makes async/await tick (in particular performance and diagnostic-enabling improvements in .NET Core).

However, one aspect of async/await that continues to draw questions is ConfigureAwait. In this post, I hope to answer many of them. I intend for this post to be both readable from start to finish as well as being a list of Frequently Asked Questions (FAQ) that can be used as future reference.

To really understand ConfigureAwait, we need to start a bit earlier…

What is a SynchronizationContext?

The System.Threading.SynchronizationContext docs state that it “Provides the basic functionality for propagating a synchronization context in various synchronization models.” Not an entirely obvious description.

For the 99.9% use case, SynchronizationContext is just a type that provides a virtual Post method, which takes a delegate to be executed asynchronously (there are a variety of other virtual members on SynchronizationContext, but they’re much less used and are irrelevant for this discussion). The base type’s Post literally just calls ThreadPool.QueueUserWorkItem to asynchronously invoke the supplied delegate. However, derived types override Post to enable that delegate to be executed in the most appropriate place and at the most appropriate time.

For example, Windows Forms has a SynchronizationContext-derived type that overrides Post to do the equivalent of Control.BeginInvoke; that means any calls to its Post method will cause the delegate to be invoked at some later point on the thread associated with that relevant Control, aka “the UI thread”. Windows Forms relies on Win32 message handling and has a “message loop” running on the UI thread, which simply sits waiting for new messages to arrive to process. Those messages could be for mouse movements and clicks, for keyboard typing, for system events, for delegates being available to invoke, etc. So, given a SynchronizationContext instance for the UI thread of a Windows Forms application, to get a delegate to execute on that UI thread, one simply needs to pass it to Post.

The same goes for Windows Presentation Foundation (WPF). It has its own SynchronizationContext-derived type with a Post override that similarly “marshals” a delegate to the UI thread (via Dispatcher.BeginInvoke), in this case managed by a WPF Dispatcher rather than a Windows Forms Control.

And for the Windows Runtime (WinRT). It has its own SynchronizationContext-derived type with a Post override that also queues the delegate to the UI thread via its CoreDispatcher.

This goes beyond just “run this delegate on the UI thread”. Anyone can implement a SynchronizationContext with a Post that does anything. For example, I may not care what thread a delegate runs on, but I want to make sure that any delegates Post‘d to my SynchronizationContext are executed with some limited degree of concurrency. I can achieve that with a custom SynchronizationContext like this:

internal sealed class MaxConcurrencySynchronizationContext : SynchronizationContext
{
    private readonly SemaphoreSlim _semaphore;

    public MaxConcurrencySynchronizationContext(int maxConcurrencyLevel) =>
        _semaphore = new SemaphoreSlim(maxConcurrencyLevel);

    public override void Post(SendOrPostCallback d, object state) =>
        _semaphore.WaitAsync().ContinueWith(delegate
        {
            try { d(state); } finally { _semaphore.Release(); }
        }, default, TaskContinuationOptions.None, TaskScheduler.Default);

    public override void Send(SendOrPostCallback d, object state)
    {
        _semaphore.Wait();
        try { d(state); } finally { _semaphore.Release(); }
    }
}

In fact, the unit testing framework xunit provides a SynchronizationContext very similar to this, which it uses to limit the amount of code associated with tests that can be run concurrently.

The benefit of all of this is the same as with any abstraction: it provides a single API that can be used to queue a delegate for handling however the creator of the implementation desires, without needing to know the details of that implementation. So, if I’m writing a library, and I want to go off and do some work, and then queue a delegate back to the original location’s “context”, I just need to grab their SynchronizationContext, hold on to it, and then when I’m done with my work, call Post on that context to hand off the delegate I want invoked. I don’t need to know that for Windows Forms I should grab a Control and use its BeginInvoke, or for WPF I should grab a Dispatcher and use its BeginInvoke, or for xunit I should somehow acquire its context and queue to it; I simply need to grab the current SynchronizationContext and use that later on. To achieve that, SynchronizationContext provides a Current property, such that to achieve the aforementioned objective I might write code like this:

public void DoWork(Action worker, Action completion)
{
    SynchronizationContext sc = SynchronizationContext.Current;
    ThreadPool.QueueUserWorkItem(_ =>
    {
        try { worker(); }
        finally { sc.Post(_ => completion(), null); }
    });
}

A framework that wants to expose a custom context from Current uses the SynchronizationContext.SetSynchronizationContext method.
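For example, an app model or test framework could install the MaxConcurrencySynchronizationContext shown earlier like this:

// Install the custom context on the current thread; from then on,
// SynchronizationContext.Current on this thread returns it.
SynchronizationContext.SetSynchronizationContext(
    new MaxConcurrencySynchronizationContext(maxConcurrencyLevel: 4));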

What is a TaskScheduler?

SynchronizationContext is a general abstraction for a “scheduler”. Individual frameworks sometimes have their own abstractions for a scheduler, and System.Threading.Tasks is no exception. When Tasks are backed by a delegate such that they can be queued and executed, they’re associated with a System.Threading.Tasks.TaskScheduler. Just as SynchronizationContext provides a virtual Post method to queue a delegate’s invocation (with the implementation later invoking the delegate via typical delegate invocation mechanisms), TaskScheduler provides an abstract QueueTask method (with the implementation later invoking that Task via the ExecuteTask method).

The default scheduler as returned by TaskScheduler.Default is the thread pool, but it’s possible to derive from TaskScheduler and override the relevant methods to achieve arbitrary behaviors for when and where a Task is invoked. For example, the core libraries include the System.Threading.Tasks.ConcurrentExclusiveSchedulerPair type. An instance of this class exposes two TaskScheduler properties, one called ExclusiveScheduler and one called ConcurrentScheduler. Tasks scheduled to the ConcurrentScheduler may run concurrently, but subject to a limit supplied to ConcurrentExclusiveSchedulerPair when it was constructed (similar to the MaxConcurrencySynchronizationContext shown earlier), and no ConcurrentScheduler Tasks will run when a Task scheduled to ExclusiveScheduler is running, with only one exclusive Task allowed to run at a time… in this way, it behaves very much like a reader/writer-lock.

Like SynchronizationContext, TaskScheduler also has a Current property, which returns the “current” TaskScheduler. Unlike SynchronizationContext, however, there’s no method for setting the current scheduler. Instead, the current scheduler is the one associated with the currently running Task, and a scheduler is provided to the system as part of starting a Task. So, for example, a program like the one below will output “True”, as the lambda used with StartNew is executed on the ConcurrentExclusiveSchedulerPair‘s ExclusiveScheduler and will see TaskScheduler.Current set to that scheduler.
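Here’s a minimal sketch of such a program (a reconstruction, not an exact listing):

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var cesp = new ConcurrentExclusiveSchedulerPair();
        Task.Factory.StartNew(() =>
        {
            // The lambda runs on the ExclusiveScheduler, so Current reflects it.
            Console.WriteLine(TaskScheduler.Current == cesp.ExclusiveScheduler);
        }, CancellationToken.None, TaskCreationOptions.None, cesp.ExclusiveScheduler).Wait();
    }
}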

Interestingly, TaskScheduler provides a static FromCurrentSynchronizationContext method, which creates a new TaskScheduler that queues Tasks to run on whatever SynchronizationContext.Current returned, using its Post method for queueing tasks.

How do SynchronizationContext and TaskScheduler relate to await?

Consider writing a UI app with a Button. Upon clicking the Button, we want to download some text from a web site and set it as the Button‘s Content. The Button should only be accessed from the UI thread that owns it, so when we’ve successfully downloaded the new date and time text and want to store it back into the Button‘s Content, we need to do so from the thread that owns the control. If we don’t, we get an exception like:

System.InvalidOperationException: 'The calling thread cannot access this object because a different thread owns it.'

If we were writing this out manually, we could use SynchronizationContext as shown earlier to marshal the setting of the Content back to the original context, such as via a TaskScheduler:

private static readonly HttpClient s_httpClient = new HttpClient();

private void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    s_httpClient.GetStringAsync("http://example.com/currenttime").ContinueWith(downloadTask =>
    {
        downloadBtn.Content = downloadTask.Result;
    }, TaskScheduler.FromCurrentSynchronizationContext());
}

or using SynchronizationContext directly:

private static readonly HttpClient s_httpClient = new HttpClient();

private void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    SynchronizationContext sc = SynchronizationContext.Current;
    s_httpClient.GetStringAsync("http://example.com/currenttime").ContinueWith(downloadTask =>
    {
        sc.Post(delegate
        {
            downloadBtn.Content = downloadTask.Result;
        }, null);
    });
}

Both of these approaches, though, explicitly use callbacks. We would instead like to write the code naturally with async/await:

private static readonly HttpClient s_httpClient = new HttpClient();

private async void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime");
    downloadBtn.Content = text;
}

This “just works”, successfully setting Content on the UI thread, because just as with the manually implemented version above, awaiting a Task pays attention by default to SynchronizationContext.Current, as well as to TaskScheduler.Current. When you await anything in C#, the compiler transforms the code to ask (via calling GetAwaiter) the “awaitable” (in this case, the Task) for an “awaiter” (in this case, a TaskAwaiter<string>). That awaiter is responsible for hooking up the callback (often referred to as the “continuation”) that will call back into the state machine when the awaited object completes, and it does so using whatever context/scheduler it captured at the time the callback was registered. While not exactly the code used (there are additional optimizations and tweaks employed), it’s something like this:

object scheduler = SynchronizationContext.Current;
if (scheduler is null && TaskScheduler.Current != TaskScheduler.Default)
{
    scheduler = TaskScheduler.Current;
}

In other words, it first checks whether there’s a SynchronizationContext set, and if there isn’t, whether there’s a non-default TaskScheduler in play. If it finds one, when the callback is ready to be invoked, it’ll use the captured scheduler; otherwise, it’ll generally just execute the callback as part of the operation that completes the awaited task.

What does ConfigureAwait(false) do?

The ConfigureAwait method isn’t special: it’s not recognized in any special way by the compiler or by the runtime. It is simply a method that returns a struct (a ConfiguredTaskAwaitable) that wraps the original task it was called on as well as the specified Boolean value. Remember that await can be used with any type that exposes the right pattern. By returning a different type, it means that when the compiler accesses the instance’s GetAwaiter method (part of the pattern), it’s doing so off of the type returned from ConfigureAwait rather than off of the task directly, and that provides a hook to change the behavior of how the await behaves via this custom awaiter.

Specifically, awaiting the type returned from ConfigureAwait(continueOnCapturedContext: false) instead of awaiting the Task directly ends up impacting the logic shown earlier for how the target context/scheduler is captured. It effectively makes the previously shown logic more like this:

object scheduler = null;
if (continueOnCapturedContext)
{
    scheduler = SynchronizationContext.Current;
    if (scheduler is null && TaskScheduler.Current != TaskScheduler.Default)
    {
        scheduler = TaskScheduler.Current;
    }
}

In other words, by specifying false, even if there is a current context or scheduler to call back to, it pretends as if there isn’t.

Why would I want to use ConfigureAwait(false)?

ConfigureAwait(continueOnCapturedContext: false) is used to avoid forcing the callback to be invoked on the original context or scheduler. This has a few benefits:

Improving performance. There is a cost to queueing the callback rather than just invoking it, both because there’s extra work (and typically extra allocation) involved, but also because it means certain optimizations we’d otherwise like to employ in the runtime can’t be used (we can do more optimization when we know exactly how the callback will be invoked, but if it’s handed off to an arbitrary implementation of an abstraction, we can sometimes be limited). For very hot paths, even the extra costs of checking for the current SynchronizationContext and the current TaskScheduler (both of which involve accessing thread statics) can add measurable overhead. If the code after an await doesn’t actually require running in the original context, using ConfigureAwait(false) can avoid all these costs: it won’t need to queue unnecessarily, it can utilize all the optimizations it can muster, and it can avoid the unnecessary thread static accesses.

Avoiding deadlocks. Consider a library method that uses await on the result of some network download. You invoke this method and synchronously block waiting for it to complete, such as by using .Wait() or .Result or .GetAwaiter().GetResult() off of the returned Task object. Now consider what happens if your invocation of it happens when the current SynchronizationContext is one that limits the number of operations that can be running on it to 1, whether explicitly via something like the MaxConcurrencySynchronizationContext shown earlier, or implicitly by this being a context that only has one thread that can be used, e.g. a UI thread. So you invoke the method on that one thread and then block it waiting for the operation to complete. The operation kicks off the network download and awaits it. Since by default awaiting a Task will capture the current SynchronizationContext, it does so, and when the network download completes, it queues back to the SynchronizationContext the callback that will invoke the remainder of the operation. But the only thread that can process the queued callback is currently blocked by your code blocking waiting on the operation to complete. And that operation won’t complete until the callback is processed. Deadlock! This can apply even when the context doesn’t limit the concurrency to just 1, but when the resources are limited in any fashion. Imagine the same situation, except using the MaxConcurrencySynchronizationContext with a limit of 4. And instead of making just one call to the operation, we queue to that context 4 invocations, each of which makes the call and blocks waiting for it to complete. We’ve now still blocked all of the resources while waiting for the async methods to complete, and the only thing that will allow those async methods to complete is if their callbacks can be processed by this context that’s already entirely consumed. Again, deadlock! If instead the library method had used ConfigureAwait(false), it would not queue the callback back to the original context, avoiding the deadlock scenarios.
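As a concrete sketch of the single-threaded case (the method and handler names here are hypothetical; assume a UI framework whose SynchronizationContext is serviced by exactly one thread):

// Library method: without ConfigureAwait(false), the code after the await
// is queued back to the captured, single-threaded SynchronizationContext.
public async Task<int> GetContentLengthAsync()
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime");
    return text.Length; // must run on the captured context
}

private void button_Click(object sender, RoutedEventArgs e)
{
    // The UI thread blocks here, so the continuation queued above can
    // never run, and .Result never returns. Deadlock.
    int length = GetContentLengthAsync().Result;
}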

Why would I want to use ConfigureAwait(true)?

You wouldn’t. ConfigureAwait(true) does nothing meaningful. When comparing await task with await task.ConfigureAwait(true), they’re functionally identical. If you see ConfigureAwait(true) in production code, you can delete it.

The ConfigureAwait method accepts a Boolean because there are some niche situations in which you want to pass in a variable to control the configuration. But the 99% use case is with a hardcoded false argument value, ConfigureAwait(false).

When should I use ConfigureAwait(false)?

It depends: are you implementing application-level code or general-purpose library code?

When writing applications, you generally want the default behavior (which is why it is the default behavior). If an app model / environment (e.g. Windows Forms, WPF, ASP.NET Core, etc.) publishes a custom SynchronizationContext, there’s almost certainly a really good reason it does: it’s providing a way for code that cares about synchronization context to interact with the app model / environment appropriately. So if you’re writing an event handler in a Windows Forms app, writing a unit test in xunit, writing code in an ASP.NET MVC controller, whether or not the app model did in fact publish a SynchronizationContext, you want to use that SynchronizationContext if it exists. And that means the default / ConfigureAwait(true). You make simple use of await, and the right things happen with regards to callbacks/continuations being posted back to the original context if one existed. This leads to the general guidance of: if you’re writing app-level code, do not use ConfigureAwait(false). If you think back to the Click event handler code example earlier in this post:

private static readonly HttpClient s_httpClient = new HttpClient();

private async void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime");
    downloadBtn.Content = text;
}

the setting of downloadBtn.Content = text needs to be done back in the original context. If the code had violated this guideline and instead used ConfigureAwait(false) when it shouldn’t have:

private static readonly HttpClient s_httpClient = new HttpClient();

private async void downloadBtn_Click(object sender, RoutedEventArgs e)
{
    string text = await s_httpClient.GetStringAsync("http://example.com/currenttime").ConfigureAwait(false); // bug
    downloadBtn.Content = text;
}

bad behavior will result. The same would go for code in a classic ASP.NET app reliant on HttpContext.Current; using ConfigureAwait(false) and then trying to use HttpContext.Current is likely going to result in problems.

In contrast, general-purpose libraries are “general purpose” in part because they don’t care about the environment in which they’re used. You can use them from a web app or from a client app or from a test, it doesn’t matter, as the library code is agnostic to the app model it might be used in. Being agnostic then also means that it’s not going to be doing anything that needs to interact with the app model in a particular way, e.g. it won’t be accessing UI controls, because a general-purpose library knows nothing about UI controls. Since we then don’t need to be running the code in any particular environment, we can avoid forcing continuations/callbacks back to the original context, and we do that by using ConfigureAwait(false) and gaining both the performance and reliability benefits it brings. This leads to the general guidance of: if you’re writing general-purpose library code, use ConfigureAwait(false). This is why, for example, you’ll see every (or almost every) await in the .NET Core runtime libraries using ConfigureAwait(false); with few exceptions, in the cases where it doesn’t, it’s very likely a bug to be fixed. For example, this PR fixed a missing ConfigureAwait(false) call in HttpClient.

As with all guidance, of course, there can be exceptions, places where it doesn’t make sense. For example, one of the larger exemptions (or at least categories that requires thought) in general-purpose libraries is when those libraries have APIs that take delegates to be invoked. In such cases, the caller of the library is passing potentially app-level code to be invoked by the library, which then effectively renders those “general purpose” assumptions of the library moot. Consider, for example, an asynchronous version of LINQ’s Where method, e.g. public static async IAsyncEnumerable<T> WhereAsync(this IAsyncEnumerable<T> source, Func<T, bool> predicate). Does predicate here need to be invoked back on the original SynchronizationContext of the caller? That’s up to the implementation of WhereAsync to decide, and it’s a reason it may choose not to use ConfigureAwait(false).

Even with these special cases, the general guidance stands and is a very good starting point: use ConfigureAwait(false) if you’re writing general-purpose library / app-model-agnostic code, and otherwise don’t.

Does ConfigureAwait(false) guarantee the callback won’t be run in the original context?

No. It guarantees it won’t be queued back to the original context… but that doesn’t mean the code after an await task.ConfigureAwait(false) won’t still run in the original context. That’s because awaits on already-completed awaitables just keep running past the await synchronously rather than forcing anything to be queued back. So, if you await a task that’s already completed by the time it’s awaited, regardless of whether you used ConfigureAwait(false), the code immediately after this will continue to execute on the current thread in whatever context is still current.
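
A minimal sketch of that synchronous-completion behavior:

async Task ExampleAsync()
{
    Task completed = Task.CompletedTask;

    // The task is already complete, so execution continues synchronously on
    // the current thread; ConfigureAwait(false) changes nothing here.
    await completed.ConfigureAwait(false);

    // Still on the same thread, with whatever SynchronizationContext was
    // current before the await still current.
}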

Is it ok to use ConfigureAwait(false) only on the first await in my method and not on the rest?

In general, no. See the previous FAQ. If the await task.ConfigureAwait(false) involves a task that’s already completed by the time it’s awaited (which is actually incredibly common), then the ConfigureAwait(false) will be meaningless, as the thread continues to execute code in the method after this and still in the same context that was there previously.

One notable exception to this is if you know that the first await will always complete asynchronously and the thing being awaited will invoke its callback in an environment free of a custom SynchronizationContext or a TaskScheduler. For example, CryptoStream in the .NET runtime libraries wants to ensure that its potentially computationally-intensive code doesn’t run as part of the caller’s synchronous invocation, so it uses a custom awaiter to ensure that everything after the first await runs on a thread pool thread. However, even in that case you’ll notice that the next await still uses ConfigureAwait(false); technically that’s not necessary, but it makes code review a lot easier, as otherwise every time this code is looked at it doesn’t require an analysis to understand why ConfigureAwait(false) was left off.

Can I use Task.Run to avoid using ConfigureAwait(false)?

Yes. If you write:

Task.Run(async delegate
{
    await SomethingAsync(); // won't see the original context
});

then a ConfigureAwait(false) on that SomethingAsync() call will be a nop, because the delegate passed to Task.Run is going to be executed on a thread pool thread, with no user code higher on the stack, such that SynchronizationContext.Current will return null. Further, Task.Run implicitly uses TaskScheduler.Default, which means querying TaskScheduler.Current inside of the delegate will also return Default. That means the await will exhibit the same behavior regardless of whether ConfigureAwait(false) was used. It also doesn’t make any guarantees about what code inside of this lambda might do. If you have the code:

Task.Run(async delegate
{
    SynchronizationContext.SetSynchronizationContext(new SomeCoolSyncCtx());
    await SomethingAsync(); // will target SomeCoolSyncCtx
});

then the code inside SomethingAsync will in fact see SynchronizationContext.Current as that SomeCoolSyncCtx instance, and both this await and any non-configured awaits inside SomethingAsync will post back to it. So to use this approach, you need to understand what all of the code you’re queueing may or may not do and whether its actions could thwart yours.

This approach also comes at the expense of needing to create/queue an additional task object. That may or may not matter to your app or library depending on your performance sensitivity.

Also keep in mind that such tricks may cause more problems than they’re worth and have other unintended consequences. For example, static analysis tools (e.g. Roslyn analyzers) have been written to flag awaits that don’t use ConfigureAwait(false), such as CA2007. If you enable such an analyzer but then employ a trick like this just to avoid using ConfigureAwait, there’s a good chance the analyzer will flag it, and actually cause more work for you. So maybe you then disable the analyzer because of its noisiness, and now you end up missing other places in the codebase where you actually should have been using ConfigureAwait(false).

Can I use SynchronizationContext.SetSynchronizationContext to avoid using ConfigureAwait(false)?

No. Well, maybe. It depends on the involved code.

Some developers write code like this:

Task t;
SynchronizationContext old = SynchronizationContext.Current;
SynchronizationContext.SetSynchronizationContext(null);
try
{
    t = CallCodeThatUsesAwaitAsync(); // awaits in here won't see the original context
}
finally { SynchronizationContext.SetSynchronizationContext(old); }
await t; // will still target the original context

in hopes that it’ll make the code inside CallCodeThatUsesAwaitAsync see the current context as null. And it will. However, the above will do nothing to affect what the await sees for TaskScheduler.Current, so if this code is running on some custom TaskScheduler, awaits inside CallCodeThatUsesAwaitAsync (and that don’t use ConfigureAwait(false)) will still see and queue back to that custom TaskScheduler.

All of the same caveats also apply as in the previous Task.Run-related FAQ: there are perf implications of such a workaround, and the code inside the try could also thwart these attempts by setting a different context (or invoking code with a non-default TaskScheduler).

With such a pattern, you also need to be careful about a slight variation:

SynchronizationContext old = SynchronizationContext.Current;
SynchronizationContext.SetSynchronizationContext(null);
try
{
    await t;
}
finally { SynchronizationContext.SetSynchronizationContext(old); }

See the problem? It’s a bit hard to see but also potentially very impactful. There’s no guarantee that the await will end up invoking the callback/continuation on the original thread, which means the resetting of the SynchronizationContext back to the original may not actually happen on the original thread, which could lead subsequent work items on that thread to see the wrong context (to counteract this, well-written app models that set a custom context generally add code to manually reset it before invoking any further user code). And even if it does happen to run on the same thread, it may be a while before it does, such that the context won’t be appropriately restored for a while. And if it runs on a different thread, it could end up setting the wrong context onto that thread. And so on. Very far from ideal.

I’m using GetAwaiter().GetResult(). Do I need to use ConfigureAwait(false)?

No. ConfigureAwait only affects the callbacks. Specifically, the awaiter pattern requires awaiters to expose an IsCompleted property, a GetResult method, and an OnCompleted method (optionally with an UnsafeOnCompleted method). ConfigureAwait only affects the behavior of {Unsafe}OnCompleted, so if you’re just directly calling to the awaiter’s GetResult() method, whether you’re doing it on the TaskAwaiter or the ConfiguredTaskAwaitable.ConfiguredTaskAwaiter makes zero behavior difference. So, if you see task.ConfigureAwait(false).GetAwaiter().GetResult() in code, you can replace it with task.GetAwaiter().GetResult() (and also consider whether you really want to be blocking like that).

I know I’m running in an environment that will never have a custom SynchronizationContext or custom TaskScheduler. Can I skip using ConfigureAwait(false)?

Maybe. It depends on how sure you are of the “never” part. As mentioned in previous FAQs, just because the app model you’re working in doesn’t set a custom SynchronizationContext and doesn’t invoke your code on a custom TaskScheduler doesn’t mean that some other user or library code doesn’t. So you need to be sure that’s not the case, or at least recognize the risk if it may be.

I’ve heard ConfigureAwait(false) is no longer necessary in .NET Core. True?

False. It’s needed when running on .NET Core for exactly the same reasons it’s needed when running on .NET Framework. Nothing’s changed in that regard.

What has changed, however, is whether certain environments publish their own SynchronizationContext. In particular, whereas the classic ASP.NET on .NET Framework has its own SynchronizationContext, in contrast ASP.NET Core does not. That means that code running in an ASP.NET Core app by default won’t see a custom SynchronizationContext, which lessens the need for ConfigureAwait(false) running in such an environment.

It doesn’t mean, however, that there will never be a custom SynchronizationContext or TaskScheduler present. If some user code (or other library code your app is using) sets a custom context and calls your code, or invokes your code in a Task scheduled to a custom TaskScheduler, then even in ASP.NET Core your awaits may see a non-default context or scheduler that would lead you to want to use ConfigureAwait(false). Of course, in such situations, if you avoid synchronously blocking (which you should avoid doing in web apps regardless) and if you don’t mind the small performance overheads in such limited occurrences, you can probably get away without using ConfigureAwait(false).

Can I use ConfigureAwait when ‘await foreach’ing an IAsyncEnumerable?

Yes. See this MSDN Magazine article for an example.

await foreach binds to a pattern, and so while it can be used to enumerate an IAsyncEnumerable<T>, it can also be used to enumerate something that exposes the right API surface area. The .NET runtime libraries include a ConfigureAwait extension method on IAsyncEnumerable<T> that returns a custom type that wraps the IAsyncEnumerable<T> and a Boolean and exposes the right pattern. When the compiler generates calls to the enumerator’s MoveNextAsync and DisposeAsync methods, those calls are to the returned configured enumerator struct type, and it in turn performs the awaits in the desired configured way.
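
As a usage sketch (assuming source is an IAsyncEnumerable<int>):

async Task ConsumeAsync(IAsyncEnumerable<int> source)
{
    // The compiler-generated MoveNextAsync/DisposeAsync awaits are performed
    // with continueOnCapturedContext: false.
    await foreach (int item in source.ConfigureAwait(false))
    {
        Console.WriteLine(item);
    }
}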

Can I use ConfigureAwait when ‘await using’ an IAsyncDisposable?

Yes, though with a minor complication.

As with IAsyncEnumerable<T> described in the previous FAQ, the .NET runtime libraries expose a ConfigureAwait extension method on IAsyncDisposable, and await using will happily work with this as it implements the appropriate pattern (namely exposing an appropriate DisposeAsync method):

await using (var c = new MyAsyncDisposableClass().ConfigureAwait(false))
{
    ...
}

The problem here is that the type of c is now not MyAsyncDisposableClass but rather a System.Runtime.CompilerServices.ConfiguredAsyncDisposable, which is the type returned from that ConfigureAwait extension method on IAsyncDisposable.

To get around that, you need to write one extra line:

var c = new MyAsyncDisposableClass();
await using (c.ConfigureAwait(false))
{
    ...
}

Now the type of c is again the desired MyAsyncDisposableClass. This also has the effect of increasing the scope of c; if that’s impactful, you can wrap the whole thing in braces.
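
For example (using the same hypothetical class as above):

{
    var c = new MyAsyncDisposableClass();
    await using (c.ConfigureAwait(false))
    {
        ...
    }
} // c is no longer in scope here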

I used ConfigureAwait(false), but my AsyncLocal still flowed to code after the await. Is that a bug?

No, that is expected. AsyncLocal<T> data flows as part of ExecutionContext, which is separate from SynchronizationContext. Unless you’ve explicitly disabled ExecutionContext flow with ExecutionContext.SuppressFlow(), ExecutionContext (and thus AsyncLocal<T> data) will always flow across awaits, regardless of whether ConfigureAwait is used to avoid capturing the original SynchronizationContext. For more information, see this blog post.
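
A small sketch of that behavior (the field name is just for illustration):

private static readonly AsyncLocal<int> s_requestId = new AsyncLocal<int>();

private async Task DemoAsync()
{
    s_requestId.Value = 42;
    await Task.Delay(1).ConfigureAwait(false);

    // Prints 42: ExecutionContext (and thus AsyncLocal<T> data) flowed across
    // the await even though the SynchronizationContext was not captured.
    Console.WriteLine(s_requestId.Value);
}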

Could the language help me avoid needing to use ConfigureAwait(false) explicitly in my library?

Library developers sometimes express their frustration with needing to use ConfigureAwait(false) and ask for less invasive alternatives.

Currently there aren’t any, at least not built into the language / compiler / runtime. There are however numerous proposals for what such a solution might look like, e.g. https://github.com/dotnet/csharplang/issues/645, https://github.com/dotnet/csharplang/issues/2542, https://github.com/dotnet/csharplang/issues/2649, and https://github.com/dotnet/csharplang/issues/2746.

If this is important to you, or if you feel like you have new and interesting ideas here, I encourage you to contribute your thoughts to those or new discussions.

The post ConfigureAwait FAQ appeared first on .NET Blog.

Visual Studio Code November 2019

Microsoft again recognized as a Leader in the 2019 Gartner Content Services Platforms Magic Quadrant Report

Modernizing Find in Files


Find in Files is one of the most commonly used features in Visual Studio. It’s also a feature that gets a substantial amount of feedback, and due to the age of the code, has been very costly to improve. Earlier this year, we decided to reimplement the feature from the ground up in order to realize significant performance and usability improvements.

We’ve released the new find in files experience in Visual Studio 2019 version 16.5 Preview 1 and we’re looking for feedback from the community. We expect this experience to be the one our developers will use and love in the future, so we want to make sure we’ve prioritized the right features. We still have more improvements coming that we’re not quite ready to talk about yet, but before we deprecate the old experience, we want to make sure the new version is meeting the needs of our users.

A screen capture of the new Find in Files dialog.

The new experience is available by searching for “Find in Files” or “Replace in Files” in Visual Studio search (Ctrl+Q by default). You can also get to these commands with Ctrl+Shift+F and Ctrl+Shift+H respectively. The new experience is pictured above and should be easily recognized by the more modern look and consistent color theming.

If you’re not seeing the new version, you can search for “Preview Features” in Visual Studio search (Again, Ctrl+Q by default). On that page, make sure “Use previous Find in Files” is unchecked. Conversely, if you’re having problems with the new experience, you can toggle this option to enable the old one. If you do find that you need the old Find in Files experience, we’d love to hear why. Please feel free to supply any feedback you might have over in Developer Community.

Performance

We took the previous implementation of Find in Files and reimplemented it completely in managed C#. This allows us to avoid unnecessary interop calls and gives us much more room for improving the experience. Memory consumption is lower, and performance is much better.

In our internal testing on directories containing 100k+ files, searches that took over 4 minutes with the old implementation completed in 26 seconds. The biggest gains are in searches that use regular expressions, but searches without regular expressions generally saw their search time cut in half as well.

Specifying Paths

The new experience should feel comfortable for most folks, since it matches many other common find experiences. There are a few nuances that are worth calling out.

A screen capture of the Find in Files dialog that is cropped to only show the Look in and File types fields along with the options to include miscellaneous and external items.

The “Look in” box has a new option, “Current Directory”, which will search the folder that contains the currently open document. When searching a solution, there are checkboxes to include miscellaneous files (files that you’ve opened but aren’t part of the solution) as well as external items (files like “windows.h” that you might reference but aren’t part of the solution).

The three dots button next to the “Look in” box works like any other browse option to specify a directory to look in, but if you’ve already specified a directory, this button will append the new directory instead of replacing it. For instance, if your “Look in” value was “.\Code”, you could click the three dots button and navigate to a folder named “Shared Code”. The “Look in” would now show “.\Code;.\Shared Code”, and when the Find command is executed, it will search both of those folders.

The File types field can now also exclude files. Any path or file type prefixed with the “!” character will be excluded from the search. For instance, you can add “!*node_modules*” to the file types list to exclude any files in a node_modules folder.

Multiple Searches

One of the more frequent requests we’ve gotten is the ability to keep the results from one search while doing other searches. This makes it easy to compare results and see them side-by-side. This feature has been in Visual Studio for a while, and the new experience still supports it.

In the screenshot above, the Keep Results button has been enabled. Now, when a new search is executed, the results will be shown in a new tab. The screenshot above shows three searches that have already completed. Currently, this feature supports up to five searches. If you’ve already got five search results showing, the next search will reuse the oldest search result tab.

The Keep Results button is available for Find in Files as well as the Find All References feature.

Regular Expression Builder

A screen capture of the Find in Files dialog with a regular expression being used.

With Visual Studio 2019 version 16.5 Preview 2 (or later), checking the “Use regular expressions” checkbox enables you to specify a regular expression as a pattern for a match and also brings up the Regular Expression builder, which is useful for creating regular expressions. Regular expressions can allow searches for strings that span multiple lines. For instance, the expression “.*Hello.*\r\n.*World.*” will match any occurrence of the string “Hello” that has an occurrence of the string “World” anywhere on the next line.

When the “Use regular expressions” checkbox is checked, the regular expression builder will appear next to the Find field. Clicking this will give some examples for building regular expressions as well as a link to the documentation.

What’s Next

Now that the Find in Files experience has been reimplemented to use the newer patterns of Visual Studio, we’re going to be able to provide more of the features we get asked for. We’d love to hear your experiences with the new dialog. We’re always watching Developer Community, and we’ve got a survey specifically for collecting feedback on the new experience that you can answer here. We know there are features that aren’t available today and your feedback is how we’ll prioritize the rest of the features. If you’re running into problems or you think the new dialog isn’t working correctly, please send us feedback with the Give Feedback button in Visual Studio.

The post Modernizing Find in Files appeared first on Visual Studio Blog.


Microsoft Office 365 now available from new Swiss datacenter regions

New features in Azure Monitor metrics explorer based on your feedback


A few months ago, we posted a survey to gather feedback on your experience with metrics in the Azure portal. Thank you for participating and providing valuable suggestions! We appreciate your input, whether you are working on a hobby project, in a governmental organization, or at any size company—small to huge.

We want to share some of the insights we gained from the survey and highlight some of the features that we delivered based on your feedback. These features include:

  • Resource picker that supports multi-resource scoping.
  • Splitting by dimension allows limiting the number of time series and specifying sort order.
  • Charts can show a large number of datapoints.
  • Improved chart legends.

Resource picker with multi-resource scoping

One of the key pieces of feedback we heard was about the resource picker panel. You said that being able to select only one resource at a time when choosing a scope is too limiting. Now you can select multiple resources across resource groups in a subscription.

image1-cross-resource-picker 

Ability to limit the number of timeseries and change sort order when splitting by dimension

Many of you asked for the ability to configure the sort order based on dimension values, and for control over the maximum number of timeseries shown on the chart. Those who asked explained that for some metrics, such as “Available memory” and “Remaining disk space,” they want to see the timeseries with the smallest values, while for other metrics, including “CPU Utilization” or “Count of Failures,” showing the timeseries with the highest values makes more sense. To make this possible, we expanded the dimension splitter selector with Sort order and Limit count inputs.
 image2-split-picker-expanded 

Charts that show large number of datapoints

Charts with multiple timeseries over a long period, especially with a short time grain, are based on queries that return lots of datapoints. Unfortunately, processing too many datapoints may slow down chart interactions. To ensure the best performance, we used to apply a hard limit on the number of datapoints per chart, prompting users to lower the time range or to increase the time grain when the query returned too much data.

Some of you found the old experience frustrating. You said that occasionally you might want to plot charts with lots of datapoints, regardless of performance. Based on your suggestions, we changed the way we handle the limit. Instead of blocking chart rendering, we now display a message explaining that the metrics query will return a lot of data, but let you proceed anyway (with a friendly reminder that you might need to wait longer for the chart to display).
 image3-too-much-data-continue 
High-density charts from lots of datapoints can be useful to visualize the outliers, as shown in this example:
  image4-spikes-and-dips

Improved chart legend

A small but useful improvement was made based on your feedback that the chart legends often wouldn’t fit on the chart, making it hard to interpret the data. This was almost always happening with the charts pinned to dashboards and rendered in the tight space of dashboard tiles, or on screens that have smaller resolution. To solve the problem, we now let you scroll the legend until you find the data you need:
  image5-legend-scroll

Feedback

Let us know how we're doing and what more you'd like to see. Please stay tuned for more information on these and other new features in the coming months. We are continuously addressing pain points and making improvements based on your input.

If you have any questions or comments before our next survey, please use the feedback button on the Metrics blade. Don’t feel shy about giving us a shout out if you like a new feature or are excited about the direction we’re headed. Smiles are just as important in influencing our plans as frowns!

R 3.6.2 is out, and a preview of R 4.0.0


R 3.6.2, the latest update to the R language, is now available for download on Windows, Mac and Linux.

As a minor release, R 3.6.2 makes only small improvements to R, including some new options for dot charts and better handling of missing values when using running medians as a smoother on charts. It also includes several bug fixes and performance improvements.

But big changes are coming to R with version 4.0.0, which is expected to be released not long after R's official 20th birthday on February 29, 2020. (The CelebRation 2020 conference will mark the occasion in Copenhagen.) The R Core team has announced previews of some of the changes, which include:

An enhanced reference counting system. When you delete an object in R, it will usually release the associated memory back to the operating system. Likewise, if you copy an object with y <- x, R won't allocate new memory for y unless x is later modified. In current versions of R, however, that system breaks down if there are more than 2 references to any block of memory. Starting with R 4.0.0, all references will be counted, and so R should reclaim as much memory as possible, reducing R's overall memory footprint. This will have no impact on how you write R code, but this change makes R run faster, especially on systems with limited memory and with slow storage systems.

Normalization of matrix and array types. Conceptually, a matrix is just a 2-dimensional array. But current versions of R handle matrix and 2-D array objects differently in some cases. In R 4.0.0, matrix objects will formally inherit from the array class, eliminating such inconsistencies.

A refreshed color palette for charts. The base graphics palette for current versions of R (shown as R3 below) features saturated colors that vary considerably in brightness (for example, yellow doesn't display as prominently as red). In R 4.0.0, the palette R4 below will be used, with colors of consistent luminance that are easier to distinguish, especially for viewers with color deficiencies. Additional palettes will make it easy to make base graphics charts that match the color scheme of ggplot2 and other graphics systems.

R4 palette

Many other smaller changes are in the works too. See the NEWS file for the upcoming R release for details.

R developer page: NEWS file for upcoming R release

Build C++ Applications in a Linux Docker Container with Visual Studio


Docker containers provide a consistent development environment for building, testing, and deployment. The virtualized OS, file system, environment settings, libraries, and other dependencies are all encapsulated and shipped as one image that can be shared between developers and machines. This is especially useful for C++ cross-platform developers because you can target a container that runs a different operating system than the one on your development machine.

In this blog post we’re going to use Visual Studio’s native CMake support to build a simple Linux application in a Linux docker container over SSH. This post focuses on creating your first docker container and building from Visual Studio. If you’re interested in learning more about Docker as a tool to configure reproducible build environments, check out our post on using multi-stage containers for C++ development.

This workflow leverages Visual Studio’s native support for CMake, but the same instructions can be used to build an MSBuild-based Linux project in Visual Studio.

Set-up your first Linux docker container

First, we’ll set-up a Linux docker container on Windows. You will need to download the Docker Desktop Client for Windows and create a docker account if you haven’t already. See Install Docker Desktop on Windows for download information, system requirements, and installation instructions.

We’ll get started by pulling down an image of the Ubuntu OS  and running a few commands. From the Windows command prompt run:

> docker pull ubuntu

This will download the latest image of Ubuntu from Docker. You can see a list of your docker images by running:

> docker images

Next, we’ll use a Dockerfile to create a custom image based on our local image of Ubuntu. Dockerfiles contain the commands used to assemble an image and allow you to automatically reproduce the same build environment from any machine. See Dockerfile reference for more information on authoring your own Dockerfiles. The following Dockerfile can be used to install Visual Studio’s required build tools and configure SSH. CMake is also a required dependency but I will deploy statically linked binaries directly from Visual Studio in a later step. Use your favorite text editor to create a file called ‘Dockerfile’ with the following content.

# our local base image
FROM ubuntu 

LABEL description="Container for use with Visual Studio" 

# install build dependencies 
RUN apt-get update && apt-get install -y g++ rsync zip openssh-server make 

# configure SSH for communication with Visual Studio 
RUN mkdir -p /var/run/sshd

RUN echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config && \
    ssh-keygen -A

# expose port 22 
EXPOSE 22

We can then build an image based on our Dockerfile by running the following command from the directory where your Dockerfile is saved:

> docker build -t ubuntu-vs .

Next, we can run a container derived from our image:

> docker run -p 5000:22 -i -t ubuntu-vs /bin/bash

The -p flag is used to expose the container’s internal port to the host. If this step was successful, then you should automatically attach to the running container. You can stop your docker container at any time and return to the command prompt using the exit command. To reattach, run docker ps -a, docker start <container-ID>, and docker attach <container-ID> from the command prompt.

Lastly, we will interact with our docker container directly to start SSH and create a user account to use with our SSH connection. Note that you can also enable root login and start SSH from your Dockerfile if you want to avoid any manual and container-specific configuration. Replace <user-name> with the username you would like to use and run:

> service ssh start
> useradd -m -d /home/<user-name> -s /bin/bash -G sudo <user-name>
> passwd <user-name>

The -m and -d flags create a user with the specified home directory, the -s flag sets the user’s default shell, and the -G flag adds the user to the sudo group.

You are now ready to connect to your container from Visual Studio.

Connect to your docker container from Visual Studio

Make sure you have Visual Studio 2019 and the Linux development with C++ workload installed.

Open Visual Studio 2019 and create a new CMake project. CMake is cross-platform and allows you to configure an application to run on both Windows and Linux.

Once the IDE has loaded, you can add a SSH connection to your Linux docker container the same way you would add any other remote connection. Navigate to the Connection Manager (Tools > Options > Cross Platform > Connection Manager) and select “Add” to add a new remote connection.

Add a new remote connection in Visual Studio, with input fields for host name, port, user name, authentication type, and password.

Your host name should be “localhost”, the port should be whatever you are using for your SSH connection (in this example we’re using 5000), and your username and password should match the user account that you just created for your container.

Configure build in Visual Studio

At this point the project behaves like any other CMake project in Visual Studio. To configure and build the console application in our Linux container navigate to “Manage Configurations…” in the configuration drop-down.

You can then select the green plus sign in the CMake Settings Editor to add a new “Linux-Debug” configuration. Make sure that the remote machine name of your Linux configuration matches the remote connection we created for our Linux docker container.

Remote machine name property in the CMake Settings Editor showing the local docker container I am connected to

Save the CMake Settings Editor (ctrl + s) and select your new Linux configuration from the configuration drop-down to kick off a CMake configuration. If you don’t already have CMake installed on your docker container, then Visual Studio will prompt you to deploy statically linked binaries directly to your remote connection as a part of the configure step.

At this point you can build your application in your Linux docker container directly from Visual Studio. Additional build settings (including custom toolchain files, CMake variables, and environment variables) can be configured in the CMake Settings Editor. The underlying CMakeSettings.json file can store multiple build configurations and can be checked into source control and shared between team members.

Coming next

This post showed you how to build a C++ application in a Linux docker container with Visual Studio. Stay tuned for our next post, where we will show you how to copy the build artifacts back to your local Windows machine and debug using gdbserver on a second remote system.

Give us your feedback

Do you have feedback on our Linux tooling or CMake support in Visual Studio? We’d love to hear from you to help us prioritize and build the right features for you. We can be reached via the comments below, Developer Community, email (visualcpp@microsoft.com), and Twitter (@VisualC).

The post Build C++ Applications in a Linux Docker Container with Visual Studio appeared first on C++ Team Blog.

Top Stories from the Microsoft DevOps Community – 2019.12.13


It is the holiday season, and the bright lights are everywhere. In the technology world, I hope you’re seeing more green than red lights in your Azure Pipelines status badges!

Monitor Azure DevOps workflows and pipelines with Datadog
Pipeline status monitoring is an important part of the software delivery lifecycle. You can now monitor Azure DevOps in Datadog, seeing data live in an event stream! The integration can be configured in minutes, and allows you to monitor Azure Pipelines as well as other types of events, such as work item or repository activity. Thank you, Steve Harrington, Rogan Ferguson and Shashank Barsin for creating this overview!

Automating Build Pipeline Creation using Azure DevOps Services REST API
While you can configure all of your pipelines through the UI, Azure DevOps also offers a full REST API that allows you to automate the workflow. Ryan Buchanan was facing a particular problem – the need to create a large number of similar pipelines. This post demonstrates a PowerShell script used to automate the REST API calls for creating the Azure Pipelines. Thank you, Ryan!

Canary Deployments with Just Azure DevOps
Canary releases are very useful, especially when it comes to applications deployed on container orchestrators. In this blog, Yuri Burger details how to create canary releases for Kubernetes apps using Azure Pipelines. This implementation does not use the new canary deployment strategy yet, but does have gated approvals, which is certainly helpful!

What Is Azure Pipelines A Primer
In this new video series, Mickey Gousset starts with an overview of general CI/CD and Azure Pipelines concepts, and then continues with deeper dives into Azure YAML Pipeline features. Subscribe to Mickey’s channel for more upcoming videos!

CI/CD for Go App with Azure Pipelines
And if you are in the mood for a longer video, here is an excellent presentation on how to build, test and deploy a Go Web API using Azure DevOps. Thanks, Rainer Stropek!

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.12.13 appeared first on Azure DevOps Blog.

Announcing future user-agents for Bingbot


As announced in October, Bing is adopting the new Microsoft Edge as the engine to run JavaScript and render web pages. We have already switched to Microsoft Edge for thousands of web sites “under the hood”. This evolution was transparent for most of the sites, and we carefully tested whether each website renders fine after switching to Microsoft Edge. Over the coming months, we will scale this migration to cover all the sites.

So far, we have been crawling using the existing bingbot user-agents. With this change, we will start the transition to a new bingbot user-agent, first for sites which require it for rendering and then gradually and carefully to all sites.

Bingbot user-agents today 

  • Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) 
  • Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) 
  • Mozilla/5.0 (Windows Phone 8.1; ARM; Trident/7.0; Touch; rv:11.0; IEMobile/11.0; NOKIA; Lumia 530) like Gecko (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) 


In addition to the existing user-agents listed above, the following are the new evergreen Bingbot user-agents:

Desktop
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) Chrome/W.X.Y.Z Safari/537.36 Edg/W.X.Y.Z 

Mobile
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 Edg/W.X.Y.Z (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm) 

We are committing to regularly updating our web page rendering engine to the most recent stable version of Microsoft Edge, making the above user-agent strings evergreen: “W.X.Y.Z” will be substituted with the latest Microsoft Edge version we’re using, for example “80.0.345.0”.
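
If you filter traffic or analytics by user agent, note that the “bingbot” token remains present in all of the strings above, so a simple substring check keeps working across version updates. Here’s a minimal sketch (user agents can be spoofed, so real crawler verification should also use methods such as reverse DNS lookups):

static bool LooksLikeBingbot(string userAgent) =>
    userAgent != null &&
    userAgent.Contains("bingbot/", StringComparison.OrdinalIgnoreCase);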

How to test your web site 

For most web sites, there is nothing to worry about, as we will carefully test that the sites render fine dynamically before switching them to Microsoft Edge and our new user-agent.

We invite you to install and test the new Microsoft Edge to check that your site looks fine with it. If it does, then you will not be affected by the change. You can also register your site on Bing Webmaster Tools to get insights about your site, to be notified if we detect issues, and to investigate your site using our upcoming tools based on our new rendering engine.

We look forward to sharing more details in the future. 

Thanks, 
Fabrice Canel 
Principal Program Manager 
Microsoft - Bing

Better performance with bursting enhancement on Azure Disks


We introduced the preview of bursting support on Azure Premium SSD Disks, and new disk sizes 4/8/16 GiB on both Premium & Standard SSDs, at Microsoft Ignite in November, and we would like to share more details about it. With bursting, eligible Premium SSD disks can now achieve up to 30x the provisioned performance target, enabling better handling of spiky workloads. If you have workloads running on-premises with less predictable disk traffic, you can migrate to Azure and improve your overall performance by taking advantage of bursting support.

Disk bursting is enforced via a credit-based system, where you accumulate credits when traffic is below the provisioned target and consume credits when traffic exceeds it; a worked example follows the list below. You can best leverage the capability in these scenarios:

  • OS disks to accelerate virtual machine (VM) boot: You can expect to experience a boost as part of VM boot where reads to the OS disk may be issued at a higher rate. If you are hosting cloud workstations on Azure, your applications launch time can potentially be reduced taking advantage of additional disk throughput.
  • Data disks to accommodate spiky traffic: Some production operations trigger spikes of disk input/output (IO) by design. For example, if you conduct a database checkpoint, there will be a sudden increase of writes against the data disk, and a similar increase in reads for backup operations. Disk bursting gives you better flexibility to handle any expected or unexpected change in disk traffic pattern.
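
To make the credit model concrete, here’s a rough, illustrative calculation using the P10 numbers from the table below (the exact accounting is up to the service; this is just back-of-the-envelope arithmetic): a P10 disk is provisioned at 500 IOPS and can burst at 3,500 IOPS for up to 30 minutes, which corresponds to roughly 3,500 × 30 × 60 ≈ 6.3 million IOs of burst capacity. Since credits accrue only while actual traffic is below the provisioned target, a mostly idle P10 disk would need on the order of 6.3 million ÷ 500 ≈ 12,600 seconds, or about 3.5 hours, below target to bank that much credit again.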

With this preview release, we lower the entry cost of cloud adoption with smaller disk sizes and make our disk offerings more performant by leveraging burst support. Start leveraging these new disk capabilities to build your most performant, robust, and cost-efficient solution on Azure today!

Getting Started

Create new managed disks of the burst-applicable sizes using the Azure portal, PowerShell, or the command-line interface (CLI) now! You can find the specifications of burst-eligible and new disk sizes in the table below. The preview regions that support bursting and the new disk sizes are listed in our Azure Disks frequently asked questions article. We are actively extending the preview support to more regions.

Premium SSD managed disks

Bursting capability is supported on Premium SSD managed disks only. It will be enabled by default for all new deployments in the supported regions. For existing disks of the applicable sizes, you can enable bursting with either of the two options: detach and re-attach the disk or stop and restart the attached VM. To learn more details on how bursting works, please refer to this "What disk types are available in Azure?" article.

| Burst Capable Disks | Disk Size | Provisioned IOPS per disk | Provisioned Bandwidth per disk | Max Burst IOPS per disk | Max Burst Bandwidth per disk | Max Burst Duration at Peak Burst Rate |
| --- | --- | --- | --- | --- | --- | --- |
| P1 – New | 4 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P2 – New | 8 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P3 – New | 16 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P4 | 32 GiB | 120 | 25 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P6 | 64 GiB | 240 | 50 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P10 | 128 GiB | 500 | 100 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P15 | 256 GiB | 1,100 | 125 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |
| P20 | 512 GiB | 2,300 | 150 MiB/sec | 3,500 | 170 MiB/sec | 30 mins |

Standard SSD Managed Disks

Here are the new disk sizes introduced on Standard SSD Disks. The performance targets define the max IOPS and bandwidth you can achieve on these sizes. Compared to the Premium SSD Disks above, the disk IOPS and bandwidth offered are not provisioned. For performance-sensitive workloads or single-instance deployments, we recommend you leverage Premium SSDs.

 

|  | Disk Size | Max IOPS per disk | Max Bandwidth per disk |
| --- | --- | --- | --- |
| E1 – New | 4 GiB | 120 | 25 MB/sec |
| E2 – New | 8 GiB | 120 | 25 MB/sec |
| E3 – New | 16 GiB | 120 | 25 MB/sec |

Visit our service website to explore the Azure Disk Storage portfolio. To learn about pricing, you can visit the Azure Managed Disks pricing page.

General feedback

We look forward to hearing your feedback on the new disk sizes. Please email us at AzureDisks@microsoft.com.


An Introduction to DataFrame


Last month, we announced .NET support for Jupyter notebooks, and showed how to use them to work with .NET for Apache Spark and ML.NET. Today, we’re announcing the preview of a DataFrame type for .NET to make data exploration easy. If you’ve used Python to manipulate data in notebooks, you’ll already be familiar with the concept of a DataFrame. At a high level, it is an in-memory representation of structured data. In this blog post, I’m going to give an overview of this new type and how you can use it from Jupyter notebooks. To play along, fire up a .NET Jupyter Notebook in a browser.

How to use DataFrame?

DataFrame stores data as a collection of columns. Let’s populate a DataFrame with some sample data and go over the major features. The full sample can be found on Github(C# and F#). To follow along in your browser, click here and navigate to csharp/Samples/DataFrame-Getting Started.ipynb(or fsharp/Samples/DataFrame-Getting Started.ipynb). To get started, let’s import the Microsoft.Data.Analysis package and namespace into our .NET Jupyter Notebook (make sure you’re using the C# or F# kernel):

Microsoft.Data.Analysis package

Let’s make three columns to hold values of types DateTime, int and string.

PrimitiveDataFrameColumn<DateTime> dateTimes = new PrimitiveDataFrameColumn<DateTime>("DateTimes"); // Default length is 0.
PrimitiveDataFrameColumn<int> ints = new PrimitiveDataFrameColumn<int>("Ints", 3); // Makes a column of length 3. Filled with nulls initially
StringDataFrameColumn strings = new StringDataFrameColumn("Strings", 3); // Makes a column of length 3. Filled with nulls initially

PrimitiveDataFrameColumn is a generic column that can hold primitive types such as int, float, decimal etc. A StringDataFrameColumn is a specialized column that holds string values. Both column types can take a length parameter in their constructors and are filled with null values initially. Before we can add these columns to a DataFrame though, we need to append three values to our dateTimes column. This is because the DataFrame constructor expects all its columns to have the same length.

// Append 3 values to dateTimes
dateTimes.Append(DateTime.Parse("2019/01/01"));
dateTimes.Append(DateTime.Parse("2019/01/01"));
dateTimes.Append(DateTime.Parse("2019/01/02"));

Now we’re ready to create a DataFrame with three columns.

DataFrame df = new DataFrame(dateTimes, ints, strings); // This will throw if the columns are of different lengths

One of the benefits of using a notebook for data exploration is the interactive REPL. We can enter df into a new cell and run it to see what data it contains. For the rest of this post, we’ll work in a .NET Jupyter environment. All the sample code will work in a regular console app as well though.

Array Print

We immediately see that the formatting of the output can be improved. Each column is printed as an array of values and we don’t see the names of the columns. If df had more rows and columns, the output would be hard to read. Fortunately, in a Jupyter environment, we can write custom formatters for types. Let’s write a formatter for DataFrame.

using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c.Name)));
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    for (var i = 0; i < Math.Min(take, df.Rows.Count); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(i));
        foreach (var obj in df.Rows[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }

    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r))));

    writer.Write(t);
}, "text/html");

This snippet of code registers a new DataFrame formatter. All subsequent evaluations of df in a notebook will now output the first 20 rows of a DataFrame along with the column names. In the future, the DataFrame type and other libraries that target Jupyter as one of their environments will be able to ship with their own formatters.

Print DataFrame

Sure enough, when we re-evaluate df, we see that it contains the three columns we created previously. The formatting makes it much easier to inspect our values. There’s also a helpful index column in the output to quickly see which row we’re looking at. Let’s modify our data by indexing into df:

df[0, 1] = 10; // 0 is the rowIndex, and 1 is the columnIndex. This sets the 0th value in the Ints columns to 10

DataFrameIndexing

We can also modify the values in the columns through indexers defined on PrimitiveDataFrameColumn and StringDataFrameColumn:

// Modify ints and strings columns by indexing
ints[1] = 100;
strings[1] = "Foo!";

ColumnIndexers

One caveat to keep in mind here is the data type of the value passed in to the indexers. We passed in the right data types to the column indexers in our sample: an integer value of 100 to ints[1] and a string "Foo!" to strings[1]. If the data types don’t match, an exception will be thrown. For cases where the type of data in the columns is not obvious, there is a handy DataType property defined on each column. The Info method displays the DataType and Length properties of each column:

Info
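
The call that produces that output is a one-liner:

// Display the DataType and Length of each column
df.Info();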

The DataFrame and DataFrameColumn classes expose a number of useful APIs: binary operations, computations, joins, merges, handling missing values and more. Let’s look at some of them:

// Add 5 to Ints through the DataFrame
df["Ints"].Add(5, inPlace: true);

Add

// We can also use binary operators. Binary operators produce a copy, so assign it back to our Ints column 
df["Ints"] = (ints / 5) * 100;

BinaryOperations

All binary operators are backed by functions that produce a copy by default. The + operator, for example, calls the Add method and passes in false for the inPlace parameter. This lets us elegantly manipulate data using operators without worrying about modifying our existing values. For cases where in-place semantics are desired, we can set the inPlace parameter to true in the binary functions.

In our sample, df has null values in its columns. DataFrame and DataFrameColumn offer an API to fill nulls with values.

df["Ints"].FillNulls(-1, inPlace: true);
df["Strings"].FillNulls("Bar", inPlace: true);

Fill Nulls

DataFrame exposes a Columns property that we can enumerate over to access our columns and a Rows property to access our rows. We can index Rows to access each row. Here’s an example that accesses the first row:

DataFrameRow row0 = df.Rows[0];

Access Rows

To inspect our values better, let’s write a formatter for DataFrameRow that displays values in a single line.

using Microsoft.AspNetCore.Html;
Formatter<DataFrameRow>.Register((dataFrameRow, writer) =>
{
    var cells = new List<IHtmlContent>();
    foreach (var obj in dataFrameRow)
    {
        cells.Add(td(obj));
    }

    var t = table(
        tbody(
            cells));

    writer.Write(t);
}, "text/html");

Access Rows

To enumerate over all the rows in a DataFrame, we can write a simple for loop. DataFrame.Rows.Count returns the number of rows in a DataFrame and we can use the loop index to access each row.

for (long i = 0; i < df.Rows.Count; i++)
{
       DataFrameRow row = df.Rows[i];
}

Note that each row is a view of the values in the DataFrame. Modifying the values in the row object modifies the values in the DataFrame. We do however lose type information on the returned row object. This is a consequence of DataFrame being a loosely typed data structure.
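
A small sketch of those view semantics (assuming, as discussed earlier, that the value’s type matches the column’s type):

DataFrameRow row = df.Rows[0];
row[1] = 25;                 // writes through to the underlying Ints column
Console.WriteLine(df[0, 1]); // 25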

Let’s wrap up our DataFrame API tour by looking at the Filter, Sort, GroupBy methods:

// Filter rows based on equality
PrimitiveDataFrameColumn<bool> boolFilter = df["Strings"].ElementwiseEquals("Bar");
DataFrame filtered = df.Filter(boolFilter);

DataFrame Filter

ElementwiseEquals returns a PrimitiveDataFrameColumn<bool> filled with a true for every row that equals "Bar" in the Strings column, and a false when it doesn’t equal "Bar". In the df.Filter call, each row corresponding to a true value in boolFilter selects a row out of df. The resulting DataFrame contains only these rows.

// Sort our dataframe using the Ints column
DataFrame sorted = df.Sort("Ints");
// GroupBy 
GroupBy groupBy = df.GroupBy("DateTimes");

Sort And GroupBy

The GroupBy method takes in the name of a column and creates groups based on unique values in the column. In our sample, the DateTimes column has two unique values, so we expect one group to be created for 2019-01-01 00:00:00Z and one for 2019-01-02 00:00:00Z.

// Count of values in each group
DataFrame groupCounts = groupBy.Count();
// Alternatively find the sum of the values in each group in Ints
DataFrame intGroupSum = groupBy.Sum("Ints");

GroupBy Sum

The GroupBy object exposes a set of methods that can be called on each group. Some examples are Max(), Min(), Count() etc. The Count() method counts the number of values in each group and returns them in a new DataFrame. The Sum("Ints") method sums up the values in each group.

Finally, when we want to work with existing datasets, DataFrame exposes a LoadCsv method.

DataFrame csvDataFrame = DataFrame.LoadCsv("path/to/file.csv");

Charting

Another cool feature of using a DataFrame in a .NET Jupyter environment is charting. XPlot.Plotly is one option to render charts. We can import the XPlot.Plotly namespace into our notebook and create interactive visualizations of the data in our DataFrame. Let’s populate a PrimitiveDataFrameColumn<double> with a normal distribution and plot a histogram of the samples:

#r "nuget:MathNet.Numerics,4.9.0"
using XPlot.Plotly;
using System.Linq;
using MathNet.Numerics.Distributions;

double mean = 0;
double stdDev = 0.1;
MathNet.Numerics.Distributions.Normal normalDist = new Normal(mean, stdDev);

PrimitiveDataFrameColumn<double> doubles = new PrimitiveDataFrameColumn<double>("Normal Distribution", normalDist.Samples().Take(1000));
display(Chart.Plot(
    new Graph.Histogram()
    {
        x = doubles,
        nbinsx = 30
    }
));

Chart

We first create a PrimitiveDataFrameColumn<double> by drawing 1000 samples from a normal distribution and then plot a histogram with 30 bins. The resulting chart is interactive! Hovering over the chart reveals the underlying data and lets us inspect each value precisely.

Summary

We’ve only explored a subset of the features that DataFrame exposes. Append, Join, Merge, and Aggregations are supported. Each column also implements IEnumerable<T?>, so users can write LINQ queries on columns (see the sketch below). The custom DataFrame formatting code we wrote earlier is one simple example of extending the experience. The complete source code (and documentation) for Microsoft.Data.Analysis lives on GitHub. In a follow up post, I’ll go over how to use DataFrame with ML.NET and .NET for Spark. The decision to use column major backing stores (the Arrow format in particular) allows for zero-copy in .NET for Spark User Defined Functions (UDFs)!
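
As a sketch of that LINQ support (assuming the Ints column from the sample above and a using System.Linq; directive):

PrimitiveDataFrameColumn<int> intsColumn = (PrimitiveDataFrameColumn<int>)df["Ints"];

// The column enumerates as int? values, so filter out nulls before summing.
int total = intsColumn.Where(v => v.HasValue).Sum(v => v.Value);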

We always welcome the community’s feedback! In fact, please feel free to contribute to the source code. We’ve made it easy for users to create new column types that derive from DataFrameColumn to add new functionality. Support for structs such as DateTime and user defined structs is also not as complete as primitive types such as int, float etc. We believe this preview package allows the community to do data analysis in .NET. Try out DataFrame in a .NET Jupyter Notebook and let us know what you think!

The post An Introduction to DataFrame appeared first on .NET Blog.

Get started building extensions for the new Microsoft Edge


Starting today, the Microsoft Edge Addons store is now open for submissions for all developers. This is where users will find your extensions for the new Microsoft Edge. You can submit your extensions today by visiting the Partner Center Developer Dashboard.

In most cases, existing extensions built for Chromium will work without any modifications in the new Microsoft Edge. Check out our developer documentation to learn more about Microsoft Edge-specific APIs, tips on submitting your extension, and other helpful information. The extension submission program is in its preview phase and we are excited to hear and incorporate your feedback.

Transitioning your existing extensions to Chromium

As we move towards the general availability of the new Microsoft Edge on January 15th, 2020, we will no longer accept new submissions for Microsoft Edge Legacy (EdgeHTML-based) extensions after December 17th, 2019. We will continue to accept updates for your existing extensions.

We recommend you prioritize building new extensions for the new Chromium-based Microsoft Edge, and continue to support your existing EdgeHTML-based extensions to ensure a quality experience for active users.

Developers who have given consent for Microsoft to migrate their EdgeHTML extension listings to the new Microsoft Edge should begin to see their extensions available in the new Addons store experience in Microsoft Edge. If you publish an EdgeHTML extension and have not received any communication regarding its migration or are unsure of its status, please contact us at ExtensionPartnerOps@microsoft.com.

If you have already received a confirmation from us regarding migration, we encourage you to log on to the Partner Center Developer Dashboard to validate your access to the extension, and verify whether you can update it. Once the migration is complete, ownership and management will be completely transferred to you, and Microsoft will not be responsible for updating or maintaining your extension.

Migrating extension users to the new Microsoft Edge

We will migrate users’ extensions from the current version of Microsoft Edge when they update to the new Microsoft Edge (starting January 15th). Extensions will only be migrated for users if they are already available on the Microsoft Edge Addons store at the time of switching to the new browser.

We recommend that developers update your existing EdgeHTML extensions for Chromium and publish them via the new portal as soon as possible, so your existing customers will not face any interruptions when they update to the new Microsoft Edge.

Getting started

You can check out our initial developer documentation today, and expect to see more coming soon. If you have any additional questions about the extension submission process, please contact Microsoft Edge Addons Developer Support.

It’s a great time to build for the web, and we look forward to collaborating with you on our new browser!

– Killian McCoy, Program Manager 2
– Pratyusha Avadhanula, Senior Program Manager

The post Get started building extensions for the new Microsoft Edge appeared first on Microsoft Edge Blog.

Moving an ASP.NET Core from Azure App Service on Windows to Linux by testing in WSL and Docker first


I updated one of my websites from ASP.NET Core 2.2 to the latest LTS (Long Term Support) version of ASP.NET Core 3.1 this week. Now I want to do the same with my podcast site AND move it to Linux at the same time. Azure App Service for Linux has some very good pricing and allowed me to move over to a Premium v2 plan from Standard which gives me double the memory at 35% off.

My podcast has historically run on ASP.NET Core on Azure App Service for Windows. How do I know if it'll run on Linux? Well, I'll try it and see!

I use WSL (Windows Subsystem for Linux) and so should you. It's very likely that you have WSL ready to go on your machine and you just haven't turned it on. Combine WSL (or the new WSL2) with the Windows Terminal and you're in a lovely spot on Windows, with the ability to develop anything for anywhere.

First, let's see if I can run my existing ASP.NET Core podcast site (now updated to .NET Core 3.1) on Linux. I'll start up Ubuntu 18.04 on Windows and run dotnet --version to see if I have anything installed already. You may have nothing. I have 3.0 it seems:

$ dotnet --version

3.0.100

Ok, I'll want to install .NET Core 3.1 on WSL's Ubuntu instance. Remember, just because I have .NET 3.1 installed in Windows doesn't mean it's installed in my Linux/WSL instance(s). I need to maintain those on my own. Another way to think about it is that I've got the win-x64 install of .NET 3.1 and now I need the linux-x64 one.

  • NOTE: It is true that I could "dotnet publish -r linux-x64" and then scp the resulting complete published files over to Linux/WSL. It depends on how I want to divide responsibility. Do I want to build on Windows and run on Linux/WSL? Or do I want to build and run from Linux? Both are valid, it just depends on your choices, patience, and familiarity.
  • GOTCHA: Also if you're accessing Windows files at /mnt/c under WSL that were git cloned from Windows, be aware that there are subtleties if Git for Windows and Git for Ubuntu are accessing the index/files at the same time. It's easier and safer and faster to just git clone another copy within the WSL/Linux filesystem.

I'll head over to https://dotnet.microsoft.com/download and get .NET Core 3.1 for Ubuntu. If you use apt, and I assume you do, there's some preliminary setup and then it's a simple

sudo apt-get install dotnet-sdk-3.1

No sweat. Let's "dotnet build" and hope for the best!

Building my site under WSL

It might be surprising but if you aren't doing anything tricky or Windows-specific, your .NET Core app should just build the same on Windows as it does on Linux. If you ARE doing something interesting or OS-specific you can #ifdef your way to glory if you insist.

Bonus points if you have Unit Tests - and I do - so next I'll run my unit tests and see how it goes.

OPTION: I write things like build.ps1 and test.ps1 that use PowerShell as PowerShell is on Windows already. Then I install PowerShell (just for the scripting, not the shelling) on Linux so I can use my .ps1 scripts everywhere. The same test.ps1 and build.ps1 and dockertest.ps1, etc just works on all platforms. Make sure you have a shebang #!/usr/bin/pwsh at the top of your ps1 files so you can just run them (chmod +x) on Linux.

I run test.ps1 which runs this command

dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=lcov /p:CoverletOutput=./lcov ./hanselminutes.core.tests

with coverlet for code coverage and...it works! Again, this might be surprising but if you don't have any hard coded paths, make any assumptions about a C: drive existing, and avoid the registry and other Windows-specific things, things work.

Test Run Successful.

Total tests: 23
Passed: 23
Total time: 9.6340 Seconds

Calculating coverage result...
Generating report './lcov.info'

+--------------------------+--------+--------+--------+
| Module | Line | Branch | Method |
+--------------------------+--------+--------+--------+
| hanselminutes.core.Views | 60.71% | 59.03% | 41.17% |
+--------------------------+--------+--------+--------+
| hanselminutes.core | 82.51% | 81.61% | 85.39% |
+--------------------------+--------+--------+--------+

I can build, I can test, but can I run it? What about running and testing in containers?

I'm running WSL2 on my system, I've been doing all of this in Ubuntu 18.04, AND I'm running the Docker WSL Tech Preview. Why not see if I can run my tests under Docker as well? From Docker for Windows I'll enable the Experimental WSL2 support and then, from the Resources menu under WSL Integration, I'll enable Docker within my Ubuntu 18.04 instance (your instances and their names will be your own).

Docker under WSL2

I can confirm it's working with "docker info" under WSL and talking to a working instance. I should be able to run "docker info" in BOTH Windows AND WSL.

$ docker info

Client:
Debug Mode: false

Server:
Containers: 18
Running: 18
Paused: 0
Stopped: 0
Images: 31
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
...snip...

Cool. I remembered I also needed to update my Dockerfile from the 2.2 SDK on Docker Hub to the 3.1 SDK from the Microsoft Container Registry, so this one line change:

#FROM microsoft/dotnet:2.2-sdk AS build

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 as build

as well as the final runtime version for the app later in the Dockerfile. Basically make sure your Dockerfile uses the right versions.

#FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS runtime

I also volume mount the tests results so there's this offensive If statement in the test.ps1. YES, I know I should just do all the paths with / and make them relative.

#!/usr/bin/pwsh
docker build --pull --target testrunner -t podcast:test .
if ($IsWindows)
{
 docker run --rm -v d:\github\hanselminutes-core\TestResults:/app/hanselminutes.core.tests/TestResults podcast:test
}
else
{
 docker run --rm -v ~/hanselminutes-core/TestResults:/app/hanselminutes.core.tests/TestResults podcast:test
}

Regardless, it works and it works wonderfully. Now I've got tests running in Windows and Linux and in Docker (in a Linux container) managed by WSL2. Everything works everywhere. Now that it runs well on WSL, I know it'll work great in Azure on Linux.

Moving from Azure App Service on Windows to Linux

This was pretty simple as well.

I'll blog in detail how I build and deploy the sites in Azure DevOps, and how I've moved from .NET 2.2 with Classic "Wizard Built" DevOps Pipelines to .NET Core 3.1 and a source-control checked-in YAML pipeline, next week.

The short version is, make a Linux App Service Plan (remember that an "App Service Plan" is a VM that you don't worry about; see in the pic below that the Linux Plan has a penguin icon). Also remember that you can have as many apps inside your plan as you'd like (and will fit in memory and resources). When you select a "Stack" for your app within Azure App Service for Linux, you're effectively selecting a Docker image that Azure manages for you.

I started by deploying to staging.mydomain.com and trying it out. You can use Azure Front Door or CloudFlare to manage traffic and then swap the DNS. I tested on Staging for a while, then just changed DNS directly. I waited a few hours for traffic to drain off the Windows podcast site and then stopped it. After a day or two of no traffic I deleted it. If I did my job right, none of you noticed the site moved from Windows to Linux, from .NET Core 2.2 to .NET Core 3.1. It should be as fast or faster with no downtime.

Here's a snap of my Azure Portal. As of today, I've moved my home page, my blood sugar management portal, and my podcast site all onto a single Linux App Service Plan. Each is hosted on GitHub and each is deploying automatically with Azure DevOps.

Azure Plan with 3 apps on Linux

Next big migration to the cloud will be this blog which still runs .NET Framework 4.x. I'll blog how the podcast gets checked into GitHub then deployed with Azure DevOps next week.

What cool migrations have YOU done lately, Dear Reader?


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!


© 2019 Scott Hanselman. All rights reserved.

New enhancements for Azure IoT Edge automatic deployments


Since releasing Microsoft Azure IoT Edge, we have seen many customers using IoT Edge automatic deployments to deploy workloads to the edge at scale. IoT Edge automatic deployments handle the heavy lifting of deploying modules to the relevant Azure IoT Edge devices and allow operators to keep a close eye on status to quickly address any problems. Customers love the benefits and have given us feedback on how to make automatic deployments even better through greater flexibility and seamless experiences. Today, we are sharing a set of enhancements to IoT Edge automatic deployments that are a direct result of this feedback. These enhancements include layered deployments, deploying marketplace modules from the Azure portal and other UI updates, and module support for automatic device configurations.

Layered deployments

Layered deployments are a new type of IoT Edge automatic deployments that allow developers and operators to independently deploy subsets of modules. This avoids the need to create an automatic deployment for every combination of modules that may exist across your device fleet. Microsoft Azure IoT Hub evaluates all applicable layered deployments to determine the final set of modules for a given IoT Edge device. Layered deployments have the same basic components as any automatic deployment. They target devices based on tags in the device twins and provide the same functionality around labels, metrics, and status reporting. Layered deployments also have priorities assigned to them, but instead of using the priority to determine which deployment is applied to a device, the priority determines how multiple deployments are ranked on a device. For example, if two layered deployments have a module or a route with the same name, the layered deployment with the higher priority will be applied while the lower priority is overwritten.
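For developers who prefer code over the portal, here's a hedged sketch of creating a layered deployment with the Microsoft.Azure.Devices service SDK. The configuration ID, target condition, priority, label, and module details below are hypothetical; layered deployments work by writing into the $edgeAgent module twin's desired properties rather than replacing the whole manifest.

using System.Collections.Generic;
using Microsoft.Azure.Devices;

// A minimal sketch (inside an async method), assuming an IoT Hub service
// connection string in "connectionString". All names below are hypothetical.
RegistryManager registryManager = RegistryManager.CreateFromConnectionString(connectionString);

var layeredDeployment = new Configuration("overlay-temp-sensor")
{
    // Which IoT Edge devices this layer applies to, based on device twin tags.
    TargetCondition = "tags.environment='production'",

    // When two layered deployments define a module or route with the same name,
    // the higher priority wins.
    Priority = 10,

    Labels = new Dictionary<string, string> { ["release"] = "2019-12" },

    Content = new ConfigurationContent
    {
        // A layered deployment adds or overrides a module by writing into the
        // $edgeAgent desired properties.
        ModulesContent = new Dictionary<string, IDictionary<string, object>>
        {
            ["$edgeAgent"] = new Dictionary<string, object>
            {
                ["properties.desired.modules.SimulatedTemperatureSensor"] = new
                {
                    type = "docker",
                    status = "running",
                    restartPolicy = "always",
                    settings = new { image = "mcr.microsoft.com/azureiotedge-simulated-temperature-sensor:1.0" }
                }
            }
        }
    }
};

await registryManager.AddConfigurationAsync(layeredDeployment);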

Modules in a deployment

This first illustration shows how all modules need to be included in each regular deployment, requiring a separate deployment for each target group.

Layered deployments

This second illustration shows how layered deployments allow modules to be deployed independently to each target group, with a lower overall number of deployments.

Revamped UI for IoT Edge automatic deployments

There are updates throughout the IoT Edge automatic deployments UI in the Azure portal. For example, you can now select modules from Microsoft Azure Marketplace from directly within the create deployment experience. The Azure Marketplace features many Azure IoT Edge modules built by Microsoft and partners.

A screenshot of the IoT Edge Module Marketplace

Automatic configuration for module twins

Automatic device management in Azure IoT Hub automates many of the repetitive and complex tasks of managing large device fleets by using automatic device configurations to update and report status on device twin properties. We have heard from many of you that you would like the equivalent functionality for configuring module twins, and are happy to share that this functionality is now available.
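To make the twin mechanics concrete, here's a hedged sketch that patches a single module twin's desired properties with the Microsoft.Azure.Devices service SDK. An automatic configuration applies this same kind of desired-property content at scale, to every module twin matching its target condition; the device ID, module ID, and property name below are hypothetical.

using Microsoft.Azure.Devices;
using Microsoft.Azure.Devices.Shared;

// A minimal sketch (inside an async method), assuming an IoT Hub service
// connection string in "connectionString". Names below are hypothetical.
RegistryManager registryManager = RegistryManager.CreateFromConnectionString(connectionString);

// Fetch the current module twin so the patch can carry the correct ETag.
Twin moduleTwin = await registryManager.GetTwinAsync("device-001", "telemetryModule");

// Desired properties are the knob that automatic configurations turn at scale;
// the module acknowledges by updating its reported properties.
var patch = new Twin();
patch.Properties.Desired["sendIntervalSeconds"] = 30;

await registryManager.UpdateModuleTwinAsync("device-001", "telemetryModule", patch, moduleTwin.ETag);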

Next steps

Microsoft is a leader in The Forrester Wave™: Streaming Analytics, Q3 2019


Processing big data in real time is an operational necessity for many businesses. Azure Stream Analytics is Microsoft’s serverless real-time analytics offering for complex event processing.

We are excited and humbled to announce that Microsoft has been named a leader in The Forrester Wave™: Streaming Analytics, Q3 2019. Microsoft believes this report truly reflects the market momentum of Azure Stream Analytics, satisfied customers, a growing partner ecosystem and the overall strength of our Azure cloud platform. You can access the full report here.

The Forrester Wave™: Streaming Analytics, Q3 2019, positioning Microsoft as a Leader in the category

The Forrester Wave™: Streaming Analytics, Q3 2019 report evaluated streaming analytics offerings from 11 different solution providers, and we are honored to share that Forrester has recognized Microsoft as a Leader in this category. Azure Stream Analytics received the highest possible score in 12 different categories, including Ability to Execute, Administration, Deployment, Solution Roadmap, Customer Adoption, and more.

The report states, “Microsoft Azure Stream Analytics has strengths in scalability, high availability, deployment, and applications. Azure Stream Analytics is an easy on-ramp for developers who already know SQL. Zero-code integration with over 15 other Azure services makes it easy to try and therefore adopt, making the product the real-time backbone for enterprises needing real-time streaming applications on the Azure cloud. Additionally, through integration with IoT Hub and Azure Functions, it offers seamless interoperability with thousands of devices and business applications.”

Key Differentiators for Azure Stream Analytics

Fully integrated with Azure ecosystem: Build powerful pipelines with few clicks

Whether you have millions of IoT devices streaming data to Azure IoT Hub or have apps sending critical telemetry events to Azure Event Hubs, it only takes a few clicks to connect multiple sources and sinks to create an end-to-end pipeline.

Developer productivity

One of the biggest advantages of Stream Analytics is the simple SQL-based query language with its powerful temporal constraints to analyze data in motion. Familiarity with the SQL language is enough to author powerful queries. Additionally, Azure Stream Analytics supports language extensibility via C# and JavaScript user-defined functions (UDFs) or user-defined aggregates to perform complex calculations as part of a Stream Analytics query; a sketch of the C# shape follows below.
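As a concrete illustration of that extensibility, a C# UDF is essentially a public static method in a static class; once the assembly is registered with a job, the query invokes the method through a UDF alias (for example, udf.clamp(reading, 0, 100)). The class, method, and scenario below are hypothetical, not taken from the Stream Analytics documentation.

// A minimal sketch of a C# user-defined function for Azure Stream Analytics.
// The class, method, and scenario are hypothetical.
public static class TelemetryFunctions
{
    // Clamp a sensor reading into a plausible physical range before the query
    // aggregates it, so one bad reading can't skew a windowed average.
    public static double Clamp(double reading, double min, double max) =>
        reading < min ? min : (reading > max ? max : reading);
}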

Analytics prowess

Stream Analytics contains a wide array of analytic capabilities such as native support for geospatial functions, built-in callouts to custom machine learning (ML) models for real-time scoring, built-in ML models for Anomaly Detection, Pattern matching, and more to help developers easily tackle complex scenarios while staying in a familiar context.

Intelligent edge

Azure Stream Analytics helps bring real-time insights and analytics capabilities closer to where your data originates. Customers can easily enable new scenarios with true hybrid architectures for stream processing and run the same query in the cloud or on the IoT edge.

Best-in-class financially backed SLA by the minute

We understand it is critical for businesses to prevent data loss and have business continuity. Stream Analytics guarantees event processing with a 99.9 percent availability service-level agreement (SLA) at the minute level, which is unparalleled in the industry.

Scale instantly

Stream Analytics is a fully managed serverless (PaaS) offering on Azure. There is no infrastructure to worry about, and no servers, virtual machines, or clusters to manage. We do all the heavy lifting for you in the background. You can instantly scale up or scale-out the processing power from one to hundreds of streaming units for any job.

Mission critical

Stream Analytics guarantees “exactly once” event processing and at least once delivery of events. It has built-in recovery capabilities in case the delivery of an event fails. So, you never have to worry about your events getting dropped.

Try it today

There is a strong and growing developer community that supports Stream Analytics. Learn how to get started and build a real-time fraud detection system.
