
Uploading to Codecov just got easier

How do you know your tests actually exercise your code? Perhaps you’re using a code coverage tool like coverage.py or the tools built into Visual Studio. Codecov helps you track code coverage: how much of your code is covered, and whether your coverage is improving over time. By integrating Codecov into your continuous integration (CI) pipeline, you’ll get great reports that help you improve your coverage. You can read more about Codecov on their features page.

Using Codecov

It’s really easy to get started. Codecov offers a results uploader you can run without having to install any additional tools. The script figures out what language your project is written in, where the coverage results are, and how to get them to the service. Typically when you integrate outside services into your CI pipeline, you also need to manage one or more secrets. These secrets – think passwords and certificates – are required to securely talk to the outside service.

Codecov has a clever feature in their Bash uploader: tokenless upload for public pipelines. By doing a little extra validation on their side, Codecov saves users the trouble of managing secrets. This only works for public projects on specific CI providers, and I’m happy to announce that now, Azure Pipelines is one of them. (If you have private pipelines on Azure Pipelines or anyplace else, you can still use Codecov and will need to manage a token.)

Connecting with Azure Pipelines

To give it a try for yourself, clone one of Codecov’s example repos. I like Python, so that’s where I started. It’s a toy Python project with two methods and one test. You’ll have to change two things about that example repo:

  1. Replace .travis.yml with an Azure Pipelines YAML definition.
  2. Instead of using the Python codecov uploader, rely on the Bash uploader.

Here’s my example Azure Pipelines file:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'
strategy:
  matrix:
    Python35:
      python.version: '3.5'
    Python36:
      python.version: '3.6'
    Python37:
      python.version: '3.7'

steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '$(python.version)'
  displayName: 'Use Python $(python.version)'

- script: |
    python -m pip install --upgrade pip
    pip install coverage
  displayName: 'Install coverage'

- script: |
    coverage run tests.py
  displayName: 'Run tests'

- script: |
    bash <(curl -s https://codecov.io/bash)
  displayName: 'Upload to codecov.io'

Showing it off

After I install the Codecov GitHub app and run this pipeline, Codecov handles the rest for me. Here’s an example of the graphs it generates:

Screenshot of the Codecov dashboard

I get ongoing coverage reports without having to add or manage secrets in my pipeline. Thanks to the folks over at Codecov for adding Azure Pipelines to their tokenless upload infrastructure.

The post Uploading to Codecov just got easier appeared first on Azure DevOps Blog.


Announcing Windows Community Toolkit v6.0

We’re thrilled to announce today the next update to the Windows Community Toolkit, version 6.0, made possible with help and contributions from our developer community. This release brings ARM64 support to the toolkit as well as an update to XAML Islands for .NET Core 3 support. In addition, we have new features like the EyeDropper control and new Win32 notification helpers. We also have an update to our preview of Microsoft Graph enabled XAML controls.

See more details on these features below.

XAML Islands brings UWP to WPF, WinForms, and Win32

XAML Islands enables a developer to enhance the look, feel, and functionality of an existing WPF, Windows Forms, or C++ Win32 application and make use of the latest Windows 10 UI features that are only available via UWP controls like inking:

Screen showing inking.

This release improves tooling support for .NET Core 3 and makes it even easier to get started.

Documentation for XAML Islands.
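To give a flavor of the hosting model, here is a minimal code-behind sketch of hosting the UWP InkCanvas inside a WPF window. It assumes the Microsoft.Toolkit.Wpf.UI.XamlHost package and a Grid named RootGrid defined in the window’s XAML, so treat the details as illustrative rather than a complete recipe:

using Microsoft.Toolkit.Wpf.UI.XamlHost;

public partial class MainWindow : System.Windows.Window
{
    public MainWindow()
    {
        InitializeComponent();

        // Ask the host control to instantiate the UWP InkCanvas by its full type name.
        var host = new WindowsXamlHost
        {
            InitialTypeName = "Windows.UI.Xaml.Controls.InkCanvas"
        };

        // RootGrid is a hypothetical Grid defined in MainWindow.xaml.
        RootGrid.Children.Add(host);
    }
}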

ARM64 Support

The Windows Community Toolkit now supports applications that target ARM64. This allows developers’ apps to take advantage of increased performance and battery life by running on the native architecture of devices like the Surface Pro X. We also worked closely with the Win2D team to ensure that Win2D now supports ARM64 as well. This was important for Lottie and other toolkit features that rely on Win2D.

Lottie Improvements

This update brings more Adobe After Effects features to Lottie-Windows, including Linear and Radial Gradients, Masks, Track Mattes, and codegen support for Image Layers. We hope that these additions will allow motion designers and application developers to create even more visually compelling user experiences on Windows 10. Since some of these features rely on newer SDKs, Lottie-Windows now also offers adaptive versioning. We rely on the community to prioritize feature work so please do keep providing your valuable feedback and suggestions for Lottie-Windows here!

Eye Dropper

The new Eye Dropper control allows you to provide effortless color selection functionality to your app.

Example of Eye Dropper control feature.

Documentation for EyeDropper.
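For a sense of the API, here is a minimal sketch of opening the control from code-behind, based on the toolkit’s documented EyeDropper usage (treat the exact namespace and signature as assumptions):

using Microsoft.Toolkit.Uwp.UI.Controls;

private async void PickColor_Click(object sender, Windows.UI.Xaml.RoutedEventArgs e)
{
    // Open the eye dropper and wait for the user to pick a color anywhere on the screen.
    var eyeDropper = new EyeDropper();
    Windows.UI.Color pickedColor = await eyeDropper.Open();
    // Use pickedColor, for example to update a brush in your UI.
}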

XAML Graph Controls Preview

This new addition to the Windows Community Toolkit allows developers to easily authenticate and access Microsoft Graph in Windows 10 apps to create rich data and user connected experiences. These controls are available as a preview of our 6.1 release today and will work with UWP apps and in WPF/WinForms for Win32 apps via XAML Islands on .NET Core 3. In addition, with the help of Xamarin and the Uno Platform, you will also soon be able to use them on Android and iOS.

Example of PeoplePicker functions.

Read about these new controls in our original announcement or on GitHub for all the latest details.

Get started today

There are a lot more updates than we can cover here, so be sure to read the release notes for more details on all the fixes provided in this update.

As a reminder, you can get started by following our docs.microsoft.com tutorial, or preview the latest features by installing the Windows Community Toolkit Sample App from the Microsoft Store. If you would like to contribute, please join us on GitHub! To join the conversation on Twitter, use the #WindowsToolkit hashtag.

Happy coding!

The post Announcing Windows Community Toolkit v6.0 appeared first on Windows Developer Blog.

Visual Studio for Mac: Take Control of Your IDE with Keybindings

The great debates in computing all have one common theme. Whether it is tabs vs. spaces or Vi vs. Emacs, the thread linking all these debates together is keyboard efficiency. The truth is, we spend countless hours working in an application, and keyboard shortcuts become automatic to us, the same muscle memory that great pianists or athletes have. If you suddenly give a virtuoso pianist a piano where the keys are half as wide and the sharp/flat keys are below rather than above the natural keys, they will struggle to play even the most basic melodies while they learn the new arrangement. Likewise, when it comes to keyboard shortcuts in your favorite IDE, any change can quickly become disorienting. Luckily, Visual Studio for Mac offers a ton of key binding customizations that allow you to configure your key combinations to your liking.

First Run

New users to Visual Studio for Mac will notice right away that the IDE offers support for many different key mappings. The first time Visual Studio for Mac is launched on a computer, you will receive a prompt directing you to pick your favorite key mapping.

Here, you can select from four different key mappings to help you be as productive as possible from the first line of code you write. But what if you want even more customizations? Well, Visual Studio for Mac has you covered there as well!

More Customizing

While setting a default keymap is certainly handy, it doesn’t solve all circumstances. There may be custom mappings that you’ve used in other IDEs, or specific commands that are outside the bounds of the array of preconfigured options. With the Key Bindings selection window, you can map every possible command within the IDE to a specific key. To see the Key Binding options, select Visual Studio > Preferences > Environment > Key Bindings.

There are several features that I want to point out in this window, and I will take you through them one by one. The most immediate option you see is that there is a dropdown available for various “Schemes” which map to the options that new users see when they first install the IDE. Here you can select from many different pre-packaged key bindings, such as Visual Studio, VS Code and Xcode.

But what if you want even more control? What if you really, really want “Find Derived Symbols” to be mapped to Control-Option-D? Setting custom keybindings is super easy in Visual Studio for Mac. To get started, you can either scroll through the list of available commands, or search for the command in the search box. The list of available commands is organized by type of command and can be collapsed for easier navigation. Once you find the command you would like to map, you can select it and then type the desired key binding in “Edit Binding” followed by clicking “Apply”. In the below GIF, I set the binding for “New Breakpoint” to Control-Shift-B.

You can also edit an existing key binding in a very similar manner. In the below GIF, you can see how to edit the “New File” command to map to Control-Shift-N instead of the default Command-N. You’ll notice that all I need to do is type in the combination I prefer and click Apply. If you want to add multiple bindings, simply click “Add” instead of “Apply”.

Finally, with so many commands to remember, it can sometimes be hard to keep track of them all and avoid duplicates. To ensure that each key binding is unique, Visual Studio for Mac checks your new binding against all configured commands and warns you if a duplicate is detected, letting you choose whether to keep the original binding or replace it with your newly created one. The GIF below shows what happens when mapping the “New Breakpoint” command to the “Command-C” keyboard binding, which conflicts with “Copy”.

Now that you know how to edit the key mappings in any way you see fit, you can fully customize the IDE and get to writing code the way you love! If you want to see more key binding information, please check out our Toolbox video on the subject on Channel 9.

If you have any feedback or suggestions, please leave them in the comments below. You can also reach out to us on Twitter at @VisualStudioMac. For any issues that you run into when using Visual Studio for Mac, please Report a Problem.

The post Visual Studio for Mac: Take Control of Your IDE with Keybindings appeared first on Visual Studio Blog.

Improvements in .NET Core 3.0 for troubleshooting and monitoring distributed apps

Post was authored by Sergey Kanzhelev. Thank you David Fowler and Richard Lander for reviews.

Introduction

Operating distributed apps is hard. Distributed apps typically consist of multiple components. These components may be owned and operated by different teams. Every interaction with an app results in a distributed trace of code executions across many components. If your customer experiences a problem, pinpointing the root cause in one of the components that participated in the distributed trace is a hard task.

One big difference between distributed apps and monoliths is the difficulty of correlating telemetry (like logs) across a single distributed trace. Looking at logs, you can see how each component processed each request. But it is hard to know which request in one component and which request in another component belong to the same distributed trace.

Historically, Application Performance Monitoring (APM) vendors provided the functionality of distributed trace context propagation from one component to another. Telemetry is correlated using this context. Due to the heterogeneous nature of many environments, with components owned by different teams and using different tools for monitoring, it was always hard to instrument distributed apps consistently. APM vendors provided automatic code-injection agents and SDKs to handle the complexity of understanding various distributed context formats and RPC protocols.

With the upcoming transition of the W3C Trace Context specification to the Proposed Recommendation maturity level, and with support for this specification from many vendors and platforms, the complexity of context propagation is decreasing. The W3C Trace Context specification describes the semantics of the distributed trace context and its format. This ensures that every component in a distributed app can understand the context and propagate it to the components it calls into.

Microsoft is working on making distributed app development easier with many ongoing efforts like the Orleans framework and project Dapr. As for distributed trace context propagation, Microsoft services and platforms will be adopting the W3C Trace Context format.

We believe that ASP.NET Core must provide an outstanding experience for building distributed tracing apps, and with every release of ASP.NET Core we execute on this promise. This post describes the distributed tracing and logging scenario, highlights improvements in .NET Core 3.0, and discusses the exciting new features we plan to add going forward.

Distributed Tracing and Logging

Let’s explore distributed tracing in .NET Core 3.0 and the improvements recently made. First, we’ll see how two “out of the box” ASP.NET Core 3.0 apps have their logs correlated across the entire distributed trace. Second, we’ll explore how easy it is to set the distributed trace context for any .NET Core application and how it is automatically propagated across HTTP. And third, we’ll see how the same distributed trace identity is used by telemetry SDKs like OpenTelemetry and by ASP.NET Core logs.

This demo will also show how .NET Core 3.0 embraces the W3C Trace Context standard and what other features it offers.

Demo setup

In this demo we will have three simple components: ClientApp, FrontEndApp and BackEndApp.

BackEndApp is a template ASP.NET Core application called WeatherApp. It exposes a REST API to get a weather forecast.

FrontEndApp proxies all incoming requests into calls to BackEndApp using this controller:

[ApiController]
[Route("[controller]")]
public class WeatherForecastProxyController : ControllerBase
{
    private readonly ILogger<WeatherForecastProxyController> _logger;
    private readonly HttpClient _httpClient;

    public WeatherForecastProxyController(
        ILogger<WeatherForecastProxyController> logger, 
        HttpClient httpClient)
    {
        _logger = logger;
        _httpClient = httpClient;
    }

    [HttpGet]
    public async Task<IEnumerable<WeatherForecast>> Get()
    {
        var jsonStream = await 
                  _httpClient.GetStreamAsync("http://localhost:5001/weatherforecast");

        var weatherForecast = await 
              JsonSerializer.DeserializeAsync<IEnumerable<WeatherForecast>>(jsonStream);

        return weatherForecast;
    }
}

Finally, ClientApp is a .NET Core 3.0 Windows Forms app. ClientApp calls into FrontEndApp for the weather forecast.

private async Task<string> GetWeatherForecast()
{
    return await _httpClient.GetStringAsync(
                                 "http://localhost:5000/weatherforecastproxy");
}

Please note that no additional SDKs were enabled or libraries installed on the demo apps. As the demo progresses, every code change will be mentioned.

Correlated logs

Let’s make the very first call from ClientApp and take a look at the logs produced by FrontEndApp and BackEndApp.

FrontEndApp (a few line breaks added for readability):

info: Microsoft.AspNetCore.Routing.EndpointMiddleware[1]
      => ConnectionId:0HLR1BR0PL1CH 
      => RequestPath:/weatherforecastproxy 
         RequestId:0HLR1BR0PL1CH:00000001, 
         SpanId:|363a800a-4cf070ad93fe3bd8., 
         TraceId:363a800a-4cf070ad93fe3bd8, 
         ParentId:
Executed endpoint 'FrontEndApp.Controllers.WeatherForecastProxyController.Get (FrontEndApp)'

BackEndApp:

info: BackEndApp.Controllers.WeatherForecastController[0]
      => ConnectionId:0HLR1BMQHFKRL 
      => RequestPath:/weatherforecast 
         RequestId:0HLR1BMQHFKRL:00000002, 
         SpanId:|363a800a-4cf070ad93fe3bd8.94c1cdba_, 
         TraceId:363a800a-4cf070ad93fe3bd8, 
         ParentId:|363a800a-4cf070ad93fe3bd8. 
Executed endpoint 'BackEndApp.Controllers.WeatherForecastController.Get (BackEndApp)'

Like magic, logs from two independent apps share the same TraceId. Behind the scenes, the ASP.NET Core 3.0 app initializes a distributed trace context and passes it to the BackEndApp in a header.
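Reconstructed from the ParentId in the BackEndApp log above, that incoming header looks roughly like this:

Request-Id: |363a800a-4cf070ad93fe3bd8.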

You may notice that the FrontEndApp itself didn’t receive any such header.

The reason is that in ASP.NET Core apps, the distributed trace is initiated by the ASP.NET Core framework itself on every incoming request. The next section explains how to do the same for any .NET Core 3.0 app.

Initiate distributed trace in .NET Core 3.0 app

You may have noticed the difference in behavior between the Windows Forms ClientApp and the ASP.NET Core FrontEndApp. The ClientApp didn’t set any distributed trace context, so the FrontEndApp didn’t receive one. Setting up a distributed operation is easy; the simplest way is to use the Activity API from the DiagnosticSource package.

private async Task<string> GetWeatherForecast()
{
    var activity = new Activity("CallToBackend").Start();

    try
    {
        return await _httpClient.GetStringAsync(
                               "http://localhost:5000/weatherforecastproxy");
    }
    finally
    {
        activity.Stop();
    }
}

Once you have started an activity, HttpClient knows that the distributed trace context needs to be propagated. Now all three components – ClientApp, FrontEndApp, and BackEndApp – share the same TraceId.

W3C Trace Context support

You may notice that the context is propagated using a header called Request-Id. This header was introduced in ASP.NET Core 2.0 and is used by default for better compatibility with those apps. However, as the W3C Trace Context specification is being widely adopted, it is recommended to switch to that format of context propagation.

With .NET Core 3.0, it is easy to switch to the W3C Trace Context format for propagating distributed trace identifiers. The easiest way is to add a single line to the Main method:

static void Main()
{
    Activity.DefaultIdFormat = ActivityIdFormat.W3C;
    …
    Application.Run(new MainForm());
}

Now, when the FrontEndApp receives requests from the ClientApp, you see a traceparent header in the request:
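The header follows the version-traceid-parentid-traceflags layout defined by the specification. An illustrative value (taken from the W3C specification’s examples, not captured from this demo) looks like this:

traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01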

The ASP.NET Core app will understand this header and recognize that it needs to use W3C Trace Context format for outgoing calls now.

Note that ASP.NET Core apps recognize the format of the incoming distributed trace context automatically. However, it is still a good practice to switch the default format to W3C for better interoperability in heterogeneous environments.
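For an ASP.NET Core app, a minimal sketch of switching the default at startup follows. CreateHostBuilder is the standard ASP.NET Core 3.0 template method; whether you also set Activity.ForceDefaultIdFormat, which makes the app ignore the format of incoming requests, depends on your environment:

// Requires: using System.Diagnostics;
public static void Main(string[] args)
{
    // Emit W3C trace context identifiers by default...
    Activity.DefaultIdFormat = ActivityIdFormat.W3C;
    // ...and optionally ignore the id format of incoming requests.
    Activity.ForceDefaultIdFormat = true;

    CreateHostBuilder(args).Build().Run();
}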

You will see all the logs attributed with the TraceId and SpanId obtained from the incoming header:

info: Microsoft.AspNetCore.Hosting.Diagnostics[1]
      => ConnectionId:0HLQV2BC3VP2T
      => RequestPath:/weatherforecast 
         RequestId:0HLQV2BC3VP2T:00000001, 
         SpanId:da13aa3c6fd9c146, 
         TraceId:f11a03e3f078414fa7c0a0ce568c8b5c, 
         ParentId:5076c17d0a604244
      Request starting HTTP/1.1 GET http://localhost:5000/weatherforecast

Activity and distributed tracing with OpenTelemetry

OpenTelemetry provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics from your application. You can analyze them using Prometheus, Jaeger, Zipkin, and other observability tools.

Let’s enable OpenTelemetry on the BackEndApp. It is very easy to do; just call AddOpenTelemetry at startup:

services.AddOpenTelemetry(b => 
    b.UseZipkin(o => {
                    o.ServiceName="BackEndApp"; 
                    o.Endpoint=new Uri("http://zipkin/api/v2/spans");
               })
     .AddRequestCollector());

Now, as we just saw, TraceId in the FrontEndApp logs will match TraceId in the BackEndApp.

info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[2]
      => ConnectionId:0HLR2RC6BIIVO 
      => RequestPath:/weatherforecastproxy 
         RequestId:0HLR2RC6BIIVO:00000001, 
         SpanId:54e2de7b9428e940, 
         TraceId:e1a9b61ec50c954d852f645262c7b31a, 
         ParentId:69dce1f155911a45 
      => FrontEndApp.Controllers.WeatherForecastProxyController.Get (FrontEndApp)
Executed action FrontEndApp.Controllers.WeatherForecastProxyController.Get (FrontEndApp) in 3187.3112ms

info: Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker[2]
      => ConnectionId:0HLR2RLEHSKBV 
      => RequestPath:/weatherforecast 
         RequestId:0HLR2RLEHSKBV:00000001, 
         SpanId:0e783a0867544240, 
         TraceId:e1a9b61ec50c954d852f645262c7b31a, 
         ParentId:54e2de7b9428e940 
      => BackEndApp.Controllers.WeatherForecastController.Get (BackEndApp)
Executed action BackEndApp.Controllers.WeatherForecastController.Get (BackEndApp) in 3085.9111ms

Furthermore, the same trace will be reported to Zipkin, so now you can correlate distributed traces collected by your distributed tracing tool with logs from the machine. You can also give this TraceId to the user when the ClientApp experiences issues. The user can share it with your app support, and the corresponding logs and distributed traces can be easily discovered across all components.

Taking the example one step further, you can easily enable monitoring for all three components and see them on a Gantt chart.

ASP.NET Core apps integrate with distributed traces

As we have just seen, telemetry collected by application performance monitoring vendors is correlated using the same distributed trace context that ASP.NET Core uses. This makes ASP.NET Core 3.0 apps great for environments where different components are owned by different teams.

Imagine that only two of the apps, A and C in the picture below, have enabled telemetry collection using an SDK like OpenTelemetry. Before ASP.NET Core 3.0, this would mean that distributed tracing would not work: the trace would be “broken” by app B.

With ASP.NET Core 3.0, since most deployments of ASP.NET Core apps are configured with basic logging enabled, app B will propagate the distributed trace context, and the distributed traces from A and C will be correlated.

With the example apps from before, if the ClientApp and BackEndApp are instrumented and the FrontEndApp is not, you can see that the distributed trace is still correlated:

This also makes ASP.NET Core apps great for service mesh environments. In service mesh deployments, A and C in the picture above may represent the service mesh. In order for the service mesh to stitch together requests entering and leaving component B, certain headers have to be propagated by the app. See this note from Istio, for example:

Although Istio proxies are able to automatically send spans, they need some hints to tie together the entire trace. Applications need to propagate the appropriate HTTP headers so that when the proxies send span information, the spans can be correlated correctly into a single trace.

As we work with service mesh authors to adopt the W3C Trace Context format, ASP.NET Core apps will “just work” and propagate the needed headers.

Passing additional context

Moving on to other scenarios, it is often the case that you want to share more context between components in a distributed app. Let’s say the ClientApp wants to send its version so that all REST calls know where the request came from. You can add such properties to Activity.Baggage like this:

private async Task<string> GetWeatherForecast()
{
    var activity = new Activity("CallToBackend")
        .AddBaggage("appVersion", "v1.0")
        .Start();

    try
    {
        return await _httpClient.GetStringAsync(
                                         "http://localhost:5000/weatherforecastproxy");
    }
    finally
    {
        activity.Stop();
    }
}

Now, on the server side, you see an additional header, Correlation-Context, in both the FrontEndApp and the BackEndApp.
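Given the baggage set above, that header looks roughly like this (a reconstruction, not a capture from the demo):

Correlation-Context: appVersion=v1.0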

And you can use the Activity.Baggage to attribute your logs:

var appVersion =  Activity.Current.Baggage.FirstOrDefault(b => b.Key == "appVersion").Value;
using (_logger.BeginScope($"appVersion={appVersion}"))
{
    _logger.LogInformation("this weather forecast is from random source");
}

And you see the scope now contains an appVersion:

info: FrontEndApp.Controllers.WeatherForecastController[0]
      => ConnectionId:0HLQV353507UG
      => RequestPath:/weatherforecast 
         RequestId:0HLQV353507UG:00000001, 
         SpanId:37a0f7ebf3ecac42, 
         TraceId:c7e07b7719a7a3489617663753f985e4, 
         ParentId:f5df77ba38504846
      => FrontEndApp.Controllers.WeatherForecastController.Get (BackEndApp) 
      => appVersion=v1.0
      this weather forecast is from random source

Next steps

Alongside the improvements in ASP.NET Core 3.0, we hear that some of the features included in ASP.NET Core are still hard to consume. Developers and DevOps engineers want a turnkey telemetry solution that works with many APM vendors. We believe that the investments we are making in OpenTelemetry will allow more people to benefit from our investments in ASP.NET Core monitoring and troubleshooting. This is one of the big areas of investment for the team.

We are also helping people adopt W3C Trace Context everywhere and will make it the default distributed trace context propagation format in future versions of ASP.NET Core.

Another area of investment is improving distributed context propagation scenarios. Distributed apps, compared to monoliths, lack a common shared state with the lifetime of a single distributed trace. This shared state (or context) can be used for basic logging, as described in this article, as well as for advanced request routing, experimentation, A/B testing, business context propagation, and so on. Some of these scenarios are described in this epic: Distributed Context in ASP.NET and Open Telemetry.

Please send us your feedback and tell us what improvements we should make to troubleshooting and monitoring for distributed apps.

The post Improvements in .NET Core 3.0 for troubleshooting and monitoring distributed apps appeared first on ASP.NET Blog.

.NET Framework November 13, 2019, Update for .NET Framework 4.8

Today, we released an update for .NET Framework 4.8 to Microsoft Update Catalog.

Quality and Reliability

This release contains the following reliability improvement.

CLR1

  • Addresses an issue where some ClickOnce applications, or applications creating the default AppDomain with a restricted permission set, may observe application launch failures, application runtime failures, or unexpected behaviors. The observable issue was that System.AppDomainSetup.TargetFrameworkName was null, causing any quirks to revert to .NET Framework 4.0 behaviors.

1 Common Language Runtime (CLR)

Getting the Update

The update for .NET Framework 4.8 is available via Microsoft Update Catalog for supported versions of Windows. The reliability improvement will be available on all regular distribution channels through upcoming releases.

Microsoft Update Catalog

Windows Version | Update for .NET Framework 4.8
Windows 10 1909 and Windows Server, version 1909 | Catalog: KB4530743
Windows 10 1903 and Windows Server, version 1903 | Catalog: KB4530743
Windows 10 1809 (October 2018 Update) and Windows Server 2019 | Catalog: KB4530742
Windows 10 1803 (April 2018 Update) | Catalog: KB4530741
Windows 10 1709 (Fall Creators Update) | Catalog: KB4530740
Windows 10 1607 (Anniversary Update) and Windows Server 2016 | Catalog: KB4530738
Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2 | Catalog: KB4530745
Windows Server 2012 | Catalog: KB4530744
Windows 7 SP1 and Windows Server 2008 R2 SP1 | Catalog: KB4530746

Note: The November 13, 2019 Update for .NET Framework 4.8 is not a cumulative update.

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework November 13, 2019, Update for .NET Framework 4.8 appeared first on .NET Blog.

Azure Container Registry: Preview of diagnostic and audit logs

The Azure Container Registry team is happy to announce the preview of audit logs – one of our top items on UserVoice. In this release, we have new Azure portal and command-line interface (CLI) experiences for enabling resource logs for diagnostic and audit evaluation of your registry.

This feature lets you monitor your container registry by providing an audit trail of all relevant user-driven activities on the registry. These logs contain information related to authentication, login details, repository-level activities, and other user-driven events. In addition to these logs, Azure provides a generic activity log that maintains a range of Azure Resource Manager information, including service health and other Azure management operations on the registry.

This feature also lets you turn on resource logs for your container registry, which can help with compliance and diagnostic needs related to:

  • Security and compliance tracking.
  • Diagnosing operational issues related to registry activities, such as pull and push events.

Collecting resource logs for your registry, however, requires some additional steps, as they are not turned on by default. Figure one shows how to configure diagnostic settings to enable Log Analytics. The logs can be viewed in Azure Monitor, but they must first be collected into a Log Analytics workspace.

A screenshot showing how to configure diagnostic settings to enable Log Analytics.

Figure one

You can find detailed steps for setting up a diagnostic workspace to collect the logs and for using Azure Monitor to view the registry logs.

Azure Monitor is the consistent way to view and visualize your resource logs in Azure. Once log collection has been set up in Log Analytics, you can begin to view the log data by running queries. Figure two shows an example of running one of the sample queries.

A screenshot showing an example of running a sample query.

Figure two

The current release is a preview; in the future we will provide logs for other registry events like Delete, Untag, Replication, and more. Please continue to provide your feedback to help us prioritize these feature asks.

Availability and feedback

Push, Pull, and Login event logs are currently available, with Delete and Untag event logs to follow shortly. As always, we love to hear your feedback on existing features as well as ideas for the product roadmap.

Here’s a list of resources you can use to engage with our team and provide feedback:

Democratizing Smart City solutions with Azure IoT Central

One of the most dynamic landscapes embracing the Internet of Things (IoT) is the modern city. As urbanization grows, city leaders are under increasing pressure to make cities safer, more accessible, more sustainable, and more prosperous.

Underlying all these important goals is the bedrock that makes a city run: infrastructure. Whether it is water, electricity, streets, or traffic lights, cities are increasingly using IoT to manage their infrastructure by capturing and analyzing data from connected devices and sensors. This gives city managers real-time insights to improve operational efficiency and outcomes, and to altogether rethink and reinvent city government functions and operations.

Microsoft and its ecosystem of service and hardware providers are deeply engaged with cities and communities around the world, addressing the most pressing issues that government leaders face. For instance, traffic congestion continues to increase in most urban areas, placing growing pressure on existing physical infrastructure, while in the emerging world new physical infrastructure needs to be built altogether. Citizens also have growing concerns about public safety and security. Investments in IoT-based solutions for city operations are accelerating to address these concerns, led by applications like smart street lighting, smart waste, and smart parking. Cities are also realizing the benefit of IoT for optimizing the management of globally scarce resources, such as water and energy. Amidst this growing investment, early results from the world's leading smart cities are promising. Some cities have seen approximately 60 percent energy savings from leveraging LED-based smart streetlights, while others have been able to save 25-80 liters of water per person per day. Optimized traffic flow in some areas is helping commuters shave 15-30 minutes off their daily commutes, resulting in a 10-15 percent reduction in emissions, and smart waste management has delivered a 66 percent reduction in operational costs.

Despite a growing consensus around the benefits of adopting IoT solutions, scaling beyond the proof of concept remains difficult. Most smart city solutions today consist of bespoke pilots that cannot scale or be repeated due to growing costs, complexity, and a lack of specialized technical talent, in a market landscape that is already incredibly fragmented. Earlier this year we surveyed 3,000 enterprise decision-makers across the world, including government organizations; 83 percent of them consider IoT “critical” to success, notably for public safety and for infrastructure and facilities management. At the same time, the vast majority of the decision-makers expressed concerns about persistent knowledge gaps in how to scale their solutions securely, reliably, and affordably, which is the main reason the average maturity of production-level IoT projects remains extremely low (read the full IoT Signals report). To help IoT solution builders navigate the complexity of designing enterprise-grade IoT systems, we published our learnings in a whitepaper called “The 8 attributes of successful IoT solutions,” which helps solution builders ask the right questions up front as they design their systems and select the right technology platforms.

Building Smart Cities IoT solutions with Azure IoT Central

To further help IoT solution builders confidently scale their projects, we recently announced updates to Azure IoT Central, our IoT app platform for designing, deploying, and managing enterprise-grade solutions. IoT Central provides a fully managed platform for building and customizing solutions, designed to support solution builders with each of the attributes of successful IoT systems, including security, disaster recovery, high availability, and more. By removing the complexity and overhead of setup, management, and operations, IoT Central lowers the barrier for IoT solution builders and accelerates the creation of innovative solutions across industries, from retail to healthcare to energy to government. Check out our recent IoT Central blog for a full list of our updates and examples of solution builders across different industries.

As part of our mission to democratize IoT for all, we released an initial set of Azure IoT Central government app templates to help solution builders start building IoT solutions quickly with out-of-box device command and control, monitoring and alerting, a user interface with built-in permissions, configurable dashboards, and extensibility APIs. Solution builders can brand, customize, and easily connect their solutions to their line of business applications, such as Dynamics 365 for integrated field service, Azure ML services, or their third-party services of choice.

Developers can get started today with any of the government app templates for free and access starter resources, including sample operator dashboards, simulated devices, pre-configured rules, and alerting to explore what is possible. We’ve also provided guidance for customizing and extending solutions with documentation, tutorials, and how-to’s. Ultimately you can brand and sell your finished solution to your customers, either directly or through Microsoft AppSource.

IoT Central Government App templates

Government app templates available today:

Connected waste management: Sensors deployed in garbage containers in cities can inform how full a trash bin is and optimize waste collection routes. Moreover, advanced capabilities for smart waste applications involve the use of analytics to detect bin contamination.

Water quality monitoring: Traditional water quality monitoring relies on manual sampling techniques and field laboratory analysis, which is both time consuming and costly. By remotely monitoring water quality in real-time, water quality issues can be managed before citizens are impacted.

Water consumption monitoring: Traditional water consumption tracking relies on water operators manually reading water meters across various sites. More and more cities are replacing traditional meters with advanced smart meters, enabling remote monitoring of consumption as well as remote control of valves to manage water flow. Water consumption monitoring coupled with information and insights flowing back to individual households can increase awareness and reduce water consumption.

Water Consumption Monitoring Blog screenshot

Expect to see more app templates for solution builders over time to cover other smart city scenarios, with templates for smart streetlights, air quality monitoring, smart parking, and more.

Innovative smart cities solution partners using Azure IoT Central

From established leading research organizations to enterprises to public utilities, we are seeing solution builders leverage Azure IoT Central to transform their public sector services.

Smart water infrastructure

Dutch-based company Oasen supplies 48 billion liters of high-quality drinking water every year to 750,000 residents across municipalities in the South Holland region. Oasen turned to Microsoft and OrangeNXT to digitally transform its water infrastructure. Using Azure IoT Central, the company is introducing scalability, flexibility, and greater innovation to its operations through remote management of its water distribution network. Leveraging Azure Digital Twins and Azure IoT Central, Oasen connects multiple sources of data (including data extracted from smart water meters and smart valves in pipelines) to create a true digital twin of the water grid.

By remotely controlling and monitoring valves, Oasen can now automatically test grid sections (step-testing) to radically improve grid quality, as well as predict burst water mains and assess which pipelines are most at risk of damage and need repair. These smart water shutters and smart meter implementations significantly reduce manual work. Furthermore, the smart grid solution allows the automatic shutdown of sections of the distribution network if a leak is detected, preventing damage, and reducing water quality hazards.

Water quality monitoring

Other solution builders have built solutions for water quality management. According to the World Health Organization, nearly one-fourth of people across the globe drink water contaminated with feces, and an estimated 50 percent of the global population is projected to live in water-stressed areas by 2025 (either in close proximity to polluted or otherwise scarce water sources). There has never been a greater need for high-quality data from liquid sensor networks to track ion levels in the water, which can fluctuate dramatically within the scope of several hundred meters and can have devastating impacts on public health. Imec, a leading international research and development firm specializing in nanoelectronics and digital technology, has developed water sensor devices from inexpensive ion sensors on silicone substrates for monitoring water quality in real time.

Imec, together with partners, will pilot this solution in a testbed of about 2,500 sensors installed across the Flanders region in Belgium. The sensors detect salinity in the water in real-time, allowing officials to track water quality fluctuations over time. Imec’s water quality monitoring solution was built on Azure IoT Central, which provides the flexible foundation required to design, test, and scale the solution across the city.

“IoT Central is a fast and easy-to-use platform suitable for an innovative R&D organization such as ours. This means we can dedicate ourselves to enabling large, fine-grained networks of water quality sensors and, through the collected data, improving visibility into water quality and enabling better water management.” —Marcel Zevenbergen, Program Manager, Imec

Smart street lighting

Combined with LED conversion, smart street lighting solutions have helped uncover massive efficiency opportunities for cities, with operational savings typically reaching over 65 percent. Telensa is a world leader in connected streetlight solutions, managing over 1.7 million poles in 400 cities around the world. Telensa PLANet is an end-to-end smart street lighting system consisting of wireless nodes that connect individual lights to a dedicated network and a central management application. The system helps cities reduce energy and maintenance costs while improving the efficiency of maintenance through automatic fault reporting, and it turns streetlight poles into hubs for other smart city sensors, such as air quality and traffic monitoring. Since no two cities are the same, Telensa has developed its Urban IQ solution to enable cities to add any third-party sensors to their connected street lighting, make the insights available across city departments, and provide sophisticated real-time visualization out of the box. Telensa built Urban IQ with Azure IoT Central to fit with current systems and to be ready for future directions. By moving device management and connectivity functions to IoT Central, dramatically lowering the cost of adding other sense-and-control apps to its Azure data fabric, Telensa can focus on enhancing smart city functionality and adding value for its customers.

Connecting the dots for smarter cities

With solutions that take full advantage of the intelligent cloud and intelligent edge, we continue to demonstrate how cloud, IoT, and artificial intelligence (AI) have the power to drastically transform cities and make them more sustainable, enjoyable, and inclusive. Azure IoT continues to accelerate results with a growing and diverse set of partners creating solutions relevant to smart cities, from spatially aware solutions that provide real-world context, to smart grids of the future, to urban mobility and spatial intelligence. Together, we can build more intelligent and connected cities that empower people and organizations to achieve more.

Get started today with Azure IoT Central.

Smart City Expo World Congress

Microsoft will be at Smart City Expo World Congress, the industry-leading event for urbanization, to connect smart city technologies and partners with cities on a digital transformation journey. Visit our booth at Gran Via, Hall P2, Stand B223 and learn more about our conference presence at SCEWC 2019. We also encourage you to meet with us at the following sessions:

How to build globally distributed applications with Azure Cosmos DB and Pulumi

This post was co-authored by Mikhail Shilkov, Software Engineer, Pulumi.

Pulumi is reinventing how people build modern cloud applications, with a unique platform that combines deep systems and infrastructure innovation with elegant programming models and developer tools.

We live in amazing times when people and businesses on different continents can interact at the speed of light. Numerous industries and applications target users around the globe: e-commerce websites, multiplayer online games, connected IoT devices, collaborative work and leisure experiences, and many more. All of these applications demand computing and data infrastructure in proximity to the end-customers to minimize latency and keep the user experience engaging. The modern cloud makes these scenarios possible. 

Azure infrastructure

Azure Cosmos DB provides turnkey data distribution to any number of regions, meaning that locations can be added or removed along the way while running production workloads. Azure takes care of data replication, resiliency, and efficiency while providing APIs for read and write operations with a latency of less than 10 milliseconds.

In contrast, compute services—virtual machines, container instances, Azure App Services, Azure Functions, and managed Azure Kubernetes Service—are located in a single Azure region. To make good use of the geographic redundancy of the database, users should deploy their application to each of the target regions.

An image showing globally distributed applications.

Globally distributed application

Application regions must stay in sync with Azure Cosmos DB regions to enjoy low-latency benefits. Operational teams must manage the pool of applications and services to provide the correct locality in addition to auto-scaling configuration, efficient networking, security, and maintainability.

To help manage the complexity, the approach of infrastructure as code comes to the rescue.

Infrastructure as code

While the Azure portal is an excellent pane of glass for all Azure services, it shouldn’t be used directly to provision production applications. Instead, we should strive to describe the infrastructure as a program that can be executed to create all the required cloud resources.

Traditionally, this could be achieved with an automation script, e.g., a PowerShell cmdlet or a bash script calling the Azure CLI. However, this approach is laborious and error-prone. Bringing an environment from its current state to the desired one is often non-trivial, and a failure in the middle of the script often requires manual intervention to repair the environment, leading to downtime.

Desired state configuration is another style of infrastructure definition. A user describes the desired final state of the infrastructure in a declarative manner, and the tooling takes care of bringing an environment from its current state to parity with the desired state. Such a program is easier to evolve and to track changes in.

Azure Resource Manager templates are the bespoke desired-state-configuration tool in the world of Azure. The state is described as a JSON template listing all the resources and properties. However, large JSON templates can be quite hard to write manually: they have a high learning curve and quickly become large, complex, verbose, and repetitive. Developers find themselves missing simple programming-language constructs like iterations or custom functions.

Pulumi solves this problem by using general-purpose programming languages to describe the desired state of cloud infrastructure. Using JavaScript, TypeScript, or Python reduces the amount of code many-fold, while bringing constructs like functions and components to the DevOps toolbox.

Global applications with Pulumi

To illustrate the point, we developed a TypeScript program to provision a distributed application in Azure.

The target scenario requires quite a few steps to distribute the application across multiple Azure regions:

  • Provision an Azure Cosmos DB account in multiple regions
  • Deploy a copy of the application layer to each of those regions
  • Connect each application to the Azure Cosmos DB local replica
  • Add a Traffic Manager to route user requests to the nearest application endpoint

A diagram showing the flow of a global application with Azure and Pulumi.

Global application with Azure and Pulumi

However, instead of coding this manually, we can rely on Pulumi’s CosmosApp component, as described in How To Build Globally Distributed Applications with Azure Cosmos DB and Pulumi. The component creates the distributed Azure Cosmos DB resources as well as the front-end routing component, while allowing a pluggable compute layer implementation.

You can find the sample code in Reusable Component to Create Globally-distributed Applications with Azure Cosmos DB.
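For a flavor of the programming model, here is a rough sketch of provisioning a multi-region Azure Cosmos DB account with Pulumi, shown here in C# (which Pulumi also supports). The resource and property names are based on the Pulumi Azure provider and should be treated as approximations rather than verified code:

using Pulumi;
using Pulumi.Azure.Core;
using Pulumi.Azure.CosmosDB;
using Pulumi.Azure.CosmosDB.Inputs;

class CosmosAppStack : Stack
{
    public CosmosAppStack()
    {
        var resourceGroup = new ResourceGroup("cosmosapp-rg");

        // One account replicated to three regions; Azure Cosmos DB
        // takes care of the data replication between them.
        var account = new Account("cosmosapp", new AccountArgs
        {
            ResourceGroupName = resourceGroup.Name,
            OfferType = "Standard",
            ConsistencyPolicy = new AccountConsistencyPolicyArgs
            {
                ConsistencyLevel = "Session",
            },
            GeoLocations =
            {
                new AccountGeoLocationArgs { Location = "WestEurope", FailoverPriority = 0 },
                new AccountGeoLocationArgs { Location = "EastUS", FailoverPriority = 1 },
                new AccountGeoLocationArgs { Location = "SoutheastAsia", FailoverPriority = 2 },
            },
        });
    }

    // The program entry point would call Deployment.RunAsync<CosmosAppStack>().
}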

The Pulumi CLI executes the program, translates it into a tree of resources to create, and deploys all of them to Azure:

A screenshot showing Pulumi's CLI executing the code.

After the command succeeds, the application is up and running in the three regions of our choice.

Next steps

Infrastructure as code is instrumental in enabling modern DevOps practices in the universe of global and scalable cloud applications.

Pulumi lets you use a general-purpose programming language to define infrastructure. It brings the best tools and practices from the software development world to the domain of infrastructure management.

Try the CosmosApp (available on GitHub—TypeScript, C#) with serverless functions, containers, or virtual machines to get started with Pulumi and Azure.


Forrester names Microsoft a leader in Wave report for Industrial IoT Software Platforms

As a company, we work every day to empower every person on the planet to achieve more. As part of that, we’re committed to investing in IoT and intelligent edge, two technology trends accelerating ubiquitous computing and bringing unparalleled opportunity for transformation across industries. We’ve been working hard to make our Azure IoT platform more open, security-enhanced, and scalable, as well as to create opportunities in new market areas and our growing partner ecosystem. Our core focus is addressing the industry challenge of securing connected devices at every layer and advancing IoT to create a more seamless experience between the physical and digital worlds.

Today, Microsoft is positioned as a Leader in The Forrester Wave™: Industrial IoT Software Platforms, Q4 2019, receiving the highest score possible, 5.00, in the partner strategy, innovation roadmap, and platform differentiation criteria, the highest score in the market presence category, and the second-highest score in the current offering category.

According to the Forrester report, “Microsoft powers industrial partners but also delivers a credible platform of its own. Microsoft continues to add features to the platform at an impressive rate, with the richer edge capabilities of Azure IoT Edge and the simplified application and device onboarding offered by Azure IoT Central formally launching since we last evaluated this market.”

We believe this latest recognition spotlights our commitment and ability to:

Support a comprehensive set of deployment models, from edge to cloud. According to our own IoT Signals research, the decision-makers surveyed believe that in the next two years, AI, edge computing, and 5G will be critical technological drivers for IoT success. And they want tools that can drive success across diverse deployment models.

Deliver business integration that goes beyond connectivity and device management. It’s become increasingly important for businesses to be able to link IoT workflows to data and processes across the operation, and we’re helping customers accelerate time to value.

Turn analytics into actionable intelligence. Industrial firms capture and generate mountains of time-series data in real-time. Transforming this data into timely insights is key to turning that data into decisions that move the business forward.

Forrester Wave Solutions

We’re committed to making Azure the ideal IoT platform, and this recognition comes at a great point in our journey. Download this complimentary full report and read the analysis behind Microsoft’s positioning as a Leader.

More information on our Azure IoT Industrial platform.

The Forrester Wave™: Industrial IoT Software Platforms, Q4 2019, Michele Pelino and Paul Miller, November 13, 2019. This graphic was published by Forrester Research as part of a larger research document and should be evaluated in the context of the entire document. 

ASP.NET Core updates in .NET Core 3.1 Preview 3

.NET Core 3.1 Preview 3 is now available. This release is primarily focused on bug fixes.

See the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.1 Preview 3, install the .NET Core 3.1 Preview 3 SDK.

If you’re on Windows using Visual Studio, for the best experience we recommend installing the latest preview of Visual Studio 2019 16.4. Installing Visual Studio 2019 16.4 will also install .NET Core 3.1 Preview 3, so you don’t need to separately install it. For Blazor development with .NET Core 3.1, Visual Studio 2019 16.4 is required.

Alongside this .NET Core 3.1 Preview 3 release, we’ve also released a Blazor WebAssembly update. To install the latest Blazor WebAssembly template, run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.1.0-preview3.19555.2

Upgrade an existing project

To upgrade an existing ASP.NET Core 3.1 Preview 2 project to 3.1 Preview 3:

  • Update all Microsoft.AspNetCore.* package references to 3.1.0-preview3.19555.2
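For example, a package reference in your project file would change to something like this (the package name is illustrative; update whichever Microsoft.AspNetCore.* packages your project actually references):

<PackageReference Include="Microsoft.AspNetCore.Blazor" Version="3.1.0-preview3.19555.2" />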

See also the full list of breaking changes in ASP.NET Core 3.1.

That’s it! You should now be all set to use .NET Core 3.1 Preview 3!

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

The post ASP.NET Core updates in .NET Core 3.1 Preview 3 appeared first on ASP.NET Blog.

Announcing .NET Core 3.1 Preview 3

Today, we’re announcing .NET Core 3.1 Preview 3. .NET Core 3.1 is a small and short release focused on key improvements in Blazor and Windows desktop, the two big additions in .NET Core 3.0. It will be a long term support (LTS) release. We are nearing the end of the 3.1 release cycle and expect to ship it in early December.

You can download .NET Core 3.1 Preview 3 on Windows, macOS, and Linux.

ASP.NET Core and EF Core are also releasing updates today.

Visual Studio 16.4 Preview 5 and Visual Studio for Mac 8.4 Preview 5 are also releasing today. They are required updates to use .NET Core 3.1 Preview 3. Visual Studio 16.4 includes .NET Core 3.1, so just updating Visual Studio to 16.4 Preview 5 will give you the latest version of both products.

Details:

Closing

The primary goal of .NET Core 3.1 is to polish the features and scenarios we delivered in .NET Core 3.0. .NET Core 3.1 will be a long term support (LTS) release, supported for at least 3 years.

The initial download numbers for .NET Core 3.0 are even higher than we expected. We estimate that 80-90% (or even more) of the .NET Core ecosystem will move to .NET Core 3.1 within the first six months of the release. We encourage everyone to move to the 3.1 release as soon as they can, given that it includes a lot of improvements (largely via 3.0) and is the newest LTS release.

Please install and test .NET Core 3.1 Preview 3 and give us feedback. It is not yet officially supported, although we believe it is now safe for limited use in production. For example, the dotnet.microsoft.com site (see the version in the footer) has been running in production since Preview 1 without issue and will be updated to Preview 3 shortly.

If you missed it, check out the .NET Core 3.0 announcement from earlier this year.

The post Announcing .NET Core 3.1 Preview 3 appeared first on .NET Blog.

What’s new in Azure DevOps Sprint 160

Sprint 160 has just finished rolling out to all organizations and you can check out all the new features in the release notes. Here are some of the features that you can start using today.

ReviewApp in Environments

Pull requests are a very useful tool that allows developers to review new code before it is merged into the master branch. But in the new microservices-oriented world, we need to check not just the code, but the service itself. Even when the deployment is targeting a development environment, we want to verify that we aren’t breaking any of our dependencies.

To enable this scenario, ReviewApp deploys every pull request from your Git repository to a dynamic environment resource. Reviewers can see how those changes look, as well as work with other dependent services, before they’re merged into the main branch and deployed to production.

Approval policies for YAML pipelines

In YAML pipelines, we follow a resource-owner-controlled approval configuration. Resource owners configure approvals on the resource, and all pipelines that use the resource pause for approvals before the start of the stage that consumes the resource.

You can now use advanced approval options to configure approval policies, such as preventing the requester from approving their own run, requiring approval from a subset of users, and setting an approval timeout.

ACR as a first-class pipeline resource

If you need to consume a container image published to Azure Container Registry (ACR) as part of your pipeline, and trigger your pipeline whenever a new image is published, you can use an ACR container resource.
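
A rough sketch of the resource declaration (the service connection, registry, and repository names here are placeholders):

resources:
  containers:
  - container: petStoreImage
    type: ACR
    azureSubscription: ContosoServiceConnection  # Azure Resource Manager service connection
    resourceGroup: contoso-rg
    registry: contosoregistry
    repository: pet-store
    trigger:
      tags:
        include:
        - production-*  # run the pipeline when a matching tag is pushed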

Orchestrate canary deployment strategy on environment for Kubernetes

One of the key advantages of continuous delivery of application updates is the ability to quickly push updates into production for specific microservices. With support for the canary strategy in multi-stage pipelines, you can now reduce risk by slowly rolling out a change to a small subset of your infrastructure. As you gain more confidence in the new version, you can start rolling it out to more servers and route more users to it.
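
As a hedged sketch (the environment name, increments, and steps are placeholders), a deployment job can declare the canary strategy along these lines:

jobs:
- deployment: DeployCanary
  pool:
    vmImage: 'ubuntu-latest'
  environment: prod-env
  strategy:
    canary:
      increments: [10, 25]   # roll out to 10%, then 25%, then fully
      deploy:
        steps:
        - script: echo "Deploying the current canary increment"
      routeTraffic:
        steps:
        - script: echo "Shifting a slice of traffic to the new version"
      on:
        failure:
          steps:
          - script: echo "Rolling back the canary"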

These are just the tip of the iceberg, and there are plenty more features that we’ve released in Sprint 160. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 160 appeared first on Azure DevOps Blog.

Top Stories from the Microsoft DevOps Community – 2019.11.15


This week was the week of GitHub Universe, with some fantastic announcements coming out. If you missed it, it is definitely worth taking a look at the day one and day two keynotes.

This is also one of those weeks when it is difficult to choose between all of the amazing content this community shared. If you have written an article that I’ve missed, please feel free to reach out. In the meantime, let’s talk about pipelines!

Simplifying Azure DevOps Pipelines with Decorators
Have you ever found yourself copying over parts of your CI/CD pipeline across projects? Large companies often have hundreds of projects which require the same steps to ensure compliance or create a repeatable configuration. Luckily, there’s a solution to this problem! Azure Pipelines decorators let you add steps to the beginning and end of every pipeline in your organization. In this great post, Bryan Soltis walks us through the decorators setup. Thank you Bryan!
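
Under the hood, a decorator is just a YAML fragment shipped in an Azure DevOps extension and injected into every job, wired up through the extension's manifest (for example, targeting the ms.azure-pipelines-agent-job.post-job-tasks contribution point). A minimal sketch, not Bryan's exact setup:

# decorator.yml - injected at the end of every job in the organization
steps:
- task: CmdLine@2
  displayName: 'Organization-wide compliance check (injected by decorator)'
  inputs:
    script: echo This step was injected by a pipeline decorator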

Using Helm 3 with Azure DevOps
Folks in the Kubernetes community are very excited to start using Helm 3, which simplifies the security model for Helm by using the latest Kubernetes security features. But is it easy to use with Azure DevOps? In this great post, Jessica Deen shows us a couple of workarounds needed to start using Helm 3 in Azure Pipelines. Hopefully, Jessica’s pull request for the Helm task will get merged soon, so the next time you use the task you won’t need the workarounds!
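
Until the Helm task supports it natively, one common workaround is to install Helm 3 yourself in a script step; here is a sketch using the Helm project's install script (verify the URL and consider pinning a version before relying on it):

steps:
- script: |
    curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    helm version --short
  displayName: 'Install Helm 3'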

Container image promotion across environments – Build Artifact
There are many different approaches to promoting code between environments. Now that the world is progressively moving to containers, we need to implement these approaches for container images as well. In this post, Davide Benvegnu shows us one of the strategies for promoting containers across different stages. Thank you Davide!

Azure DevOps multi-stage pipeline environments
With the introduction of YAML Pipelines, we’ve also introduced the concept of environments. In this post, Ricci Gian Maria introduces some of the concepts that can be used in Azure YAML Pipelines environments. And you can also check out the Azure DevOps Sprint 160 release notes to see additional environment features that just came out!

Code analysis using SonarCloud in Azure DevOps
Code quality and security are, perhaps, even more important today than in the past. Luckily, the code analysis tools are quickly evolving to help us protect our applications from security breaches. In this article, Ashish Raj walks through setting up the integration between SonarCloud and Azure DevOps. Thank you Ashish!

Versioning and CI/CD for Power BI with Azure DevOps
Lately, I see a lot of excitement in the community around data platform content automation. In this post, Marc Lelijveld, Dave Rujiter, and Ton Swart show us a multi-tier setup for CI/CD and versioning of PowerBI content with Azure DevOps. Thank you for the detailed walkthrough!

Azure DevOps Generator – New Content
You may have seen our recent announcement that we have open-sourced the Azure DevOps demo generator. The demo generator is a tool that can create sample Azure DevOps projects showcasing various technologies, which is tremendously helpful for new user training. In this post, Gregor Suttie walks us through the usage of the demo generator. I cannot wait for Gregor’s next posts. Perhaps we will see a new project template? Thank you Gregor!

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.11.15 appeared first on Azure DevOps Blog.

Multi-granularity matching for Bing Image Search


A while back we shared how internet-scale deep learning techniques deployed in the Bing image search stack helped us improve results for a variety of tricky queries. Today we would like to share how our image search is evolving further toward a more intelligent and more precise search engine through multi-granularity matching, improved understanding of user queries, images, and webpages, as well as the relationships between them.

As we discussed in our last post, Bing image search employs many deep learning techniques to map both queries and documents into a semantic space, greatly improving our search quality. There are, however, still many hard cases where users search for objects with specific context or attributes (for example: {blonde man with a mustache}, {dance outfits for girls with a rose}) that cannot be satisfied by the current search stack. This prompted us to develop further enhancements. The new vector match, attribute match, and Best Representative Query (BRQ) match techniques help address these problems.

Vector match

Take the image below as an example: humans can easily relate the query {dance outfits for girls with a rose} to this image or its surrounding text, but for machines it’s much harder. It is obvious to humans that the query and the image/webpage are semantically similar. As we explained in the previous post, Bing maps every query and document to a semantic space, which helps us find better results. With recent advancements, we incorporated BERT/Transformer technology, leveraging 1) pre-trained knowledge to better interpret textual information, especially for the hard cases mentioned above; and 2) an attention mechanism to embed the image and webpage with awareness of each other, so that the embedded document is a good summary of the salient areas of the image and the key points on the webpage.

Attribute match

In many cases, a user’s query may contain multiple fine-grained attributes (not likely to be found in the text on the page), all of which need to be satisfied. As their number grows, it gets harder to represent the whole query (or a suitable result image) with a single vector. To handle this increased complexity, we started developing techniques to extract a select set of object attributes from both queries and candidate documents, and to use these attributes for matching. As shown in the example below, we applied attribute detectors to the query {elderly man swimming pictures} and found attributes describing the person’s appearance and behavior. Although the webpage has insufficient textual information for this image, we are now able to detect similar attributes from the image content and its surrounding text. The query and document can then be considered a “precise match” since they share the same attributes. The attribute detectors were trained jointly using a multi-task optimization strategy and can easily be scaled to any attributes of an object.


While we welcome you to try it out, note that this technology is in its early stages and currently supports only a limited set of scenarios and attributes.

BRQ match

In addition to the above matching solutions, we also worked to enrich the ‘classic’ metadata for images as much as possible. With higher-quality metadata, not only can traditional query-term matching methods retrieve more relevant documents, but the vector match and attribute match approaches described above also work much better.

One of the most useful types of metadata is the “BRQ,” or Best Representative Query. The Best Representative Query for a given image is a query for which the image would be a good result. Because BRQs resemble user queries, they can be naturally and easily matched to incoming queries. BRQs are typically a good summary of the main topics of the webpage and the major image content. The process of generating BRQs for Bing images also relies heavily on modern deep learning techniques.

The picture below depicts two approaches to generating BRQs used in our search stack. 

The first approach uses a Machine Translation model (usually in encoder-decoder mode) to generate BRQs. Traditionally, Machine Translation techniques are used to translate from a source language to a target language, for example from English to French. Here, however, the text on the webpage is treated as the ‘source’ and fed into the encoder, and the query-like text generated by the decoder is our ‘target’. In this way, we ‘translate’ the long text on the webpage into short phrases and queries. This method, however, leverages only textual information to generate BRQs.

The other approach takes advantage of the vector search technique and additionally incorporates image information. We embed the text from the webpage together with the image into a single semantic vector, and then search for the nearest neighbors in a query repository. Only the queries within a similarity threshold are considered representative of the image.

Combining these two approaches lets us generate a much richer set of BRQ candidates which leads to better search results.

Summary

With all of the described multi-granularity and BRQ-based enhancements incorporated into the Bing stack, Bing Image Search took another step away from simple query term matching toward deeper semantic understanding of user queries, moving us even further along the way from being an excellent search engine to being a truly intelligent one.

The following examples show how Bing results for one of the tricky queries ({car seat for Chevy impala 96}) evolved over the past two years continuously improving with incremental incorporation of deep learning techniques in the Bing stack.

Two years ago Bing Image Search was showing mostly cars instead of car seats for this query:

Half a year ago, after the initial wave of deep learning integration, we saw a definite reduction in undesired car images:

Finally, today Bing returns much cleaner and more relevant results:

We hope that our improvements will help you find what you are looking for on Bing even faster. Deep learning techniques are a set of very exciting and promising tools that lend themselves well to both text and images. We are hard at work applying them to an ever-increasing number of aspects of our stack to take our search quality even higher. Keep your eyes peeled for new features and enhancements!

In the meantime if you have any feedback, do let us know using Bing Listens or simply click on ‘Feedback’ on Bing Image Search!

- Bing Image Search Relevance Team

Helping Smart Cities become more Inclusive


According to the UN, the world’s urban population will grow from today’s 55 percent to 68 percent by 2050. With almost a billion people on the path to becoming urban dwellers, most cities are still unfriendly to people with disabilities. As more people flock to cities, making them smarter and more inclusive will become increasingly important. The concept of smart cities is all about developing strategies that leverage data and technology to enhance urban life. IoT plays a central role in collecting sensor data and then using the insights gained from that data to manage assets, resources, and services efficiently.

As city planners tackle the complex challenges of increasing urbanization, managing scarce resources, addressing climate change, and creating safer, more accessible cities, Azure Maps (a collection of geospatial APIs) becomes a critical tool. A key aspect of IoT and technology solutions is that they should be intuitive, easy to use, and accessible.

Azure Maps and accessibility

Azure Maps makes it easy for all users to navigate an interactive map experience. Users can interact with maps using a mouse, touch, or a keyboard. Azure Maps provides screen readers with enhanced descriptions that combine multiple updates into a single message that is easier to digest and understand. Recently, Azure Maps achieved exciting new capabilities and Microsoft certification around accessibility. All apps, both Microsoft-owned and third-party, that use Azure Maps benefit from the accessibility features provided out of the box.

Azure Maps also relies on best-of-breed content partnerships for everything from map data, traffic, real-time transit, and ride share to weather data.

Moovit helps people with disabilities ride transit

One of the Azure Maps content partnerships is with Moovit. Launched in 2011 in Israel, Moovit has become the world’s most popular transit-planning and navigation app, with more than 500 million users and service in over 3,000 cities across 94 countries. The company is also a leader in inclusive technology, with innovative work that helps people across the disability spectrum use buses, trains, subways, ride-hailing services, and other modes of public transit.

In addition to offering a consumer app in 45 languages, Moovit has partnered with Microsoft to provide its multi-modal transit data to developers who use Azure Maps, and a set of mobility-as-a-service solutions to cities, governments, and organizations. The partnership will enable the creation of more inclusive smart cities and more accessible transit apps.

How Aira helps smart cities become more accessible

One of the companies that is leveraging the geospatial and mapping capabilities from Azure Maps and the transit capabilities from Moovit, is Aira. Aira is a technology company dedicated to making lives simpler and more engaging. Based in San Diego, California, they build solutions to connect people who are blind, have low vision, or are simply aging into a digital world, with highly-trained professionals who provide visual information on demand.

Public transportation is the lifeline to jobs, education, healthcare, and more, yet many blind or low vision riders still have trouble getting to their destination. They may be uncertain that they’ve caught the right bus, or unable to read the entrance sign they need to follow in order to access the subway. In addition, as populations age, the number of people experiencing age-related vision loss rises every day. Moovit, Microsoft, and Aira are joining forces in order to challenge these obstacles and make public transit more accessible and inclusive, empowering blind and low vision riders to travel with more confidence.

“In Azure Maps, we invested significant time and resources in defining accessibility requirements, implementing capabilities for those with needs, and pushing ourselves to serve this segment of users,” says Chris Pendleton, Head of Azure Maps at Microsoft Corp. “I’m elated to see Aira, Moovit, and Azure Maps providing services together, further justifying our investments for the benefit of those who need it most.”

Smart City Expo World Congress

In order to connect with cities on their digital transformation journeys, the Azure Maps team, along with Moovit and Aira, will be at Smart City Expo World Congress, the industry-leading event for urbanization, showcasing technologies and partners enabling the digital transformation of smart cities. For updates from SCEWC, follow us on Twitter.


Accelerate IoMT on FHIR with new Microsoft OSS Connector


Microsoft is expanding the ecosystem of FHIR® for developers with a new tool to securely ingest, normalize, and persist Protected Health Information (PHI) from IoMT devices in the cloud.  

Continuing our commitment to removing the barriers to interoperability in healthcare, we are excited to expand our portfolio of Open Source Software (OSS) to support the HL7 FHIR standard (Fast Healthcare Interoperability Resources). The new IoMT FHIR Connector for Azure is available today on GitHub.


[Illustration: medical device data being connected to FHIR with the IoMT FHIR Connector for Azure]

The Internet of Medical Things (IoMT) is the subset of IoT devices that capture and transmit patient health data. It represents one of the largest technology revolutions changing the way we deliver healthcare, but IoMT also presents a big challenge for data management.

Data from IoMT devices is often high-frequency, high-volume, and can require sub-second measurements. Developers have to deal with a wide range of devices and schemas: sensors worn on the body, ambient data-capture devices, applications that document patient-reported outcomes, and even devices that only require the patient to be within a few meters of a sensor.

Traditional healthcare providers, innovators, and even pharma and life sciences researchers are ushering in a new era of healthcare that leverages machine learning and analytics from IoMT devices. Most see a future where devices monitoring patients in their daily lives will be used as a standard approach to deliver cost savings, improve patient visibility outside of the physician’s office, and create new insights for patient care. Yet as new IoMT apps and solutions are developed, two consistent barriers are preventing broad scalability of these solutions: interoperability of IoMT device data with the rest of healthcare data, such as clinical or pharmaceutical records, and the secure, private exchange of protected health information (PHI) from these devices in the cloud.

In the last several years, the provider ecosystem has begun to embrace the open source standard of FHIR as a solution for interoperability. FHIR is rapidly becoming the preferred standard for exchanging and managing healthcare information in electronic format and has been most successful in the exchange of clinical health records. We wanted to expand the ecosystem and help developers working with IoMT devices normalize their data output in FHIR. The robust, extensible data model of FHIR standardizes the semantics of healthcare data and defines standards for exchange, so it fuels interoperability across data systems. We imagined a world where data from multiple device inputs and clinical health data sets could be quickly normalized around FHIR and work together in just minutes, without the added cost and engineering work of managing custom configurations and integrations with each and every device and app interface. We wanted to deliver foundational technology that developers could trust, so they could focus on innovation. And today, we’re releasing the IoMT FHIR Connector for Azure.

This OSS release opens an exciting new horizon for healthcare data management. It provides a simple tool that can empower application developers and technical professionals working with data from devices to quickly ingest and transform that data into FHIR. By connecting to the Azure API for FHIR, developers can set up a robust and secure pipeline to manage data from IoMT devices in the cloud.

The IoMT FHIR Connector for Azure enables easy deployment in minutes, so developers can begin managing IoMT data in a FHIR Server that supports the latest R4 version of FHIR:

  • Rapid provisioning for ingestion of IoMT data and connectivity to a designated FHIR Server for secure, private, and compliant persistence of PHI data in the cloud
  • Normalization and integrated mapping to transform data to the HL7 FHIR R4 Standard
  • Seamless connectivity with Azure Stream Analytics to query and refine IoMT data in real-time
  • Simplified IoMT device management and the ability to scale through Azure IoT services (including Azure IoT Hub or Azure IoT Central)
  • Secure management of PHI data in the cloud: the IoMT FHIR Connector for Azure has been developed for HIPAA, HITRUST, and GDPR compliance and in full support of requirements for protected health information (PHI)

To enhance scale and connectivity with common patient-facing platforms that collect device data, we’ve also created a FHIR HealthKit framework that works with the IoMT FHIR Connector. If patients are managing data from multiple devices through the Apple Health application, a developer can use the IoMT FHIR Connector to quickly ingest data from all of the devices through the HealthKit API and export it to their FHIR server.

Playing with FHIR

The Microsoft Health engineering team is fully backing this open source project, but like all open source, we are excited to see it grow and improve based on the community's feedback and contributions. Next week we’ll be joining developers around the world for FHIR Dev Days in Amsterdam to play with the new IoMT FHIR Connector for Azure. Learn more about the architecture of the IoMT FHIR Connector and how to contribute to the project on our GitHub page.


FHIR® is the registered trademark of HL7 and is used with the permission of HL7

New Azure HPC and partner offerings at Supercomputing 19


For more than three decades, the researchers and practitioners that make up the high-performance computing (HPC) community have come together for their annual event. More than ten thousand strong in attendance, the global community will converge on Denver, Colorado to advance the state of the art in HPC. The theme for Supercomputing ‘19 is “HPC is now,” a theme that resonates strongly with the Azure HPC team given the breakthrough capabilities we’ve been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers, and customer case studies. Here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds’ commodity networks. Azure supports your tightly coupled workloads with up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low-latency HDR InfiniBand networking.
  • Accelerated Compute – Choose from the latest GPUs, field-programmable gate arrays (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.

Azure HPC unveils new offerings

  • The preview of new second-generation AMD EPYC-based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud’s first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world’s largest and most powerful bare-metal supercomputers. HBv2 is not just one of the most powerful HPC offerings Azure has ever made, but also one of the most affordable. HBv2 VMs are available now.
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, with as little as 2 GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, up to an entire MI25 GPU with 16 GB of HBM2 memory for a powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature eight NVLink-interconnected NVIDIA Tesla V100 GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also now feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks. NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure portal to connect high-performance computing workloads to on-premises network-attached storage, or to run Azure Blob storage behind a POSIX (portable operating system interface) file interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure’s first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU’s novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and enables short inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive-scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores and over 7,000 independent threads, plus a large 300 MB on-chip memory that delivers up to 45 TB/s of memory bandwidth. The eight Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect, enabling large models to be trained in a model-parallel (including pipelined) or data-parallel way. An NDv3 VM also includes 40 CPU cores, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore’s Poplar® software development toolkit and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from one to four Xilinx U250 FPGA devices per Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx’s U250 card and the SDAccel 2019.1 runtime will run on Azure VMs just as they do on-premises and on the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 Update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure’s unique InfiniBand-interconnected infrastructure. Among many other improvements, this release includes:

    • An improved error detection and reporting user interface (UI) that greatly simplifies diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI, giving users direct control over node topology for latency-sensitive applications.

    • Support for ephemeral OS disks, which improve start-up performance and cost for virtual machines and virtual machine scale sets.

  • Microsoft HPC Pack 2016, Update 3 – Released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.

In all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest capabilities for HPC-oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!

Bing delivers its largest improvement in search experience using Azure GPUs


Over the last couple of years, deep learning has become widely adopted across the Bing search stack and powers a vast number of our intelligent features. We use natural language models to improve our core search algorithm’s understanding of a user’s search intent and the related webpages so that Bing can deliver the most relevant search results to our users. We rely on deep learning computer vision techniques to enhance the discoverability of billions of images even if they don’t have accompanying text descriptions or summary metadata. We leverage machine-based reading comprehension models to retrieve captions within larger text bodies that directly answer the specific questions users have. All these enhancements lead toward more relevant, contextual results for web search queries.

Recently, there was a breakthrough in natural language understanding with a type of model called transformers (popularized by Bidirectional Encoder Representations from Transformers, or BERT). Unlike previous deep neural network (DNN) architectures that processed words individually in order, transformers understand the context and relationship between each word and all the words around it in a sentence. Starting in April of this year, we used large transformer models to deliver the largest quality improvements to our Bing customers of the past year. For example, in the query “what can aggravate a concussion,” the word “aggravate” indicates the user wants to learn about actions to take after a concussion, not about causes or symptoms. Our search powered by these models can now understand the user’s intent and deliver a more useful result. More importantly, these models are now applied to every Bing search query globally, making Bing results more relevant and intelligent.


Deep learning at web-search scale can be prohibitively expensive

Bing customers expect an extremely fast search experience, and every millisecond of latency matters. Transformer-based models are pre-trained with up to billions of parameters, a sizable increase in parameter count and computation requirements compared to previous network architectures. Serving latency for a distilled three-layer BERT model on twenty CPU cores was initially benchmarked at 77 ms per inference. However, since these models would need to run over millions of different queries and snippets per second to power web search, even 77 ms per inference remained prohibitively expensive at web search scale, requiring tens of thousands of servers to ship just one search improvement.

[Diagram: Model optimization]

Leveraging Azure Virtual Machine GPUs to achieve 800x inference throughput

One of the major differences between transformers and previous DNN architectures is that they rely on massive parallel computation instead of sequential processing. Given that graphics processing unit (GPU) architectures are designed for high-throughput parallel computing, Azure’s N-series Virtual Machines (VMs) with built-in GPU accelerators were a natural fit for accelerating these transformer models. We decided to start with the NV6 Virtual Machine, primarily because of its lower cost and regional availability. Just by running the three-layer BERT model on that VM with a GPU, we observed a serving latency of 20 ms (about a 3x improvement). To further improve serving efficiency, we partnered with NVIDIA to take full advantage of the GPU architecture and re-implemented the entire model using the TensorRT C++ APIs and the CUDA and cuBLAS libraries, including rewriting the embedding, transformer, and output layers. NVIDIA also contributed efficient CUDA transformer plugins, including softmax, GELU, normalization, and reduction.

We benchmarked the TensorRT-optimized GPU model on the same Azure NV6 Virtual Machine and were able to serve a batch of five inferences in 9 ms, an 8x latency speedup and 43x throughput improvement compared to the model without GPU acceleration. We then leveraged Tensor Cores with mixed precision on an NC6s_v3 Virtual Machine to optimize the performance even further, benchmarking a batch of 64 inferences at 6 ms (~800x throughput improvement compared to CPU).
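
Those multipliers follow directly from the batch sizes and latencies; as a rough back-of-the-envelope check:

CPU baseline:              1 inference / 77 ms  ≈ 13 inferences/sec
NV6 + TensorRT:            5 inferences / 9 ms  ≈ 555 inferences/sec  (~43x)
NC6s_v3 + mixed precision: 64 inferences / 6 ms ≈ 10,667 inferences/sec  (~800x)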

Transforming the Bing search experience worldwide using Azure’s global scale

With these GPU optimizations, we were able to use more than 2,000 Azure GPU Virtual Machines across four regions to serve over one million BERT inferences per second worldwide. Azure N-series GPU VMs are critical in enabling transformative AI workloads and product quality improvements for Bing with high availability, agility, and significant cost savings, especially as deep learning models continue to grow in complexity. Our takeaway was very clear: even large organizations like Bing can accelerate their AI workloads by using N-series virtual machines on Azure with built-in GPU acceleration. Delivering this kind of global-scale AI inferencing without GPUs would have required an exponentially higher number of CPU-based VMs, which ultimately would have become cost-prohibitive. Azure also gives customers the agility to deploy across multiple types of GPUs immediately, which would have taken months if we had installed GPUs on-premises. The N-series Virtual Machines were essential to our ability to optimize and ship advanced deep learning models that improve Bing search, available globally today.

N-series general availability

Azure provides a full portfolio of Virtual Machine capabilities across the NC, ND, and NV series product lines. These Virtual Machines are designed for application scenarios for which GPU acceleration is common, such as compute-intensive, graphics-intensive, and visualization workloads.

  • NC-series Virtual Machines are optimized for compute-intensive and network-intensive applications.
  • ND-series Virtual Machines are optimized for training and inferencing scenarios for deep learning.
  • NV-series Virtual Machines are optimized for visualization, streaming, gaming, encoding, and VDI scenarios.

See our Supercomputing19 blog for recent product additions to the ND and NV-series Virtual Machines.

Learn more

Join us at Supercomputing19 to learn more about our Bing optimization journey leveraging Azure GPUs.

PyTorch on Azure with streamlined ML lifecycle


It's exciting to see the PyTorch community continue to grow and regularly release updated versions of PyTorch! Recent releases improve performance, ONNX export, TorchScript, the C++ frontend, JIT, and distributed training. Several new experimental features, such as quantization, have also been introduced.

At the PyTorch Developer Conference earlier this fall, we presented how our open source contributions to PyTorch make it better for everyone in the community. We also talked about how Microsoft uses PyTorch to develop machine learning models for services like Bing. Whether you are an individual, a small team, or a large enterprise, managing the machine learning lifecycle can be challenging. We'd like to show you how Azure Machine Learning can make you and your organization more productive with PyTorch.

Streamlining the research to production lifecycle with Azure Machine Learning

One of the benefits of using PyTorch 1.3 in Azure Machine Learning is Machine Learning Operations (MLOps). MLOps streamlines the end-to-end machine learning (ML) lifecycle so you can frequently update models, test new models, and continuously roll out new ML models alongside your other applications and services. MLOps provides:

  • Reproducible training with powerful ML pipelines that stitch together all the steps involved in training your PyTorch model, from data preparation, to feature extraction, to hyperparameter tuning, to model evaluation.
  • Asset tracking with dataset and model registries so you know who is publishing PyTorch models, why changes are being made, and when your PyTorch models were deployed or used in production.
  • Packaging, profiling, validation, and deployment of PyTorch models anywhere from the cloud to the edge.
  • Monitoring and management of your PyTorch models at scale in an enterprise-ready fashion with eventing and notification of business impacting issues like data drift.

 A diagram showing the cycle of training PyTorch models.

Training PyTorch Models

With MLOps, data scientists write and update their code as usual and regularly push it to a GitHub repository. This triggers an Azure DevOps build pipeline that performs code quality checks, data sanity tests, and unit tests, builds an Azure Machine Learning pipeline, and publishes it to your Azure Machine Learning workspace.
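
A minimal sketch of such a build definition (the script names and steps here are illustrative assumptions, not the exact contents of the example repo):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: pip install -r requirements.txt
  displayName: 'Install dependencies'

- script: |
    flake8 .
    pytest tests
  displayName: 'Code quality checks and unit tests'

- script: python build_training_pipeline.py  # hypothetical script that builds and publishes the AML pipeline
  displayName: 'Publish Azure ML training pipeline'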

The Azure Machine Learning pipeline does the following tasks:

  • The train model task executes the PyTorch training script on Azure Machine Learning compute. It outputs a model file, which is stored in the run history.
  • The evaluate model task compares the performance of the newly trained PyTorch model with the model in production. If the new model performs better than the production model, the following steps are executed; if not, they are skipped.
  • The register model task takes the improved PyTorch model and registers it with the Azure Machine Learning model registry, which allows us to version it.

You can find example code for training a PyTorch model, doing hyperparameter sweeps, and registering the model in this PyTorch MLOps example.

Deploying PyTorch models

The Machine Learning extension for Azure DevOps helps you integrate Azure Machine Learning tasks into your Azure DevOps project to simplify and automate model deployments. Once a new model is registered in your Azure Machine Learning workspace, you can trigger a release pipeline to automate your deployment process. Models can then be automatically packaged and deployed as a web service across test and production environments such as Azure Container Instances and Azure Kubernetes Service (AKS). You can even enable gated releases so that, once the model is successfully deployed to the staging or quality assurance (QA) environment, a notification is sent to approvers to review and approve the release to production. You can see sample code for this in the PyTorch MLOps example.

Next steps

We’re excited to support the latest version of PyTorch in Azure. With Azure Machine Learning and its MLOps capabilities, you can use PyTorch in your enterprise with a reproducible model lifecycle. Check out the MLOps example repository for an end-to-end example of how to enable a CI/CD workflow for PyTorch models.

Unlocking the promise of IoT: A Q&A with Vernon Turner


Vernon Turner is the Founder and Chief Strategist at Causeway Connections, an information and communications technology research firm. For nearly a decade, he’s served on global, national, and state steering committees, advising governments, businesses, and communities on IoT-based solution implementation. He recently talked with us about the importance of distinguishing between IoT hype and reality, and identified three steps businesses need to take to make a successful digital transformation.

What is the promise of IoT?

The promise of more and more data from more and more connected sensors boils down to unprecedented insights and efficiencies. Businesses get more visibility into their operations, a better understanding of their customers, and the ability to personalize offerings and experiences like never before, as well as the ability to cut operational costs via automation and business-process efficiencies.

But just dabbling with IoT won’t unlock real business value. To do that, companies need to change everything: how they make products, how they go to market, their strategy, and their organizational structure. They need to really transform. And to do that, they need to do three things: lead with the customer experience, migrate to offering subscription-based, IoT-enabled services, and have a voice in an emergent ecosystem of partners related to their business.

Why is the customer experience so important to fulfilling the promise of IoT?

There can be a lot of hype around IoT-enabled offerings. 

I recently toured several so-called smart buildings with a friend in the construction industry. He showed me that just filling a building with IoT-enabled gadgets doesn’t make it smart. A truly smart building goes beyond connected features and addresses the specific, real-world needs of tenants, leaseholders, and building managers.

If it doesn’t radically change the customer experience, it doesn’t fulfill the promise of IoT.

What’s the disconnect? Why aren’t “smart” solution vendors delivering what customers want?

Frankly, it’s easier to sell a product than an experience.

Customer experience should be at the center of the pitch for IoT, because IoT enables customers to have much more information about the product, in real-time, across the product lifecycle. But putting customer experience first requires making hard changes. It means adopting new strategies, business models, and organization charts, as well as new approaches to product development, sales and marketing, and talent management. And it means asking suppliers to create new business models to support sharing data across the product lifecycle.

Why is the second step to digital transformation, migrating to offering subscription-based, IoT-enabled services, so important?

To survive in our digitally transforming economy, it’s essential for businesses and their suppliers to move from selling static products to a subscription-based services business model.

As sensors and other connected devices become increasingly omnipresent, customers see more real-time data showing them exactly what they’re consuming, and how the providers of the services they’re consuming are performing. By moving to a subscription (or “X as a service”) model, businesses can provide more tailored offerings, grow their customer base, and position themselves for success in the digital age.

When companies embrace transformation, it can have a ripple effect across their operations. Business units can respond to market needs and create new services by combining microservices using the rapid software development techniques of DevOps. These services drive a shift from infrequent, low-business-value interactions with customers to continuous engagement between customers and companies’ sales and business units. This improves customer relationships, staving off competition and introducing new sales opportunities.

What challenges should companies be prepared for as they migrate to offering subscription services?

For a subscription-based services model to work, most companies need to make significant changes to their culture and organizational structure.

Financial planning needs to stop reviewing past financial statements and start focusing on future recurring revenue. Instead of concentrating on margin-based products, sales should start selling outcomes that add value for customers. Marketing must be driven by data about the customer experience and what the customer needs, rather than what serves the branding campaign.

From now on, rapid change, responsiveness to the customer, and the ability to customize and scale services are going to be the norm in business.

You mentioned the importance of participating in an emergent ecosystem of partners. What does that mean? Why does it matter?

As digital business processes mature and subscription models become the standard, customers will demand ways to integrate their relationships with IT and business vendors in an ecosystem connected by a single platform.

Early results show that vendors who actively participate in their solution platform’s ecosystem enjoy a higher net promoter score (NPS). In the short term, they gain stickiness with customers. And in the long run, they become more relevant across their ecosystem, gain a competitive advantage over peers inside and outside their ecosystem, and deliver more value to customers.

How does ecosystem participation increase the value delivered to customers?

Because everyone’s using the same platform, customers get transparency into the performance of suppliers. Service-level management becomes the first point of contact between businesses and suppliers. Key performance indicators trigger automatic responses to customer experiences. Response times to resolve issues are mediated by the platform.

These tasks and functions are carried out within the ecosystem and orchestrated by third-party service management companies. But that’s not to say businesses in the ecosystem don’t still have an individual, separate relationship with their customers. Rather, the ecosystem acts as a gateway for IT and business suppliers to integrate their offerings into customer services. Business and product outcomes from the ecosystem feed research and development, product design, and manufacturing, leading to continual improvement in services delivery and customer experience.

To conclude, let’s go back to something we talked about earlier. For builders, a truly smart building is one that does more than just keep the right temperature. It also monitors and secures wireless networks, optimizes lighting based on tenants’ specific needs, manages energy use, and so on to deliver comfortable, customized work, living, or shopping environments. To deliver that kind of customer experience takes an ecosystem of partners, all working in concert. For companies to unlock the value of IoT, they need to participate actively in that ecosystem.

Learn how Azure helps businesses unlock the value of IoT.
