
The Evolving Infrastructure of .NET Core


With .NET Core 3.0 Preview 6 out the door, we thought it would be useful to take a brief look at the history of our infrastructure systems and the significant improvements that have been made in the last year or so.

This post will be interesting if you follow build infrastructure or want a behind-the-scenes look at how we build a product as big as .NET Core. It doesn’t describe new features or sample code that you should use in your next application. Please tell us if you like these types of posts. We have a few more like this planned, but would appreciate knowing whether you find this type of information helpful.

A little history

Begun over 3 years ago now, the .NET Core project was a significant departure from traditional Microsoft projects.

  • Developed publicly on GitHub
  • Composed of isolated git repositories that integrate together vs. a monolithic repository.
  • Targets many platforms
  • Its components may ship in more than one ‘vehicle’ (e.g. Roslyn ships as a component of Visual Studio as well as the SDK)

Our early infrastructure decisions were made around necessity and expediency. We used Jenkins for GitHub PR and CI validation because it supported cross-platform OSS development. Our official builds lived in Azure DevOps (called VSTS at the time) and TeamCity (used by ASP.NET Core), where signing and other critical shipping infrastructure exists. We integrated repositories together using a combination of manually updating package dependency versions and somewhat automated GitHub PRs. Teams independently built what tooling they needed to do packaging, layout, localization and all the rest of the usual tasks that show up in big development projects. While not ideal, on some level this worked well enough in the early days. As the project grew from .NET Core 1.0 and 1.1 into 2.0 and beyond we wanted to invest in a more integrated stack, faster shipping cadences and easier servicing. We wanted to produce a new SDK with the latest runtime multiple times per day. And we wanted all of this without reducing the development velocity of the isolated repositories.

Many of the infrastructure challenges .NET Core faces stem from the isolated, distributed nature of the repository structure. Although it’s varied quite a bit over the years, the product is made up of anywhere from 20-30 independent git repositories (ASP.NET Core had many more until recently). On one hand, having many independent development silos tends to make development in those silos very efficient; a developer can iterate very quickly in the libraries without much worry about the rest of the stack. On the other hand, it makes innovation and integration of the overall project much less efficient. Some examples:

  • If we need to roll out new signing or packaging features, doing so across so many independent repos that use different tools is very costly.
  • Moving changes across the stack is slow and costly. Fixes and features in repositories ‘low’ in the stack (e.g. corefx libraries) may not be seen in the SDK (the ‘top’ of the stack) for several days. If we make a fix in dotnet/corefx, that change must be built and the new version flowed into any up-stack components that reference it (e.g. dotnet/core-setup and ASP.NET Core), where it will be tested, committed and built. Those new components will then need to flow those new outputs further up the stack, and so on and so forth until the head is reached.

In all of these cases, there is a chance of failure at many levels, further slowing down the process. As .NET Core 3.0 planning began in earnest, it became clear that we could not create a release of the scope that we wanted without significant changes in our infrastructure.

A three-pronged approach

We developed a three-pronged approach to ease our pain:

  • Shared Tooling (aka Arcade) – Invest in shared tooling across our repositories.
  • System Consolidation (Azure DevOps) – Move off of Jenkins and into Azure DevOps for our GitHub CI. Move our official builds from classic VSTS-era processes onto modern config-as-code.
  • Automated Dependency Flow and Discovery (Maestro) – Explicitly track inter-repo dependencies and automatically update them on a fast cadence.

Arcade

Prior to .NET Core 3.0, there were 3-5 different tooling implementations scattered throughout various repositories, depending on how you counted.

While in this world each team gets to customize their tooling and only build exactly what they need, it does have some significant downsides:

  • Developers move between repositories less efficiently

    Example: When a developer moves from dotnet/corefx into dotnet/core-sdk, the ‘language’ of the repository is different. What does she type to build and test? Where do the logs get placed? If she needs to add a new project to the repo, how is this done?

  • Each required feature gets built N times

    Example: .NET Core produces tons of NuGet packages. While there is some variation (e.g. shared runtime packages like Microsoft.NETCore.App produced out of dotnet/core-setup are built differently than ‘normal’ packages like Microsoft.AspNet.WebApi.Client), the steps to produce them are fairly similar. Unfortunately, as repositories diverge in their layout, project structure, etc., differences emerge in how these packaging tasks need to be implemented. How does a repository define what packages should be generated, what goes into those packages, their metadata, and so on? Without shared tooling, it is often easier for a team to just implement another packaging task rather than reuse an existing one. This is of course a strain on resources.

With Arcade, we endeavored to bring all our repos under a common layout, repository ‘language’, and set of tasks where possible. This is not without its pitfalls. Any kind of shared tooling ends up solving a bit of a ‘Goldilocks’ problem. If the shared tooling is too prescriptive, then the kind of customization required within a project of any significant size becomes difficult, and updating that tooling becomes tough. It’s easy to break a repository with new updates. BuildTools suffered from this. The repositories that used it became so tightly coupled to it that it was not only unusable for other repositories, but making any changes in buildtools often broke consumers in unexpected ways. If shared tooling is not prescriptive enough, then repositories tend to diverge in their usage of the tooling, and rolling out updates often requires lots of work in each individual repository. At that point, why have shared tooling in the first place?

Arcade actually tries to go with both approaches at the same time. It defines a common repository ‘language’ as a set of scripts (see eng/common), a common repository layout, and a common set of build targets rolled out as an MSBuild SDK. Repositories that choose to fully adopt Arcade have predictable behavior, making changes easy to roll out across repositories. Repositories that do not wish to do so can pick and choose from a variety of MSBuild task packages that provide basic functionality, like signing and packaging, that tend to look the same across all repositories. As we roll out changes to these tasks, we try our best to avoid breaking changes.

Let’s take a look at the primary features that Arcade provides and how they integrate into our larger infrastructure.

  • Common build task packages – These are a basic layer of MSBuild tasks which can either be utilized independently or as part of the Arcade SDK. They are “pay for play” (hence the name ‘Arcade’). They provide a common set of functionality that is needed in most .NET Core repositories, such as signing and packaging.
  • Common repo targets and behaviors – These are provided as part of an MSBuild SDK called the “Arcade SDK”. By utilizing it, repositories opt-in to the default Arcade build behaviors, project and artifact layout, etc.
  • Common repository ‘language’ – A set of common script files that are synchronized between all the Arcade repositories using dependency flow (more on that later). These script files introduce a common ‘language’ for repositories that have adopted Arcade. Moving between these repositories becomes more seamless for developers. Moreover, because these scripts are synced between repositories, rolling out new changes to the original copies located in the Arcade repo can quickly introduce new features or behavior into repositories that have fully adopted the shared tooling.
  • Shared Azure DevOps job and step templates – While the scripts that define the common repository ‘language’ are primarily targeted towards interfacing with humans, Arcade also has a set of Azure DevOps job and step templates that allow for Arcade repositories to interface with the Azure DevOps CI systems. Like the common build task packages, the step templates form a base layer that can be utilized by almost every repository (e.g. to send build telemetry). The job templates form more complete units, allowing repositories to worry less about the details of their CI process.

Moving to Azure DevOps

As noted above, the larger team used a combination of CI systems through the 2.2 release:

  • AppVeyor and Travis for ASP.NET Core’s GitHub PRs
  • TeamCity for ASP.NET’s official builds
  • Jenkins for the rest of .NET Core’s GitHub PRs and rolling validation.
  • Classic (non-YAML) Azure DevOps workflows for all non-ASP.NET Core official builds.

A lot of differentiation was simply from necessity. Azure DevOps did not support public GitHub PR/CI validation, so ASP.NET Core turned to AppVeyor and Travis to fill the gap while .NET Core invested in Jenkins. Classic Azure DevOps did not have a lot of support for build orchestration, so the ASP.NET Core team turned to TeamCity while the .NET Core team built a tool called PipeBuild on top of Azure DevOps to help out. All of this divergence was very expensive, even in some non-obvious ways:

  • While Jenkins is flexible, maintaining a large (~6000-8000 jobs), stable installation is a serious undertaking.
  • Building our own orchestration on top of classic Azure DevOps required a lot of compromises. The checked-in pipeline job descriptions were not really human-readable (they were just exported JSON descriptions of manually created build definitions), secrets management was ugly, and they quickly became over-parameterized as we attempted to deal with the wide variance in build requirements.
  • When official build vs. nightly validation vs. PR validation processes are defined in different systems, sharing logic becomes difficult. Developers must take additional care when making process changes, and breaks are common. We defined Jenkins PR jobs in a special script file, TeamCity had lots of manually configured jobs, AppVeyor and Travis used their own YAML formats, and Azure DevOps had the obscure custom system we built on top of it. It was easy to make a change to build logic in a PR and break the official CI build. To mitigate this, we did work to keep as much logic as possible in scripting common to official CI and PR builds, but invariably differences creep in over time. Some variance, like in build environments, is basically impossible to entirely remove.
  • Practices for making changes to workflows varied wildly and were often difficult to understand. What a developer learned about Jenkins’s netci.groovy files for updating PR logic did not translate over to the PipeBuild json files for official CI builds. As a result, knowledge of the systems was typically isolated to a few team members, which is less than ideal in large organizations.

When Azure DevOps began to roll out YAML based build pipelines and support for public GitHub projects as .NET Core 3.0 began to get underway, we recognized we had a unique opportunity. With this new support, we could move all our existing workflows out of the separate systems and into modern Azure DevOps and also make some changes to how we deal with official CI vs. PR workflows. We started with the following rough outline of the effort:

  • Keep all our logic in code, in GitHub. Use the YAML pipelines everywhere.
  • Have a public and private project.
    • The public project will run all the public CI via GitHub repos and PRs as we always have
    • The private project will run official CI and be the home of any private changes we need to make, in repositories matching the public GitHub repositories
    • Only the private project will have access to restricted resources.
  • Share the same YAML between official CI and PR builds. Use template expressions to differentiate between the public and private project where behavior must diverge, or where resources available only in the private project are accessed. While this often makes the overall YAML definition a little messier, it means that:
    • The likelihood of a build break when making a process change is lower.
    • A developer only really needs to change one set of places to change official CI and PR process.
  • Build up Azure DevOps templates for common tasks to keep duplication of boilerplate YAML to a minimum, and make rollout of updates (e.g. telemetry) easy using dependency flow.

As of now, all of the primary .NET Core 3.0 repositories are on Azure DevOps for their public PRs and official CI. A good example pipeline is the official build/PR pipeline for dotnet/arcade itself.

Maestro and Dependency Flow

The final piece of the .NET Core 3.0 infrastructure puzzle is what we call dependency flow. This is not a concept unique to .NET Core. Unless they are entirely self-contained, most software projects contain some kind of versioned reference to other software. In .NET Core, these are commonly represented as NuGet packages. When we want new features or fixes that libraries have shipped, we pull in those new updates by updating the referenced version numbers in our projects. Of course, these packages may also have versioned references to other packages, those other packages may have more references, and so on and so forth. This creates a graph. Changes flow through the graph as each repository pulls new versions of their input dependencies.

A Complex Graph

The primary development life-cycle (what developers regularly work on) of most software projects typically involves a small number of inter-related repositories. Input dependencies are typically stable and updates are sparse. When they do need to change, it’s usually a manual operation. A developer evaluates the available versions of the input package, chooses an appropriate one, and commits the update. This is not the case in .NET Core. The need for components to be independent, ship on different cadences, and have efficient inner-loop development experiences has led to a fairly large number of repositories with a large amount of inter-dependency. The inter-dependencies also form a fairly deep graph:

The dotnet/core-sdk repository serves as the aggregation point for all sub-components. We ship a specific build of dotnet/core-sdk, which describes all other referenced components.
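
To make the shape of this graph concrete, here is a minimal sketch in Python. The repository names are taken from ones mentioned in this post, but the edges and overall shape are simplified and invented for illustration; this is not the real repository graph.

# Illustrative sketch only: a toy model of a .NET Core-style repository
# dependency graph. The edges are simplified for illustration.

TOY_GRAPH = {
    "dotnet/core-sdk":   ["dotnet/toolset", "aspnet/extensions", "dotnet/core-setup"],
    "dotnet/toolset":    ["dotnet/core-setup"],
    "aspnet/extensions": ["dotnet/core-setup"],
    "dotnet/core-setup": ["dotnet/coreclr", "dotnet/corefx"],
    "dotnet/coreclr":    [],
    "dotnet/corefx":     [],
}

def transitive_dependencies(root, graph):
    """Walk the graph depth-first from 'root', collecting every repository
    that contributes (directly or transitively) to it."""
    seen = set()
    stack = [root]
    while stack:
        repo = stack.pop()
        if repo in seen:
            continue
        seen.add(repo)
        stack.extend(graph.get(repo, []))
    return seen - {root}

if __name__ == "__main__":
    deps = transitive_dependencies("dotnet/core-sdk", TOY_GRAPH)
    print(f"dotnet/core-sdk transitively depends on {len(deps)} repositories:")
    for repo in sorted(deps):
        print("  " + repo)

Even in this toy version, a fix made in dotnet/corefx only reaches dotnet/core-sdk after every intermediate repository has rebuilt against it, which is the flow problem the rest of this section deals with.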

 

We also expect that new outputs will flow quickly through this graph so that the end product can be validated as often as possible. For instance, we expect the latest bits of ASP.NET Core or the .NET Core Runtime to express themselves in the SDK as often as possible. This essentially means updating dependencies in each repository on a regular, fast cadence. In a graph of sufficient size, like .NET Core has, this quickly becomes an impossible task to do manually. A software project of this size might go about solving this in a number of ways:

  • Auto-floating input versions – In this model, dotnet/core-sdk might reference the Microsoft.NETCore.App produced out of dotnet/core-setup by allowing NuGet to float to the latest prerelease version. While this works, it suffers from major drawbacks. Builds become non-deterministic. Checking out an older git SHA and building will not necessarily use the same inputs or produce the same outputs. Reproducing bugs becomes difficult. A bad commit in dotnet/core-setup can break any repository pulling in its outputs, outside of PR and CI checks. Orchestration of builds becomes a major undertaking, because separate machines in a build may restore packages at different times, yielding different inputs. All of these problems are ‘solvable’, but require huge investment and unnecessary complication of the infrastructure.
  • ‘Composed’ build – In this model, the entire graph is built all at once in isolation, in dependency order, using the latest git SHAs from each of the input repositories. The outputs from each stage of the build are fed into the next stage. A repository effectively has its input dependency version numbers overwritten by its input stages. At the end of a successful build, the outputs are published and all the repositories update their input dependencies to match what was just built. This is a bit of an improvement over auto-floating version numbers in that individual repository builds aren’t automatically broken by bad check-ins in other repos, but it still has major drawbacks. Breaking changes are almost impossible to flow efficiently between repositories, and reproducing failures is still problematic because the source in a repository often doesn’t match what was actually built (since input versions were overwritten outside of source control).
  • Automated dependency flow – In this model, external infrastructure is used to automatically update dependencies in a deterministic, validated fashion between repositories. Repositories explicitly declare their input dependencies and associated versions in source, and ‘subscribe’ to updates from other repositories. When new builds are produced, the system finds matching subscriptions, updates any of the declared input dependencies, and opens a PR with the changes. This method improves reproducibility, the ability to flow breaking changes, and allows a repository owner to have control over how updates are done. On the downside, it can be significantly slower than either of the other two methods. A change can only flow from the bottom of the stack to the top as fast as the total sum of the PR and Official CI times in each repository along the flow path.

.NET Core has tried all 3 methods. We floated versions early on in the 1.x cycle, had some level of automated dependency flow in 2.0 and went to a composed build for 2.1 and 2.2. With 3.0 we decided to invest heavily in automated dependency flow and abandon the other methods. We wanted to improve over our former 2.0 infrastructure in some significant ways:

  • Ease traceability of what is actually in the product – At any given repository, it’s generally possible to determine what versions of what components are being used as inputs, but almost always hard to find out where those components were built, what git SHAs they came from, what their input dependencies were, etc.
  • Reduce required human interaction – Most dependency updates are mundane. Auto-merge the update PRs as they pass validation to speed up flow.
  • Keep dependency flow information separate from repository state – Repositories should only contain information about the current state of their node in the dependency graph. They should not contain information regarding transformation, like when updates should be taken, what sources they pull from, etc.
  • Flow dependencies based on ‘intent’, not branch – Because .NET Core is made up of quite a few semi-autonomous teams with different branching philosophies, different component ship cadences, etc., we do not use branch as a proxy for intent. Teams should define what new dependencies they pull into their repositories based on the purpose of those inputs, not where they came from. Furthermore, the purpose of those inputs should be declared by the teams producing those inputs.
  • ‘Intent’ should be deferred until after the build – To improve flexibility, avoid assigning the intent of a build until after the build is done, allowing for multiple intentions to be declared. At the time of build, the outputs are just a bucket of bits built at some git SHA. Just like running a release pipeline on the outputs of an Azure DevOps build essentially assigns a purpose to the outputs, assigning an intent to a build in the dependency flow system begins the process of flowing dependencies based on intent.

With these goals in mind, we created a service called Maestro++ and a tool called ‘darc’ to handle our dependency flow. Maestro++ handles the data and automated movement of dependencies, while darc provides a human interface for Maestro++ as well as a window into the overall product dependency state. Dependency flow is based around 4 primary concepts: dependency information, builds, channels and subscriptions.

Builds, Channels, and Subscriptions

  • Dependency information – In each repository, there is a declaration of the input dependencies of the repository, along with source information about those input dependencies, in the eng/Version.Details.xml file. Reading this file, then transitively following the repository+sha combinations for each input dependency, yields the product dependency graph.
  • Builds – A build is just the Maestro++ view on an Azure DevOps build. A build identifies the repository+sha, overall version number and the full set of assets and their locations that were produced from the build (e.g. NuGet packages, zip files, installers, etc.).
  • Channels – A channel represents intent. It may be useful to think of a channel as a cross repository branch. Builds can be assigned to one or more channels to assign intent to the outputs. Channels can be associated with one or more release pipelines. Assignment of a build to a channel activates the release pipeline and causes publishing to happen. The asset locations of the build are updated based on release publishing activities.
  • Subscriptions – A subscription represents transform. It maps the outputs of a build placed on a specific channel onto another repository’s branch, with additional information about when those transforms should take place.

These concepts are designed so that repository owners do not need global knowledge of the stack or other teams’ processes in order to participate in dependency flow. They basically just need to know three things:

  • The intent (if any) of the builds that they do, so that channels may be assigned.
  • Their input dependencies and what repositories they are produced from.
  • What channels they wish to update those dependencies from.

As an example, let’s say I own the dotnet/core-setup repository. I know that my master branch produces bits for day to day .NET Core 3.0 development. I want to assign new builds to the pre-declared ‘.NET Core 3.0 Dev’ channel. I also know that I have several dotnet/coreclr and dotnet/corefx package inputs. I don’t need to know how they were produced, or from what branch. All I need to know is that I want the newest dotnet/coreclr inputs from the ‘.NET Core 3.0 Dev’ channel on a daily basis, and the newest dotnet/corefx inputs from the ‘.NET Core 3.0 Dev’ channel every time they appear.

First, I onboard by adding an eng/Version.Details file. I then use the ‘darc’ tool to ensure that every new build of my repository on the master branch is assigned by default to the ‘.NET Core 3.0 Dev’ channel. Next, I set up subscriptions to pull inputs from .NET Core 3.0 Dev for builds of dotnet/corefx, dotnet/coreclr, dotnet/standard, etc. These subscriptions have a cadence and auto-merge policy (e.g. weekly or every build).

As the trigger for each subscription is activated, Maestro++ updates files (eng/Version.Details.xml, eng/Versions.props, and a few others) in the core-setup repo based on the declared dependencies intersected with the newly produced outputs. It opens a PR, and once the configured checks are satisfied, will automatically merge the PR.

This in turn generates a new build of core-setup on the master branch. Upon completion, automatic assignment of the build to the ‘.NET Core 3.0 Dev’ channel is started. The ‘.NET Core 3.0 Dev’ channel has an associated release pipeline which pushes the build’s output artifacts (e.g. packages and symbol files) to a set of target locations. Since this channel is intended for day to day public dev builds, packages and symbols are pushed to various public locations. Upon release pipeline completion, channel assignment is finalized and any subscriptions that activate on this event are fired. As more components are added, we build up a full flow graph representing all of the automatic flow between repositories.
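
To tie the worked example above together, here is a minimal sketch in Python of how assigning a build to a channel can trigger the subscriptions that target that repository and channel. The class and field names, version numbers, and SHAs are hypothetical; this does not mirror Maestro++’s real schema or API.

# Illustrative sketch of the build/channel/subscription model described above.

from dataclasses import dataclass, field

@dataclass
class Build:
    repository: str          # e.g. "dotnet/core-setup"
    commit: str              # git SHA the build was produced from
    assets: dict             # package name -> version produced by this build
    channels: list = field(default_factory=list)

@dataclass
class Subscription:
    source_repository: str   # repository whose builds we want
    source_channel: str      # intent we want to pull from
    target_repository: str   # repository to update
    target_branch: str

def assign_to_channel(build, channel, subscriptions):
    """Assign a build to a channel and return the dependency-update
    'pull requests' (here just dicts) that the assignment triggers."""
    build.channels.append(channel)
    updates = []
    for sub in subscriptions:
        if sub.source_repository == build.repository and sub.source_channel == channel:
            updates.append({
                "target": f"{sub.target_repository}#{sub.target_branch}",
                "bump_versions_to": build.assets,
                "from_commit": build.commit,
            })
    return updates

if __name__ == "__main__":
    subs = [Subscription("dotnet/core-setup", ".NET Core 3.0 Dev",
                         "dotnet/core-sdk", "master")]
    build = Build("dotnet/core-setup", "abc1234",
                  {"Microsoft.NETCore.App": "3.0.0-preview6.12345"})
    for pr in assign_to_channel(build, ".NET Core 3.0 Dev", subs):
        print("Would open update PR:", pr)

In the real system, the “update PR” step is the part that edits eng/Version.Details.xml and eng/Versions.props in the target repository and auto-merges once the configured checks pass, as described above.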

Flow graph for the .NET Core 3 Dev channel, including other channels (e.g. Arcade’s ‘.NET Tools Latest’) that contribute to the .NET Core 3 Dev flow.

 

Coherency and Incoherency

The increased visibility into the state of .NET Core’s dependency graph highlighted an existing question: What happens when multiple versions of the same component are referenced at various nodes in the graph? Each node in .NET Core’s dependency graph may flow dependencies to more than one other node. For instance, the Microsoft.NETCore.App dependency, produced out of dotnet/core-setup, flows to dotnet/toolset, dotnet/core-sdk, aspnet/extensions and a number of other places. Updates of this dependency will be committed at different rates in each of those places, due to variations in pull request validation time, need for reaction to breaking changes, and desired subscription update frequencies. As those repositories then flow elsewhere and eventually coalesce under dotnet/core-sdk, there may be a number of different versions of Microsoft.NETCore.App that have been transitively referenced throughout the graph. This is called incoherency. When only a single version of each product dependency is referenced throughout the dependency graph, the graph is coherent. We always strive to ship a coherent product if possible.

What kinds of problems does incoherency cause? Incoherency represents a possible error state. As an example, let’s take a look at Microsoft.NETCore.App. This package represents a specific API surface area. While multiple versions of Microsoft.NETCore.App may be referenced in the repository dependency graph, the SDK ships with just one. This runtime must satisfy all of the demands of the transitively referenced components (e.g. WinForms and WPF) that may execute on that runtime. If the runtime does not satisfy those demands (e.g. a breaking API change), failures may occur. In an incoherent graph, because all repositories have not ingested the same version of Microsoft.NETCore.App, there is a possibility that a breaking change has been missed.

Does this mean that incoherency is always an error state? No. For example, let’s say that the incoherency of Microsoft.NETCore.App in the graph only represents a single change in coreclr, a single non-breaking JIT bug fix. There would technically be no need to ingest the new Microsoft.NETCore.App at each point in the graph. Simply shipping the same components against the new runtime will suffice.

If incoherency only matters occasionally, why do we strive to ship a coherent product? Because determining when incoherency does not matter is hard. It is easier to simply ship with coherency as the desired state than to attempt to understand what semantic effects the differences between incoherent components will have on the completed product. It can be done, but on a build-to-build basis it is time intensive and prone to error. Enforcing coherency as the default state is safer.
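
To make the coherency check concrete, here is a minimal sketch in Python, assuming a simplified model in which each repository pins one version per input package. The repository and package names echo the post; the version strings are invented, and the real tooling works from eng/Version.Details.xml files rather than a hard-coded table.

# Illustrative coherency check: the graph is coherent when every product
# dependency is referenced at exactly one version across all repositories.

from collections import defaultdict

# repository -> {package name: pinned version} (made-up data)
PINNED = {
    "dotnet/core-sdk":   {"Microsoft.NETCore.App": "3.0.0-preview6.100"},
    "dotnet/toolset":    {"Microsoft.NETCore.App": "3.0.0-preview6.100"},
    "aspnet/extensions": {"Microsoft.NETCore.App": "3.0.0-preview6.097"},
}

def find_incoherencies(pinned):
    versions_by_package = defaultdict(set)
    for repo, deps in pinned.items():
        for package, version in deps.items():
            versions_by_package[package].add((repo, version))
    return {
        package: refs
        for package, refs in versions_by_package.items()
        if len({version for _, version in refs}) > 1
    }

if __name__ == "__main__":
    incoherent = find_incoherencies(PINNED)
    if not incoherent:
        print("Graph is coherent.")
    for package, refs in incoherent.items():
        print(f"{package} is incoherent:")
        for repo, version in sorted(refs):
            print(f"  {repo} references {version}")

Operationally, striving to ship a coherent product amounts to driving a report like this to empty before release.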

Dependency Flow Goodies

All this automation and tracking has a ton of advantages that become apparent as the repository graph gets bigger. It opens up a lot of possibilities to solve real problems we have on a day to day basis. While we have just begun to explore this area, the system can begin to answer interesting questions and handle scenarios like:

  • What ‘real’ changes happened between git SHA A and SHA B of dotnet/core-sdk? – By building up a full dependency graph by walking the Version.Details.xml files, I can identify the non-dependency changes that happened in the graph.
  • How long will it take for a fix to appear in the product? – By combining the repository flow graph and per-repository telemetry, we can estimate how long it will take to move a fix from repo A to repo B in the graph. This is especially valuable late in a release, as it helps us make a more accurate cost/benefit estimation when looking at whether to take specific changes. For example: Do we have enough time to flow this fix and complete our scenario testing? (A rough sketch of this estimate appears after this list.)
  • What are the locations of all assets produced by a build of core-sdk and all of its input builds?
  • In servicing releases, we want to take specific fixes but hold off on others. Channels could be placed into modes where a specific fix is allowed to flow automatically through the graph, but others are blocked or require approval.
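
As referenced in the flow-time item above, a rough sketch of that estimate: treat each hop on the flow path as costing one PR validation plus one official build, and sum those costs along the path. The path and the hour figures below are invented purely for illustration; in practice they would come from per-repository telemetry.

# Illustrative estimate of how long a fix takes to flow from a low-level
# repository to dotnet/core-sdk. The per-repository times are made up.

FLOW_PATH = ["dotnet/corefx", "dotnet/core-setup", "dotnet/toolset", "dotnet/core-sdk"]

# hypothetical average hours per repository: (PR validation, official build)
TELEMETRY = {
    "dotnet/corefx":     (2.0, 3.0),
    "dotnet/core-setup": (1.5, 2.0),
    "dotnet/toolset":    (1.0, 1.5),
    "dotnet/core-sdk":   (2.5, 3.5),
}

def estimated_flow_hours(path, telemetry):
    """Sum PR-validation plus official-build time for every hop along the path."""
    return sum(sum(telemetry[repo]) for repo in path)

if __name__ == "__main__":
    hours = estimated_flow_hours(FLOW_PATH, TELEMETRY)
    print(f"Estimated time for a dotnet/corefx fix to reach dotnet/core-sdk: ~{hours:.0f} hours")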

What’s next?

As .NET Core 3.0 winds down, we’re looking for new areas to improve. While planning is still in the (very) early stages, we expect investments in some key areas:

  • Reduce the time to turn a fix into a shippable, coherent product – The number of hops in our dependency graph is significant. This allows repositories a lot of autonomy in their processes, but increases our end to end ‘build’ time as each hop requires a commit and official build. We’d like to significantly reduce that end-to-end time.
  • Improving our infrastructure telemetry – If we can better track where we fail, what our resource usage looks like, what our dependency state looks like, etc. we can better determine where our investments need to be to ship a better product. In .NET Core 3.0 we took some steps in this direction but we have a ways to go.

We’ve evolved our infrastructure quite a bit over the years. From Jenkins to Azure DevOps, from manual dependency flow to Maestro++, and from many tooling implementations to one, the changes we’ve made to ship .NET Core 3.0 are a huge step forward. We’ve set ourselves up to develop and ship a more exciting product more reliably than ever before.

The post The Evolving Infrastructure of .NET Core appeared first on .NET Blog.


Windows 10 SDK Preview Build 18917 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18917 or greater). The Preview SDK Build 18917 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install ONLY on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_18917_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP). (A small sketch of this decision order appears after this list.)
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).
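
As noted in the first item above, the input-encoding decision follows a fixed precedence. Here is a small sketch of that order in Python; this is not mc.exe’s actual code, and the cp1252 fallback is just a stand-in for whatever the current ANSI code page (CP_ACP) happens to be.

# Illustrative sketch of the input-encoding precedence described for mc.exe:
# UTF-8 BOM, then UTF-16LE BOM, then the -u switch, then the ANSI code page.

UTF8_BOM = b"\xef\xbb\xbf"
UTF16LE_BOM = b"\xff\xfe"

def pick_mc_input_encoding(first_bytes, u_switch_given, ansi_codepage="cp1252"):
    if first_bytes.startswith(UTF8_BOM):
        return "utf-8"
    if first_bytes.startswith(UTF16LE_BOM):
        return "utf-16-le"
    if u_switch_given:
        return "utf-16-le"
    return ansi_codepage  # CP_ACP: whatever the current code page is

if __name__ == "__main__":
    print(pick_mc_input_encoding(b"\xef\xbb\xbfMessageIdTypedef=DWORD", False))  # utf-8
    print(pick_mc_input_encoding(b"MessageIdTypedef=DWORD", True))               # utf-16-le
    print(pick_mc_input_encoding(b"MessageIdTypedef=DWORD", False))              # cp1252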

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Breaking Changes

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:


namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Management.Deployment {
  public enum DeploymentOptions : uint {
    AttachPackage = (uint)4194304,
  }
  public sealed class PackageManager {
    IIterable<Package> FindProvisionedPackages();
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, DeploymentOptions deploymentOptions);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Storage {
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.UI.Composition.Particles {
  public sealed class ParticleAttractor : CompositionObject
  public sealed class ParticleAttractorCollection : CompositionObject, IIterable<ParticleAttractor>, IVector<ParticleAttractor>
  public class ParticleBaseBehavior : CompositionObject
  public sealed class ParticleBehaviors : CompositionObject
  public sealed class ParticleColorBehavior : ParticleBaseBehavior
  public struct ParticleColorBinding
  public sealed class ParticleColorBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleColorBinding>>, IMap<float, ParticleColorBinding>
  public enum ParticleEmitFrom
  public sealed class ParticleEmitterVisual : ContainerVisual
  public sealed class ParticleGenerator : CompositionObject
  public enum ParticleInputSource
  public enum ParticleReferenceFrame
  public sealed class ParticleScalarBehavior : ParticleBaseBehavior
  public struct ParticleScalarBinding
  public sealed class ParticleScalarBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleScalarBinding>>, IMap<float, ParticleScalarBinding>
  public enum ParticleSortMode
  public sealed class ParticleVector2Behavior : ParticleBaseBehavior
  public struct ParticleVector2Binding
  public sealed class ParticleVector2BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector2Binding>>, IMap<float, ParticleVector2Binding>
  public sealed class ParticleVector3Behavior : ParticleBaseBehavior
  public struct ParticleVector3Binding
  public sealed class ParticleVector3BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector3Binding>>, IMap<float, ParticleVector3Binding>
  public sealed class ParticleVector4Behavior : ParticleBaseBehavior
  public struct ParticleVector4Binding
  public sealed class ParticleVector4BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector4Binding>>, IMap<float, ParticleVector4Binding>
}
namespace Windows.UI.ViewManagement {
  public enum ApplicationViewMode {
    Spanning = 2,
  }
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow {
    void SetPreferredTopMost();
    void SetRelativeZOrderBeneath(AppWindow appWindow);
  }
  public sealed class AppWindowChangedEventArgs {
    bool DidOffsetChange { get; }
  }
  public enum AppWindowPresentationKind {
    Spanning = 4,
  }
  public sealed class SpanningPresentationConfiguration : AppWindowPresentationConfiguration
}

The post Windows 10 SDK Preview Build 18917 available now! appeared first on Windows Developer Blog.

.NET Framework June 2019 Preview of Quality Rollup


Today, we are releasing the June 2019 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

WPF1

  • Addresses an issue in which applications that target .NET Framework 4.7 and later, or that set Switch.System.Windows.Controls.Grid.StarDefinitionsCanExceedAvailableSpace to “false,” experienced hangs because of the manner in which the algorithm allocated space to grid columns or rows whose width or height include an asterisk (*). [806901]
  • Improves the memory allocation and cleanup scheduling behavior of the weak-event pattern. To opt-in to these improvements, set AppContext switches to “true”: Switch.MS.Internal.EnableWeakEventMemoryImprovements and Switch.MS.Internal.EnableCleanupSchedulingImprovements. [763101]
  • Addresses an InvalidOperationException that can arise during weak-event cleanup, if called re-entrantly while a weak-event delivery is in progress. [812614, 822169]

 

ASP.NET

  • Addresses InvalidOperationException errors in System.Web.Hosting.RecycleLimitMonitor+RecycleLimitMonitorSingleton.AlertProxyMonitors. Worker processes for ASP.NET 4.7 and later are vulnerable to unexpected crashes because of this exception if the worker process consumes close to its configured Private Bytes recycling limit and application domains are being created or recycled (perhaps because of configuration file changes, or the presence of more than one application per worker process). [776516, 856170]

 

Workflow

  • Addresses an issue in which it was possible for a Workflow Service to get into a looping situation if an unhandled exception occurs during Cancel processing. To break this cycle, the Web.config file for the workflow service can specify the following AppSetting, which causes the workflow service instance to terminate, instead of abort, if an unhandled exception occurs during Cancel processing. [721251, 866801]

<appSettings>
  <add key="microsoft:WorkflowServices:TerminateOnUnhandledExceptionDuringCancel" value="true"/>
</appSettings>

 

1 Windows Presentation Foundation (WPF)

 

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

The following table is for Windows 10 and Windows Server 2016+ versions.

Windows 10 1809 (October 2018 Update) and Windows Server 2019 – Cumulative Update 4503864
  • .NET Framework 3.5, 4.7.2: Catalog 4502559
  • .NET Framework 3.5, 4.8: Catalog 4502564

Windows 10 1803 (April 2018 Update) – Cumulative Update 4502563
  • .NET Framework 3.5, 4.7.2: Catalog 4503288
  • .NET Framework 4.8: Catalog 4502563

Windows 10 1709 (Fall Creators Update) – Cumulative Update 4502562
  • .NET Framework 3.5, 4.7.1, 4.7.2: Catalog 4503281
  • .NET Framework 4.8: Catalog 4502562

Windows 10 1703 (Creators Update) – Cumulative Update 4502561
  • .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: Catalog 4503289
  • .NET Framework 4.8: Catalog 4502561

Windows 10 1607 (Anniversary Update) and Windows Server 2016 – Cumulative Update 4502560
  • .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4503294
  • .NET Framework 4.8: Catalog 4502560

 

The following table is for earlier Windows and Windows Server versions.

Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2 – Preview of Quality Rollup: Catalog 4503867
  • .NET Framework 3.5: Catalog 4495608
  • .NET Framework 4.5.2: Catalog 4495592
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4502557
  • .NET Framework 4.8: Catalog 4502567

Windows Server 2012 – Preview of Quality Rollup: Catalog 4503866
  • .NET Framework 3.5: Catalog 4495602
  • .NET Framework 4.5.2: Catalog 4495594
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4502556
  • .NET Framework 4.8: Catalog 4502566

Windows 7 SP1 and Windows Server 2008 R2 SP1 – Preview of Quality Rollup: Catalog 4503865
  • .NET Framework 3.5.1: Catalog 4495606
  • .NET Framework 4.5.2: Catalog 4495596
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4502558
  • .NET Framework 4.8: Catalog 4502568

Windows Server 2008 – Preview of Quality Rollup: Catalog 4503868
  • .NET Framework 2.0, 3.0: Catalog 4495604
  • .NET Framework 4.5.2: Catalog 4495596
  • .NET Framework 4.6: Catalog 4502558

 

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework June 2019 Preview of Quality Rollup appeared first on .NET Blog.

First Microsoft cloud regions in Middle East now available


This blog post was co-authored by Paul Lorimer, Distinguished Engineer, Office 365.


Azure and Office 365 generally available today, Dynamics 365 and Power Platform available by end of 2019

Today, Microsoft Azure and Microsoft Office 365 are taking a major step together to help support the digital transformation of our customers. Both Azure and Office 365 are now generally available from our first cloud datacenter regions in the Middle East, located in the United Arab Emirates (UAE). Dynamics 365 and Power Platform, offering the next generation of intelligent business applications and tools, are anticipated to be available from the cloud regions in UAE by the end of 2019.

The opening of the new cloud regions in Abu Dhabi and Dubai marks the first time Microsoft will deliver cloud services directly from datacenter locations in UAE and expands upon Microsoft’s existing investments in the Gulf and the wider Middle East region. By delivering the complete Microsoft cloud – Azure, Office 365, and Dynamics 365 – from datacenters in a given geography, we offer scalable, highly available, and resilient cloud services for organizations while helping them meet their data residency, security, and compliance needs.

Our new cloud regions adhere to Microsoft’s trusted cloud principles and join one of the largest and most secure cloud infrastructures in the world, already serving more than a billion customers and 20 million businesses. Microsoft has deep expertise in data protection, security, and privacy, including the broadest set of compliance certifications in the industry, and we are the first cloud service provider in UAE to achieve the Dubai Electronic Security Center certification for its cloud services. Our continued focus on our trusted cloud principles and leadership in compliance means customers in the region can accelerate their digital transformation with confidence and with the foundation to achieve compliance for their own applications.

Local datacenter infrastructure stimulates economic development for both customers and partners alike, enabling companies, governments, and regulated industries to realize the benefits of the cloud for innovation, as well as bolstering the technology ecosystem that supports the innovation. We anticipate the cloud services delivered from UAE to have a positive impact on job creation, entrepreneurship, and economic growth across the region. The International Data Corporation (IDC) predicts that cloud services could bring more than half a million jobs to the Middle East, including the potential of more than 55,000 new jobs in UAE, between 2017 and 2022.

Microsoft also continues to help bridge the skills gap amongst the IT community and to enhance technical acumen for cloud services. Cloud Society, a Middle East and Africa focused program building upon Microsoft Learn, has trained over 150,000 IT professionals in MEA. The community will further benefit from the increased availability and performance of cloud services delivered from UAE to help realize enterprise benefits of cloud, upskill in migration, and more effectively manage their cloud infrastructure.

You can learn more by following these links: Microsoft Middle East and Africa News Center, Microsoft Azure United Arab Emirates, Microsoft Office 365, Microsoft Dynamics 365, and Microsoft Power Platform.

Link unfurling (preview) in Azure Pipelines app for Slack


Lots of conversations in Slack start with links. With the new release of the Azure Pipelines app for Slack, pasting a URL to a build or release in Azure Pipelines now shows a rich preview of the pipeline. This adds context to the link and saves users the clicks needed to navigate to Azure DevOps and see if they need to act on it.

For example, pasting a link to a build shows the details of the build and the current status.

Build Link unfurl

The release card also shows details of the release and the status of its first three stages.

URL unfurling for releases

Enable link unfurling

There are a couple of things needed to get URL unfurling to work:

  • If you are already using the Azure Pipelines app, please click here and accept the Slack permissions needed for the feature to work. If you are installing the app for the first time, you will automatically go through this process as part of the app installation.
  • The user pasting the URL needs to be logged in to the app. The preview only works if the poster has access to the build or release pipeline.

Do let us know if you have any feedback on the Azure DevOps feedback portal or by using the /azpipelines feedback command in the app.

The post Link unfurling (preview) in Azure Pipelines app for Slack appeared first on Azure DevOps Blog.

Virtual machine scale set insights from Azure Monitor


In October 2018 we announced the public preview of Azure Monitor for Virtual Machines (VMs). At that time, we included support for monitoring your virtual machine scale sets from the at scale view under Azure Monitor.

Today we are announcing the public preview of monitoring your Windows and Linux VM scale sets from within the scale set resource blade. This update includes several enhancements:

  • In-blade monitoring for your scale set with “Top N”, aggregate, and list views across the entire scale set.
  • Drill down experience to identify issues on a particular scale set instance.
  • Updated mapping UI to display the entire dependency diagram across your scale set while supporting drill down maps for a single instance.
  • UI based enablement of monitoring from the scale set resource blade.
  • Updated examples for enabling monitoring using Azure Resource Manager templates.
  • Use of policy to enable monitoring for your scale set.

Performance

The performance views are powered using log analytics queries, offering “Top N”, aggregate, and list views to quickly find outliers or issues in your scale set based on guest level metrics for CPU, available memory, bytes sent and received, and logical disk space used. 

These views will help you quickly determine if a particular instance is having an issue, and provide the means to troubleshoot it, down to the process that has a failed connection to a backend service or a particular logical disk that is running out of space.

An image of the Insights (preview) webpage.

Maps

Our dependency maps and network connection data sets are powered by the service map solution and its Azure Virtual Machine extension. Maps in this context deliver a view that is specific to your scale set, automatically discovering the processes on the instances that are accepting inbound connections and making outbound connections to backend servers. This allows you to identify surprise dependencies on third party services, monitor failed connections, see live connection counts, monitor bytes sent and received per process, and identify service level latency.

In addition to the map view, you can analyze the network connection data set in our connections overview workbook or directly in log analytics.

An image of the Insights (preview) maps page.

Workbooks

We have brought our workbooks from Azure Monitor for Virtual Machines to the scale set view. These workbooks query the monitoring data we collect and allow you to modify them to create custom reports that you can share with colleagues in the portal.

An image of the Insights (preview) workbooks page.

Getting started

If you’re running VM scale sets you can use the performance and map capabilities from the “Insights (preview)” menu on the scale set resource blade to find resource constraints and visualize dependencies.

To get started, go to the resource blade for your VM scale set and click on “Insights (preview)” in the monitoring section. When you click “Try now” you’ll be prompted to choose a log analytics workspace, or we can generate one for you. You can view your resources at scale in Azure Monitor under “Virtual Machines (preview)” and onboard entire resource groups and subscriptions using Azure Policy or PowerShell.


AzureVM update: flexible and powerful deployment and management of VMs in Azure


by Hong Ooi, senior data scientist, Microsoft Azure

I'm happy to announce version 2.0 of AzureVM, a package for deploying and managing virtual machines in Azure. This is a complete rewrite of the package, with the objective of making it a truly generic and flexible tool for working with VMs and VM scale sets (clusters).

AzureVM 1.0 was a rather limited package in many respects: it came with only a small selection of DSVM templates, and didn't give you many options for changing them. While you could deploy any arbitrary template, this functionality was actually provided by the underlying AzureRMR package, rather than something that AzureVM added.

Here are the main changes in AzureVM 2.0:

  • Separate out deployment of VMs and VM clusters; the latter are implemented as scalesets, rather than simplistic arrays of individual VMs. The methods to work with scalesets are named get_vm_scaleset, create_vm_scaleset and delete_vm_scaleset; get/create/delete_vm_cluster are now defunct.
  • New UI for VM/scaleset creation, with many more ways to fine-tune the deployment options, including specifying the base VM image; networking details like security rules, load balancers and autoscaling; datadisks to attach; use of low-priority VMs for scalesets; etc.
  • Several predefined configurations supplied to allow quick deployment of commonly used images (Ubuntu, Windows Server, RHEL, Debian, Centos, DSVM).
  • Allow referring to existing resources in a deployment (e.g. placing VMs into an existing vnet), by supplying AzureRMR::az_resource objects as arguments.
  • Clear distinction between a VM deployment template and a resource. get_vm and get_vm_scaleset will always attempt to retrieve the template; to get the resource, use get_vm_resource and get_vm_scaleset_resource.
  • New VM resource methods: get_public_ip_address, get_private_ip_address.
  • New cluster/scaleset resource methods: get_public_ip_address (technically the address for the load balancer, if present), get_vm_public_ip_addresses, get_vm_private_ip_addresses, list_instances, get_instance.
  • Use a pool of background processes to talk to scalesets in parallel when carrying out instance operations. The pool size can be controlled with the global options azure_vm_minpoolsize and azure_vm_maxpoolsize.

See the README and/or the vignette for more information.

Here are some example deployments. A basic Ubuntu VM:

library(AzureVM)

sub <- AzureRMR::get_azure_login()$
    get_subscription("sub_id")

# default is an Ubuntu 18.04 VM, size Standard_DS3_v2, login via SSH key
# call sub$list_vm_sizes() to get the sizes available in your region
sub$create_vm("myubuntuvm", user_config("myname", "~/.ssh/id_rsa.pub"),
                    location="australiaeast")

A more complex deployment (Windows 10 Pro):

## this assumes you have a valid Win10 desktop license
user <- user_config("myname", password="Use-strong-passwords!")
image <- image_config(
     publisher="MicrosoftWindowsDesktop",
     offer="Windows-10",
     sku="19h1-pro"
)
datadisks <- list(
    datadisk_config(250, type="Premium_LRS"),
    datadisk_config(1000, type="Standard_LRS")
)
nsg <- nsg_config(
    list(nsg_rule_allow_rdp)
)
sub$create_vm("mywin10vm", user,
    config=vm_config(
        image=image,
        keylogin=FALSE,
        datadisks=datadisks,
        nsg=nsg,
        properties=list(licenseType="Windows_Client")
    ),
    location="australiaeast"
)
 

Creating a scaleset:

sub$create_vm_scaleset("myubuntuss", user_config("myname", "~/.ssh/id_rsa.pub"), instances=5,
                       location="australiaeast")

 

Sharing a subnet between a VM and a scaleset:

# first, create the resgroup
rg <- sub$create_resource_group("rgname", "australiaeast")

# create the master
rg$create_vm("mastervm", user_config("myname", "~/.ssh/id_rsa.pub"))

# get the vnet resource
vnet <- rg$get_resource(type="Microsoft.Network/virtualNetworks", name="mastervm-vnet")

# create the scaleset
# since the NSG is associated with the vnet, we don't need to create a new NSG either
rg$create_vm_scaleset("slavess", user_config("myname", "~/.ssh/id_rsa.pub"),
                      instances=5, vnet=vnet, nsg=NULL)

AzureVM 2.0 is available on CRAN now. If you have any questions, or if you run into problems, please feel free to email me or file an issue at the Github repo.


Introducing Microsoft Edge preview builds for Windows 7, Windows 8, and Windows 8.1


Today we are excited to make preview builds from the Microsoft Edge Canary channel available on Windows 7, Windows 8, and Windows 8.1. This rounds out the initial set of platforms that we began to roll out back in April, so developers and users alike can try out the next version of Microsoft Edge on every major desktop platform.

Visit the Microsoft Edge Insider site from your Windows 7, 8, or 8.1 device to download and install the preview today! The Microsoft Edge Dev channel will be coming to previous versions of Windows soon.

Screen capture showing Microsoft Edge Canary running on Windows 7

You will find the experience and feature set on previous versions of Windows to be largely the same as on Windows 10, including forthcoming support for Internet Explorer mode for our enterprise customers.

Delivering the next version of Microsoft Edge to all supported versions of Windows is part of our goal to improve the web browsing experience for our customers on every device, and to empower developers to build great experiences with less fragmentation. Microsoft Edge will have the same always up-to-date platform and the same developer tools on all supported versions of Windows and macOS. This will reduce developer pain on the web, while ensuring all Windows customers have the latest browsing options.

Getting your feedback is an important step in helping us make a better browser – we consider it essential to create the best possible browsing experience. If you run into any issues or have feedback, please use the “Send Feedback” tool in Microsoft Edge. Simply click the smiley face next to the Menu button and let us know what you like or if there’s something we can improve.

The first Canary builds do have a few known issues, including the lack of dark mode support and no support for AAD sign-in, which we are working to resolve soon. If you need help or support, just press F1 from within Microsoft Edge Canary or Dev to visit our support website.

We hope you’ll try the preview out today, and share your feedback in the Microsoft Edge Insider community. We look forward to hearing what you think!

The post Introducing Microsoft Edge preview builds for Windows 7, Windows 8, and Windows 8.1 appeared first on Microsoft Edge Blog.

Introducing next generation reading with Immersive Reader, a new Azure Cognitive Service


This blog post was authored by Tina Coll, Senior Product Marketing Manager, Azure Marketing.

Today, we’re unveiling the preview of Immersive Reader, a new Azure Cognitive Service in the Language category. Developers can now use this service to embed inclusive capabilities into their apps for enhancing text reading and comprehension for users regardless of age or ability. No machine learning expertise is required. Based on extensive research on inclusivity and accessibility, Immersive Reader’s features are designed to read the text aloud, translate, focus user attention, and much more. Immersive Reader helps users unlock knowledge from text and achieve gains in the classroom and office.

Over 15 million users rely on Microsoft’s immersive reading technologies across 18 apps and platforms including Microsoft Learning Tools, Word, Outlook, and Teams. Now, developers can deliver this proven literacy-enhancing experience to their users too.

People like Andrzej, a child with dyslexia, have learned to read with the Immersive Reader experience embedded into apps like Microsoft Learning Tools. His mother, Mitra, shares their story.

Literacy is key to unlocking knowledge and realizing one’s potential. Educators see this reality in the classroom every day, yet hurdles to reading are commonplace for people with dyslexia, ADHD, or visual impairment, as well as emerging readers, non-native speakers, and others. In the spirit of empowering every person to achieve more, the features of Immersive Reader help readers overcome these challenges.

Immersive Reader Cognitive Services GIF

Azure is the only major cloud provider that offers this type of experience as an easy-to-use AI service. Skooler, an ISV on a mission “to do education technology better,” integrated Immersive Reader. As Tor Henriksen, Skooler’s CEO and CTO, remarks, “In 27 years of software development, this was the easiest integration we’ve ever done.” Multiple businesses have already started embedding Immersive Reader into their apps.

With millions of users like Andrzej having discovered the power of the written word with Immersive Reader, we look forward to seeing what people can achieve with what you build.

To start embedding Immersive Reader into your apps, visit the Immersive Reader product page. The service is available for free while in preview.

Azure HC-series Virtual Machines crosses 20,000 cores for HPC workloads


New HPC-targeted cloud virtual machines

Azure HC-series Virtual Machines are now generally available in the West US 2 and East US regions. HC-series virtual machines (VMs) are optimized for the most at-scale, computationally intensive HPC applications. For this class of workload, HC-series VMs deliver the highest performance, scalability, and price-performance of any VMs yet launched on Azure or elsewhere in the public cloud.

With the Intel® Xeon® Scalable processors, codenamed Skylake, the HC-series delivers up to 3.5 teraFLOPS (double precision) with AVX-512 instructions, 190 GB/s of memory bandwidth, rich support for Intel® Parallel Studio XE HPC software, and SR-IOV-based 100 Gb/s InfiniBand. For a single VM scale set, a customer can utilize up to 13,200 physical CPU cores and more than 100 TB of memory for a single distributed memory workload.

HC extends Azure’s commitment to bringing supercomputer-class scale and performance for tightly coupled workloads to the public cloud, at price points every customer can afford. Today we can happily say that Azure has once again achieved a new milestone in cloud HPC scalability.

Cutting edge HPC technology

HC-series VMs feature Intel® Xeon® Platinum 8168 processors that offer the fastest AVX, AVX2, and AVX-512 clock frequencies in the first-generation Intel® Xeon® Scalable family. This enables customers to realize a greater performance uplift when running AVX-optimized applications.

HC-series VMs expose 44 non-hyperthreaded CPU cores and 352 GB of RAM, with a base clock of 2.7 GHz, an all-cores Turbo speed of 3.4 GHz, and a single-core Turbo speed of 3.7 GHz. HC VMs also feature a 700 GB local NVMe SSD, and support up to four Managed Disks including the new Azure P60/P70/P80 Premium Disks.

A flagship feature of HC-series VMs is 100 Gb/s InfiniBand from Mellanox. HC-series VMs expose the Mellanox ConnectX-5 dedicated back-end NIC via SR-IOV, meaning customers can use the same OFED driver stack that they’re accustomed to in a bare metal context. HC-series VMs deliver MPI latencies as low as 1.7 microseconds, with consistency, bandwidth, and message rates in line with bare-metal InfiniBand deployments. For context, this is 8x to 16x lower network latency than found elsewhere on the public cloud.

Molecular dynamics beyond 20,000 cores

The Azure HPC team benchmarked many widely used HPC applications to reflect the diverse needs of our customers. One common class of applications is those that simulate the physical and chemical properties of molecules, otherwise known as molecular dynamics. To see how far HC-series VMs could scale, we benchmarked them using CP2K. We chose CP2K for several reasons. For one, it’s widely used in both academia and industry; in fact, CP2K is one of 13 applications used by PRACE as part of the Unified European Applications Benchmark Suite to drive acceptance testing of supercomputers deployed in Europe. For another, CP2K benefits from AVX-512, so it is a good demonstration of what is possible when the latest hardware and software capabilities come together. Anyone can install and run CP2K as we tested it by following the procedure in our documentation.

Our results from this scaling exercise are as follows:

H2O-DFT-LS test case results

 

Nodes   Ranks/Node   Threads/Rank   Cases/day   Time to Solution (s)
8       8            5              101         852.715
16      4            11             210         410.224
32      8            5              390         221.202
64      8            5              714         121.192
108     4            11             1028        84.723
128     8            5              1289        67.876
192     12           3              1515        57.827
256     4            11             3756        23.789
288     2            22             3927        22.009
392     2            22             4114        21.818

 

For the H2O-DFT-LS benchmark, which is a single-point energy calculation using linear-scaling DFT on 2048 water molecules, HC-series VMs successfully scaled to 392 VMs and 17,248 cores. Most impressively, at the largest level of scale and compared to our baseline of 8 VMs, HC VMs provided a 40.7x improvement in cases-per-day throughput from a 49x increase in VM resources. Here, 288 VMs offer the optimal price-performance balance for large-scale runs.

LiHFX test case results

 

Nodes   Ranks/Node   Threads/Rank   Cases/day   Time to Solution (s)
24      6            7              55          1556.201
36      4            11             86          1002.111
44      11           4              219         394.847
64      8            5              294         293.091
108     4            11             482         179.469
112     7            6              482         179.344
128     8            5              530         163.095
176     11           4              685         126.899
256     4            11             960         90.14
324     4            11             1016        85.871
512     2            22             1440        60.176

 

For the LiHFX benchmark, which is a single-point energy calculation simulating a 216-atom lithium hydride crystal with 432 electrons, HC-series VMs successfully scaled to 512 VMs and 22,528 cores. Most impressively, at the largest level of scale and compared to our baseline of 24 VMs, HC VMs provided a 26.2x improvement in cases-per-day throughput from a 21.3x increase in VM resources.
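
As a quick back-of-the-envelope check (ours, not the original benchmark report's), the scaling efficiency implied by these figures follows directly from the baseline and largest runs in the two tables above:

using System;

class ScalingEfficiencyCheck
{
    static void Main()
    {
        // H2O-DFT-LS: baseline of 8 VMs at 101 cases/day; largest run of 392 VMs at 4114 cases/day.
        double h2oThroughputGain = 4114.0 / 101.0;  // ~40.7x
        double h2oResourceGain = 392.0 / 8.0;       // 49x
        Console.WriteLine($"H2O-DFT-LS scaling efficiency: {h2oThroughputGain / h2oResourceGain:P0}"); // ~83%

        // LiHFX: baseline of 24 VMs at 55 cases/day; largest run of 512 VMs at 1440 cases/day.
        double lihfxThroughputGain = 1440.0 / 55.0; // ~26.2x
        double lihfxResourceGain = 512.0 / 24.0;    // ~21.3x
        Console.WriteLine($"LiHFX scaling efficiency: {lihfxThroughputGain / lihfxResourceGain:P0}"); // ~123%
    }
}

The LiHFX run scales superlinearly relative to the 24-VM baseline, which can happen when spreading the problem across more nodes lets per-node working sets fit in cache.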

Delighting HPC customers on Azure

The unique capabilities and cost-performance of HC-series VMs are a big win for scientists and engineers who depend on high-performance computing to drive their research and productivity to new heights. Organizations spanning aerospace, automotive, defense, financial services, heavy equipment, manufacturing, oil and gas, the public sector, and academic and government research can now use HC-series VMs to increase HPC application performance and deliver faster time-to-insight.

“The Azure HC-series, powered by Intel architecture, gives our customers the high-demand workloads and application performance needed to deliver quicker insights at scale. Our collaboration with Microsoft brings HPC in the Cloud leadership to the forefront, showing performance efficiency and capacity needed for data-intense compute jobs across industries.”
 
- Trish Damkroger, VP & GM of the Extreme Computing Organization in Intel’s Data Center Group
 

Available now

Azure HC-series Virtual Machines are currently available in West US 2 and East US, with additional regions rolling out soon.
•    Find out more about high performance computing (HPC) in Azure.
•    Learn about Azure Virtual Machines.

Gartner names Microsoft a leader in 2019 Gartner Magic Quadrant for Enterprise iPaaS


Microsoft accelerates application development with Azure Integration Services

Personal computers revolutionized the way work was done. New software unlocked unprecedented levels of productivity, and for a time, business flourished. As the personal computer exploded in popularity, more and more software was created. For the individual, this was a golden age. For the enterprise, this was also a golden age ... with an asterisk.

As with adding more people to an organization, so it was with software: making software work cooperatively with other, unrelated software turned out to be a very tricky problem to solve. The more software that was added, the more overhead was introduced. As an unfortunate consequence, the cost of doing business increased, meaningful results decreased, and organizational productivity plummeted.

Large businesses and enterprises were locked in this pattern until a new category of software was created: integration software. For many years, on-premises integration tools such as Microsoft BizTalk Server helped mitigate the issues created by the rapid proliferation and adoption of new software.

And then one day, everything changed. The cloud was born, and with it, the need for new ways to connect everything together.

The adoption of cloud-native integration platforms to support business workflows

As before, a new category of software has come into existence to help solve the challenges organizations are struggling with. Enterprise Integration Platform as a Service (iPaaS) tools are key to a successful integration strategy and, in turn, a successful application development strategy.

Microsoft is once again named a leader in the 2019 Gartner Magic Quadrant for Enterprise Integration Platform as a Service (iPaaS).

Image of the Magic Quadrant for Enterprise Integration Platform as a Service.

Microsoft is powering enterprises across industry verticals in adopting comprehensive app innovation and modernization strategies, with integration as the backbone of these efforts. In fact, most modern applications make use of integration capabilities without their designers being conscious that they are doing so. Application development and application integration are becoming more and more intertwined, making it almost impossible to figure out where one starts and the other ends.

We are continuously investing in our integration offerings, including how APIs play a role in the modern enterprise, how business units increasingly need more and more flexible rules and logic to accommodate changing market demands, and more.

Integration is the surface upon which strong application infrastructure stands

Microsoft goes way beyond just integration, and instead focuses on helping you make better applications. Companies like Finastra, Evoqua, and Vipps are using a wide variety of Azure services, such as Azure Kubernetes Service, Azure API Management, Azure Logic Apps, Azure Functions, and more to create applications faster, easier, and better connected with the rest of their application ecosystem.

“Our platform intersects a great deal of data and technology,” says Félix Grévy, Global Head of Product Management at FusionFabric.cloud, Finastra, “yet our complete integration with Azure streamlines our infrastructure, simplifies our processes and makes our lives infinitely easier.”

Register for Manage Your Microservices, a webinar about how application integration enables application innovation and development. Learn how to use Azure API Management, Azure Functions, Azure Kubernetes Service, and more, to create a comprehensive microservice infrastructure. 

Forwarded Headers Middleware Updates in .NET Core 3.0 preview 6


With the ASP.NET Core 2.1 release, we included UseHsts and UseHttpsRedirection by default. These methods put a site into an infinite redirect loop if deployed to an Azure Linux App Service, Azure Linux virtual machine (VM), or behind any other reverse proxy besides IIS. TLS is terminated by the reverse proxy, and Kestrel isn’t made aware of the correct request scheme.

OAuth and OIDC also fail in this configuration because they generate incorrect redirects. Calls to UseIISIntegration add and configure forwarded headers middleware when running behind IIS, but there’s no matching automatic configuration for Linux (Apache or Nginx integration). The fix for this issue is discussed in more detail in the doc article Forward the scheme for Linux and non-IIS reverse proxies.

Configuration-only Wire-up in Preview 6

With the updates in .NET Core 3.0 preview 6, you no longer need to call the middleware explicitly: the host has been pre-wired to enable the Forwarded Headers Middleware whenever the ASPNETCORE_FORWARDEDHEADERS_ENABLED environment variable is set to true. Turning on the middleware is as simple as adding that setting, with a value of true, in the Azure Portal’s configuration blade for any App Service running on Linux or in a container.

Enabling the Forwarded Headers Middleware via config

Once this setting is set to true, the middleware starts working, and features that depend on Request.IsHttps evaluating to true begin to function as expected.
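
To make that concrete, here is a minimal sketch (not taken from this post) of the kind of code that depends on the forwarded scheme; behind a TLS-terminating proxy it only reports https and IsHttps = true once the Forwarded Headers Middleware has applied X-Forwarded-Proto:

// using Microsoft.AspNetCore.Builder;
// using Microsoft.AspNetCore.Http;

public void Configure(IApplicationBuilder app)
{
    app.Run(async context =>
    {
        // Without forwarded headers, Kestrel sees the proxy's plain-HTTP connection,
        // so Scheme is "http" and IsHttps is false even for HTTPS client requests.
        await context.Response.WriteAsync(
            $"Scheme: {context.Request.Scheme}, IsHttps: {context.Request.IsHttps}");
    });
}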

Resolving the issue with ASP.NET Core 2.x Apps Today

If you’re currently building an ASP.NET Core 2.x app and want to run it on App Service for Linux now, there’s a workaround that will be future-proof when the updates come out for 3.0.

To forward the scheme from the proxy in non-IIS scenarios, add and configure Forwarded Headers Middleware. In Startup.cs, use the following code:

// using Microsoft.AspNetCore.HttpOverrides;

public void ConfigureServices(IServiceCollection services)
{
    if (string.Equals(
        Environment.GetEnvironmentVariable("ASPNETCORE_FORWARDEDHEADERS_ENABLED"), 
        "true", StringComparison.OrdinalIgnoreCase))
    {
        services.Configure<ForwardedHeadersOptions>(options =>
        {
            options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | 
                ForwardedHeaders.XForwardedProto;
            // Only loopback proxies are allowed by default.
            // Clear that restriction because forwarders are enabled by explicit 
            // configuration.
            options.KnownNetworks.Clear();
            options.KnownProxies.Clear();
        });
    }
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseForwardedHeaders();
}

If you enable your ASP.NET Core 2.x apps with this workaround today, when you’re ready to upgrade to 3.0, you’ll already have the right configuration setting in place.

Base Image Update

The base images used by the App Service team to streamline the creation of ASP.NET Core apps will soon be updated so that the ASPNETCORE_FORWARDEDHEADERS_ENABLED environment variable will be set to true. Once they’re updated, you won’t even need to explicitly set the environment variable; it’ll be enabled by default.

Try it Out

If you’re new to building ASP.NET Core apps using containers, the App Service options for Linux and Container-based hosting offer a great place to get started. The docs are loaded with guidance and examples, from how to Run a .NET Core app in App Service on Linux to accessing a SQL Server Database from an ASP.NET Core app running in App Service Linux.

The post Forwarded Headers Middleware Updates in .NET Core 3.0 preview 6 appeared first on ASP.NET Blog.

Create interactive documentation with the new Try .NET template

In our previous post, we announced dotnet try, a global tool that allows developers to create interactive workshops and documentation. Tutorials created with dotnet try let users start learning without having to install an editor. Features like IntelliSense and live diagnostics give users a sophisticated learning and editing experience. Today, we are releasing a new dotnet new template called trydotnet-tutorial. This template can be installed next to existing dotnet new templates. It creates a project and associated files to help content authors understand the basics of dotnet try. This can serve as the foundation of your own awesome documentation!

Setup

To set this up, let’s begin by installing the template. In a command prompt, execute:

dotnet new -i Microsoft.DotNet.Try.ProjectTemplate.Tutorial

If the installation succeeds, it will print the available templates for dotnet new, including trydotnet-tutorial.

dotnet new templates

You also need to install the dotnet try global tool, if you haven’t already:

dotnet tool install -g dotnet-try

Using the template

Navigate to an empty directory (or create a new one). Inside that directory, execute the following command:

dotnet new trydotnet-tutorial

For example, if the directory name is “myTutorial”, it will result in the following layout:

Created layout

Tip: You can also use the --name option to automatically create a directory with the appropriate name.

dotnet new trydotnet-tutorial --name myTutorial

Now, let’s see the template in action. In the “myTutorial” directory, execute the following command:

dotnet try

This will start the dotnet try tool and open a browser window with the interactive readme. You can click the “Run” button in the browser and see the output of the program. If you type in the editor you will also get live diagnostics and IntelliSense. Try modifying the code here and clicking run again to see the effect of your changes.

dotnet try in action

Understanding the template

The files in a dotnet try tutorial typically fall into one of three categories:

Markdown files

These are the files that will serve as your documentation. These files will be rendered normally by other markdown engines and include special settings to enable them to be rendered interactively by the dotnet try engine.

In Readme.md, notice that the code fences (the triple-backtick ``` notation used to denote code in Markdown) have some special arguments like --source-file, --region, etc., and you don't actually see any code inside the fences. However, when we run dotnet try, the code fence is replaced with an interactive editor.

Project File

myTutorial.csproj is a normal C# project file that targets .NET Core. Any NuGet packages you add to this file will be available to the users.

Note: This project references System.CommandLine.DragonFruit. The usage is explained below.

Source Files

These are the files that contain the code that will be executed. For simplicity, the template has only one source file: Program.cs. However, since your project is a .NET Core project, any .cs files added to the directory will be a part of the compilation. You can also reference any of these files from a code fence in your markdown file.

Looking at the contents of Program.cs, you will notice that instead of the familiar Main(string[] args) entry point, this program's entry point uses the new experimental library System.CommandLine.DragonFruit to parse the arguments that were specified in your Markdown file's code fence. The Readme.md sample uses these arguments to call different methods. You're not required to use any particular library in your backing project. The command line arguments are available if you want to respond to them, and DragonFruit is a concise option for doing so.
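
As a rough sketch of how those pieces fit together (the names below are illustrative; the template's actual Program.cs may differ), DragonFruit binds the code fence's options to parameters of Main, and a switch on --region decides which method, and therefore which #region, runs:

using System;

public class Program
{
    // System.CommandLine.DragonFruit generates the real entry point and binds
    // command-line options such as --region and --project to these parameters.
    static void Main(string region = null, string project = null)
    {
        switch (region)
        {
            case "HelloWorld":
                HelloWorld();
                break;
        }
    }

    static void HelloWorld()
    {
        #region HelloWorld
        // Only the code inside this region appears in the interactive editor,
        // and the user's edits are compiled back into it before the program runs.
        Console.WriteLine("Hello World!");
        #endregion
    }
}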

What's happening behind the scenes

Code fences are a standard way to include code in your Markdown files. The only change you need to make is to add a few options immediately following the opening ``` on the first line of your code fence. In the code fence below (excerpted from Readme.md), there are three options in use.

```cs --source-file ./Program.cs --project ./myTutorial.csproj --region HelloWorld
```
Option                           What it does
--project ./myTutorial.csproj    Points to the project that the snippet is part of.
--region HelloWorld              Identifies a C# code #region to focus on.
--source-file ./Program.cs       Points to the file where the sample code is pulled from.

 

The code in Program.cs demonstrates one way to use regions. Here regions are being used to determine which method to execute as well as determining which part of the code to display in the editor.

You’re all set! Now you can tweak and play around with the template and create your own awesome interactive tutorials.

You can learn more or reach out to us on GitHub.

The post Create interactive documentation with the new Try .NET template appeared first on .NET Blog.

Microsoft positioned as a leader in The Forrester Wave™: Database-as-a-Service


We’re excited to share that Forrester has named Microsoft as a leader in The Forrester Wave™: Database-as-a-service, Q2 2019. This decision is based on their evaluation of Azure relational and non-relational databases. We believe Microsoft’s position as a leader is further underscored by its standing in the recent Q1 2019 NoSQL Forrester Wave™.

Forrester Wave DBaaS tracking chart

Database-as-a-service has come a long way

Microsoft provides the freedom to operate wherever you are in your digital transformation, whether modernizing on-premises, migrating to the cloud, or using a hybrid solution. Database-as-a-service (DBaaS) has evolved into a very popular option for organizations looking to reduce their operational and capital expenses, while tapping into the performance and scale benefits of the cloud. Azure database services not only automate many of the daily database chores like updates, patches, and backups, but also use built-in intelligent features. These are on by default to optimize database performance and secure your data, keeping you one step ahead of potential threats.

According to Forrester, “33 percent of global infrastructure business decision makers already support a DBaaS deployment in production, and this will likely double over the next three to four years. In addition, 61 percent of global data and analytics technology decision makers plan to increase their investment for DBaaS in the coming year by at least 5 percent, and 22 percent of them plan to increase it by more than 10 percent compared to the previous year.”

Azure databases built on choice and flexibility

According to the Forrester report, customers “like Microsoft’s automation, ease of provisioning, high availability, security, and technical support.” Microsoft has a comprehensive portfolio of database services on Azure that are grounded in choice and flexibility, providing the right tool for maximizing productivity, efficiency, and return on investment for every use case that customers encounter.

Whether you are migrating on-premises databases at scale, building multi-tenant software-as-a-service (SaaS) applications, or developing new, cloud-native apps, Azure databases span relational and non-relational, community-based and proprietary engines that provide a variety of deployment options and support an array of application types. All Azure databases are managed by Microsoft, so you can focus more on building great apps and growing your business:

Relational databases

Azure SQL Database provides broad SQL Server compatibility and is optimized for SQL Server migrations, OLTP and multi-tenant SaaS applications. Significantly expand the potential for application growth without being limited by storage size with Hyperscale.

Community-based Azure Database for PostgreSQL, Azure Database for MySQL, and Azure Database for MariaDB are enterprise-ready, secure, and ideal for low-latency scenarios such as online gaming and digital marketing. Bring high-performance scaling to your low latency, high-throughput PostgreSQL workloads with Hyperscale powered by Citus Data technology.

NoSQL database

Azure Cosmos DB provides turnkey global distribution and transparent multi-master replication and is great for scalable IoT applications and real-time personalization and analytics.

Forrester states in its report that “Microsoft offers a mature, scalable, secure, and hybrid DBaaS offering.” Azure databases have been tried and tested over the years, becoming more scalable, performant, secure, and intelligent in the process.

Next steps

We’re committed to making Azure the ideal destination for your data migration and the best platform to build powerful, intelligent, and modern apps upon. If you haven’t tried Azure database services, you can try them for free today without signing up or providing a credit card.

Download the full Forrester Wave™: Database-as-a-Service, Q2 2019 report for more details.


New to Azure? Follow these easy steps to get started


Today, many organizations are leveraging digital transformation to deliver their applications and services in the cloud. At Microsoft Build 2019, we announced the general availability of Azure Quickstart Center and received positive feedback from customers. Azure Quickstart Center brings together the step-by-step guidance you need to easily create cloud workloads. The power to easily set up, configure, and manage cloud workloads while being guided by best practices is now built right into the Azure portal.

How do you access Azure Quickstart Center?

There are two ways to access Azure Quickstart Center in the Azure portal. Go to the global search and type in Quickstart Center or select All services on the left nav and type Quickstart Center. Select the star button to save it under your favorites.

Screenshot of Azure portal search for quickstart center

Get started

Azure Quickstart Center is designed with you in mind. We created setup guides, start a project, and curated online training for self-paced learning so that you can manage cloud deployment according to your business needs.

Screenshot of setup guides, start a project, and online trainings to help manage cloud deployment

Setup guides

To help you prepare your organization for moving to the cloud, the Azure setup and Azure migration guides in the Quickstart Center give you a comprehensive view of best practices for your cloud ecosystem. The setup guides were created by our FastTrack for Azure team, who have supported customers through cloud deployments and turned those insights into easy reference guides for you.

The Azure setup guide walks you through how to:

  • Organize resources: Set up a management hierarchy to consistently apply access control, policy, and compliance to groups of resources and use tagging to track related resources.
  • Manage access: Use role-based access control to make sure that users have only the permissions they really need.
  • Manage costs: Identify your subscription type, understand how billing works, and how you can control costs.
  • Governance, security, and compliance: Enforce and automate policies and security settings that help you follow applicable legal requirements.
  • Monitoring and reporting: Get visibility across resources to help find and fix problems, optimize performance, or get insight to customer behavior.
  • Stay current with Azure: Track product updates so you can take a proactive approach to change management.

The Azure migration guide is focused on re-hosting, also known as lift and shift, and gives you a detailed view of how to migrate applications and resources from your on-premises environment to Azure. Our migration guide covers:

  • Prerequisites: Work with your internal stakeholders to understand the business reasons for migration, determine which assets like infrastructure, apps, and data are being migrated and set the migration timeline.
  • Assess the digital estate: Assess the workload and each related asset such as infrastructure, apps, and data to ensure the assets are compatible with cloud platforms.
  • Migrate assets: Identify the appropriate tools to reach a "done state" including native tools, third-party tools, and project management tools.
  • Manage costs: Cost discussion is a critical step in migration. Use the guidance in this step to drive the discussion.
  • Optimize and transform: After migration, review the solution for possible areas of optimization. This could include reviewing the design of the solution, right-sizing the services, and analyzing costs.
  • Secure and manage: Enforce and set up policies to manage the environment to ensure operations efficiency and legal compliance.
  • Assistance: Learn how to get the right support at the right time to continue your cloud journey in Azure.

Start a project

Compare frequently used Azure services available for different solution types, and discover the best fit for your cloud project. We’ll help you quickly launch and create workloads in the cloud. Pick one of the five common scenarios shown below to compare the deployment options and evaluate high-level architecture overviews, prerequisites, and associated costs.

Screenshot of five common scenarios for comparing deployment options and high-level architecture overviews

After you select a scenario, choose an option, and understand the requirements, select Create.

Screenshot displaying the creation of a web app

We’ll take you to the create resource page where you’ll follow the steps to create a resource.

Screenshot of the create resource page

Take an online course

Our recommended online learning options let you take a hands-on approach to building Azure skills and knowledge.

Screenshot showing the available online learning options to build Azure skills and knowledge

Get started today

Use the rich capabilities of the Azure Quickstart Center to create your first cloud solution like a pro.

Azure Security Expert Series: Learn best practices and Customer Lockbox general availability


With more computing environments moving to the cloud, the need for stronger cloud security has never been greater. But what constitutes effective cloud security, and what best practices should you be following?

While Microsoft Azure delivers unmatched built-in security, it is important that you understand the breadth of security controls and take advantage of them to protect your workloads.

We launched the Azure Security Expert Series, which will provide ongoing virtual content to help security professionals protect hybrid cloud environments. Ann Johnson, CVP of the Cybersecurity Solutions Group at Microsoft, kicked off the series and shared five cloud security best practices:

  1. Strengthen Access Control
  2. Increase your security posture
  3. Secure apps and data
  4. Manage networking
  5. Mitigate threats

Make sure you are up to speed with each of these important best practices as you secure your own organization.

Customer Lockbox for Microsoft Azure

During Ann’s main talk, she announced the general availability of Customer Lockbox for Microsoft Azure. Customer Lockbox for Azure extends our commitment to customer privacy while also giving you help when you need it most. With Customer Lockbox for Microsoft Azure, customers can review and approve or reject requests from Microsoft engineers to access their data during a support case. Access is granted only if approved and the entire process is audited with records stored in the Activity Logs.

Customer Lockbox is now generally available and currently enabled for remote desktop access to virtual machines. To learn more, please see the Customer Lockbox for Microsoft Azure documentation.

What will you learn?

Missed the broadcast or want to dive deeper into SIEM, IoT, networking, or Security Center?

Check out the Azure Security Expert series which includes the best practice session with Ann, and additional drill-down sessions including:

  • Get started with Azure Sentinel a cloud-native SIEM
  • What is cloud-native Azure Network Security?
  • Securing the hybrid cloud with Security Center
  • What makes IoT Security different?

Until June 26th, 2019, you will have a chance to win a Microsoft Xbox One S. To enter, watch the sessions and complete the knowledge check on the entry form and submit the entry.**

‘Ask Us Anything’ with Azure security experts

Have more questions? The Azure security team will be hosting an ‘Ask Us Anything’ session on Twitter on Monday June 24, 2019 from 10 am – 11:30 am PT (1 pm – 2:30 pm ET). Our product and engineering teams will be available to answer questions about Azure security services.

Post your questions to Twitter by mentioning @AzureSupport and using the hashtag #AzureSecuritySeries.

If there are follow-ups or additional questions that come up after the Twitter session, no problem! We’re happy to continue the dialogue afterward through Twitter or send your questions to Azuresecurityexpert@microsoft.com.

Save the date
 

How do I learn more about Azure security and connect with the tech community?

There are several ways to stay connected and access new executive talks, on-demand sessions, or other types of valuable content covering a range of cloud security topics to help you get started or accelerate your cloud security plan.

**The Sweepstakes will run exclusively between June 19 – June 26 11:59 PM Pacific Time. No purchase necessary. To enter, you must be a legal resident of the 50 United States (including the District of Columbia), and be 18 years of age or older. You will need to complete all the Knowledge Check questions in the entry form to qualify for the sweepstakes. Please refer to our official rules for more details.

Help us shape the future of .NET for Apache Spark


Apache Spark™ is a general-purpose distributed processing engine for analytics over large data sets, typically terabytes or petabytes of data. Apache Spark can be used for processing batches of data, real-time streams, machine learning, and ad-hoc queries. So far, Spark has been accessible through Scala, Java, Python, and R, but not .NET.

At the Spark + AI Summit earlier this year, we released .NET for Apache Spark, which makes Spark accessible to .NET developers. The initial reception for .NET for Apache Spark has been very positive, and as we build .NET for Apache Spark in the open, we would love to get your feedback.

Please fill out the survey below and help shape how we can improve .NET for Apache Spark for your needs by sharing your experiences and challenges.

.NET For Apache Spark Survey

The survey should take less than 10 minutes to complete!

The post Help us shape the future of .NET for Apache Spark appeared first on .NET Blog.

Microsoft’s MT-DNN Achieves Human Performance Estimate on General Language Understanding Evaluation (GLUE) Benchmark


Understanding natural language is one of the longest-running goals of AI, going back to the 1950s, when the Turing test was proposed as a way to define an “intelligent” machine. In recent years, we have observed promising results on many Natural Language Understanding (NLU) tasks in both academia and industry, as breakthroughs in deep learning, such as the BERT model developed by Google in 2018, are applied to NLU.

The General Language Understanding Evaluation (GLUE) is a well-known benchmark consisting of nine NLU tasks, including question answering, sentiment analysis, text similarity, and textual entailment; it is considered well designed for evaluating the generalization and robustness of NLU models. Since its release in early 2018, many (previously) state-of-the-art NLU models, including BERT, GPT, Stanford Snorkel, and MT-DNN, have been benchmarked on it, as shown on the GLUE leaderboard. Top research teams around the world are collaboratively developing new models that approach human performance on GLUE.

In the last few months, Microsoft has significantly improved the MT-DNN approach to NLU, and finally surpassed the estimate for human performance on the overall average score on GLUE (87.6 vs. 87.1) on June 6, 2019. The MT-DNN result is also substantially better than the second-best method (86.3) on the leaderboard.

The latest improvement is primarily due to incorporating into MT-DNN a new method developed for the Winograd Natural Language Inference (WNLI) task, in which an AI model must correctly identify the antecedent of an ambiguous pronoun in a sentence.

For example, the task provides the sentence: “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.”

If the word “feared” is selected, then “they” refers to the city council. If “advocated” is selected, then “they” presumably refers to the demonstrators.

Such tasks are considered intuitive for people, thanks to their world knowledge, but difficult for machines. This task has proven the most challenging in GLUE; previous state-of-the-art models, including BERT, could hardly outperform the naïve baseline of majority voting (scored at 65.1).

Although the early versions of MT-DNN (documented in Liu et al. 2019a and Liu et al. 2019b) already achieved better scores than humans on several tasks, including MRPC, QQP, and QNLI, they performed much worse than humans on WNLI (65.1 vs. 95.9). Thus, it is widely believed that improving the test score on WNLI is critical to reaching human performance on the overall average GLUE score. The Microsoft team approached WNLI with a new method based on a novel deep learning model that frames the pronoun-resolution problem as computing the semantic similarity between the pronoun and its antecedent candidates. The approach lifts the WNLI test score to 89.0. Combined with other improvements, the overall average score is lifted to 87.6, which surpasses the conservative estimate for human performance on GLUE, marking a milestone toward the goal of understanding natural language.
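
To illustrate that framing (this is our own toy sketch, not Microsoft's model), pronoun resolution reduces to scoring each antecedent candidate against the pronoun's contextual representation and picking the most similar one; in the real system the embeddings would come from the trained MT-DNN encoder:

using System;
using System.Linq;

class PronounResolutionSketch
{
    // Cosine similarity between two embedding vectors.
    static double Cosine(double[] a, double[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    // Pick the candidate whose embedding is most similar to the pronoun's embedding.
    static string ResolvePronoun(double[] pronoun, (string Text, double[] Embedding)[] candidates) =>
        candidates.OrderByDescending(c => Cosine(pronoun, c.Embedding)).First().Text;

    static void Main()
    {
        // Dummy 3-dimensional embeddings, purely to show the shape of the computation.
        var pronoun = new[] { 0.9, 0.1, 0.0 };
        var candidates = new (string, double[])[]
        {
            ("the city councilmen", new[] { 0.8, 0.2, 0.1 }),
            ("the demonstrators", new[] { 0.1, 0.9, 0.3 }),
        };
        Console.WriteLine(ResolvePronoun(pronoun, candidates)); // prints "the city councilmen"
    }
}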

The Microsoft team (from left to right): Weizhu Chen and Pengcheng He of Microsoft Dynamics 365 AI, and Xiaodong Liu and Jianfeng Gao of Microsoft Research AI.

Previous blogs on MT-DNN could be found at: MT-DNN-Blog and MT-DNN-KD-Blog.

Cheers,

Guggs

 

Top Stories from the Microsoft DevOps Community – 2019.06.21


I’m at NDC Oslo and it’s summer solstice. So today’s the longest day of the year, and up here that makes for a really long day. But that’s good news for me, it gives me more time to catch up on the amazing articles that the Microsoft DevOps community is writing. Check them out – and if you’re also at NDC Oslo this week, be sure to say hi!

Everything as Code with Azure DevOps Pipelines: C#, ARM, and YAML: Part #1
Check in all the things! Jeremy Lindsay doesn’t stop with just putting the source code and the unit tests into his repository; he also defines his infrastructure as code using ARM templates, and his pipeline configuration as code using YAML with the new multi-stage pipelines.

Hosted Agents plus Docker, perfect match for Azure DevOps and Open source Project
The hosted Azure Pipelines build agents come with a lot of tools installed, but sometimes your prerequisites are tricky. The easiest way to build might just be to create a Docker container with those prerequisites pre-installed. Gian Maria Ricci shows how to build inside containers on the hosted agents for maximum flexibility.

Setting default repository permissions on your Azure DevOps Organization
Azure DevOps has a powerful permission model, with basically every object across all of its services controllable through role-based access control (RBAC). If you want to dig deep into the permission model, Jesse Houwing shows you how to debug and extend permissions for arbitrary objects.

Passing variables from stage to stage in Azure DevOps release
It’s common to want to update and pass variables from one stage of your pipeline to another – Donovan Brown shows how you can leverage his VSTeam PowerShell module to create a variable in one stage of the process, to use it, and how to update it in another stage. Very useful!

Five steps to add automated performance quality gates to Azure DevOps pipelines
I love the idea of adding automated performance testing to my pipeline, and Rob Jahn shows how to add these perf tests to a real open source project. Even better, he shows you how to add them both to a classic (designer-based) Azure Pipeline and to the new YAML-based multi-stage pipelines.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

The post Top Stories from the Microsoft DevOps Community – 2019.06.21 appeared first on Azure DevOps Blog.
