Desktop Analytics—the cloud-connected service that helps IT professionals take a data-driven approach to Windows endpoint management—is now generally available.
As stated in this article, Windows 10, version 1909 is a scoped set of features for select performance improvements, enterprise features, and quality enhancements. Developers should be aware of this release, but no action is necessary at this time.
A new Windows SDK will not be issued in conjunction with this version of Windows, since this release doesn't introduce new APIs. That means there's no need to target Windows 10, version 1909 or modify your project files.
WinUI 2.2 released in August. WinUI is open source and everyone can check out the WinUI GitHub repo to file issues, discuss new features, and even contribute code. In WinUI 2.2, we've added a new TabView control. In addition to new Visual Style updates, there have been updates to the NavigationView as well. We encourage everyone to use WinUI in their UWP apps – it's the best way to get the latest Fluent design and controls, and it's backward-compatible to the Windows 10 Anniversary Update.
2 simple steps for updating your dev environment
If you would like to update your system to Windows 10, version 1909, you may do so either by downloading it via your Visual Studio Subscription or by using the Windows Insider Program (WIP) Release Preview Ring. The Insider team has a great blog post that will walk you through how to get on the Release Preview Ring. Once you do that, just go into Visual Studio 2019 and install the latest SDK and you're good to go. In the latest Visual Studio, the Windows 10 SDK (10.0.18362) is already selected by default.
As you may know, the next version of Microsoft Edge will adopt the Chromium open source project to create better web compatibility and less fragmentation of different underlying web platforms. If you haven’t already, you can try out preview builds of Microsoft Edge from https://www.microsoftedgeinsider.com which is now available on Windows 10, 8.1, 8, 7, and macOS!
With Visual Studio today, you can already debug JavaScript running in the current version of Microsoft Edge, built on top of the EdgeHTML web platform. Starting with Visual Studio 2019 version 16.2, we’ve extended support to the preview builds of Microsoft Edge, which leverage Chromium. Head to visualstudio.com/downloads/ to download the latest Visual Studio now!
Create a new ASP.NET Core Web Application
You can now debug JavaScript in Microsoft Edge for your ASP.NET Framework and ASP.NET Core applications. To try out this feature, let’s start by creating a new ASP.NET Core Web Application.
To show off support for debugging JavaScript, we’ll use the React.js template which shows you how to integrate React.js with an ASP.NET Core application. Once your project has been created, open ClientApp/src/App.js which you’ll see is a React component for our app.
Using JavaScript to calculate the Fibonacci sequence
Let’s assume that as part of this app, a user will input the term in the Fibonacci sequence they want to know and our client-side JavaScript code will be responsible for calculating it and displaying the result to the user. The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting with 0 and 1.
Position: 1, 2, 3, 4, 5, 6, 7
Value: 0, 1, 1, 2, 3, 5, 8
To handle this calculation, let’s create a new Fibonacci component and add it to our app. Start by modifying App.js to import our soon-to-be-created Fibonacci component and route to it:
import React, { Component } from 'react';
import { Route } from 'react-router';
import { Layout } from './components/Layout';
import { Home } from './components/Home';
import { FetchData } from './components/FetchData';
import { Counter } from './components/Counter';
import { Fibonacci } from './components/Fibonacci';
export default class App extends Component {
  displayName = App.name

  render() {
    return (
      <Layout>
        <Route exact path='/' component={Home} />
        <Route path='/counter' component={Counter} />
        <Route path='/fetchdata' component={FetchData} />
        <Route path='/fibonacci' component={Fibonacci} />
      </Layout>
    );
  }
}
Now that our app will handle routing the /fibonacci endpoint, let's modify the NavMenu to navigate to that endpoint. Open ClientApp/src/components/NavMenu.js and add a LinkContainer entry for the /fibonacci route just before the closing </Nav> tag (around line 36), mirroring the existing menu entries. Now, you'll be able to easily test our new /fibonacci route from the NavMenu. Finally, let's create our Fibonacci component. Create a new JavaScript file (Ctrl+N) and save it as Fibonacci.js in the ClientApp/src/components/ folder. Add the code below to your new component:
import React, { Component } from 'react';

export class Fibonacci extends Component {
  displayName = Fibonacci.name

  constructor(props) {
    super(props);
    this.state = {
      n: 8,
      f_n: null,
    };
    this.calculateFibonacci = this.calculateFibonacci.bind(this);
  }

  calculateFibonacci() {
    var f_0 = 0;
    var f_1 = 1;
    for (var i = 3; i < this.state.n; i++) {
      var f_2 = f_0 + f_1;
      f_0 = f_1;
      f_1 = f_2;
    };
    this.setState({
      f_n: f_2
    })
    console.log("The " + i.toString() + "th Fibonacci number is:", f_2);
  }

  render() {
    return (
      <div>
        <h1>Fibonacci</h1>
        <p>This is a simple example of a React component.</p>
        <p>The {this.state.n}th Fibonacci number is: <strong>{this.state.f_n}</strong></p>
        <button onClick={this.calculateFibonacci}>Calculate</button>
      </div>
    );
  }
}
Eventually, we’ll add a form to the render function for the user to supply n, the variable we’re using to represent the term in the Fibonacci sequence that they want to know. For now, just to test our logic, we’ll assume that the user wants to know what the 8th term in the sequence is, which is 13. Let’s build our app in the new Microsoft Edge to see if our code is calculating the right answer.
If you don’t have it installed already, head to https://www.microsoftedgeinsider.com to download the preview builds of Microsoft Edge. In Visual Studio, click the dropdown next to IIS Express and select the version of Microsoft Edge (Beta, Dev, or Canary) that you have installed. If you don’t see Microsoft Edge Beta, Dev or Canary in the dropdown, you may need to restart Visual Studio.
Now click the green Play button or press F5 on your keyboard. Visual Studio will start your web application and Microsoft Edge will automatically launch and navigate to your app.
You’ll see the entry we added for our Fibonacci component in the NavMenu on the left. Click on Fibonacci.
Now click the Calculate button.
We know that the 8th term should be 13 but our code is saying that the 8th Fibonacci number is 8!
Debugging JavaScript in Visual Studio
Switching back to Visual Studio, since our calculateFibonacci() function prints to the Console, you can actually see that log in the Debug Output.
To figure out where our code is going wrong, let’s set a breakpoint on Line 19 inside the for loop in Visual Studio. We’ll start by checking if our code is calculating the 3rd and 4th terms in the Fibonacci sequence correctly. Click the Restart button next to the Stop button or press Ctrl+Shift+F5 to bind the breakpoint and start debugging.
Note: If you have not enabled JavaScript debugging before, your JavaScript breakpoint will not bind successfully. Visual Studio will ask if you want to enable JavaScript debugging and then restart the debugging process and bind your breakpoint. Click Enable JavaScript Debugging (Debugging Will Stop and Restart).
We know that the first two terms in the Fibonacci sequence are 0 and 1. The third term should also be 1. Switch from the Output view to Watch 1 and add f_2, f_1, and f_0 to watch. This is what Visual Studio should look like now:
Click the Step Over button or press F10. You will now see that our code correctly calculated the third Fibonacci number, 1, as the value of f_2.
Position: 3, 4
Value: 1, 2
Did our function compute this term correctly? Yes for the 3rd term; we'll check the 4th next.
Let’s keep stepping to see if there’s a bug somewhere else in the loop. Step Over two more times and you should see both f_0 and f_1 are now equal to 1, which they need to be to calculate the 4th term in the sequence.
You will now see that our code is paused at Line 18. Let’s add i to our watch as it will tell us which term we’re computing in the Fibonacci sequence. Step Over one more time and you’ll see that the value of i is now 4. Now the code is checking to see if the value of i is less than n, the variable we’re using to represent which term in the Fibonacci sequence we’re trying to find. In this example, we’ve hardcoded n as 8 since we’re trying to calculate the 8th term in the sequence. Since 4 < 8, step over again and we’ll continue looping.
Step Over now and you should see that f_2 is now 2, and since i is 4, we know that our code has successfully computed the 4th term in the Fibonacci sequence as 2.
Position: 3, 4
Value: 1, 2
Did our function compute this term correctly? Yes for both the 3rd and 4th terms.
We could keep stepping over and over again until we find the problem, but since we've already proven that we're calculating the 3rd and 4th terms in the Fibonacci sequence correctly, let's jump ahead to the 7th term since it's the last term we calculate before we see the bug.
Using a conditional breakpoint to jump ahead in the for loop
Stop debugging for now by clicking the Stop button or pressing Shift+F5. Right click your breakpoint and select Conditions… or press Alt+F9. This will allow us to set a condition for our breakpoint and we’ll only break when that condition is true.
Enter i == 7 as the condition we want to break on, which means we’ll only break in the last loop before we see the bug. Start your web app again by clicking the green Play button or pressing F5. This time, we’ll break only when i is 7. Here’s what Visual Studio looks like now:
Step over once and you will see that we’ve calculated the 7th term in the Fibonacci sequence correctly since f_2 is equal to 8.
Step over three times and we’ll now be paused at i < n. Since i is now 8, i < n actually evaluates to false which means we’re going to break out of the loop. We’ve found the bug: we aren’t going through the loop to calculate the 8th Fibonacci number!
We can fix this bug by changing the calculateFibonacci() function to:
calculateFibonacci() {
  var f_0 = 0;
  var f_1 = 1;
  for (var i = 3; i <= this.state.n; i++) {
    var f_2 = f_0 + f_1;
    f_0 = f_1;
    f_1 = f_2;
  };
  this.setState({
    f_n: f_2
  })
  console.log("The " + (i - 1).toString() + "th Fibonacci number is:", f_2);
}
Now when i is 8, we'll actually go through the for loop since 8 <= 8. Remove the breakpoint, then click the Restart button next to the Stop button or press Ctrl+Shift+F5. Click on Fibonacci in the NavMenu and click on the Calculate button to see that we've correctly calculated the 8th term in the Fibonacci sequence as 13! We did it!
Attaching to Microsoft Edge
So far in this post, we’ve been using the green Play button in Visual Studio to build our web application, launch Microsoft Edge, and automatically have Edge navigate to our app. Starting in Visual Studio 2019 version 16.3 Preview 3, you can now attach the Visual Studio debugger to an already running instance of Microsoft Edge.
First, ensure that there are no running instances of Microsoft Edge. Now, from your terminal, run the following command:
start msedge --remote-debugging-port=9222
From Visual Studio, open the Debug menu and select Attach to Process or press Ctrl+Alt+P.
From the Attach to Process dialog, set Connection type to Chrome devtools protocol websocket (no authentication). In the Connection target textbox, type in http://localhost:9222/ and press Enter. You should see the list of open tabs you have in Microsoft Edge listed out in the Attach to Process dialog.
Click Select… and check JavaScript (Microsoft Edge – Chromium). You can add tabs, navigate to new tabs, and close tabs and see those changes reflected in the Attach to Process dialog by clicking the Refresh button. Select the tab you want to debug and click Attach.
The Visual Studio debugger is now attached to Microsoft Edge! You can pause execution of JavaScript, set breakpoints, and see console.log() statements directly in the Debug Output window in Visual Studio.
Conclusion
To recap:
We created an ASP.NET Core web application in Visual Studio 2019 version 16.2 and built it in a preview build of Microsoft Edge
We added a new component to our web application that contained a bug
We found the bug by setting breakpoints and debugging our web app running in Microsoft Edge from Visual Studio!
We showed you how to attach the Visual Studio debugger to an existing instance of Microsoft Edge
We’re eager to learn more about how you work with JavaScript in Visual Studio! Please send us feedback by clicking the Feedback icon in Visual Studio or by tweeting @VisualStudio and @EdgeDevTools.
Within this blog we will cover a range of Azure services and a new GitHub repository which can support operational efficiencies for your SAP applications running on Azure.
Let’s get started.
Simplifying SAP Shared Storage architecture with Azure NetApp Files
Azure NetApp Files (ANF) can be used to simplify your SAP on Azure deployment architecture, providing an excellent use case for high availability (HA) of your SAP shared files based on Enterprise NFS.
SAP Shared Files are critical for SAP systems with high availability requirements and more than one application server. Additionally, SAP HANA scale-out systems require a common set of shared files, for example:
/sapmnt, which stores SAP kernel files, profiles, and job logs.
/hana/shared, which houses binaries, configuration files, and traces for SAP HANA scale-out.
Prior to Azure NetApp Files, SAP on Azure customers running Linux with high availability requirements had to protect the SAP Shared Files using Pacemaker clusters and block replication devices. These setups were complex to manage and required a high degree of technical skills to administer. With the introduction of Azure NetApp Files, a Pacemaker cluster can be removed from the architecture which reduces landscape sprawl and maintenance efforts. Moreover, there is no longer a need to stripe disks nor configure block replication technologies for the SAP Shared Files. Rather, Azure NetApp Files volumes can be configured using Azure Portal, Azure CLI or PowerShell and mounted to the SAP Central Services clusters. Azure NetApp Files volumes can also be resized on the fly and protected by way of storage snapshots.
To simplify your SAP on Azure deployment architecture, we have published two scenarios for high availability of your SAP System Central Services and SAP shared files based on Azure NetApp Files with NFS.
Optimizing Dev, Test and Sandbox deployments with Azure Connector for SAP LaMa
Within a typical SAP estate, several application landscapes are often deployed (e.g., ERP, SCM, BW), and there is an ongoing need to perform SAP system copies and SAP system refreshes, for example creating new SAP project systems for technical/application releases or periodically refreshing QA systems from Production copies. The end-to-end process for SAP system copies and refreshes can be both time-consuming and labor-intensive.
SAP LaMa Enterprise Edition can support operational efficiencies in this area, where several steps involved in the SAP system copy or refresh can be automated. Our Azure Connector for LaMa enables copying, deletion, and relocation of Azure Managed Disks to help your SAP operations team perform SAP system copies and system refreshes rapidly, reducing manual effort.
In terms of virtual machine (VM) operations, the Azure Connector for LaMa can be used to reduce the run costs of your SAP estate on Azure. You can stop (deallocate) and start your SAP virtual machines, which enables you to run certain workloads with a reduced utilization profile; for example, through the LaMa interface you can schedule your SAP S/4HANA sandbox virtual machine to be online from 08:00 to 18:00 (10 hours per day) instead of running 24 hours. Furthermore, the Azure Connector for LaMa also allows you to resize your virtual machine directly from within LaMa when performance demands arise.
Save Time and Reduce Errors by Automating SAP Deployments
The manual deployment of your SAP infrastructure and software installation can be time-consuming, tedious, and error-prone. One of the major benefits of Azure is the ability to automate your SAP infrastructure deployment (e.g., virtual machines and storage) and the installation of your SAP software. Automation reduces errors and deviation and facilitates programmatic and accelerated SAP deployments. As a customer, you have a wide range of automation tools available natively on Azure, such as Azure Resource Manager templates, and you can also create deployment scripts via both PowerShell and Azure CLI. Moreover, you also have the option to leverage your favorite configuration management tools.
We have included some links below as a kick-starter around Azure automation for your SAP deployment.
Get a Holistic View with Azure Monitor for SAP Solutions
SAP on Azure customers can now benefit from having a central location to monitor infrastructure telemetry as well as database metrics. We have enhanced our Azure Monitor functionality to include SAP Solutions monitoring. This enhancement to Azure Monitor covers both SAP on Azure virtual machines (VMs) and our bare-metal HANA Large Instances (HLI) offering. Azure Monitor for SAP Solutions capabilities include:
Monitoring the health & utilization of infrastructure
Correlation of data between infrastructure and the SAP database for troubleshooting
Trending data to identify patterns enabling proactive remediation
Azure Monitor for SAP Solutions does not run an agent on the SAP HANA VM or HLI. Instead, it deploys a managed resource group within your customer subscription which contains resources that collect telemetry from the SAP HANA server and, in turn, ingest the data into Azure Log Analytics.
Some of the components deployed in the managed resource group are:
Azure Key Vault – used to store customer secrets such as database credentials
User-Assigned Identity – assigned to Key Vault as access policy
Log Analytics – workspace to collect and analyze monitoring telemetry
Collector Virtual Machine – runs the logic to collect telemetry from the SAP HANA database server
Our vision here is to enable a single point of monitoring and analysis where your infrastructure and SAP telemetries coincide, to ease issue identification and implement remediations before any fatal outage occurs. A simple example is where the memory utilization trajectory is going critical and SAP HANA starts experiencing column unloads. When this happens, an alert is triggered to inform the administrators before the issue worsens.
As of October 2019, Azure Monitor for SAP Solutions is able to collect statistics from SAP HANA and is currently in private preview, so please reach out to your Microsoft account team should you have interest in this service.
Additional resources for optimizing your SAP deployments
The AzureCAT SAP Deployment Engineering team provides deep engagement on customer projects where we help our customers successfully deploy their SAP applications on Azure with quality. Throughout the project lifecycle, there can be times where remediation or optimizations of a customer’s SAP deployment architecture is required. For example:
Lifting the Resilience of the SAP Deployment Architecture:
A scenario can arise where a customer may have deployed their SAP system in single instance virtual machines (SLA 99.9 percent) rather than a high availability configuration via Azure Availability Sets (SLA 99.95 percent). Now the customer has a need to move to an Availability Set configuration while retaining their existing network (IP, vNIC) and data disks.
Performance Optimization:
An SAP on Azure customer is already running in Production and would now like to benefit from Proximity Placement Groups to optimize the network performance between their SAP Application and Database virtual machines.
Availability Zones Selection:
A customer requires guidance to select the optimum Azure Availability Zones to minimize network Round-Trip-Time and facilitate a recovery point objective of zero (sync) for their SAP database.
To address the above topics (and more), we have created a new GitHub repository. This repository will be enduring, and our customers and partners can expect new scripts to land on an ongoing basis to support operational efficiencies of SAP deployments on Azure.
Closing
This blog closes out our series on Designing a Great SAP on Azure Architecture. We hope you've enjoyed our latest offerings to efficiently operate your SAP assets on Azure. As always, change is the only constant in the world of clouds, and we are here to accommodate that change and make it simpler.
As a next step, we recommend you check out our SAP on Azure Getting Started page.
For the previous blogs in the series you can refer to the links below:
Better scale and more power for IT professionals and developers!
We're excited to announce the general availability of larger, more powerful standard file shares for Azure Files. Azure Files is secure, fully managed public cloud file storage with a full range of data redundancy options and hybrid capabilities using Azure File Sync.
Here is a quick look at some of the improvements in the Azure Files standard file shares' capacity and performance.
With the release of large file shares, a single standard file share in a general purpose account can now support up to 100 TiB capacity, 10K IOPS, and 300 MiB/s throughput. All premium file shares in Azure FileStorage accounts currently support large file shares by default. If your workload is latency sensitive and requires a higher level of performance, you should consider Azure Files premium tier. Visit Azure Files scale limits documentation to get more details.
What’s new?
Since the preview of large file shares, we have been working on making the Azure Files experience even better. Large file shares now have:
Ability to upgrade existing general purpose storage accounts and existing file shares.
Ability to opt in for larger file shares at the storage account level instead of the subscription level.
Expanded regional coverage.
Support for both locally redundant and zone-redundant storage.
Improvements in the performance and scale of sync to work better with larger file shares. Visit Azure File Sync scalability targets to keep informed of the latest scale.
Pricing and availability
The increased capacity and scale of standard file shares on your general purpose accounts come at zero additional cost. Refer to the pricing page for further details.
Currently, standard large file shares support is available for locally redundant and zone-redundant storage and is available in 13 regions worldwide. We are quickly expanding coverage to all Azure regions. Stay up to date on region availability by visiting the Azure Files documentation.
Getting started
You no longer need to register your subscription for the large file shares feature.
New storage account
Create a new general purpose storage account in one of the supported regions with a supported redundancy option. While creating the storage account, go to the Advanced tab and enable the Large file shares feature. See detailed steps on how to enable large file shares support on a new storage account. All new shares created under this new account will, by default, have 100 TiB capacity with increased scale.
Existing storage account
On an existing general purpose storage account that resides in one of the supported regions, go to Configuration, enable the Large file shares feature, and hit Save. You can now update the quota for existing shares under this upgraded account to more than 5 TiB. All new shares created under this upgraded account will, by default, have 100 TiB capacity with increased scale.
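If you prefer to script this instead of using the portal, here is a minimal sketch assuming the v12 Azure.Storage.Files.Shares client; the connection string, share name, and the 102400 GiB (100 TiB) quota value are illustrative placeholders, not values from this announcement.
using Azure.Storage.Files.Shares;

class ShareQuotaSketch
{
    static void Main()
    {
        // Placeholder connection string and share name.
        var share = new ShareClient("<connection-string>", "myshare");
        share.CreateIfNotExists();

        // Raise the share quota beyond the old 5 TiB limit; 102400 GiB is 100 TiB.
        share.SetQuota(102400);
    }
}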
Opting your storage accounts into the large file shares feature does not cause any disruption to your existing workloads, including Azure File Sync. Once opted in, you cannot disable the large file shares feature on your account.
It's been a busy time for .NET Core – we just shipped 3.0 and are currently working on a few updates for v3.1 (due in November). As we turn our attention to .NET Core 5.0, we want to take a step back and see what you are doing with .NET Core and how we can make it even better.
We have put together a quick survey that will help us understand our customer base a bit better, how you are using .NET Core, and what we can do to improve it. So please head over to Survey Monkey and help shape the future of .NET Core.
Surveys help give us a breadth view of .NET Core users, but we also want to understand in more depth what challenges you face in your projects, so if you are willing to participate in more detailed feedback, please provide your contact details in the survey.
Welcome to the October update of Java on Visual Studio Code! This month, we're bringing new features for code navigation, code actions, refactoring, and code snippets, along with Java 13 support. There are also improvements to the debugger, Maven, Checkstyle, and the Test Runner. Please check it out and let us know what you think!
Code Navigation
Go to super implementation
You can now keep track of class implementations and overriding methods by clicking the Go to Super Implementation link when hovering.
See the code navigation in action.
Code Actions
A couple of new code actions have been added to VS Code for Java recently.
Create non existing package
Now when your package name doesn't match the folder name, you have the option to either change the package name in your code or move the folder in the file system (even when the destination folder doesn't exist yet).
Add quick fix for non-accessible references
This quick fix helps you resolve non-accessible references.
Automatically trigger auto-import on paste
If you paste blocks of code that contain references to classes or static methods and fields that are not yet imported, VS Code now can automatically add missing imports. The new feature is enabled via the java.actionsOnPaste.organizeImports preference in VS Code preferences. If true (the default value), triggers “Organize imports” when Java code is pasted into an empty file.
Refactoring
Inline refactoring
The Inline refactoring lets you reverse the extract refactoring for a local variable, method, or constant.
Convert for-loop to for-each loop
The enhanced for-loop is a popular feature. Its simple structure allows you to simplify code by presenting for-loops that visit each element of an array/collection without explicitly expressing how one goes from element to element.
Convert anonymous class to nested class
This refactoring allows you to convert an anonymous class into a named inner class.
Deprecation tags for symbols and completions
Java extension now shows source code that references deprecated types or members with a strike-through line.
Code Snippets
Now VS Code Java supports server-side code snippets, which means it will provide more code snippet options in a context-aware way. You can also see more detailed information during the preview of code snippets while making a selection.
Java 13 support
Java 13 is out and VS Code is ready for it. It supports Java 13 through the latest Java extension. If you are using Java 12 preview features, you will need to upgrade to JDK 13 to keep working with them.
Debugger
Show Run/Debug when hovering
In case you don't like the Run/Debug button on the Code Lens of your main method but still want easy access to the functionality, you can now disable the Code Lens and still reach Run/Debug by hovering over the main method.
In this release, we've also made a lot of improvements in error handling and messages to help users resolve issues during debugging. One example is adding fix suggestions when a build failure occurs while launching the program.
By clicking Fix..., a list of suggestions will be provided.
Maven
The Maven extension now supports searching Maven Central to resolve unknown types in your code. You can do this easily by clicking the link in the hover.
Other improvements in the Maven extension include:
Enable searching for artifacts by groupId and/or artifactId when auto-completing a dependency.
Add inline action buttons in Maven explorer. Add icons for Maven explorer items.
Checkstyle
Enhanced setting configuration command
The Checkstyle: Set the Checkstyle Configuration command will now detect potential Checkstyle configuration files and list them. You can also provide a configuration file by directly entering a URL in the input box.
Setting checkstyle version support
A new command Checkstyle: Set the Checkstyle Version is added to the extension. It supports:
List the latest Checkstyle version from main repo.
List all the download versions.
List all the supported versions.
Mark the currently used version with a check symbol.
Warn when the version is too high (with breaking changes) for a user-defined Checkstyle configuration.
Warn when the version is too low (lacking new features) for the google_check.xml fetched from the Checkstyle master branch.
Other improvements
Provide more granularity in the progress of loading a project. We're working on making the language server more transparent about what it's working on behind the scenes.
Test Runner updates
Add java.test.saveAllBeforeLaunchTest setting to specify whether to automatically save the files before launching the tests.
Add java.test.forceBuildBeforeLaunchTest setting to specify whether to automatically build the workspace before launching the tests.
Sign up
If you'd like to follow the latest of Java on VS Code, please share your email with us using the form below. We will send out updates and tips every couple of weeks and invite you to test our unreleased features and provide feedback early on.
Try it out
Please don't hesitate to give it a try! Your feedback and suggestions are very important to us and will help shape our product in the future.
SameSite is a 2016 extension to HTTP cookies intended to mitigate cross site request forgery (CSRF). The original design was an opt-in feature that web sites could use by adding the new SameSite property to cookies. Not setting the property left cookie flow unrestricted; setting it to a value of Lax indicated the cookie should be sent on navigation within the same site, or through GET navigation to your site from other sites, and a value of Strict limited the cookie to requests originating only from the same site. .NET 4.7.2 and ASP.NET Core 2.0 added support for the SameSite property. OIDC and other features which send POST requests from an external site to the site requesting a login use cookies for correlation and CSRF protection, and needed to opt out of SameSite by not setting the property.
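As a minimal ASP.NET Core sketch of how those values are applied (the cookie names and values here are illustrative, not from the original post), the SameSite property on CookieOptions controls the attribute the browser sees:
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class SameSiteExamples
{
    public void Configure(IApplicationBuilder app)
    {
        app.Use(async (context, next) =>
        {
            // Lax: sent on same-site requests and on top-level GET navigations from other sites.
            context.Response.Cookies.Append("session", "abc123",
                new CookieOptions { SameSite = SameSiteMode.Lax, Secure = true });

            // Strict: only sent on requests that originate from the same site.
            context.Response.Cookies.Append("antiforgery", "xyz789",
                new CookieOptions { SameSite = SameSiteMode.Strict, Secure = true });

            await next();
        });
    }
}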
Google is now updating the standard and implementing their proposed changes in an upcoming version of Chrome. The change adds a new SameSite value, "None", and changes the default behavior to "Lax". This breaks OIDC logins, and potentially other features your web site may rely on; these features will have to use cookies whose SameSite property is set to a value of "None". However, browsers which adhere to the original standard and are unaware of the new value behave differently from browsers which use the new standard, because the original standard states that if a browser sees a SameSite value it does not understand, it should treat that value as "Strict". This means your .NET web site will now have to add user agent sniffing to decide whether to send the new None value or not send the attribute at all.
.NET will issue updates to change the behavior of its SameSite attribute in .NET Framework 4.7.2 and in .NET Core 2.1 and above to reflect Google's introduction of the new value. The updates for the .NET Framework will be available on November 19th as an optional update via Microsoft Update and WSUS if you use the "Check for Update" functionality. On December 10th the update will become widely available and appear in Microsoft Update without you having to specifically check for updates. .NET Core updates will be available with .NET Core 3.1, starting with preview 1 in November.
.NET Core 3.1 will contain an updated enum definition, SameSiteMode.Unspecified, which will not set the SameSite property.
The OIDC middleware for Katana v4 and .NET Core will be updated at the same time as the .NET Framework and .NET Core updates; however, we cannot introduce the user agent sniffing code into the framework, so this must be implemented in your site code. The implementation of agent sniffing will vary according to what version of ASP.NET or ASP.NET Core you are using and the browsers you wish to support.
For ASP.NET 4.7.2 with Katana, agent sniffing should be implemented in an implementation of ICookieManager:
public class SameSiteCookieManager : ICookieManager
{
    private readonly ICookieManager _innerManager;

    public SameSiteCookieManager() : this(new CookieManager())
    {
    }

    public SameSiteCookieManager(ICookieManager innerManager)
    {
        _innerManager = innerManager;
    }

    public void AppendResponseCookie(IOwinContext context, string key, string value,
                                     CookieOptions options)
    {
        CheckSameSite(context, options);
        _innerManager.AppendResponseCookie(context, key, value, options);
    }

    public void DeleteCookie(IOwinContext context, string key, CookieOptions options)
    {
        CheckSameSite(context, options);
        _innerManager.DeleteCookie(context, key, options);
    }

    public string GetRequestCookie(IOwinContext context, string key)
    {
        return _innerManager.GetRequestCookie(context, key);
    }

    private void CheckSameSite(IOwinContext context, CookieOptions options)
    {
        if (DisallowsSameSiteNone(context) && options.SameSite == SameSiteMode.None)
        {
            options.SameSite = null;
        }
    }

    public static bool DisallowsSameSiteNone(IOwinContext context)
    {
        // TODO: Use your User Agent library of choice here.
        var userAgent = context.Request.Headers["User-Agent"];
        return userAgent.Contains("BrokenUserAgent") ||
               userAgent.Contains("BrokenUserAgent2");
    }
}
And then configure the OIDC settings to use the new CookieManager:
app.UseOpenIdConnectAuthentication(
    new OpenIdConnectAuthenticationOptions
    {
        // … Your preexisting options …
        CookieManager = new SameSiteCookieManager(new SystemWebCookieManager())
    });
For ASP.NET Core, you should implement the sniffing code within a cookie policy:
private void CheckSameSite(HttpContext httpContext, CookieOptions options)
{
    if (options.SameSite > SameSiteMode.Unspecified)
    {
        var userAgent = httpContext.Request.Headers["User-Agent"].ToString();
        // TODO: Use your User Agent library of choice here.
        if (DisallowsSameSiteNone(userAgent))
        {
            // For .NET Core < 3.1, set SameSite = (SameSiteMode)(-1)
            options.SameSite = SameSiteMode.Unspecified;
        }
    }
}

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.MinimumSameSitePolicy = SameSiteMode.Unspecified;
        options.OnAppendCookie = cookieContext =>
            CheckSameSite(cookieContext.Context, cookieContext.CookieOptions);
        options.OnDeleteCookie = cookieContext =>
            CheckSameSite(cookieContext.Context, cookieContext.CookieOptions);
    });
}

public void Configure(IApplicationBuilder app)
{
    app.UseCookiePolicy(); // Before UseAuthentication or anything else that writes cookies.
    app.UseAuthentication();
    // …
}
Under testing with the Azure Active Directory team we have found the following checks work for all the common user agents that Azure Active Directory sees that don’t understand the new value.
public static bool DisallowsSameSiteNone(string userAgent)
{
    // Cover all iOS based browsers here. This includes:
    // - Safari on iOS 12 for iPhone, iPod Touch, iPad
    // - WkWebview on iOS 12 for iPhone, iPod Touch, iPad
    // - Chrome on iOS 12 for iPhone, iPod Touch, iPad
    // All of which are broken by SameSite=None, because they use the iOS networking stack
    if (userAgent.Contains("CPU iPhone OS 12") || userAgent.Contains("iPad; CPU OS 12"))
    {
        return true;
    }

    // Cover Mac OS X based browsers that use the Mac OS networking stack. This includes:
    // - Safari on Mac OS X.
    // This does not include:
    // - Chrome on Mac OS X
    // Because they do not use the Mac OS networking stack.
    if (userAgent.Contains("Macintosh; Intel Mac OS X 10_14") &&
        userAgent.Contains("Version/") && userAgent.Contains("Safari"))
    {
        return true;
    }

    // Cover Chrome 50-69, because some versions are broken by SameSite=None,
    // and none in this range require it.
    // Note: this covers some pre-Chromium Edge versions,
    // but pre-Chromium Edge does not require SameSite=None.
    if (userAgent.Contains("Chrome/5") || userAgent.Contains("Chrome/6"))
    {
        return true;
    }

    return false;
}
This browser list is by no means canonical and you should validate that the common browsers and other user agents your system supports behave as expected once the update is in place.
Chrome is scheduled to turn on the new behavior in February or March 2020, with a temporary mitigation in Chrome 79 Beta. If you want to test against known breaking browsers, older versions of Chromium are available for download; Chromium 76 and Chromium 74 will both exhibit behavior that is incompatible with the new standard.
If you cannot update your framework versions by the time Chrome turns on the new behavior in early 2020, you may be able to change your OIDC flow to a code flow, rather than the default implicit flow that ASP.NET and ASP.NET Core use, but this should be viewed as a temporary measure.
We strongly encourage you to download the updated .NET Framework and .NET Core versions and start planning your update now, before Chrome’s changes are rolled out.
It is the fall conference season, which means that this blog may be brought to you from a different geographical location every week. This week I had the privilege of speaking at All Things Open, and a chance to visit our Raleigh, NC office for the first time ever. Are you participating in any fun events this fall?
To mix things up, this week’s newsletter is featuring a few videos, but we will start with some blogs.
Azure DevOps Migration Tools
Azure DevOps Migration Tools is a community project building tools that allow you to migrate Teams, Backlogs, Tasks, Test Cases, and Plans & Suites from one Project to another in Azure DevOps / TFS, both within the same Organization and between Organizations. It's been very useful to people in this community who are working with large organizations. The new version v8.3.0 came out a few days ago! It supports restarting the migration and migrating work items between Team Projects. Huge thanks to all the project contributors!
Azure DevOps – how to package a simple DLL?
Many organizations are starting to move towards internal open source, but the most common way of sharing code across the organization is still via shared libraries. This post from Antti K. Koskela shows a YAML pipeline for building a NuGet package and pushing it to a NuGet feed. Needless to say, the package feed could be hosted on Azure Artifacts. Thank you, Antti!
Tasktop Integration Hub – ServiceNow to Azure DevOps
In many cases, software development bottlenecks are caused by process and communication issues, rather than technical challenges. Tasktop is a product that helps integrate enterprise software delivery tools with operations management tools. This short video features an integration between ServiceNow and Azure DevOps using Tasktop. The Tasktop Integration Hub provides a two-way sync between ServiceNow and Work Items in Azure Boards, automatically synchronizing IDs, progress statuses, comments, attachments, and other information. So much less process overhead!
Fortify on Demand – New Azure DevOps Features and Functionality
Application security is top of mind for tech leads and executives alike. Fortify on Demand is a Micro Focus product that offers application security as a service, providing a range of security assessments. This video walks through the integration between Azure DevOps and Fortify on Demand (FOD), kicking off an FOD scan from an Azure Pipeline and verifying that the FOD policy was met before the Build can pass.
Three things to keep in mind when using Azure DevOps Pipelines
When introducing team members to new technical tools, we often focus on step by step instructions and forget the bigger picture. This video from Matthew Shiroma at Nebulaworks dives into three important concepts for setting up Continuous Integration and Continuous Delivery using Azure Pipelines – variable scopes, CI/CD triggers, and Task Groups. Thanks for a great conceptual overview, Matthew!
If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!
I thought this was an interesting and subtle bug behavior that was not only hard to track down but hard to pin down. I wasn't sure 'whose fault it was.'
Here's the story. Feel free to follow along and see what you get.
using System;
using Humanizer; // ToWords() is a Humanizer extension method

namespace dotnetlocaletest
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine(3501.ToWords());
        }
    }
}
You can see that I want the app to print out the number 3501 as words. Presumably in English, as that's my primary language, but you'll note I haven't indicated that here. Let's run it.
Note that this app works great and as expected on Windows.
scott@IRONHEART:~/dotnetlocaletest$ dotnet run
3501
Huh. It didn't even try. That's weird.
My Windows machine is en-us (English in the USA) but what's my Ubuntu machine?
Looks like it's nothing. It's "C.UTF-8" and it's nothing. C in this context means the POSIX default locale. It's the most basic. C.UTF-8 is definitely NOT the same as en_US.utf8. It's a locale of sorts, but it's not a place.
Fortunately Humanizer 2.7.2 and above has fixed this issue and falls back correctly. Whose "bug" was it? Tough one, but in this case, Humanizer had some flawed fallback logic. I updated to 2.7.2 and now C.UTF-8 falls back to a neutral English.
That said, I think it could be argued that WSL/Canonical/Ubuntu should detect my local language and/or set the locale to it on installation.
The lesson here is that your applications - especially ones that are expected to work in multiple locales in multiple languages - take "input" from a lot of different places. Phrased differently, not all input comes from the user.
System locale and language, time, timezone, and dates are all input as ambient context to your application. Make sure you assert your assumptions about what "default" is. In this case, my little app worked great on en-US but not on "C.UTF-8." I was able to explore the behavior and learn that there was both a local workaround (I could detect and set a default locale if needed) and a library fix available as well.
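As a sketch of that local workaround (the invariant-culture check and the en-US fallback are my assumptions, not something the original post prescribes), you could assert a default culture at startup before calling ToWords():
using System;
using System.Globalization;
using Humanizer;

class Program
{
    static void Main(string[] args)
    {
        // Under a locale like C.UTF-8, .NET Core may report the invariant culture (empty name).
        if (string.IsNullOrEmpty(CultureInfo.CurrentCulture.Name))
        {
            // Assumption: en-US is an acceptable default for this app.
            var fallback = new CultureInfo("en-US");
            CultureInfo.CurrentCulture = fallback;
            CultureInfo.CurrentUICulture = fallback;
        }

        Console.WriteLine(3501.ToWords());
    }
}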
Assert your assumptions!
Run these commands from PowerShell or PowerShell Core. I recommend PowerShell 6.2.3 or above. You can also use PowerShell on Linux, so be aware. When you run Install-Module for the first time you'll get a warning that you're downloading and installing stuff from the internet, so follow the prompts appropriately.
There are a number of choices for Powerline or Powerline-like prompts for Ubuntu. I like Powerline-Go for its easy defaults.
I just installed Go, then installed powerline-go with go get.
sudo apt install golang-go
go get -u github.com/justjanne/powerline-go
Add this to your ~/.bashrc. You may already have a GOPATH so be aware.
GOPATH=$HOME/go
function _update_ps1() {
    PS1="$($GOPATH/bin/powerline-go -error $?)"
}
if [ "$TERM" != "linux" ] && [ -f "$GOPATH/bin/powerline-go" ]; then
    PROMPT_COMMAND="_update_ps1; $PROMPT_COMMAND"
fi
GOTCHA: If you are using WSL2, it'll be lightning fast with git prompts if your source code is in your Ubuntu/Linux mount, somewhere under ~/. However, if your source is under /mnt/c or anywhere under /mnt, the git calls being made to populate the prompt are super slow. Be warned. Do your Linux source code/git work in the Linux filesystem for speed until WSL2 makes the file system faster under /mnt.
At this point your Ubuntu/WSL prompt will look awesome as well!
Fonts look weird? Uh oh!
Step Three - Get a better font
If you do all this and you see squares and goofy symbols, it's likely that the font you're using doesn't have the advanced Powerline glyphs. Those glyphs are the ones that make this prompt look so cool!
Then from within Windows Terminal, hit "Ctrl+," to edit your profile.json and change the "fontFace" of your profile or profiles to this:
"fontFace": "DelugiaCode NF",
And that's it!
Remember also you can get lots of Nerd Fonts at https://www.nerdfonts.com/, just make sure you get one (or generate one!) that includes the PowerLine Glyphs.
Have fun!
Today, Microsoft becomes the first cloud with a fully managed, first-party service to ingest, persist, and manage healthcare data in the native FHIR format. The Azure API for FHIR® is released today in general availability to all Azure customers.
The core mission in healthcare is to deliver better health outcomes, and the data standard fueling the future of that mission is FHIR. Fast Healthcare Interoperability Resources (FHIR) has revolutionized the industry in the last several years and is rapidly becoming established as the preferred standard for exchanging and managing healthcare information in electronic format. Microsoft understands the unique value FHIR offers to enable management of Protected Health Information (PHI) in the cloud, so we're advancing Azure technology to give our health customers the ability to ingest, manage, and persist PHI data across the Azure environment in the native FHIR format.
With the Azure API for FHIR, a developer, researcher, device maker, or anyone working with health data is empowered with a turnkey platform to provision a cloud-based FHIR service in just minutes and begin securely managing PHI data in Azure. We've simplified FHIR through this new Platform-as-a-Service (PaaS) so customers can free up their operational resources and focus their development efforts on lighting up analytics, machine learning, and actionable intelligence across their health data.
Aridhia and Great Ormond Street Hospital (GOSH) in London, UK are leaders in the healthcare industry who are already leveraging FHIR in the Azure cloud to power their Digital Research Environment (DRE), serving both historic and current patient record data:
“We now have a unified API as a basis for designing, testing, and deploying the next generation of machine learning and digital services in the hospital for our young patients. This will also enable rapid and easier collaboration with our international pediatric hospital partners to share specialised tools to improve patient outcomes and experience," said Professor Neil Sebire, Chief Research Information Officer at GOSH.
“Partnering with Microsoft on the Azure API for FHIR allows us to scale out and accelerate our customers’ use of SMART on FHIR. The managed service is a great additional component in the Aridhia DRE platform, bringing research and innovation closer to clinical impact,” added Rodrigo Barnes, CTO at Aridhia.
Managed FHIR service in the cloud
Normalizing health data in the FHIR format allows you to leverage the power of an open source standard that evolves with the science of healthcare. The FHIR standard is designed precisely for health data flows, so it allows for data interoperability now and sets your ecosystem up for the future as the science of medicine evolves. Blending a variety of data sets through a FHIR service ushers in powerful opportunities for accelerated machine learning development. As you develop and implement research and efficiency models for your system, data output can be securely and easily exchanged with any application interface that works with FHIR API.
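As a rough illustration (the service URL, resource id, and token handling below are placeholders, not details from this announcement), exchanging data with the FHIR API from .NET can be as simple as a REST call:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class FhirReadSketch
{
    static async Task Main()
    {
        // Placeholder endpoint for an Azure API for FHIR instance.
        using var client = new HttpClient { BaseAddress = new Uri("https://contoso-fhir.azurehealthcareapis.com/") };
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/fhir+json"));
        // Placeholder: acquire an Azure AD access token for the FHIR service out of band.
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<access-token>");

        // Read a single Patient resource by its logical id.
        var response = await client.GetAsync("Patient/example-patient-id");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}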
Using the Azure API for FHIR brings your team all the benefits of the cloud – paying only for what you use, delivering low latency and high performance, and providing on-demand, scalable machine learning tools with built in controls for security and intelligence.
Key features of the Azure API for FHIR include:
• Provision and start running an enterprise-grade, managed FHIR service in just a few minutes
• Support for R3 and R4 of the FHIR Standard
• Role Based Access Control (RBAC) – allowing you to manage access to your data at scale
• Audit log tracking for access, creation, modification, and reads within each data store
• Secure compliance in the cloud: ISO 27001:2013 certified, supports HIPAA and GDPR, and built on the HITRUST-certified Azure platform
• Global Availability and Protection of your data with multi-region failover
• SMART on FHIR functionality
Security for PHI data in the cloud
The cloud environment you choose to manage your Protected Health Information (PHI) matters. Microsoft runs on trust.
We've built the Azure API for FHIR so your data is isolated and protected with layered, in-depth defense and advanced threat protection according to the most stringent industry compliance standards. Azure covers 90+ compliance offerings, including International Organization for Standardization (ISO 27001:2013) and the Health Insurance Portability and Accountability Act (HIPAA). You can be confident that the Azure API for FHIR will enable persistence, security, and exchange of PHI data in a private and compliant pipeline.
“Humana is using Microsoft’s Azure API for FHIR to enable care team access to our members’ digital health records in a universal language and that is guarded by always-on security. By providing access to members’ records, Humana can focus on supporting doctors, nurses, and clinicians and helping our members experience their best lives.” – Marc Willard, VP, Humana
“Using Azure API for FHIR allows us to focus on designing People Compatible™ solutions for healthcare organization of all sizes in this dynamic regulatory environment, with less worrying about security and scalability.” – Pawan Jindal, Founder & President, Darena Solutions
Building the foundations of artificial intelligence in healthcare
While we’re excited to light our cloud on FHIR, we’re even more excited about the foundations FHIR is forging for the future of machine learning and life sciences in healthcare. We’re actively engaging with a broad set of customers who are pioneering new innovation with FHIR. Whether you’re improving operational efficiency across your ecosystem, need a new secure FHIR-based data store, or want to create richer datasets for research and innovation, the future of health data in the cloud is here, and it’s on FHIR.
Welcome back to another release of the unified Azure Data client libraries. For the most part, the API surface areas of the SDKs have been stabilized based on your feedback. Thank you to everyone who has been submitting issues on GitHub and keep the feedback coming.
Please grab the October preview libraries and try them out—throw demanding performance scenarios at them, integrate them with other services, try to debug an issue, or generally build your scenario and let us know what you find.
Our goal is to release these libraries before the end of the year, but we are driven by quality and feedback, and your participation is key.
Getting started
As we did for the last three releases, we have created four pages that unify all the key information you need to get started and give feedback. You can find them here:
For those of you who want to dive deep into the content, the release notes linked above and the changelogs they point to give more details on what has changed. Here we are calling out a few high-level items.
APIs locking down
The surface areas for the Azure Key Vault and Storage libraries are nearly API-complete based on the feedback you've given us so far. Thanks again to everyone who has sent feedback, and if anyone has been waiting to try things out and give feedback, now is the time.
Batch API support in Storage
You can now use batching APIs with the SDKs for Storage to handle manipulating large numbers of items in parallel. In Java and .NET you will find a new batching library package in the release notes while in JavaScript and Python the feature is in the core library.
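For instance, a minimal .NET sketch of a batched delete might look like the following; it assumes the preview Azure.Storage.Blobs.Batch package, and the connection string and blob URIs are placeholders.
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

class BlobBatchSketch
{
    static void Main()
    {
        // Placeholder connection string.
        var service = new BlobServiceClient("<connection-string>");
        BlobBatchClient batch = service.GetBlobBatchClient();

        Uri[] blobsToDelete =
        {
            new Uri("https://myaccount.blob.core.windows.net/logs/log-0001.txt"),
            new Uri("https://myaccount.blob.core.windows.net/logs/log-0002.txt"),
            new Uri("https://myaccount.blob.core.windows.net/logs/log-0003.txt"),
        };

        // A single request to the service deletes all three blobs.
        batch.DeleteBlobs(blobsToDelete);
    }
}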
Unified credentials
The Azure SDKs that depend on Azure Identity make getting credentials for services much easier.
Each library supports the concept of a DefaultAzureCredential and depending on where your code runs, it will select the right credential for logging in. For example, if you’re writing code and have signed into Visual Studio or performed an az login from the CLI, the client libraries can automatically pick up the sign-in token from those tools. When you move the code to a service environment, it will attempt to use a managed identity if one is available. See the language specific READMEs for Azure Identity for more.
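Here is a small sketch of that pattern with the new Blob Storage client (the account URL is a placeholder): the same code works locally against your developer sign-in and in Azure against a managed identity.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

class DefaultCredentialSketch
{
    static void Main()
    {
        // Locally this picks up your Visual Studio or `az login` sign-in;
        // in a service environment it falls back to a managed identity when one is available.
        var credential = new DefaultAzureCredential();

        var serviceClient = new BlobServiceClient(
            new Uri("https://myaccount.blob.core.windows.net/"), // placeholder account URL
            credential);

        foreach (var container in serviceClient.GetBlobContainers())
        {
            Console.WriteLine(container.Name);
        }
    }
}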
Working with us and giving feedback
So far, the community has filed hundreds of issues against these new SDKs with feedback ranging from documentation issues to API surface area change requests to pointing out failure cases. Please keep that coming. We work in the open on GitHub and you can submit issues here:
In addition, we're excited to say we'll be attending Microsoft Ignite 2019, so please come and talk to us in person. Finally, please tweet at us at @AzureSdk.
This post was co-authored by Jamie Reding, Senior Program Manager, Sadashivan Krishnamurthy, Principal Architect, and Bob Ward, Principal Architect.
Today, most applications are running online transactional processing (OLTP) transactions. Online banking, purchasing a book online, booking an airline ticket, sending a text message, and telemarketing are examples of OLTP workloads. OLTP workloads involve inserting, updating, and/or deleting small amounts of data in a database and mainly deal with large numbers of transactions from large numbers of users. The majority of OLTP workloads are read heavy, use diverse transactions, and utilize a wide range of data types.
Azure brings many price-performance advantages for your workloads with SQL Server on Azure Virtual Machines (VMs) and a wide range of Azure Virtual Machine series and Azure disk options. Memory-optimized VM series like the Intel-based Es_v3 series or the AMD-based Eas_v3 series offer a high virtual CPU (vCPU) to memory ratio at a very low cost. Constrained-vCPU capable VM sizes reduce the cost of SQL Server licensing by constraining the vCPUs available to the VM while maintaining the same memory, storage, and input/output (I/O) bandwidth. Premium Solid State Drives (SSDs) deliver high-performance, low-latency managed disks with the high IOPS and throughput capabilities needed for SQL Server data and log files. Standard SSDs, cost-effective storage options optimized for consistent performance, are an optimal destination for most SQL Server backup files.
In addition to the large IOPS capacity of Premium disks, Azure BlobCache is a huge value for mission-critical OLTP workloads as it brings significant additional high-performance I/O capacity to an Azure Virtual Machine for free. BlobCache is a multi-tier caching technology enabled by combining the VM RAM and local SSD. You can host SQL Server data files on premium SSD managed disks with read-only BlobCache and leverage extremely high-performance read I/Os that exceed the underlying disk's capabilities. High-scale VMs come with very large BlobCache sizes that can host all the data files for most applications. As all I/O activity from the BlobCache is free, you can boost application throughput with extremely high-performance reads and optimize price-performance by only paying for the writes. Considering that the majority of OLTP workloads today come with a 10 to 1 read-to-write ratio, this is up to a 90 percent price-performance gain.
Additionally, for workloads demanding very low I/O latency, Azure ultra-disks deliver consistent low latency disk storage at high throughput and high IOPS levels. Ultra-disks maximize application throughput if the workload was bottlenecked on I/O latencies.
Based on the read-to-write ratio, transaction complexity, and scale pattern, you may choose to use TPC-E or TPC-C for performance measurements. In general, TPC-E represents the majority of OLTP workloads these days as it includes complex transactions and a high read-to-write ratio. If you have write-intensive workloads running simple transactions, then you can leverage the simplicity of the TPC-C benchmark for performance validation. For detailed testing of SQL Server performance on Azure Virtual Machines with a scaled-down TPC-E workload and the HammerDB TPC-C kit, please see this article.
Get started with SQL Server in Azure Virtual Machines
Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 19002 or greater). The Preview SDK Build 19002 contains bug fixes and under development changes to the API surface area.
This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
Message Compiler (mc.exe)
Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. Otherwise, if the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).
Windows Trace Preprocessor (tracewpp.exe)
Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.
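For example, a minimal, illustrative invocation that forces UTF-8 output simply passes the new code-page switch; a real invocation will typically also supply configuration-directory and output-directory options appropriate to your build:
tracewpp.exe -cp:UTF-8 MyTraceSource.c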
TraceLoggingProvider.h
Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.
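For context, here is a minimal sketch of how the TraceLoggingProvider.h helpers are typically used; the provider name and GUID below are placeholders for illustration, not values from this release:
#include <windows.h>
#include <TraceLoggingProvider.h>

// Hypothetical provider name and GUID, for illustration only.
TRACELOGGING_DEFINE_PROVIDER(
    g_hMyProvider,
    "Contoso.SampleProvider",
    (0x3970f9cf, 0x2c0c, 0x4f11, 0xb1, 0xcc, 0xe3, 0xa1, 0xe9, 0x95, 0x88, 0x33));

int main()
{
    TraceLoggingRegister(g_hMyProvider);

    // TraceLoggingWrite takes a variable number of field macros;
    // in C++ these are now expanded through variadic templates.
    TraceLoggingWrite(
        g_hMyProvider,
        "SampleEvent",
        TraceLoggingValue(42, "answer"),
        TraceLoggingWideString(L"hello", "greeting"));

    TraceLoggingUnregister(g_hMyProvider);
    return 0;
}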
Signing your apps with Device Guard Signing
We are making it easier for you to sign your app. Device Guard signing is a Device Guard feature that is available in Microsoft Store for Business and Education. Signing allows enterprises to guarantee every app comes from a trusted source. Our goal is to make signing your MSIX package easier. Documentation on Device Guard Signing can be found here: https://docs.microsoft.com/en-us/windows/msix/package/signing-package-device-guard-signing
Windows SDK Flight NuGet Feed
We have stood up a NuGet feed for the flighted builds of the SDK. You can now test preliminary builds of the Windows 10 WinRT API Pack, as well as a microsoft.windows.sdk.headless.contracts NuGet package.
We use the following feed to flight our NuGet packages.
Microsoft.Windows.SDK.Contracts, which can be used to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.
The Windows 10 WinRT API Pack enables you to add the latest Windows Runtime APIs support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.
Microsoft.Windows.SDK.Headless.Contracts provides a subset of the Windows Runtime APIs for console apps, excluding the APIs associated with a graphical user interface.
Removal of api-ms-win-net-isolation-l1-1-0.lib
In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.
Removal of IRPROPS.LIB
In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.
Removal of WUAPICommon.H and WUAPICommon.IDL
In this release we have moved ENUM tagServerSelection from WUAPICommon.H to wuapi.h and removed the WUAPICommon.H header. If you would like to use the ENUM tagServerSelection, you will need to include wuapi.h or wuapi.idl.
Providing our customers with choice and flexibility is central to our mission around blockchain in Azure. Today, we are pleased to announce that we're bringing managed Corda Enterprise to Azure Blockchain Service.
The road to Corda Enterprise on Azure as a managed service
In 2016, Microsoft and R3 worked together to bring Corda Enterprise to Azure as a virtual machine image in the Azure Marketplace.
In 2017, the relationship matured to a partnership, and in the subsequent years we’ve worked closely with customers, consortiums, and independent software vendors (ISVs) to help them bring Corda-based solutions to Azure. Working together with our customers and partners, we’ve seen the launch of multiple Corda consortiums on Azure, from Insurwave’s launch in 2018 to the recent September 2019 announcement of TradeIX’s launch of the Marco Polo Network on Azure.
As customers were building end to end solutions, one of the big requests was to make integrating Corda with enterprise data, systems, and Software as a Service (SaaS) easier. Earlier this year, we released the Corda Logic App and Flow Connectors that brought 30 years of Microsoft enterprise integration experience to Corda. With Flow and PowerApps, it also became possible for citizen developers to build low-code or no-code web and mobile apps for Corda.
However, the biggest request we had from customers was for Corda to be released as a managed service in Azure: specifically, a Platform as a Service (PaaS) offering that would set up Corda nodes to connect with the appropriate Corda network, manage node health, and update both the nodes and the underlying software.
Today at CordaCon, we’re pleased to share that customers can now sign up for the preview of Corda Enterprise on Azure Blockchain Service.
Simple Corda node deployment
Corda on Azure Blockchain Service provides you with the ability to choose where to provision and host nodes, either on the Corda Network (Livenet, Testnet, UAT) or a private Corda network.
For the preview, Azure Blockchain Service supports the latest Corda Enterprise version (currently 4.x). In addition to provisioning the node, Azure Blockchain Service automatically connects the Corda node to the appropriate network based on your Azure Blockchain Service configuration. Because Corda is part of Azure Blockchain Service, you can configure and deploy a Corda node within the Azure portal or programmatically through REST APIs, the CLI, or PowerShell. This dramatically simplifies Corda node deployment and connection.
Managed Corda nodes and Corda Distributed Applications
In addition to provisioning and deploying Corda nodes, Azure Blockchain Service provides managed APIs to help you manage your Corda nodes and Corda Distributed Applications (CorDapps). With Corda node management, you’ll be able to control access to your node, scale the node up or down, and drive flow draining. With CorDapp management, you’ll be able to easily add, manage, and version your CorDapps on your node.
Integrated node and CorDapp health, monitoring, and logging
Corda on Azure Blockchain Service leverages Azure Monitor, making it easier to access Corda node and CorDapp health, monitoring, and logging information. With Azure Monitor, you're able to customize alerts and actions based on logs and events. With all Corda and CorDapp logs at your fingertips, you're able to create custom visualizations and dashboards based on the health and monitoring data.
Next steps
If you are building a solution on Corda Enterprise and are interested in joining the preview, please fill out the following form.
For those of you at CordaCon this week who would like to learn more, please come visit us at our booth or attend our Fully Managed Corda Enterprise with Azure Blockchain Service session on October 24th to speak with members of the Azure Blockchain team.
The Azure Repos app for Microsoft Teams allows users to monitor their repositories and branches from within Teams channels. Users can set up and manage subscriptions to get notifications in their channels whenever code is pushed or checked in, pull requests (PRs) are created or updated, and so on. Subscription filters let users customize what they want to be notified about in the channel. The messaging extension can be used to search for and share pull requests with other members of the channel, and previews can be generated from pull request URLs to help initiate discussions around PRs and keep the conversations contextual.
Get notified when code is pushed to a repository or PR is created
Manage subscriptions from your Microsoft Teams channel
Use pull request URLs to initiate discussions around PRs
We’re constantly at work to improve the app, and soon you’ll see new features coming along, including the ability to create bulk subscriptions for all the repositories in a project. Please give the app a try and send us your feedback using the @azure repos feedback command in the app or on Developer Community.
This post was co-authored by Tina Coll, Sr Product Marketing Manager, Azure Cognitive Services.
Innovate at no cost to you, with out-of-the-box AI services that are newly available for Azure free account users. Join the 1.3 million developers who have been using Cognitive Services to build AI-powered apps to date. With the broadest offering of AI services in the market, Azure Cognitive Services can unlock AI for more scenarios than other cloud providers. Give your apps, websites, and bots the ability to see, understand, and interpret people's needs using natural methods of communication; all it takes is an API call. Businesses in various industries have transformed how they operate using the very same Cognitive Services now available to you with an Azure free account.
These examples are just a small handful of what you can make possible with these services:
Improve app security with face detection: With Face API, detect and compare human faces. See how Uber uses Face API to authenticate drivers.
Automatically extract text and detect languages: Easily and accurately detect the language of any text string, simplifying development processes and allowing you to quickly translate and serve localized content. Learn how Chevron applied Form Recognizer for robotic process automation, quickly extracting text from documents.
Personalize your business’ homepage: Use Personalizer to deliver the most relevant content and experiences to each user on your homepage.
Develop your own computer vision model in minutes: Use your own images to teach Custom Vision Service the concepts you want it to learn and build your own model. Find out how Minsur, the largest tin mine in the western hemisphere, harnesses Custom Vision for sustainable mining practices.
Create inclusive apps: With Computer Vision and Immersive Reader, your camera becomes an inclusive tool that turns pictures into spoken words for low vision users.
Build conversational experiences for your customers: Give your bot the ability to interact with your users with Azure Cognitive Services. See how LaLiga, the Spanish men’s soccer league, engages hundreds of millions of fans with its chatbot using LUIS, QnAMaker, and more.
We are pleased to announce AddressSanitizer (ASan) support for the MSVC toolset. ASan is a fast memory error detector that can find runtime memory issues such as use-after-free errors and out-of-bounds accesses. Support for sanitizers has been one of our more popular suggestions on Developer Community, and we can now say that we have an experience for ASan on Windows, in addition to our existing support for Linux projects.
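To make that concrete, here's a minimal sketch of the kind of bugs ASan is designed to flag at runtime (this snippet is illustrative and not taken from the Visual Studio samples):
#include <cstdlib>

int main()
{
    int* buffer = static_cast<int*>(std::malloc(10 * sizeof(int)));
    buffer[10] = 1;          // heap-buffer-overflow: writes one element past the end
    std::free(buffer);
    int stale = buffer[0];   // heap-use-after-free: reads memory that was just freed
    return stale;            // ASan halts and reports on the first violation it detects
}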
At Microsoft, we believe that developers should be able to use the tools of their choice, and we are glad to collaborate with the LLVM team to introduce more of their tooling into Visual Studio. In the past, we have also incorporated tools like clang-format. MSVC support for ASan is available in our second Preview release of Visual Studio 2019 version 16.4.
To bring ASan to Windows and MSVC, we made the following changes:
The ASan runtime has been updated to better target Windows binaries
The MSVC compiler can now instrument binaries with ASan
CMake and MSBuild integrations have been updated to support enabling ASan
The Visual Studio debugger now can detect ASan errors in Windows binaries
ASan can be installed from the Visual Studio installer for the C++ Desktop workload
When you’re debugging your ASan-instrumented binary in Visual Studio, the IDE Exception Helper will be displayed when an issue is encountered, and program execution will stop. You can also view detailed ASan logging information in the Output window.
Installing ASan support for Windows
ASan is included with the C++ Desktop workload by default for new installations. However, if you are upgrading from an older version of Visual Studio 2019, you will need to enable ASan support after the upgrade: click Modify on your existing Visual Studio installation in the Visual Studio Installer, then check the ASan component under the Desktop development with C++ workload.
Note: if you run Visual Studio on the new update but have not installed ASan, you will get an error when you run ASan-instrumented code.
Turning on ASan for MSBuild projects
You can turn on ASan for an MSBuild project by right-clicking on the project in Solution Explorer, choosing Properties, navigating under C/C++ > General, and changing the Enable Address Sanitizer (Experimental) option. The same approach can be used to enable ASan for MSBuild Linux projects.
Note: Right now, this will only work for x86 Release targets, though we will be expanding to more architectures in the future.
Turning on ASan for Windows CMake projects
To enable ASan for CMake projects targeting Windows, do the following:
Open the Configurations dropdown at the top of the IDE and click on Manage Configurations. This will open the CMake Project Settings UI, which is saved in a CMakeSettings.json file.
Click the Edit JSON link in the UI. This will switch the view to raw .json.
Under the x86-Release configuration, add the following property:
"addressSanitizerEnabled": true
You may get a green squiggle under the line above with the following warning: Property name is not allowed by the schema. This is a bug that will be fixed shortly – the property will in fact work.
After the change, the relevant section of the CMakeSettings.json file includes the new property.
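As a rough sketch (the generator, build root, and other values below are just typical defaults and may differ in your project), the x86-Release entry might look like this:
{
  "configurations": [
    {
      "name": "x86-Release",
      "generator": "Ninja",
      "configurationType": "RelWithDebInfo",
      "inheritEnvironments": [ "msvc_x86" ],
      "buildRoot": "${projectDir}\\out\\build\\${name}",
      "addressSanitizerEnabled": true
    }
  ]
}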
We will further simplify the process for enabling ASan in CMake projects in a future update.
Contributions to ASan Runtime
To enable a great Windows experience, we decided to contribute to the LLVM compiler-rt project and reuse their runtime in our implementation of ASan. Our contributions to the ASan project include bug fixes and improved interception for HeapAlloc, RtlAllocateHeap, GlobalAlloc, and LocalAlloc, along with their corresponding Free, ReAllocate, and Size functions. Anyone can enable these features by adding the following to the ASAN_OPTIONS environment variable for either Clang or MSVC on Windows:
set ASAN_OPTIONS=windows_hook_rtl_allocators=true
Additional options can be added with a colon at the end of the line above.
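For example, to combine the allocator hook with the standard sanitizer verbosity option (verbosity is a generic ASan runtime flag, used here only to illustrate the colon-separated syntax):
set ASAN_OPTIONS=windows_hook_rtl_allocators=true:verbosity=1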
Changes to MSVC to enable ASan
To enable ASan, c1.dll and c2.dll have been modified to add instrumentation to programs at compile time. For a 32-bit address space, about 200 MB of memory is allocated to represent (or ‘shadow’) the entire address space. When an allocation is made, the shadow memory is modified to represent that the allocation is now valid to access. When the allocation is freed or goes out of scope, the shadow memory is modified to show that this allocation is no longer valid. Potentially dangerous memory accesses are checked against their entry in the shadow memory before the access happens, to verify that the memory is safe to access at that time. Violations are reported to the user as output to stderr or as an exception window in Visual Studio, and the AddressSanitizer algorithm enables error reports to show exactly where the problem occurred and what went wrong.
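As a conceptual sketch only, and not the actual runtime implementation, the standard AddressSanitizer check maps each 8-byte granule of application memory to one signed shadow byte and validates an access roughly like this (kShadowOffset stands in for the platform-specific constant the runtime actually uses):
#include <cstddef>
#include <cstdint>

// Illustrative value; the real shadow offset is chosen by the ASan runtime per platform.
constexpr uintptr_t kShadowOffset = 0x30000000;

// Conceptual check for accesses of at most 8 bytes.
bool IsAccessValid(uintptr_t address, size_t accessSize)
{
    // Every 8-byte granule of application memory maps to one signed shadow byte.
    int8_t shadow = *reinterpret_cast<int8_t*>((address >> 3) + kShadowOffset);
    if (shadow == 0)
        return true;  // the entire granule is addressable
    // A positive value k (1..7) means only the first k bytes of the granule are addressable;
    // negative values poison redzones around allocations and freed memory.
    int offsetInGranule = static_cast<int>(address & 7);
    return offsetInGranule + static_cast<int>(accessSize) <= shadow;
}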
Programs compiled with MSVC and ASan must also have the appropriate clang_rt.asan library linked for their target. Each library has a specific use case, and linking can be complicated if your program is complex.
Compiling with ASan from the console
For console compilation, you will have to link the ASan libraries manually. Here are the libraries required for an x86 target:
clang_rt.asan-i386.lib – static runtime, compatible with the /MT CRT.
clang_rt.asan_cxx-i386.lib – static runtime component which adds support for new and delete, also compatible with the /MT CRT.
clang_rt.asan_dynamic-i386.lib – dynamic import library, compatible with the /MD CRT.
clang_rt.asan_dynamic-i386.dll – dynamic runtime DLL, compatible with /MD.
clang_rt.asan_dynamic_runtime_thunk-i386.lib – dynamic library to import and intercept some /MD CRT functions manually.
clang_rt.asan_dll_thunk-i386.lib – import library which allows an ASan-instrumented DLL to use the static ASan library linked into the main executable. Compatible with the /MT CRT.
Once you have selected the correct ASan runtime to use, add /wholearchive:<library to link> to your link line and add the appropriate library to your executables. The clang_rt.asan_dynamic-i386.dll is not installed into System32, so when running you should make sure it is available in your environment's search path.
Some additional instructions:
When compiling a single static EXE: link the static runtime (clang_rt.asan-i386.lib) and the cxx library if it is needed.
When compiling an EXE with the /MT runtime which will use ASan-instrumented DLLs: the EXE needs to have clang_rt.asan-i386.lib linked and the DLLs need clang_rt.asan_dll_thunk-i386.lib. This allows the DLLs to use the runtime linked into the main executable and avoids a shadow memory collision problem.
When compiling with the /MD dynamic runtime: all EXEs and DLLs with instrumentation should be linked with copies of clang_rt.asan_dynamic-i386.lib and clang_rt.asan_dynamic_runtime_thunk-i386.lib. At runtime, these libraries will refer to the clang_rt.asan_dynamic-i386.dll shared ASan runtime.
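As a rough, hedged sketch of the simplest case above (a single statically linked x86 EXE built from an x86 developer command prompt), and assuming -fsanitize=address is the switch that enables instrumentation in your toolset version (check the documentation for your preview build), the command line might look like:
cl -fsanitize=address /Zi /MT main.cpp /link /wholearchive:clang_rt.asan-i386.lib /wholearchive:clang_rt.asan_cxx-i386.lib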
The ASan runtime libraries patch memory management functions at run-time and redirect executions to an ASan wrapper function which manages the shadow memory. This can be unstable if the runtime environment differs from what the libraries have been written to expect. Please submit any compile or run time errors which you encounter while using ASan via the feedback channels below!
Send us feedback!
Your feedback is key for us to deliver the best experience running ASan in Visual Studio. We’d love for you to try out the latest Preview version of Visual Studio 2019 version 16.4 and let us know how it’s working for you, either in the comments below or via email. If you encounter problems with the experience or have suggestions for improvement, please Report A Problem or reach out via Developer Community. You can also find us on Twitter @VisualC.
I've blogged about "Try .NET," which is a wonderful .NET Core global tool that lets you make interactive in-browser documentation and create workshops that can be run both online and locally (totally offline!)
Even better, you can just clone a Try .NET enabled repository with markdown files that have a few magic herbs and spices, then run "dotnet try" in that cloned folder.
What does this have to do with Polly, the lovely .NET resilience and transient fault handling library that YOU should be using every day? Well, my friends, check out this lovely bit of work by Bryan J Hogan! He's created some interactive workshop-style demos using Try .NET!
How easy is it to check out? Let's give it a try. I've already run dotnet tool install --global dotnet-try. You may need to run an update if you installed it a while back.
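In other words, the whole dance is roughly this (substitute whichever Try .NET enabled repository you want to clone):
dotnet tool update --global dotnet-try
git clone <url-of-a-try-dotnet-enabled-repo>
cd <cloned-folder>
dotnet try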
That's it. What does it do? It'll launch your browser to a local website powered by Try .NET that looks like this!
Sweet! Ah, but Dear Reader, scroll down! Let me try out one of the examples. You'll see a Monaco-based local text editor (the same editor that powers VS Code) and you're able to run - and modify - local code samples IN THE BROWSER!
Here's the code as text to make it more accessible.
int result = retryPolicy.Execute(() => errorProneCode.QueryTheDatabase());
Console.WriteLine($"Received a response of {result}.");
And the output appears below the sample, again, in a console within the browser:
System.Exception thrown, retrying 1.
System.InsufficientMemoryException thrown, retrying 2.
Received a response of 0.
You can see that Polly gives you a RetryPolicy that can envelop your code and handle things like transient errors, occasional flaky server responses, or whatever else you want it to do. It can be configured as a policy outside your code, or coded inline fluently like this.
NOTE the URL! See that it's a .MD or Markdown file? Try .NET has a special handler that reads in a regular markdown file and executes it. The result is an HTML representation of your Markdown *and* your sample, now executable!
What's the page/image above look like as Markdown? Like this:
# Polly Retries Part 2
### Retrying When an Exception Occurs
The Polly NuGet package has been added and we are going to use the Retry Policy when querying database.
The policy states that if an exception occurs, it will retry up to three times.
Note how you execute the unreliable code inside the policy. `retryPolicy.Execute(() => errorProneCode.QueryTheDatabase());`
#### Next: [Retrying Based on a Result »](./retryIfIncorrectStatus.md) Previous: [Before You Add Polly «](../lettingItFail.md)
Note the special ``` region. The code isn't inline, but rather it lives in a named region in Program.cs in a project in this same repository, neatly under the /src folder. The region is presented in the sample, but as samples are usually more complex and require additional libraries and such, the region name and project context are passed into your app as Try .NET executes it.
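For reference, such a fenced region looks something like this in the markdown source; the file, project, and region names here are illustrative, not Bryan's actual values:
``` cs --source-file ./src/Program.cs --project ./src/PollyDemo.csproj --region retryPolicyDemo
```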
Go check out some Try .NET enabled sample repositories. Just make sure you have the Try .NET global tool installed, then go clone and "dotnet try" any of these!
If you're doing classwork, teaching workshops, making assignments for homework, or even working in a low-bandwidth or remote environment this is great as you can put the repositories on a USB key and once they've run once they'll run offline!
Now, be inspired by (and star on GitHub) Bryan's great work and go make your own interactive .NET documentation!
Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!