
Support for large test attachments now available in Azure Pipelines


When running tests in a CI/CD pipeline, collecting diagnostic data such as screenshots, logs, and other attachments is very useful for troubleshooting failures. Azure Pipelines has always supported test attachments, and the documentation explains how to collect attachments in continuous testing.

With this update, Azure Pipelines supports test attachments larger than 100 MB, which means you can now upload big files such as crash dumps or videos along with failed tests to aid your troubleshooting.
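Test attachments are normally produced by the test run itself and uploaded by the VSTest or Publish Test Results tasks. As a related illustration (not specific to test-result attachments), a script step can also attach an arbitrary file to the pipeline run with a logging command; a minimal sketch, where the dump path, type, and name are placeholders:

# Attach a file produced during the run; "type" and "name" are free-form labels.
echo "##vso[task.addattachment type=CrashDump;name=worker-crash.dmp]/path/to/TestResults/worker-crash.dmp"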

This functionality is available for all our customers today.

Self-hosted agents

If you are using self-hosted build or release agents behind a firewall that filters outbound requests, you will need to make some configuration changes to use this functionality. You might see the VSTest task or the Publish Test Results task return a 403 or 407 error in the logs, as in the example below:

Large attachment Build error

To fix the issue above, we recommend updating your firewall rules to allow outbound requests to https://*.vstmrblob.vsassets.io. You can find troubleshooting information in our docs.

Note that the change above is only required if you’re using self-hosted Azure Pipelines agents and you’re behind a firewall that is filtering outbound traffic. Customers using Microsoft-hosted agents in the cloud or that aren’t filtering outbound network traffic do not need to take any action.

Feedback

Please reach out to our Azure DevOps feedback portal if you have any questions or feedback, or connect with us on Twitter at @AzureDevOps.

The post Support for large test attachments now available in Azure Pipelines appeared first on Azure DevOps Blog.


Azure IoT Dev Experience July Update: IoT Edge tooling GA and more!


Welcome to the July update of IoT Tooling!

In this release, we have added many features and improvements for our IoT developers!

General Availability of IoT Edge tooling

Azure IoT Edge was released in 2017. After nearly two years of continuous work on the tooling, we are happy to announce that Azure IoT Edge Tooling is now generally available. Azure IoT Edge Tooling includes:

ARM64 support in Azure IoT Edge extension for Visual Studio Code

With the release of IoT Edge 1.0.8, Azure IoT Edge is now supported on ARM64 IoT Edge devices. I'm also glad to share the steps for developing and debugging ARM64 IoT Edge custom modules in VS Code; for details, check out this blog post.

Containerized tool chain to simplify IoT device development

Earlier this month, we announced the preview of a new feature in the Azure IoT Tools extension for VS Code that simplifies acquiring the cross-compilation toolchain for developers working on embedded Linux devices (e.g. Debian, Ubuntu, Yocto Linux) with Azure IoT, by encapsulating the compilers, device SDK, and essential libraries in containers. All you need to do is install or upgrade IoT Device Workbench and start developing inside the container, just as you would in a local environment. For details, check out this blog post.

Try it out

Please don't hesitate to give it a try! We will continue to improve our IoT developer experience to empower every IoT developer on the planet to achieve more!

The post Azure IoT Dev Experience July Update: IoT Edge tooling GA and more! appeared first on The Visual Studio Blog.

Improved developer experience for Azure Blockchain development kit


As digital transformation expands beyond the walls of one company and into processes shared across organizations, businesses are looking to blockchain as a way to share workflow data and logic.

This spring we introduced Azure Blockchain Service, a fully-managed blockchain service that simplifies the formation, management, and governance of consortium blockchain networks. With a few simple clicks, users can create and deploy a permissioned blockchain network and manage consortium membership using an intuitive interface in the Azure portal.

To help developers building applications on the service, we also introduced our Azure Blockchain development kit for Ethereum. Delivered via Visual Studio Code, the dev kit runs on all major operating systems, and brings together the best of Microsoft and open source blockchain tooling, including deep integration with leading OSS tools from Truffle. These integrations enable developers to create, compile, test, and manage smart contract code before deploying it to a managed network in Azure.

We're constantly looking for and listening to feedback on areas where we can lean in and help developers go further, faster. This week, for TruffleCon, we're releasing some exciting new features that make it easier than ever to build blockchain applications:

  • Interactive debugger: Debugging Ethereum smart contracts has so far been a challenging effort. While there are some great command-line tools (e.g., the Truffle Debugger), they aren't integrated into integrated development environments (IDEs) like Visual Studio Code. Native integration of the Truffle Debugger into Visual Studio Code brings all the standard debugging features developers have come to rely on (e.g., breakpoints, step in/over/out, call stacks, watch windows, and IntelliSense pop-ups), letting developers quickly identify, debug, and resolve issues.
  • Auto-generated prototype UI: The dev kit now generates a UI that is rendered and activated inside Visual Studio Code. This lets developers interact with their deployed contracts directly in the IDE, without having to build a separate UI or custom software just to test basic contract functionality. Having a simple, GUI-driven way to exercise a contract inside the IDE, without writing code, is a huge productivity improvement.

Interactive contract UI in Visual Studio Code

With the addition of these new debugging capabilities, we are bringing all the major components of software development for smart contracts, including build, debug, test, and deploy, into the popular Visual Studio Code developer environment.

If you’re in Redmond, Washington this weekend, August 2-4, 2019, come by TruffleCon to meet the team or head to the Visual Studio Marketplace to try these new features today!

An update on disabling VBScript in Internet Explorer 11


In early 2017, we began the process of disabling VBScript in Internet Explorer 11 to give the world the opportunity to prepare for it to be disabled by default.

The change to disable VBScript will take effect in the upcoming cumulative updates for Windows 7, 8, and 8.1 on August 13th, 2019. VBScript will be disabled by default for Internet Explorer 11 and WebOCs for Internet and Untrusted zones on all platforms running Internet Explorer 11. This change is effective for Internet Explorer 11 on Windows 10 as of the July 9th, 2019 cumulative updates.

The settings to enable or disable VBScript execution in Internet Explorer 11 will remain configurable per site security zone, via the Registry or via Group Policy, should you still need to use this legacy scripting language.

To provide feedback on this change, or to report any issues resulting from this change, you can use the Feedback Hub app on any Windows 10 device. Your feedback goes directly to our engineers to help make Windows even better.

– Brent Mills, Senior Program Manager

The post An update on disabling VBScript in Internet Explorer 11 appeared first on Microsoft Edge Blog.

Top Stories from the Microsoft DevOps Community – 2019.08.02


If software is eating the world, YAML is eating CI/CD pipelines, and for a good reason! Who doesn’t want the ability to version their pipeline, keep it in source control and easily reuse it for similar applications?

In this week's community posts, we learn more about YAML pipelines' capabilities and additional security and compliance tool integrations. I am excited about what the community will do next week with all the new features we released in Sprint 155!

Incorporating Snyk into Continuous Integration with Azure Yaml Pipelines
Security is becoming more and more central to software development, as hackers leverage known package vulnerabilities to breach major companies and even governments. It is recommended to shift security left in your CI/CD pipelines, and take advantage of the latest package vulnerability scanning tools. This post from Jason Penniman is an introduction to integrating Snyk, a vulnerability monitoring tool for open source packages, into your Azure Pipeline.

Azure DevOps and Telerik NuGet Packages
Speaking of package security, it is recommended to only consume software packages from known sources. Are you using Azure Pipelines with a private NuGet feed? This great article from Lance McCarthy has a detailed walkthrough of setting up the Telerik NuGet server as a package source in Azure DevOps using two different approaches – either by creating a Service Connection and a custom NuGet.config file, or by setting up a custom NuGet feed in Azure Artifacts. Thank you, Lance, for putting this guide together!

Getting started with Azure DevOps job and step Templates – Part 1
Just like any other software, pipelines benefit from code reuse. In YAML, once you start copying blocks of code between pipelines, you know configuration drift will eventually creep in. Luckily there is a solution: YAML job and step templates! This post by Barbara Forbes shows how to create a reusable YAML template for the repetitive steps you perform in multiple pipelines. Thank you, Barbara, for this example use case!

Reap What You Sow II – IaC, Terraform, & Azure DevOps – Now With YAML
This post is chapter two of an infrastructure-as-code with Azure Pipelines series, in which Napoleon Jones walks us through his journey of trying YAML for the first time and converting the Terraform pipeline from the previous post into the new multi-stage YAML format. Sounds like YAML turned out to be friendlier than expected! Great job, Napoleon!

DevSecOps: Policy-as-code with Azure Pipelines
To expand on the topic of security and compliance, Azure Policy is an important tool that allows you to verify that your Azure resources comply with your company requirements, such as networking restrictions, geographic locations, VM SKUs and more. This article by Vishal Jain shows how to deploy Azure Policy using Azure Pipelines, and even add a Policy Compliance gate to your deployment.

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.08.02 appeared first on Azure DevOps Blog.

What’s new in Azure DevOps Sprint 155


Sprint 155 has just finished rolling out to all organizations, and you can check out all the cool features in the release notes. Here are just some of the features that you can start using today.

Manage pipeline variables in YAML editor

We updated the experience for managing pipeline variables in the YAML editor. You no longer have to go to the classic editor to add or update variables in your YAML pipelines.

Get insights into your team’s health with three new Azure Boards reports

You can't fix what you can't see, so you want to keep a close eye on the state and health of your work processes. With these reports, we are making it easier for you to track important metrics in Azure Boards with minimal effort.

The three new interactive reports are: Burndown, Cumulative Flow Diagram (CFD) and Velocity. You can see the reports in the new analytics tab.

Metrics like sprint burndown, flow of work, and team velocity give you visibility into your team's progress and help answer questions such as:

  • How much work do we have left in this sprint? Are we on track to complete it?
  • What step of the development process is taking the longest?
  • Can we do something about it? Based on previous iterations, how much work should we plan for the next sprint?

With these new reports, answering these questions and more is easy. The reports are fully interactive and allow you to adjust them to your needs. You can find them under the Analytics tab in each hub. Check out the video for a demo, as well as the release notes.

These features are just the tip of the iceberg; there are plenty more that we've released in Sprint 155. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 155 appeared first on Azure DevOps Blog.

Dotnet Depends is a great text mode development utility made with Gui.cs


I love me some text mode. ASCII, ANSI, VT100. Keep your 3D accelerated ray traced graphics and give me a lovely emoji-based progress bar.

Miguel has a nice thing called Gui.cs and I bumped into it in an unexpected and lovely place. There are hundreds of great .NET Global Tools that you can install to make your development lifecycle smoother, and I was installing Martin Björkström's lovely "dotnet depends" tool (go give him a GitHub star now!)  like this:

dotnet tool install -g dotnet-depends

Then I headed over to my Windows Terminal (get it free in the Store) and ran "dotnet depends" on my main website's code and was greeted by this (don't sweat the line spacing, that's a Terminal bug that'll be fixed soon):

dotnet depends in the Windows Terminal

How nice is this! It's a fully featured dependency explorer, but it's all in text mode and doesn't require me to use the mouse or take my hands off the keyboard. If I'm already deep in the terminal/text mode, this is a great example of a solid, useful tool.

But how hard was it to make? Surprisingly easy, as his code is very simple. That's a testament to how he used the API and how Miguel designed it. He's separated the UI and the business logic, of course: he does the analysis work and stores it in a graph variable.

Here they're setting up some panes for the (text mode) Windows:

Application.Init();

var top = new CustomWindow();

var left = new FrameView("Dependencies")
{
    Width = Dim.Percent(50),
    Height = Dim.Fill(1)
};
var right = new View()
{
    X = Pos.Right(left),
    Width = Dim.Fill(),
    Height = Dim.Fill(1)
};

It's split in half at this point, with the left side staying  at 50%.

var orderedDependencyList = graph.Nodes.OrderBy(x => x.Id).ToImmutableList();

var dependenciesView = new ListView(orderedDependencyList)
{
    CanFocus = true,
    AllowsMarking = false
};
left.Add(dependenciesView);

var runtimeDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
runtimeDepends.Add(runtimeDependsView);

var packageDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
packageDepends.Add(packageDependsView);

var reverseDependsView = new ListView(Array.Empty<Node>())
{
    CanFocus = true,
    AllowsMarking = false
};
reverseDepends.Add(reverseDependsView);

right.Add(runtimeDepends, packageDepends, reverseDepends);
top.Add(left, right, helpText);
Application.Top.Add(top);

The right side gets three ListViews added to it and the left side gets the dependencies view. Top it off with some clean data binding to the views and an initial call to UpdateLists. Anytime the dependenciesView gets a SelectedChanged event we'll call UpdateLists again.

top.Dependencies = orderedDependencyList;

top.VisibleDependencies = orderedDependencyList;
top.DependenciesView = dependenciesView;

dependenciesView.SelectedItem = 0;
UpdateLists();

dependenciesView.SelectedChanged += UpdateLists;

Application.Run();

What's in UpdateLists? Filtering code for that graph variable from before.

void UpdateLists()
{
    var selectedNode = top.VisibleDependencies[dependenciesView.SelectedItem];

    runtimeDependsView.SetSource(graph.Edges.Where(x => x.Start.Equals(selectedNode) && x.End is AssemblyReferenceNode)
        .Select(x => x.End).ToImmutableList());
    packageDependsView.SetSource(graph.Edges.Where(x => x.Start.Equals(selectedNode) && x.End is PackageReferenceNode)
        .Select(x => $"{x.End}{(string.IsNullOrEmpty(x.Label) ? string.Empty : " (Wanted: " + x.Label + ")")}").ToImmutableList());
    reverseDependsView.SetSource(graph.Edges.Where(x => x.End.Equals(selectedNode))
        .Select(x => $"{x.Start}{(string.IsNullOrEmpty(x.Label) ? string.Empty : " (Wanted: " + x.Label + ")")}").ToImmutableList());
}

That's basically it and it's fast as heck. Probably to be expected from the folks that brought you Midnight Commander.

Are you working on any utilities or cool projects and might want to consider - gasp - text mode over a website?


Sponsor: Looking for a tool for performance profiling, unit test coverage, and continuous testing that works cross-platform on Windows, macOS, and Linux? Check out the latest JetBrains Rider!



© 2019 Scott Hanselman. All rights reserved.
     

We’re making Azure Archive Storage better with new lower pricing


As part of our commitment to provide the most cost-effective storage offering, we’re excited to share that we have dropped Azure Archive Storage prices by up to 50 percent in some regions. The new pricing is effective immediately.

In 2017 we launched Azure Archive Storage to provide cloud storage for rarely accessed data with flexible latency requirements at an industry-leading price point. Since then we've seen both small and large customers from all industries use Archive Storage to significantly reduce their storage bill, improve data durability, and meet legal compliance requirements. Forrester Consulting interviewed four of these customers and conducted a commissioned Total Economic Impact™ (TEI) study to evaluate the value customers achieved by moving both on-premises data and existing cloud data to Archive Storage. Below are some of the highlights from that study.

  • 112 percent return on investment (ROI). Forrester's interviews with four existing customers and subsequent financial analysis found that a composite organization based on those customers projects expected benefits of $296,941 over three years versus costs of $140,376, for a net present value (NPV) of $156,565 and an ROI of 112 percent.
  • Reduced or eliminated more than $173,000 in operational and hardware expenses over a three-year period. Organizations were able to reduce spending in their on-premises storage environments by transitioning data to the cloud, which allowed them to eliminate tape and hard disk backups while also reducing overall operating expenditures.
  • Reduced monthly cloud storage costs by 95 percent. Organizations identified infrequently accessed data in active cloud storage tiers and transitioned it to the Archive tier, reducing their monthly per-gigabyte (GB) storage costs by 95 percent and augmenting their existing cloud storage savings. Over a three-year period, this saves an estimated $123,692.

How are customers using Archive Storage?

Red Toshiba logo

Toshiba America Business Solutions (TABS) sells digital signage and multifunction printers (MFPs), along with a complete set of maintenance and management services to help customers optimize their digital and paper communications. TABS created two Internet of Things (IoT) analytics solutions, e-BRIDGE™ CloudConnect and CloudConnect Data Services that are based on Microsoft Azure platform-as-a-service (PaaS) offerings, including Azure SQL Data Warehouse. Using e-BRIDGE, TABS remotely gathers device health data from thousands of installed devices and preemptively dispatches service technicians with the correct parts to perform repairs. With CloudConnect Data Services, TABS analyzes device health and repair history data to continuously improve product design and component choices. These solutions have helped the company improve device uptime and reduce service costs.

The daily configuration updates from printer devices were being stored in hot Blob Storage for four years even though they were rarely accessed. With Archive Blob Storage, Toshiba now moves these files to Archive Storage after 30 days, once the probability of them being accessed has dropped significantly. At that point, Toshiba also doesn't need the files immediately available and can wait hours to get them back. Archive Storage has allowed Toshiba to reduce their storage costs for this data by almost 90 percent.

Blue Oceaneering logo

Oceaneering uses remotely operated vehicles (ROVs) to capture video of operations and inspection work. The increase in overall video quality over the last few years has driven the need for the more efficient storage capabilities provided by the Azure platform. The satellite links onboard the vessels provide limited bandwidth for streaming video, so traditional transport options such as Data Box, which sometimes require manual handling, are used to move the media. The large amount of data per inspection, 2 TB a day in some instances, is maintained on Azure Storage. For the larger library of historical video, Azure Archive Storage provides the most cost-effective solution for customers who access the video via the Oceaneering Media Vault (OMV). Oceaneering has seen 60 percent savings by using Azure Archive capabilities.

Regional availability

Archive Storage is currently available in a total of 29 regions worldwide, and we’re continuing to expand that list. Over the past year we have added support for Archive Storage in Australia East, Australia Southeast, East Asia, Southeast Asia, UK West, UK South, Japan East, Japan West, Canada Central, Canada East, US Gov Virginia, US Gov Texas, US Gov Arizona, China East 2, and China North 2.

Additional information

Azure Archive Storage provides an extremely cost-effective alternative to on-premises storage for cold data as highlighted in the Forrester TEI study. Customers can significantly reduce operational and hardware expenses to realize an ROI of up to 112 percent over three years by moving their data to the Archive tier.

Archive exists alongside the Hot and Cool access tiers. All archive operations are consistent with the other tiers so you can seamlessly move your data among tiers programmatically or using lifecycle management policies. Archive is supported by a broad and diverse set of storage partners.
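For example, an individual blob can be moved to the Archive tier from the command line with the Azure CLI; a minimal sketch, where the account, container, and blob names are placeholders and authentication flags are omitted (a lifecycle management policy can automate the same transition, for instance 30 days after last modification):

# Move a single blob to the Archive access tier (all names below are placeholders).
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name device-configs \
  --name 2019/07/printer-config.json \
  --tier Archive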

For more information on Archive Storage features and capabilities, please visit our product page. For more information on Archive Storage pricing, please visit the Azure Block Blob Pricing page. If you have any further questions or feedback, please reach out to us at archivefeedback@microsoft.com.


Azure and Informatica team up to remove barriers for cloud analytics migration


Today, we are announcing the most comprehensive and compelling migration offer available in the industry to help customers simplify their cloud analytics journey.

This collaboration between Microsoft and Informatica provides customers an accelerated path for their digital transformation. As customers modernize their analytics systems, it enables them to truly begin integrating emerging technologies, such as AI and machine learning, into their business. Without migrating analytics workloads to the cloud, it becomes difficult for customers to maximize the potential their data holds.

For customers that have spent years tuning analytics appliances such as Teradata and Netezza, starting the journey to the cloud can seem overwhelming. Customers have invested valuable time, skills, and personnel to achieve optimal performance from their analytics systems, which contain their business's most sensitive and valuable data. We understand that the idea of migrating these systems to the cloud can seem risky and daunting. This is why we are partnering with Informatica to help customers begin their cloud analytics journey today with an industry-leading offer.

Free evaluation

With this offering, customers can now work with Azure and Informatica to easily understand their current data estate, determine what data is connected to their current data warehouse, and replicate tables without moving any data in order to conduct a robust proof of value.

This enables customers to get an end-to-end view of their data, execute a proof of value without disrupting their existing systems, and quickly see the possibilities of moving to Azure.

Free code conversion

A critical aspect of migrating on-premises appliances to the cloud is converting existing schemas to take advantage of cloud innovation. This conversion can quickly become expensive, even during a proof of value.

With this joint offering from Azure and Informatica, customers receive free code conversion for both the proof of value phase and when fully migrating to the cloud, as well as a SQL Data Warehouse subscription for the duration of the proof of value (up to 30 days).

Hands-on approach

Both Azure and Informatica are dedicating the personnel and resources to have analytics experts on-site helping customers as they begin migrating to Azure.

Customers that qualify for this offering will have full support from Azure SQL Data Warehouse experts. These experts will help with the initial assessment, execute the proof of value, and provide best-practice guidance during migration.

Everything you need to start your cloud analytics journey

Image of table displaying Azure and Informatica proof of value

Get started today

Analytics in Azure is up to 14 times faster and costs 94 percent less than other cloud providers, and is the leader in both the TPC-H and TPC-DS industry benchmarks. Now with this joint offer, customers can easily get started on their cloud analytics journey.

Button image to sign up for Azure and Informatica joint offer

Preview Features in Visual Studio


The Preview Features page under Tools > Options > Environment has a new look! We introduced the Preview Features page so that you can easily find these capabilities and control whether they are enabled. The new layout provides more information and an opportunity to give feedback on the features. While these features are in development, you can disable any of them if you run into issues. We also encourage you to provide feedback on the capabilities or report any issues you find. The features you may find on this page include ones in early development that affect existing functionality, ones that are still evolving, and experiments meant to inform future development.

As we continue to iterate on these features, the list of preview features will fluctuate for each Visual Studio release as some features get added into the product and others are cut. You can learn more about the preview features available in each release in the release notes.

The post Preview Features in Visual Studio appeared first on The Visual Studio Blog.

Improving .NET Core installation in Visual Studio and on Windows


Visual Studio 2019 version 16.3 and .NET Core 3.0 Preview 7 improve the installation experience of .NET Core on Windows. The goal is to reduce the number of .NET Core versions that might be on a machine. The improvements are based on customer feedback and our own experiences as well as laying the groundwork for future improvements.

.NET Core SDK installer for Windows

Let’s start with the .NET Core SDK installer.

Install now removes previous versions

Starting with .NET Core 3.0 Preview 7, the .NET Core SDK installer removes previous patch versions after a successful installation. This means that if you have 3.0 Preview 5 on a machine and then install Preview 7, only Preview 7 will remain once the process is complete.

You will see “Processing: Previous version” in the progress dialog during this step.

stand_alone_progress

If you would like to know more, see Overview of how .NET Core is versioned.

The change to remove the previous patch version was made based on customer feedback about the many installed versions that could accumulate on a machine.

Installing previous versions of .NET Core

.NET Core still supports side-by-side installations. All previously released versions of .NET Core are available for download at the .NET Core download page. You can find out which .NET Core SDKs and Runtimes are on your machine with dotnet --info.
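If you only want the version lists, the SDK also has dedicated commands for that; a quick sketch (available in .NET Core SDK 2.1 and later):

# List every installed SDK and runtime, one per line with its install location.
dotnet --list-sdks
dotnet --list-runtimes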

Visual Studio installation of .NET Core

Visual Studio 2019 16.3 Preview 1 includes the following improvements for .NET Core installation.

We have moved to a model where there will be a single SDK for each Visual Studio installation. Multiple versions of the Runtime can be installed, enabling you to target lower versions of the Runtime with the current SDK.

Visual Studio now removes previous .NET Core versions

Visual Studio installs its own copy of .NET Core. Likewise, if Visual Studio is updated or uninstalled, that copy of .NET Core will also be updated or uninstalled.

Additional versions of .NET Core and the .NET Core SDK can be downloaded and installed side-by-side as needed.

Any .NET Core versions installed with the .NET Core installer for Windows will not be affected by the Visual Studio installer. This situation arises if, for example, you install additional versions of .NET Core side-by-side as described above.

.NET Core in the Programs and Features Control Panel

Visual Studio is now responsible for its copy of .NET Core. There will not be an entry in the Programs and Features Control Panel for the .NET Core version that Visual Studio installs.

The Visual Studio and .NET Core installers will continue to use the same root directory, C:\Program Files\dotnet. It is important that you do not delete the dotnet directory, because Visual Studio depends on .NET Core at that location. If you break your Visual Studio installation by deleting the dotnet directory, run “Repair” in the Visual Studio Installer.

Visual Studio Workloads with .NET Core

The installer Workloads selection offers the same experience as previous versions of Visual Studio. When a workload that requires .NET Core is selected, the .NET Core 3.0 Development Tools (SDK) and 3.0 Runtime will be installed.

workload_selection

Adding .NET Core 2.1 or 2.2

.NET Core 2.1 and 2.2 are optional components in Visual Studio 2019 16.3 and need to be explicitly selected in the Individual Components tab.

If you have already installed the .NET Core 2.1 and/or 2.2 SDK, no additional actions are needed for applications to target these versions. Even though you may have .NET Core 2.1 or 2.2 installed, the Visual Studio Installer Individual Components tab will not have these components selected. If you would like to ensure you have the latest .NET Core 2.1 or 2.2, select them in the Individual Components tab.

ic_tab

In a later preview of Visual Studio 16.3, .NET Core 3.0 and 2.1 (which is the Long Term Support, or LTS, release) will be installed whenever a .NET Core workload is selected.

Future enhancements

We are considering additional enhancements to the .NET Core installer. The kinds of functionality we’re exploring for coming releases include providing:

  • Similar improvements and enhancements for the Visual Studio for Mac installer and the .NET Core installer for Mac.
  • A full-featured Windows installer to manage .NET Core.
  • The ability to discover and install updates, similar to update notifications in the Visual Studio Installer.
  • A removal tool to easily manage the many instances of .NET Core that may be on a machine.

Leave a comment if there is something else you would like us to consider for the installers.

The post Improving .NET Core installation in Visual Studio and on Windows appeared first on .NET Blog.

Announcing Azure Databricks unit pre-purchase plan and new regional availability


Azure Databricks is a fast, easy, and collaborative Apache Spark-based analytics platform that simplifies the process of building big data and artificial intelligence (AI) solutions. Azure Databricks gives data engineers and data scientists an interactive workspace where they can use the languages and frameworks of their choice. Natively integrated with services like Azure Machine Learning and Azure SQL Data Warehouse, Azure Databricks enables customers to build end-to-end modern data warehouse, real-time analytics, and machine learning solutions.

Save up to 37 percent on your Azure Databricks workloads

Azure Databricks Unit pre-purchase plan is now generally available—expanding our commitment to make Azure the most cost-effective cloud for running your analytics workloads.

Today, with the Azure Databricks Unit pre-purchase plan, you can start unlocking the benefits of Azure Databricks at significantly reduced cost when you pre-pay for Databricks compute for a one- or three-year term. With this new pricing option, you can achieve savings of up to 37 percent compared to pay-as-you-go pricing. You can learn more about the discount tiers on our pricing page. All Azure Databricks SKUs—Premium and Standard SKUs for Data Engineering Light, Data Engineering, and Data Analytics—are eligible for DBU pre-purchase.

Compared with other Azure services with reserved capacity pricing, which have a per hour capacity purchase, this plan allows you to pre-purchase DBUs that can be used at any time. You also have the flexibility to consume units across all workload types and tiers.

Azure Databricks is offered as a first party Azure service. You can pre-purchase Databricks compute either from your Azure prepayment or existing payment instruments.

Azure Databricks is now available in South Africa and South Korea

Azure Databricks is now generally available in additional regions—South Africa and South Korea. These locations bring the product's worldwide availability to 26 regions, backed by a 99.95 percent SLA.

Driven by innovation and accessibility, we aim to build a cloud infrastructure that serves the needs of customers globally. Stay up to date with the region availability for Azure Databricks.

Organizations also benefit from Azure Databricks' native integration with other services like Azure Blob storage, Azure Data Factory, Azure SQL Data Warehouse, Azure Machine Learning, and Azure Cosmos DB. This enables new analytics solutions that support modern data warehousing, advanced analytics, and real-time analytics scenarios.

Get started today

Getting started with DBU pre-purchase is easy, and is done via the Azure portal. For details on how to get started, see our documentation. For more information on discount tiers, please visit the pricing page.

A Deep Dive into Git Performance using Trace2


One of the cardinal rules when attempting to improve software performance is to measure rather than guess. It is easy to fall into the trap of attempting a performance enhancement before root-causing the real performance bottleneck.

Our team at Microsoft has been working to improve the performance of Git to scale up to extremely large repositories, such as the Office and Windows repositories–the latter being the largest one on the planet today.

We added the Trace2 feature to Git to help us find and measure performance problems, to help us track our progress, and to help us direct our engineering efforts. We contributed Trace2 to core Git so that others may use it too.

Trace2 is a new feature available in Git 2.22, so update your version of Git if necessary and give it a try.

What is Trace2?

Trace2 is a logging framework built into Git. It can write to multiple logging targets, each with a unique format suitable for a different type of analysis. In this article I’ll use the performance format target to generate PERF format tracing data and I’ll show how we use it in our iterative development loop to understand and improve Git. In a later article I’ll focus on the event format target to generate EVENT format tracing data and show how we use it to aggregate performance across multiple commands and users and gain higher-level insight. We’ve found that both types of analysis are critical to removing the guess work and help us understand the big picture as we scale Git to help our users be productive.

Turn on Trace2 and Follow Along

In this article I’m going to do a deep dive on a few example Git commands, show the Trace2 output, and explain what it all means. It’ll be easier to understand how Trace2 works if you turn it on and follow along using one of your own repos.

I’m going to use Microsoft’s fork of Git which has features specifically for VFS for Git, so my output may differ slightly from yours.

Select a repository that you are familiar with. The bigger the better–after all we are talking about performance and scaling. It should also have an “https” remote that you can push to.

I’m going to “go really big” and use the Windows repository with help from VFS for Git.

Trace2 can write trace data to the console or to a log file. Sending it to the console can be confusing because the data is mixed with the actual command output. It is better to send it to a log file so you can study it in detail after the commands complete. Trace2 always appends new trace data to the bottom of the log file, so we can see the command history. And if multiple Git commands run concurrently, their output will be interleaved, so we can see the interaction.

Let’s turn on the performance format target and send the data to a file. And for space reasons, also turn on “brief” mode, which hides source filenames and line numbers.

You can enable Trace2 for Git commands in the current terminal window using environment variables.

 
export GIT_TRACE2_PERF_BRIEF=true
export GIT_TRACE2_PERF=/an/absolute/pathname/to/logfile
 

You’re all set now. Git commands will now append performance data to this log file.
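If you prefer configuration over environment variables, the same settings can also be set with git config; a minimal sketch, assuming the trace2.perfBrief and trace2.perfTarget keys documented for Git 2.22 (check git-config(1) for your version):

# Equivalent to the environment variables above, but persistent.
git config --global trace2.perfBrief true
git config --global trace2.perfTarget /an/absolute/pathname/to/logfile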

Example 1 – Git Status

Let’s start with a simple example by running git status. We will step through the Trace2 logs line by line.

 
$ git status
On branch x
Your branch and 'origin/official/rsmaster' have diverged,
and have 3 and 242137 different commits each, respectively.
  (use "git pull" to merge the remote branch into yours)

It took 26.02 seconds to compute the branch ahead/behind values.
You can use '--no-ahead-behind' to avoid this.

nothing to commit, working tree clean
 

Your log file should contain something similar to the following. The format of the output is described in PERF format, but it should be fairly easy to follow along, so I won't try to repeat it here. I will call out the key points as we go along.

Note that Git will append Trace2 messages from the current command to the end of the log file, so your log file may have output from previous commands if you have been experimenting on your own.

For space and privacy reasons, I’ll omit or shorten some of the Trace2 messages, so your output may differ from this a little. And for clarity, I will add blank lines for grouping purposes.

Process Startup

 
d0 | version      |           |           |            | 2.22.0.vfs.1.1
d0 | start        |  0.007057 |           |            | 'C:vbingit.exe' status
 

The “start” message contains the full command line.

Column 3 contains the elapsed time since the start of the program.

 
d0 | child_start  |  0.031535 |           |            | [ch0] class:hook hook:pre-command argv: ...
d0 | child_exit   |  0.232245 |  0.200710 |            | [ch0] pid:20112 code:0
 

“child_*” messages mark the boundaries of a spawned child process. This child happens to be of type “hook”.

Column 4, when present, contains the relative time since the corresponding start or enter message.

The VFS for Git-aware version of Git contains a pre-command hook that is invoked at the start of each Git command. It started at t+0.031535 seconds. The child process finished at t+0.232245 seconds. Since hooks are run synchronously, we know that the git status process waited 0.200710 seconds for the hook to complete.

 
d0 | cmd_name     |           |           |            | status (status)
 

This message prints the canonical name of the command and the command’s ancestry in the last column.

The canonical name is status. Normally, this is just the value of the first non-option token on the command line.

For top-level Git commands there are no parent Git commands, so the ancestry is reported as just status.

The ancestry will be more important later when we talk about child Git commands spawned by other Git commands. In these instances the ancestry contains the hierarchy of Git commands spawning this command. We’ll see examples of this later.

Reading the Index

 
d0 | region_enter |  0.232774 |           | index      | label:do_read_index .git/index
d0 | data         |  0.518577 |  0.285803 | index      | ..read/version:4
d0 | data         |  0.518636 |  0.285862 | index      | ..read/cache_nr:3183722
d0 | region_leave |  0.518655 |  0.285881 | index      | label:do_read_index .git/index
 

“region_*” messages mark the boundaries of an “interesting” computation, such as a function call, a loop, or a just span of code.

Column 5, when present, contains a “category” field. This is an informal way of grouping a region or data message to a section of the code. These messages are all associated with the index.

The last column contains a label name for the region, in this case do_read_index. We know from looking at the source code that this is the name of the function that reads the index into memory. This column can also include region-specific information. In this case it is the relative pathname of the index file.

“data” messages report the values of “interesting” variables in the last column. The index was in V4 format. The index contained 3,183,722 cache-entries (files under version control).

Also, the last column of “data” and “region” messages contained within an outer region is prefixed with “..” to indicate the nesting level.

Git started reading the index at t+0.232774 and finished at t+0.518655. We spent 0.285881 seconds reading the index.

As you can see the Windows repo is extremely large with ~3.2 million files under version control. It takes Git about a third of a second just to read and parse the index. And this must happen before any actual work can start.

Applying VFS Hints

 
d0 | child_start  |  0.518814 |           |            | [ch1] argv: .git/hooks/virtual-filesystem 1
d0 | child_exit   |  0.534578 |  0.015764 |            | [ch1] pid:28012 code:0

d0 | data         |  0.964700 |  0.964700 | vfs        | apply/tracked:44
d0 | data         |  0.964748 |  0.964748 | vfs        | apply/vfs_rows:47
d0 | data         |  0.964765 |  0.964765 | vfs        | apply/vfs_dirs:2
 

A child process was created to run the VFS for Git virtual-filesystem hook. This hook is critical to making git status fast in a virtualized repository, as VFS for Git is watching file contents change and Git can trust the hook’s response instead of walking the filesystem directly. Our Trace2 logs include some statistics about the hook’s response. For example, VFS for Git knows that only 44 files have been opened for read+write, such as in an editor.

However, we see a gap of about ~0.43 seconds after the hook finished at t+0.534578 and before the first data message at t+0.964700. In the future we might want to use Trace2 or a profiler to experiment and track down what is happening during that gap.

Status Computations

 
d0 | region_enter |  1.171908 |           | status     | label:worktrees
d0 | region_leave |  1.242157 |  0.070249 | status     | label:worktrees
 

Phase 1 (“label:worktrees”) of the status computation took 0.070249 seconds.

 
d0 | region_enter |  1.242215 |           | status     | label:index
d0 | data         |  1.243732 |  0.001517 | midx       | ..load/num_packs:2592
d0 | data         |  1.243757 |  0.001542 | midx       | ..load/num_objects:47953162
d0 | region_enter |  1.297172 |           | exp        | ..label:traverse_trees
d0 | region_leave |  1.297310 |  0.000138 | exp        | ..label:traverse_trees
d0 | region_leave |  1.345756 |  0.103541 | status     | label:index
 

Phase 2 (“label:index”) of the status computation took 0.103541 seconds. Within this region, a nested region (“label:traverse_trees”) took 0.000138 seconds.

We know there are 2,592 packfiles containing ~48 million objects.

 
d0 | region_enter |  1.345811 |           | status     | label:untracked
d0 | region_leave |  1.347070 |  0.001259 | status     | label:untracked
 

Phase 3 (“label:untracked”) of the status computation took 0.001259 seconds. This phase was very fast because it only had to inspect the 44 paths identified by the virtual-filesystem hook.

 
d0 | data         |  1.398723 |  1.398723 | status     | count/changed:0
d0 | data         |  1.398766 |  1.398766 | status     | count/untracked:0
d0 | data         |  1.398782 |  1.398782 | status     | count/ignored:0
 

Status was clean. We have the result ready to print at t+1.398782 seconds.

Printing Status Results

 
d0 | region_enter |  1.398833 |           | status     | label:print
d0 | region_leave | 27.418896 | 26.020063 | status     | label:print
 

However, it took 26.020063 seconds to print the status results. More on this in a minute.

Process Shutdown

 
d0 | child_start  | 27.419178 |           |            | [ch2] class:hook hook:post-command argv: ...
d0 | child_exit   | 27.619002 |  0.199824 |            | [ch2] pid:20576 code:0
 

VFS for Git also requires a post-command hook.

 
d0 | atexit       | 27.619494 |           |            | code:0
 

Status completed with exit code 0 at t+27.619494 seconds.

The Status Ahead/Behind Problem

As you can see, we have a pretty good idea of where our time was spent in the command. We spent ~26 seconds in the “label:print” region. That seems like a very long time to print “nothing to commit, working tree clean”.

We could use a profiler tool or we could add some nested regions to track it down. I’ll leave that as an exercise for you. I did that exercise a few years ago and found that the time was spent computing the exact ahead/behind numbers. Status reported that my branch is behind upstream by 242,137 commits and the only way to know that is to walk the commit graph and search for the relationship between HEAD and the upstream branch.

At this point you’re probably saying something about this being a contrived example and that I picked an ancient commit as the basis for my x topic branch. Why else would I have a branch nearly 250K commits behind? The Windows repo is huge and master moves very fast. The basis for my topic branch is less than 3 months old! This is easily within the realm of a topic branch under development and review. This is another scale vector we have to contend with.

This led me to add the --no-ahead-behind option to git status in Git 2.17. Let’s give that a try and see what happens.

 
$ git status --no-ahead-behind
On branch x
Your branch and 'origin/official/rsmaster' refer to different commits.
  (use "git status --ahead-behind" for details)

nothing to commit, working tree clean
 

When this option is enabled, status only reports that the 2 branches refer to different commits. It does not report ahead/behind counts or whether the branches have diverged or are fast-forwardable.

I’ll skip over the similar parts of the trace output and just show the bottom portion.

 
d0 | data         |  1.474304 |  1.474304 | status     | count/changed:0
d0 | data         |  1.474348 |  1.474348 | status     | count/untracked:0
d0 | data         |  1.474376 |  1.474376 | status     | count/ignored:0

d0 | region_enter |  1.474390 |           | status     | label:print
d0 | region_leave |  1.475869 |  0.001479 | status     | label:print

d0 | atexit       |  1.663404 |           |            | code:0
 

Git printed the status results in 0.001479 seconds in this run by avoiding the ahead/behind computation. Total run time for status was 1.663404 seconds.
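If you always want this behavior, there is also a configuration setting for it; a minimal sketch, assuming the status.aheadBehind config key that was added alongside the command-line option (verify against git-config(1) for your Git version):

# Skip the exact ahead/behind computation in git status by default.
git config --global status.aheadBehind false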

So What’s the Big Deal?

At this point you may be saying “So, what’s the big deal? I could get that level of detail from a profiler.”

Profilers are great, but have their limitations.

  1. Profilers typically only work on a single process. Git processes frequently invoke child Git processes and shell scripts. This makes it extremely difficult to capture a complete view of an operation since we need to somehow merge the profile data from all of the processes spawned by a command.
  2. A top-level Git command may spawn numerous child Git commands and shell scripts. These shell scripts may also spawn numerous child Git commands. These child processes usually require a complex setup with environment variables and stdin/stdout plumbing. This makes it difficult to capture profile data on an isolated child Git process.
  3. Profiler dumps are typically very large, since they contain stack trace and symbol data. They are platform-specific and very version/build sensitive, so they don't archive well. They are great for exploring performance with an interactive tool, like Visual Studio, but less so when comparing multiple runs or, worse, runs from other users. It is often better to have simple logs of the commands and compare them like we did above.
  4. Dumps are usually based on process sampling and typically have function-level granularity. Trace2 messages can be generated at precise points in the code with whatever granularity is desired. This allows us to focus our attention on “interesting” parts of the code.
  5. It is difficult to extract per-thread details. Trace2 messages identify the calling thread and have thread-specific regions and timestamps. This helps us understand and fine-tune our multi-threading efforts.
  6. Profilers give us the time spent in a section of code, but they don’t let us gather run-time data as part of the output, such as the number of packfiles, the size of the index, or the number of changed files. Trace2 data messages let us include this information with the timing information. This helps us see when there are data-dependent performance problems, such as in the ahead/behind example.
  7. Profilers don’t allow us to do higher-level data aggregations, such as averaging times by region across multiple runs or across multiple users. Trace2 data can be post-processed for aggregation and further study. More on this in the next article.
  8. Trace2 is cross-platform so we can do the same analysis on all platforms and we can compare numbers between platforms.

As you can see, Trace2 gives us the ability to collect exactly the performance data that we need, archive it efficiently, and analyze it in a variety of ways. Profilers can help us in our work, but they have their limitations.
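As a quick taste of that kind of post-processing, here is a rough, ad-hoc sketch (not part of the Trace2 tooling itself) that lists the slowest regions in a PERF log, assuming the column layout shown above: fields separated by '|', the region duration in column 4, and the label in the last column.

# Print region durations from the PERF log, slowest first.
grep ' region_leave ' /an/absolute/pathname/to/logfile |
  awk -F'|' '{ gsub(/ /, "", $4); print $4, $NF }' |
  sort -rn | head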

Example 2 – Git Push

Let’s look at a more complex example by running git push.

In a previous article and in a Git Merge 2019 presentation we described how we improved the performance of push with a new sparse push algorithm. In both we hinted that we used Trace2 to find and measure the problem and to confirm that we had actually improved it.

Let’s dive in and see how Trace2 made this possible and let you verify our results.

Create a Commit

First, let’s make a change that we can push. Create and checkout a test branch. Modify a single file (preferably someplace deep in the worktree). And commit it.

This would be a good time to look at the log file and see the trace output written by those commands, but I’ll leave that for you.

You may want to delete the log file before starting the push, so that later it will only contain the push logging. This may make it easier to follow along.

Push with the Original Algorithm

Next, let’s push your test branch with the original algorithm.

 
git -c pack.useSparse=false push origin test
 

Push is a complex command and creates a cascade of at least 6 processes. Trace2 interleaves the output from each of the Git commands as it happens.

The first column in each line is the Git process depth.

Push Process Startup

 
d0 | version      |           |           |            | 2.22.0.vfs.1.1
d0 | start        |  0.012840 |           |            | 'C:vbingit.exe' -c pack.useSparse=false push

d0 | child_start  |  0.041627 |           |            | [ch0] class:hook hook:pre-command argv: ...
d0 | child_exit   |  0.326167 |  0.284540 |            | [ch0] pid:20752 code:0

d0 | cmd_name     |           |           |            | push (push)
 

The top-level “d0:git push” starts up. It reports its canonical name is push and since it is a top-level command, its ancestry is just push.

Push Spawns Remote-Https which Spawns Git-Remote-Https

 
d0 | child_start  |  0.326698 |           |            | [ch1] class:remote-https argv: git remote-https ...
 

“d0:git push” spawns “d1:git remote-https”.

 
d1 | version      |           |           |            | 2.22.0.vfs.1.1
d1 | start        |  0.007434 |           |            | git remote-https origin https://my.server/os
d1 | cmd_name     |           |           |            | _run_dashed_ (push/_run_dashed_)
 

“d1:git remote-https” starts. It reports that its canonical name is _run_dashed_ and its ancestry is push/_run_dashed_.

The term _run_dashed_ is used to indicate that the command is going to hand off to a dashed form of the command. In this case, git remote-https is invoking git-remote-https rather than actually doing the work itself.

 
d1 | child_start  |  0.031055 |           |            | [ch0] argv: git-remote-https ...
 

“d1:git remote-https” spawns “d2:git-remote-https”.

 
d2 | version      |           |           |            | 2.22.0.vfs.1.1
d2 | start        |  0.013885 |           |            | git-remote-https origin https://my.server/os
d2 | cmd_name     |           |           |            | remote-curl (push/_run_dashed_/remote-curl)
 

“d2:git-remote-https” reports its canonical name is remote-curl. The ancestry is a hierarchy of the open Git commands (including this one).

Push is Working while Waiting

 
d0 | data         |  0.859083 |  0.859083 | midx       | load/num_packs:2894
d0 | data         |  0.859112 |  0.859112 | midx       | load/num_objects:47641314
 

The first column indicates that these data messages are from “d0”. This means that the top-level “d0:git push” is doing something while it waits for “d1:git remote-https” to complete. This can happen if Git spawns a background child process and doesn’t immediately wait for it to complete. We’re going to ignore it.

Getting My Credentials

 
d2 | child_start  |  0.639533 |           |            | [ch0] argv: 'git credential-manager get'
 

“d2:git-remote-https” spawns “d3:git credential-manager” to get my cached credentials.

 
d3 | version      |           |           |            | 2.22.0.vfs.1.1
d3 | start        |  0.007614 |           |            | 'C:...git.exe' credential-manager get
d3 | cmd_name     |           |           |            | _run_dashed_
                                                         (push/_run_dashed_/remote-curl/_run_dashed_)
 

“d3:git credential-manager” also reports _run_dashed_ because it is also going to defer to its dashed peer.

 
d3 | child_start  |  0.039516 |           |            | [ch0] argv: git-credential-manager get
 

“d3:git credential-manager” spawns “git-credential-manager”, but this is a third-party application so we do not get any Trace2 data from it.

 
d3 | child_exit   |  0.495332 |  0.455816 |            | [ch0] pid:24748 code:0
d3 | atexit       |  0.495867 |           |            | code:0
 

The “d3:git credential-manager” process reported its “atexit” time as 0.495867 seconds. That is the “internal” duration of the command (within main()).

 
d2 | child_exit   |  1.436891 |  0.797358 |            | [ch0] pid:10412 code:0
 

The “d2:git-remote-https” process reported the “child_exit” time as 0.797358 seconds. That is the “external” duration of the child and includes the process creation and cleanup overhead, so it is a little longer than the child’s “atexit” time.

“d2:git-remote-https” now has the credentials.

Storing My Credentials

 
d2 | child_start  |  1.737848 |           |            | [ch1] argv: 'git credential-manager store'
 

 
d3 | version      |           |           |            | 2.22.0.vfs.1.1
d3 | start        |  0.007661 |           |            | 'C:...git.exe' credential-manager store
d3 | cmd_name     |           |           |            | _run_dashed_
                                                         (push/_run_dashed_/remote-curl/_run_dashed_)

d3 | child_start  |  0.038594 |           |            | [ch0] argv: git-credential-manager store
d3 | child_exit   |  0.270066 |  0.231472 |            | [ch0] pid:21440 code:0

d3 | atexit       |  0.270569 |           |            | code:0
 

 
d2 | child_exit   |  2.308430 |  0.570582 |            | [ch1] pid:25732 code:0
 

“d2:git-remote-https” repeats the credential-manager sequence again to store/update the credentials.

Running Send-Pack and Pack-Objects

 
d2 | child_start  |  2.315457 |           |            | [ch2] argv: git send-pack ...
 

“d2:git-remote-https” spawns “d3:git send-pack”.

 
d3 | version      |           |           |            | 2.22.0.vfs.1.1
d3 | start        |  0.007556 |           |            | git send-pack --stateless-rpc ...
d3 | cmd_name     |           |           |            | send-pack
                                                         (push/_run_dashed_/remote-curl/send-pack)
d3 | child_start  |  0.050237 |           |            | [ch0] argv: git pack-objects ...
 

“d3:git send-pack” spawns “d4:git pack-objects”.

 
d4 | version      |           |           |              | 2.22.0.vfs.1.1
d4 | start        |  0.007636 |           |              | git pack-objects --all-progress-implied ...
d4 | cmd_name     |           |           |              | pack-objects
                                                           (push/_run_dashed_/remote-curl/send-pack/pack-objects)
 

“d4:git pack-objects” reports its canonical name is pack-objects and its ancestry is push/_run_dashed_/remote-curl/send-pack/pack-objects.

 
d4 | region_enter |  0.039389 |           | pack-objects | label:enumerate-objects 
d4 | region_leave | 14.420960 | 14.381571 | pack-objects | label:enumerate-objects
 

“d4:git pack-objects” spent 14.381571 seconds enumerating objects. More on this in a minute.

 
d4 | region_enter | 14.421012 |           | pack-objects | label:prepare-pack
d4 | region_leave | 14.431710 |  0.010698 | pack-objects | label:prepare-pack

d4 | region_enter | 14.431754 |           | pack-objects | label:write-pack-file
d4 | data         | 14.433644 |  0.001890 | pack-objects | ..write_pack_file/wrote:9
d4 | region_leave | 14.433679 |  0.001925 | pack-objects | label:write-pack-file

d4 | atexit       | 14.434176 |           |              | code:0
 

“d4:git pack-objects” wrote 9 objects in a packfile to stdout.

 
d3 | child_exit   | 14.924402 | 14.874165 |              | [ch0] pid:24256 code:0

d3 | atexit       | 15.610328 |           |              | code:0
 

Hidden in here somewhere, “d3:git send-pack” sent the packfile to the server. I’m not going to try to isolate the actual network time.

Unwinding Everything

 
d2 | child_exit   | 18.167133 | 15.851676 |            | [ch2] pid:19960 code:0
d2 | atexit       | 18.176882 |           |            | code:0

d1 | child_exit   | 18.419940 | 18.388885 |            | [ch0] pid:13484 code:0
d1 | atexit       | 18.420427 |           |            | code:0

d0 | child_exit   | 18.988088 | 18.661390 |            | [ch1] pid:16356 code:0
 

The child processes all exit and control returns to the top-level “d0:git push”.

 
d0 | child_start  | 18.988210 |           |            | [ch2] class:hook hook:post-command argv: ...
d0 | child_exit   | 19.186139 |  0.197929 |            | [ch2] pid:2252 code:0
 

“d0:git push” runs the VFS for Git post-command hook.

 
d0 | atexit       | 19.186581 |           |            | code:0
 

And we’re done. The total push time was 19.186581 seconds. Clearly, enumerate-objects is the problem, since it consumes 14.4 of the 19.2 seconds.

Push With the New Algorithm

Now let’s try again with the new algorithm. Make another change to that same file, commit and push.

 
git -c pack.useSparse=true push origin test
 

For space reasons I’m only going to show the important differences for this push.

 
d4 | version      |           |           |              | 2.22.0.vfs.1.1
d4 | start        |  0.007520 |           |              | git pack-objects --all-progress-implied ...
d4 | cmd_name     |           |           |              | pack-objects
                                                           (push/_run_dashed_/remote-curl/send-pack/pack-objects)

d4 | region_enter |  0.039500 |           | pack-objects | label:enumerate-objects 
d4 | region_leave |  0.590796 |  0.551296 | pack-objects | label:enumerate-objects 
 

With the new algorithm enumerate-objects took 0.551296 seconds.

 
d4 | region_enter |  0.590900 |           | pack-objects | label:prepare-pack 
d4 | region_leave |  0.601070 |  0.010170 | pack-objects | label:prepare-pack 

d4 | region_enter |  0.601118 |           | pack-objects | label:write-pack-file 
d4 | data         |  0.602861 |  0.001743 | pack-objects | ..write_pack_file/wrote:9
d4 | region_leave |  0.602896 |  0.001778 | pack-objects | label:write-pack-file 

d4 | atexit       |  0.603413 |           |              | code:0
 

Like before “d4:git pack-objects” wrote 9 objects in a packfile to stdout.

 
d0 | atexit       |  4.933607 |           |              | code:0
 

And the entire push only took 4.933607 seconds. That’s much better!

Using Trace2 for Iterative Development

Trace2 defines the basic performance tracing framework. It allows us to see where time is being spent and gives us a feel for the overall time flow in a command. It does this by tracking process times, child process relationships, thread usage, and regions of interest.

Trace2 lets us explore and experiment during our iterative development loop. For example, we can trivially add new regions and data messages to help further our understanding of Git’s internal workings. And we can use Trace2 in conjunction with traditional profilers to help focus our investigations.

The git push example shows how we were able to track down and measure the performance problem using just the process and child process messages. We initially guessed it would be a network or a packfile compression problem. We weren’t even looking at enumerate-objects. But after running some experiments and measuring the activity in each process, we found a complex set of parent/child relationships and that the problem is actually in “d4:git pack-objects” — four nested processes removed from the “d0:git push” command we launched.

With that information in hand, we were then able to dig deeper and use custom Trace2 regions and the profiler on pack-objects to help us understand what was happening and why. We then deployed an experimental build including the custom Trace2 regions to some of our Windows developers to confirm they were experiencing the same bottleneck that we found.

Consequently, we were able to design a more efficient algorithm and then use Trace2 to confirm that we actually fixed the problem. We then deployed another experimental build to some of our Windows developers to confirm that it fixed their problem too.

Another example where we used Trace2 to measure performance is the commit-graph feature. In a previous post we described generation numbers as a tool to speed up commit walks. After the algorithms were implemented in Git, some repositories had data shapes that actually led to worse performance! After digging into these examples, multiple alternatives were presented to replace generation numbers. While we could use timing data to compare runs, that data is very noisy. Instead, we used Trace2 to report the exact number of commits walked by each algorithm for each option. These numbers were predictable and related directly to the algorithm’s runtime.

These are but a few examples of how we have used Trace2 and the “performance format target” in our iterative development process to address performance problems as we scale Git.

Final Remarks

The Trace2 performance target is a great tool for the types of analysis that I’ve described in this article. But it does have a limitation. We have to already know what needs to be studied (e.g. “Why is push slow?”).

To best help our enterprise Windows and Office developers, we need to understand their pain points: the commands that they find slow; the conditions under which those commands are especially slow; and what overall effect this is having on their total productivity. Then with those measurements in hand, we can prioritize our engineering efforts to improve Git for maximum benefit. For that we need to collect and aggregate some telemetry data on Git usage.

In my next article I’ll talk about using the Trace2 “event format target” to generate a custom telemetry stream for Git. I’ll talk about the usual telemetry metrics, like average and weighted-average command durations. And I’ll talk about custom metrics, like averages for the regions and data fields we identified earlier. This will give us the data to prioritize our engineering efforts and to verify at-scale the optimizations we make.

The post A Deep Dive into Git Performance using Trace2 appeared first on Azure DevOps Blog.

When to use Azure Service Health versus the status page


If you’re experiencing problems with your applications, a great place to start investigating solutions is through your Azure Service Health dashboard. In this blog post, we’ll explore the differences between the Azure status page and Azure Service Health. We’ll also show you how to get started with Service Health alerts so you can stay better informed about service issues and take action to improve your workloads’ availability.

How and when to use the Azure status page

The Azure status page works best for tracking major outages, especially if you’re unable to log into the Azure portal or access Azure Service Health. Many Azure users visit the status page regularly. It predates Azure Service Health and has a friendly format that shows the status of all Azure services and regions at a glance.


The Azure status page, however, doesn’t show all information about the health of your Azure services and regions. The status page isn’t personalized, so you need to know exactly which services and regions you’re using and locate them in the grid. The status page also doesn’t include information about non-outage events that could affect your availability, such as planned maintenance events and health advisories (think service retirements and misconfigurations). Finally, the status page doesn’t have a means of notifying you automatically in the event of an outage or a planned maintenance window that might affect you.

For all of these use cases, we created Azure Service Health.

How and when to use Azure Service Health

At the top of the Azure status page, you’ll find a button directing you to your personalized dashboard. One common misunderstanding is that this button allows you to personalize the status page grid of services and regions. Instead, the button takes you into the Azure portal to Azure Service Health, the best option for viewing Azure events that may impact the availability of your resources.


In Service Health, you’ll find information about everything from minor outages that affect you to planned maintenance events and other health advisories. The dashboard is personalized, so it knows which services and regions you’re using and can even help you troubleshoot by offering a list of potentially impacted resources for any given event.


Service Health’s most useful feature is Service Health alerts. With Service Health alerts, you’ll proactively receive notifications via your preferred channel—email, SMS, push notification, or even webhook into your internal ticketing system like ServiceNow or PagerDuty—if there’s an issue with your services and regions. You don’t have to keep checking Service Health or the status page for updates and can instead focus on other important work.


Set up your Service Health alerts today

Feel free to keep using the status page for quick updates on major outages. However, we highly encourage you to make it a habit to visit Service Health to stay informed of all potential impacts to your availability and take advantage of rich features like automated alerting.

Set up your Azure Service Health alerts today in the Azure portal. For more in-depth guidance, visit the Azure Service Health documentation. Let us know if you have a suggestion by submitting an idea here.

Windows 10 SDK Preview Build 18950 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18950 or greater). The Preview SDK Build 18950 contains bug fixes and under development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • To assist with script access to the SDK, the ISO can also be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_18950_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

  • We are making it easier for you to sign your app. Device Guard signing is a Device Guard feature that is available in Microsoft Store for Business and Education. Signing allows enterprises to guarantee every app comes from a trusted source. Our goal is to make signing your MSIX package easier. Documentation on Device Guard Signing can be found here: https://docs.microsoft.com/windows/msix/package/signing-package-device-guard-signing

Breaking Changes

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:


namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.PointOfService {
  public sealed class PaymentDevice : IClosable
  public sealed class PaymentDeviceCapabilities
  public sealed class PaymentDeviceConfiguration
  public sealed class PaymentDeviceGetConfigurationResult
  public sealed class PaymentDeviceOperationResult
  public sealed class PaymentDeviceTransactionRequest
  public sealed class PaymentDeviceTransactionResult
  public sealed class PaymentMethod
  public enum PaymentMethodKind
  public enum PaymentOperationStatus
  public enum PaymentUserResponse
}
namespace Windows.Devices.PointOfService.Provider {
  public sealed class PaymentDeviceCloseTerminalRequest
  public sealed class PaymentDeviceCloseTerminalRequestEventArgs
  public sealed class PaymentDeviceConnection : IClosable
  public sealed class PaymentDeviceConnectionTriggerDetails
  public sealed class PaymentDeviceConnectorInfo
  public sealed class PaymentDeviceGetTerminalsRequest
  public sealed class PaymentDeviceGetTerminalsRequestEventArgs
  public sealed class PaymentDeviceOpenTerminalRequest
  public sealed class PaymentDeviceOpenTerminalRequestEventArgs
  public sealed class PaymentDevicePaymentAuthorizationRequest
  public sealed class PaymentDevicePaymentAuthorizationRequestEventArgs
  public sealed class PaymentDevicePaymentRequest
  public sealed class PaymentDevicePaymentRequestEventArgs
  public sealed class PaymentDeviceReadCapabilitiesRequest
  public sealed class PaymentDeviceReadCapabilitiesRequestEventArgs
  public sealed class PaymentDeviceReadConfigurationRequest
  public sealed class PaymentDeviceReadConfigurationRequestEventArgs
  public sealed class PaymentDeviceRefundRequest
  public sealed class PaymentDeviceRefundRequestEventArgs
  public sealed class PaymentDeviceVoidTokenRequest
  public sealed class PaymentDeviceVoidTokenRequestEventArgs
  public sealed class PaymentDeviceVoidTransactionRequest
  public sealed class PaymentDeviceVoidTransactionRequestEventArgs
  public sealed class PaymentDeviceWriteConfigurationRequest
  public sealed class PaymentDeviceWriteConfigurationRequestEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Altimeter {
    AltimeterDataThreshold ReportThreshold { get; }
  }
  public sealed class AltimeterDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IIterable<Package> FindProvisionedPackages();
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, DeploymentOptions deploymentOptions);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
}
namespace Windows.Management.Policies {
  public static class NamedPolicy {
    public static IAsyncAction ClearAllPoliciesAsync();
    public static IAsyncAction ClearAllPoliciesAsync(string accountId);
    public static NamedPolicySetter TryCreatePolicySetter(string accountId);
    public static NamedPolicySetter TryCreatePolicySetterForUser(User user, string accountId);
  }
  public sealed class NamedPolicySetter
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.NetworkOperators {
  public interface INetworkOperatorTetheringAccessPointConfiguration2
  public interface INetworkOperatorTetheringManagerStatics4
  public sealed class NetworkOperatorTetheringAccessPointConfiguration : INetworkOperatorTetheringAccessPointConfiguration2 {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableTimeout(TetheringTimeoutKind timeoutKind);
    public static IAsyncAction DisableTimeoutAsync(TetheringTimeoutKind timeoutKind);
    public static void EnableTimeout(TetheringTimeoutKind timeoutKind);
    public static IAsyncAction EnableTimeoutAsync(TetheringTimeoutKind timeoutKind);
    public static bool IsTimeoutEnabled(TetheringTimeoutKind timeoutKind);
    public static IAsyncOperation<bool> IsTimeoutEnabledAsync(TetheringTimeoutKind timeoutKind);
  }
  public enum TetheringTimeoutKind
  public enum TetheringWiFiBand
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Storage {
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class FolderLauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class LauncherOptions : ILauncherViewOptions {
    ViewGrouping GroupingPreference { get; set; }
  }
  public sealed class User {
    public static User GetDefault();
  }
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserType {
    SystemManaged = 4,
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Composition.Particles {
  public sealed class ParticleAttractor : CompositionObject
  public sealed class ParticleAttractorCollection : CompositionObject, IIterable<ParticleAttractor>, IVector<ParticleAttractor>
  public class ParticleBaseBehavior : CompositionObject
  public sealed class ParticleBehaviors : CompositionObject
  public sealed class ParticleColorBehavior : ParticleBaseBehavior
  public struct ParticleColorBinding
  public sealed class ParticleColorBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleColorBinding>>, IMap<float, ParticleColorBinding>
  public enum ParticleEmitFrom
  public sealed class ParticleEmitterVisual : ContainerVisual
  public sealed class ParticleGenerator : CompositionObject
  public enum ParticleInputSource
  public enum ParticleReferenceFrame
  public sealed class ParticleScalarBehavior : ParticleBaseBehavior
  public struct ParticleScalarBinding
  public sealed class ParticleScalarBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleScalarBinding>>, IMap<float, ParticleScalarBinding>
  public enum ParticleSortMode
  public sealed class ParticleVector2Behavior : ParticleBaseBehavior
  public struct ParticleVector2Binding
  public sealed class ParticleVector2BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector2Binding>>, IMap<float, ParticleVector2Binding>
  public sealed class ParticleVector3Behavior : ParticleBaseBehavior
  public struct ParticleVector3Binding
  public sealed class ParticleVector3BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector3Binding>>, IMap<float, ParticleVector3Binding>
  public sealed class ParticleVector4Behavior : ParticleBaseBehavior
  public struct ParticleVector4Binding
  public sealed class ParticleVector4BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector4Binding>>, IMap<float, ParticleVector4Binding>
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text.Core {
  public sealed class CoreTextServicesManager {
    public static TextCompositionKind TextCompositionKind { get; }
  }
  public enum TextCompositionKind
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    bool CanOpenInNewTab { get; }
    bool CriticalInputMismatch { get; set; }
    bool IsTabGroupingSupported { get; }
    ScreenCaptureDisabledBehavior ScreenCaptureDisabledBehavior { get; set; }
    bool TemporaryInputMismatch { get; set; }
    void ApplyApplicationUserModelID(string value);
  }
  public enum ApplicationViewMode {
    Spanning = 2,
  }
  public sealed class ApplicationViewTitleBar {
    IAsyncAction SetActiveIconStreamAsync(IRandomAccessStreamReference activeIcon);
  }
  public interface ISystemTray
  public interface ISystemTrayStatics
  public enum ScreenCaptureDisabledBehavior
  public sealed class SystemTray : ISystemTray
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
  public enum ViewGrouping
  public sealed class ViewModePreferences {
    ViewGrouping GroupingPreference { get; set; }
  }
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow {
    void SetPreferredTopMost();
    void SetRelativeZOrderBeneath(AppWindow appWindow);
  }
  public sealed class AppWindowChangedEventArgs {
    bool DidOffsetChange { get; }
  }
  public enum AppWindowPresentationKind {
    Snapped = 5,
    Spanning = 4,
  }
  public sealed class SnappedPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class SpanningPresentationConfiguration : AppWindowPresentationConfiguration
}
namespace Windows.UI.Xaml {
  public interface IXamlServiceProvider
}
namespace Windows.UI.Xaml.Controls {
  public class HandwritingView : Control {
    UIElement HostUIElement { get; set; }
    public static DependencyProperty HostUIElementProperty { get; }
    CoreInputDeviceTypes InputDeviceTypes { get; set; }
    bool IsSwitchToKeyboardButtonVisible { get; set; }
    public static DependencyProperty IsSwitchToKeyboardButtonVisibleProperty { get; }
    double MinimumColorDifference { get; set; }
    public static DependencyProperty MinimumColorDifferenceProperty { get; }
    bool PreventAutomaticDismissal { get; set; }
    public static DependencyProperty PreventAutomaticDismissalProperty { get; }
    bool ShouldInjectEnterKey { get; set; }
    public static DependencyProperty ShouldInjectEnterKeyProperty { get; }
    event TypedEventHandler<HandwritingView, HandwritingViewCandidatesChangedEventArgs> CandidatesChanged;
    event TypedEventHandler<HandwritingView, HandwritingViewContentSizeChangingEventArgs> ContentSizeChanging;
    void SelectCandidate(uint index);
    void SetTrayDisplayMode(HandwritingViewTrayDisplayMode displayMode);
  }
  public sealed class HandwritingViewCandidatesChangedEventArgs
  public sealed class HandwritingViewContentSizeChangingEventArgs
  public enum HandwritingViewTrayDisplayMode
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlEventIndex {
    HandwritingView_ContentSizeChanging = 321,
  }
  public enum XamlPropertyIndex {
    HandwritingView_HostUIElement = 2395,
    HandwritingView_IsSwitchToKeyboardButtonVisible = 2393,
    HandwritingView_MinimumColorDifference = 2396,
    HandwritingView_PreventAutomaticDismissal = 2397,
    HandwritingView_ShouldInjectEnterKey = 2398,
  }
}
namespace Windows.UI.Xaml.Markup {
  public interface IProvideValueTarget
  public interface IRootObjectProvider
  public interface IUriContext
  public interface IXamlTypeResolver
  public class MarkupExtension {
    virtual object ProvideValue(IXamlServiceProvider serviceProvider);
  }
  public sealed class ProvideValueTargetProperty
}

The post Windows 10 SDK Preview Build 18950 available now! appeared first on Windows Developer Blog.


Try out Nullable Reference Types


With the release of .NET Core 3.0 Preview 7, C# 8.0 is considered "feature complete". That means that the biggest feature of them all, Nullable Reference Types, is also locked down behavior-wise for the .NET Core release. It will continue to improve after C# 8.0, but it is now considered stable with the rest of C# 8.0.

At this time, our aim is to collect as much feedback about the process of adopting nullability as possible, catch any issues, and collect feedback on further improvements to the feature that we can do after .NET Core 3.0. This is one of the largest features ever built for C#, and although we’ve done our best to get things right, we need your help!

It is at this junction that we especially call upon .NET library authors to try out the feature and begin annotating your libraries. We’d love to hear your feedback and help resolve any issues you come across.

Familiarize yourself with the feature

We recommend reading some of the Nullable Reference Types documentation before getting started with the feature. It covers essentials like:

  • A conceptual overview
  • How to specify a nullable reference type
  • How to control compiler analysis or override compiler analysis

If you’re unfamiliar with these concepts, please give the documentation a quick read before proceeding.

Turn on Nullable Reference Types

The first step in adopting nullability for your library is to turn it on. Here’s how:

Make sure you’re using C# 8.0

If your library explicitly targets netcoreapp3.0, you’ll get C# 8.0 by default. When we ship Preview 8, you’ll get C# 8.0 by default if you target netstandard2.1 too.

.NET Standard itself doesn’t have any nullable annotations yet. If you’re targeting .NET Standard, then you can use multi-targeting for .NET Standard and netcoreapp3.0, even if you don’t need .NET Core specific APIs. The benefit is that the compiler will use the nullable annotations from CoreFX to help you get your own annotations right.

If you cannot update your TFM for some reason, you can set the LangVersion explicitly:

<PropertyGroup>
    <LangVersion>8.0</LangVersion>
</PropertyGroup>

Note that C# 8.0 is not meant for older targets, such as .NET Core 2.x or .NET Framework 4.x, so some additional language features may not work unless you are targeting .NET Core 3.0 or .NET Standard 2.1.

From here, we recommend two general approaches to adopting nullability.

Opt in a project, opt out files

This approach is best for projects where you’ll be adding new files over time. The process is straightforward:

  1. Apply the following property to your project file:

    <PropertyGroup>
        <Nullable>enable</Nullable>
    </PropertyGroup>

  2. Disable nullability in every file for that project by adding this to the top of every existing file in the project:

    #nullable disable

  3. Pick a file, remove the #nullable disable directive, and fix the warnings. Repeat until all #nullable disable directives are gone.

This approach requires a bit more up front work, but it means that you can continue working in your library while you’re porting and ensure that any new files are automatically opted-in to nullability. This is the approach we generally recommend, and we are currently using it in some of our own codebases.

Note that you can also apply the Nullable property to a Directory.Build.props file if that fits your workflow better.

Opt in files one at a time

This approach is the inverse of the previous one.

  1. Enable nullability in a file for a project by adding this to the top of the file:

    #nullable enable

  2. Continue adding this to files until all files are annotated and all nullability warnings are addressed.

  3. Apply the following property to your project file:

    <PropertyGroup>
        <Nullable>enable</Nullable>
    </PropertyGroup>

  4. Remove all #nullable enable directives in source.

This approach requires more work at the end, but it allows you to start fixing nullability warnings immediately.

Note that you can also apply the Nullable property to a Directory.Build.props file if that fits your workflow better.

What’s new in Nullable Reference Types for Preview 7

The most critical additions to the feature are tools for working with generics and more advanced API usage scenarios. These were derived from our experience in beginning to annotate .NET Core.

The notnull generic constraint

It is quite common to intend that a generic type is specifically not allowed to be nullable. For example, given the following interface:
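A minimal sketch of such an interface might look like the following (the IDoStuff name and its member are illustrative, not taken from a real library):

public interface IDoStuff<TIn, TOut>
{
    // Produces an output value from an input value.
    TOut DoStuff(TIn input);
}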

It may be desirable to only allow non-nullable reference and value types. So substituting with string or int should be fine, but substituting with string? or int? should not.

This can be accomplished with the notnull constraint:
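Continuing the illustrative IDoStuff sketch, the constraint is applied to both type parameters:

public interface IDoStuff<TIn, TOut>
    where TIn : notnull
    where TOut : notnull
{
    TOut DoStuff(TIn input);
}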

This will then generate a warning if any implementing class does not also apply the same notnull constraints:
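For example, an implementation that omits the constraints would be flagged (again using the hypothetical names from the sketch above):

// Warning: the nullability of 'TIn' and 'TOut' doesn't match the
// 'notnull' constraints declared on IDoStuff<TIn, TOut>.
public class DoesStuff<TIn, TOut> : IDoStuff<TIn, TOut>
{
    public TOut DoStuff(TIn input) => default!; // implementation elided
}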

To fix it, we need to apply the same constraints:
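A sketch of the fixed implementation:

public class DoesStuff<TIn, TOut> : IDoStuff<TIn, TOut>
    where TIn : notnull
    where TOut : notnull
{
    public TOut DoStuff(TIn input) => default!; // implementation elided
}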

And when creating an instance of that class, if you substitute it with a nullable reference type, a warning will also be generated:
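For instance, with the hypothetical DoesStuff class above:

// Warning: 'string?' does not satisfy the 'notnull' constraint on 'TIn'.
var stuff = new DoesStuff<string?, string>();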

It also works for value types:
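Continuing the same sketch with a nullable value type argument:

// Warning: 'int?' does not satisfy the 'notnull' constraint on 'TIn'.
var stuff = new DoesStuff<int?, int>();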

This constraint is useful for generic code where you want to ensure that only non-nullable reference types can be used. One prominent example is Dictionary<TKey, TValue>, where TKey is now constrained to be notnull, which disallows using null as a key:
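For example (Dictionary<TKey, TValue> comes from System.Collections.Generic; the variable names are illustrative):

var good = new Dictionary<string, int>();   // fine: 'string' is non-nullable
// Warning: 'string?' does not satisfy the 'notnull' constraint on 'TKey'.
var bad = new Dictionary<string?, int>();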

However, not all nullability problems with generics can be solved in this way. This is where we’ve added some new attributes to allow you to influence nullable analysis in the compiler.

The issue with T?

So you may have wondered: why not "just" allow T? when specifying a generic type that could be substituted with a nullable reference or value type? The answer is, unfortunately, complicated.

A natural definition of T? would mean, "any nullable type". However, this would imply that T would mean "any non-nullable type", and that is not true! It is possible to substitute a T with a nullable value type today (such as bool?). This is because T is already an unconstrained generic type. This change in semantics would likely be unexpected and cause some grief for the vast amount of existing code that uses T as an unconstrained generic type.

Next, it’s important to note that a nullable reference type is not the same thing as a nullable value type. Nullable value types map to a concrete class type in .NET. So int? is actually Nullable<int>. But for string?, it’s actually the same string but with a compiler-generated attribute annotating it. This is done for backwards compatibility. In other words, string? is kind of a "fake type", whereas int? is not.

This distinction between nullable value types and nullable reference types comes up in a pattern such as this:

void M<T>(T? t) where T: notnull

This would mean that the parameter is the nullable version of T, and T is constrained to be notnull. If T were a string, then the actual signature of M would be M<string>([NullableAttribute] T t), but if T were an int, then M would be M<int>(Nullable<int> t). These two signatures are fundamentally different, and this difference is not reconcilable.

Because of this issue between the concrete representations of nullable reference types and nullable value types, any use of T? must also require you to constrain the T to be either class or struct.

Finally, the existence of a T? that worked for both nullable reference types and nullable value types does not address every issue with generics. You may want to allow for nullable types in a single direction (i.e., as only an input or only an output) and that is not expressible with either notnull or a T and T? split unless you artificially add separate generic types for inputs and outputs.

Nullable preconditions: AllowNull and DisallowNull

Consider the following example:
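The shape of the API being discussed is, roughly, a property whose setter tolerates null but whose getter always produces a value; here is a sketch with hypothetical Widget and ScreenName names:

public class Widget
{
    private string _screenName = "anonymous";

    public string ScreenName
    {
        get => _screenName;
        // Callers have historically passed null here to reset the name.
        set => _screenName = value ?? "anonymous";
    }
}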

This might have been an API that we supported prior to C# 8.0. However, string now means non-nullable string! We may wish to still allow null values, but always give back some string value with the get. Here’s where AllowNull can come in and let you get fancy:
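A sketch of that, assuming the hypothetical Widget type above; the attribute lives in System.Diagnostics.CodeAnalysis:

using System.Diagnostics.CodeAnalysis;

public class Widget
{
    private string _screenName = "anonymous";

    // The property type stays non-nullable string, but callers may assign null.
    [AllowNull]
    public string ScreenName
    {
        get => _screenName;
        set => _screenName = value ?? "anonymous";
    }
}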

Since we always make sure that the getter never returns null, we’d like the type to remain string. But we still want to accept null values for backwards compatibility. The AllowNull attribute lets you specify that the setter accepts null values. Callers are then affected as you’d expect:
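For example, at a call site (illustrative; see also the note below about a current analysis bug):

var widget = new Widget();
widget.ScreenName = null;                      // intended to be allowed by [AllowNull]
Console.WriteLine(widget.ScreenName.Length);   // no warning: the getter never returns null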

Note: there is currently a bug where assignment of null conflicts with nullable analysis. This will be addressed in a future update of the compiler.

Consider another API:

In this case, MyHandle refers to some handle to a resource. Typical use for this API is that we have a non-null instance that we pass by reference, but when it is cleared, the reference is null. We can get fancy and represent this with DisallowNull:
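A sketch of what that could look like; MyHandle comes from the example above, while the HandleUtil and ClearHandle names and the body are illustrative:

using System.Diagnostics.CodeAnalysis;

public class MyHandle
{
    public void Close() { /* release the underlying resource */ }
}

public static class HandleUtil
{
    // Callers must pass a non-null handle in ([DisallowNull]),
    // but the handle is null by the time the method returns.
    public static void ClearHandle([DisallowNull] ref MyHandle? handle)
    {
        handle!.Close();   // the body doesn't respect the attribute, so suppress here
        handle = null;
    }
}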

This will affect any caller by emitting a warning if they pass null, but will warn if you attempt to "dot" into the handle after the method is called:
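Continuing the sketch at a call site:

MyHandle? handle = new MyHandle();
HandleUtil.ClearHandle(ref handle);   // fine: a non-null handle is passed in
handle.Close();                       // warning: 'handle' may be null after the call

MyHandle? missing = null;
HandleUtil.ClearHandle(ref missing);  // warning: [DisallowNull] rejects a possibly-null argument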

These two attributes allow us single-direction nullability or non-nullability for those cases where we need them.

More formally:

The AllowNull attribute allows callers to pass null even if the type doesn’t allow it. The DisallowNull attribute disallows callers to pass null even if the type allows it. They can be specified on anything that takes input:

  • Value parameters
  • in parameters
  • ref parameters
  • fields
  • properties
  • indexers

Important: These attributes only affect nullable analysis for the callers of methods that are annotated with them. The bodies of annotated methods and things like interface implementation do not respect these attributes. We may add support for that in the future.

Nullable postconditions: MaybeNull and NotNull

Consider the following example API:

Here we have another problem. We’d like Find to give back default if nothing is found, which is null for reference types. We’d like Resize to accept a possibly null input, but we want to ensure that after Resize is called, the array value passed by reference is always non-null. Again, applying the notnull constraint doesn’t solve this. Uh-oh!

Enter [MaybeNull] and [NotNull]. Now we can get fancy with the nullability of the outputs! We can modify the example as such:
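A sketch of what those annotated signatures could look like; the Find and Resize names come from the text, while the MyArray container and the bodies are illustrative:

using System;
using System.Diagnostics.CodeAnalysis;

public static class MyArray
{
    // The result may be null (default(T)) even though T itself is not nullable.
    [return: MaybeNull]
    public static T Find<T>(T[] array, Func<T, bool> match)
    {
        foreach (var item in array)
        {
            if (match(item)) { return item; }
        }
        return default!;
    }

    // The array may be null coming in, but it is guaranteed non-null on return.
    public static void Resize<T>([NotNull] ref T[]? array, int newSize)
    {
        var newArray = new T[newSize];
        if (array is object)
        {
            Array.Copy(array, newArray, Math.Min(array.Length, newSize));
        }
        array = newArray;
    }
}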

And these can now affect call sites:
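For example:

var words = new[] { "hello", "world" };

var found = MyArray.Find(words, w => w.Length == 10);
Console.WriteLine(found.Length);    // warning: 'found' may be null here

string[]? lines = null;
MyArray.Resize(ref lines, 16);      // passing a null array is fine
Console.WriteLine(lines.Length);    // no warning: 'lines' is not null after Resize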

The first method specifies that the T that is returned could be a null value. This means that callers of this method must check for null when using its result.

The second method has a trickier signature: [NotNull] ref T[]? array. This means that array could be null as an input, but when Resize is called, array will not be null. This means that if you "dot" into array after calling Resize, you will not get a warning. But after Resize is called, array will no longer be null.

More formally:

The MaybeNull attribute allows for a return type to be null, even if its type doesn’t allow it. The NotNull attribute disallows null results even if the type allows it. They can be specified on anything that produces output:

  • Method returns
  • out parameters (after a method is called)
  • ref parameters (after a method is called)
  • fields
  • properties
  • indexers

Important: These attributes only affect nullable analysis for the callers of methods that are annotated with them. The bodies of annotated methods and things like interface implementation do not respect these attributes. We may add support for that in the future.

Conditional postconditions: MaybeNullWhen(bool) and NotNullWhen(bool)

Consider the following example:

Methods like this are everywhere in .NET, where the return value of true or false corresponds to the nullability (or possible nullability) of a parameter. The MyQueue case is also a bit special, since it’s generic. TryDequeue should give a null for result if the result is false, but only if T is a reference type. If T is a struct, then it won’t be null.

So, we want to do three things:

  1. Signal that if IsNullOrEmpty returns false, then value is non-null
  2. Signal that if TryParse returns true, then version is non-null
  3. Signal that if TryDequeue returns false, then result could be null, provided it’s a reference type

Unfortunately, the C# compiler does not associate the return value of a method with the nullability of one of its parameters! Uh-oh!

Enter NotNullWhen(bool) and MaybeNullWhen(bool). Now we can get even fancier with parameters:
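One possible shape for these signatures, with illustrative containing types and bodies (the IsNullOrEmpty, TryParse, and TryDequeue names come from the text):

using System;
using System.Collections.Generic;
using System.Diagnostics.CodeAnalysis;

public static class MyString
{
    // When this returns false, 'value' is known to be non-null.
    public static bool IsNullOrEmpty([NotNullWhen(false)] string? value)
        => string.IsNullOrEmpty(value);
}

public static class MyVersion
{
    // When this returns true, 'version' is known to be non-null.
    public static bool TryParse(string? input, [NotNullWhen(true)] out Version? version)
        => Version.TryParse(input, out version);
}

public class MyQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();

    // When this returns false, 'result' may be null (default(T)).
    public bool TryDequeue([MaybeNullWhen(false)] out T result)
    {
        if (_items.Count > 0)
        {
            result = _items.Dequeue();
            return true;
        }
        result = default!;
        return false;
    }
}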

And these can now affect call sites:
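For example (illustrative):

void M(string? value, MyQueue<string> queue)
{
    if (!MyString.IsNullOrEmpty(value))
    {
        Console.WriteLine(value.Length);     // no warning: 'value' is not null here
    }

    if (MyVersion.TryParse(value, out var version))
    {
        Console.WriteLine(version.Major);    // no warning: 'version' is not null here
    }

    if (queue.TryDequeue(out var result))
    {
        Console.WriteLine(result.Length);    // no warning: 'result' is not null here
    }
}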

This enables callers to work with APIs using the same patterns that they’ve used before, without any spurious warnings from the compiler:

  • If IsNullOrEmpty is true, it’s safe to "dot" into value
  • If TryParse is true, then version was parsed and is safe to "dot" into
  • If TryDequeue is false, then result might be null and a check is needed (for example, when T is a struct the result is still non-null even when false is returned, but when T is a reference type, false means result could be null)

More formally:

The NotNullWhen(bool) signifies that a parameter is not null even if the type allows it, conditional on the bool returned value of the method. The MaybeNullWhen(bool) signifies that a parameter could be null even if the type disallows it, conditional on the bool returned value of the method. They can be specified on any parameter type.

Nullness dependence between inputs and outputs: NotNullIfNotNull(string)

Consider the following example:

In this case, we’d like to return a possibly null string, and we should also be able to accept a null value as input. So the signature accomplishes what I’d like to express.

However, if path is not null, we’d like to ensure that we always give back a string. That is, we want the return value of GetFileName to be non-null, conditional on the nullness of path. There’s no way to express this as-is. Uh-oh!

Enter NotNullIfNotNull(string). This attribute can make your code the fanciest, so use it with care! Here’s how we’ll use it in my API:
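A sketch, with the GetFileName name from the text and an illustrative MyPath container and body:

using System.Diagnostics.CodeAnalysis;

public static class MyPath
{
    // The return value is non-null whenever 'path' is non-null.
    [return: NotNullIfNotNull("path")]
    public static string? GetFileName(string? path)
    {
        if (path == null) { return null; }
        int index = path.LastIndexOfAny(new[] { '/', '\\' });
        return index < 0 ? path : path.Substring(index + 1);
    }
}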

And this can now affect call sites:
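At a call site, for instance:

string? maybePath = Environment.GetEnvironmentVariable("MY_PATH");   // may be null
string? name1 = MyPath.GetFileName(maybePath);    // may be null, because 'maybePath' may be null
string name2 = MyPath.GetFileName("data.csv");    // no warning: the argument is not null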

More formally:

The NotNullIfNotNull(string) attribute signifies that any output value is non-null conditional on the nullability of a given parameter whose name is specified. They can be specified on the following constructs:

  • Method returns
  • ref parameters

Flow attributes: DoesNotReturn and DoesNotReturnIf(bool)

You may work with multiple methods that affect control flow of your program. For example, an exception helper method that will throw an exception if called, or an assertion method that will throw an exception if an input is true or false.

You may wish to do something like assert that a value is non-null, and we think you’d also like it if the compiler could understand that.

Enter DoesNotReturn and DoesNotReturnIf(bool). Here’s an example of how you could use either:
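A sketch of both, with illustrative containing types; the ThrowArgumentNullException and MyAssert names come from the text:

using System;
using System.Diagnostics.CodeAnalysis;

public static class ThrowHelper
{
    // Control flow never continues past a call to this method.
    [DoesNotReturn]
    public static void ThrowArgumentNullException(string argument)
        => throw new ArgumentNullException(argument);
}

public static class Verify
{
    // If 'condition' is false, this method does not return.
    public static void MyAssert([DoesNotReturnIf(false)] bool condition)
    {
        if (!condition)
        {
            throw new InvalidOperationException("Assertion failed.");
        }
    }
}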

When ThrowArgumentNullException is called in a method, it throws an exception. The DoesNotReturn attribute it is annotated with signals to the compiler that no nullable analysis needs to happen after that point, since that code would be unreachable.

When MyAssert is called and the condition passed to it is false, it throws an exception. The DoesNotReturnIf(false) that annotates the condition parameter lets the compiler know that program flow will not continue if that condition is false. This is helpful if you want to assert the nullability of a value. In the code path following MyAssert(value != null); the compiler can assume value is not null.

DoesNotReturn can be used on methods. DoesNotReturnIf(bool) can be used on input parameters.

Evolving your annotations

Once you annotate a public API, you’ll want to consider the fact that updating an API can have downstream effects:

  • Adding nullable annotations where there weren’t any may introduce warnings to user code
  • Removing nullable annotations can also introduce warnings (e.g., interface implementation)

Nullable annotations are an integral part of your public API. Adding or removing annotations can introduce new warnings. We recommend starting with a preview release where you solicit feedback, with the aim of not changing any annotations after a full release. This isn’t always going to be possible, but we recommend it nonetheless.

Current status of Microsoft frameworks and libraries

Because Nullable Reference Types are so new, the large majority of Microsoft-authored C# frameworks and libraries have not yet been appropriately annotated.

That said, the “Core Lib” part of .NET Core, which represents about 20% of the .NET Core shared framework, has been fully updated. It includes namespaces like System, System.IO, and System.Collections.Generic. We’re looking for feedback on our decisions so that we can make appropriate tweaks as soon as possible, and before their usage becomes widespread.

Although there is still ~80% of CoreFX left to annotate, the most-used APIs are fully annotated.

Roadmap for Nullable Reference Types

Currently, we view the full Nullable Reference Types experience as being in preview. It’s stable, but the feature involves spreading nullable annotations throughout our own technologies and the greater .NET ecosystem. This will take some time to complete.

That said, we’re encouraging library authors to start annotating their libraries now. The feature will only get better as more libraries adopt nullability, helping .NET become a more null-safe place.

Over the coming year or so, we’re going to continue to improve the feature and spread its use throughout Microsoft frameworks and libraries.

For the language, especially compiler analysis, we’ll be making numerous enhancements so that we can minimize your need to do things like use the null-forgiveness (!) operator. Many of these enhancements are already tracked on the Roslyn repo.

For CoreFX, we’ll be annotating the remaining ~80% of APIs and making appropriate tweaks based on feedback.

For ASP.NET Core and Entity Framework, we’ll be annotating public APIs once some new additions to CoreFX and the compiler are added.

We haven’t yet planned how to annotate WinForms and WPF APIs, but we’d love to hear your feedback on what kinds of things matter!

Finally, we’re going to continue enhancing C# tooling in Visual Studio. We have multiple ideas for features to help using the feature, but we’d love your input as well!

Next steps

If you’re still reading and haven’t tried out the feature in your code, especially your library code, give it a try and please give us feedback on anything you feel ought to be different. The journey to make unanticipated NullReferenceExceptions in .NET go away will be lengthy, but we hope that in the long run, developers simply won’t have to worry about getting bitten by implicit null values anymore. You can help us. Try out the feature and begin annotating your libraries. Feedback on your experience will help shorten that journey.

Cheers, and happy hacking!

The post Try out Nullable Reference Types appeared first on .NET Blog.

Python in Visual Studio Code – August 2019 Release


We are pleased to announce that the August 2019 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.

In this release we made improvements that are listed in our changelog, closing a total of 76 issues, including Jupyter Notebook cell debugging, an Insiders program, and improvements to auto-indentation and to the Python Language Server.

Jupyter Notebook cell debugging  

A few weeks ago we showed a preview of debugging Jupyter notebook cells at EuroPython 2019. We’re happy to announce we’re officially shipping this functionality in this release.

Now you’ll be able to set up breakpoints and click on the “Debug Cell” option that is displayed at the cell definition. This will initiate a debugging session and you’ll be able to step into, step out and step over your code, inspect variables and set up watches, just like you normally would when debugging Python files or applications.   

Insiders program  

This release includes support for an easy opt-in to our Insiders program. You can try out new features and fixes before the release date by getting automatic installs for the latest Insiders builds of the Python extension, in a weekly or daily cadence.   

To opt in to this program, open the command palette (View > Command Palette…) and select “Python: Switch to Insiders Weekly Channel”. You can also open the settings page (File > Preferences > Settings), look for “Python: Insiders Channel” and set the channel to “daily” or “weekly”, as you prefer.

Improvements to auto-indentation

This release also includes automatic one-level dedent and indentation on enter for a series of statements such as else, elif, except, finally, break, continue, pass and raise. This was another highly requested feature from our users.

Improvements to the Python Language Server

We’ve added new functionality to “go to definition” with the Python Language Server, which now takes you to the place in code where a variable (as an example) is actually defined. To match the previous behavior of “go to definition”, we added “go to declaration”.

We’ve also made fixes to our package watcher. Before, whenever you added an import statement for a package you didn’t have installed in your environment, installing the package via pip didn’t fix ‘unresolved imports’ errors and a user would be forced to reload their entire VS Code window. Now, you no longer need to do this – the errors will automagically disappear once a new package is installed and analyzed. 

Other Changes and Enhancements 

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include: 

  • Add new ‘goto cell’ code lens on every cell that is run from a file. (#6359) 
  • Fixed a bug in pytest test discovery. (thanks Rainer Dreyer) (#6463) 
  • Improved accessibility of the ‘Python Interactive’ window. (#5884) 
  • We now log processes executed behind the scenes in the extension output panel. (#1131) 
  • Fixed indentation after string literals containing escaped characters. (#4241) 

We also started A/B testing new features. If you see something different that was not announced by the team, you may be part of an experiment! To see if you are part of an experiment, you can check the first lines in the Python extension output channel. If you wish to opt out of A/B testing, disable telemetry in Visual Studio Code.

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page. 

The post Python in Visual Studio Code – August 2019 Release appeared first on Python.

Get insights into your team’s health with Azure Boards Reports


You can’t fix what you can’t see. That’s why high-performing teams want to keep a close eye on the state and health of their work processes. Metrics like Sprint Burndown, Flow of Work and Team Velocity give teams visibility into their progress and help answer questions like:

  • How much work do we have left in this sprint? Are we on track to complete it?
  • What step of the development process is taking the longest? Can we do something about it?
  • Based on previous iterations, how much work should we plan for next sprint?

With Sprint 155 Update, we are making it easier for teams to track these important metrics with minimal effort right inside Azure Boards. We are excited to introduce three new interactive reports: Sprint Burndown under the Sprints hub, and Cumulative Flow Diagram (CFD) and Velocity under the Backlogs and Boards hubs. The new reports are fully interactive and allow teams to adjust them for their needs.

For those of you who are familiar with the previous CFD, Burndown and Velocity charts in the board headers, these are now replaced with the enhanced reports.

Sprint Burndown Report

Burndown is a known Scrum concept, but tracking completed work over time is not unique to Scrum. In fact, many teams track similar metrics as part of their daily sync. With Azure Boards, every day after you view your sprint board and review what everyone is working on, you can jump into the Analytics tab and evaluate your progress. Using the Sprint Burndown, you can instantly see the work completed so far and detect added scope. While best practices say you shouldn’t add work after the beginning of the sprint, we all know that happens and that is OK! Tracking the scope line helps you see whether your team can still meet its goals. The Burndown report also helps you predict the completion of the work; no crystal ball needed! If the work assigned to this sprint is going to slip, you’ll see a red mark at the top of the chart. Knowing in advance gives the team an option to course correct and either reassign work, recruit more resources, or reset expectations.

Sprint 154 burndown – Compass team

The Sprint Burndown is also great for retrospectives. For example, in the image above you can see the Compass team’s sprint burndown at the end of Sprint 154. The team completed 100% of the work, as seen in the top-left metric. You can also see that there was a scope increase (yellow line) on May 23rd, indicating that the team was able to take on more work than planned and still finish on time (go team Compass!). Notice that this Burndown uses a count of work items and is not restricted to Remaining Work like the previous chart. The dates are automatically set to the selected sprint; changing them in the report will not affect the “official” sprint dates managed by your project admin. If you need help getting around, you can click the icon next to the report’s title (Burndown Trend) to start a guided tour.
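If it helps to see the arithmetic behind those numbers, here is a back-of-the-envelope sketch of how percentage completed and scope increases can be derived from daily snapshots. The numbers are invented and this is not how Azure Boards computes the report internally:

# Illustrative sketch only: rough burndown arithmetic from made-up daily snapshots.
daily_scope     = [20, 20, 20, 23, 23, 23]   # total items in the sprint each day (scope line)
daily_remaining = [20, 18, 15, 16, 11,  0]   # items not yet Done each day

percent_complete = 100 * (daily_scope[-1] - daily_remaining[-1]) / daily_scope[-1]

scope_increases = [
    (day, today - yesterday)
    for day, (yesterday, today) in enumerate(zip(daily_scope, daily_scope[1:]), start=1)
    if today > yesterday
]

print(f"Completed: {percent_complete:.0f}% of the sprint scope")
print(f"Scope added (day, items): {scope_increases}")   # [(3, 3)] -> 3 items added on day 3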

What has changed?

Some of you might be familiar with a previous version of the sprint burndown displayed in the board header. Here are some of the changes and improvements to the burndown:

  • The location of the burndown moved from the header to the Analytics tab. (Image: old burndown chart in the board header)

  • The chart is fully interactive: hover to browse through the data points, click on the area chart to open a query of the items, or click on any of the legend items to visually hide them from the chart.

  • We’ve added additional metrics like percentage completed, average burndown and the total scope line. Read more about interpreting a Burndown chart. See the image below for a comparison. (Image: the old burndown vs. the new report)

  • The Burndown metric supports a count of work items or any other numeric field on the work items in the iteration, so the chart is useful even if you don’t use Remaining Work.
  • Dates are flexible. For example, you can omit the first day of the official sprint dates that your team uses for planning without affecting the actual sprint dates.
  • Non-working days like weekends are represented by a gray watermark, so work done during the weekend still shows in the chart.

Check Burndown documentation for more details on this report.

Cumulative flow and Velocity Reports

Kanban boards and Backlogs are widely used by teams to track and manage work. Usually, different board columns represent different stages of completing work. It doesn’t matter whether the team has as few as four columns (To Do, Doing, In PR and Done) or 10 columns tracking every step of the way; the flow in which work goes through the columns and the amount of work completed per iteration are the heartbeat of the team. The Velocity and Cumulative Flow Diagram (CFD) reports help teams keep an eye on that heartbeat, recognize outliers, and monitor the load.

These new reports (found under the new Analytics tab in Boards and in Backlogs) surface high-level metrics shown in the menu below. These cards (showing live data) give you a quick glance at the current state: they show the average Work in Progress and the average Velocity for the selected team. You don’t need to know your numbers by heart to detect a change; just click on a card to see the full report with trends and more insights.

Cards displayed in the analytics tab of Backlogs and Boards

Velocity

We often hear about teams who are stuck in a vicious cycle: they don’t have a stable throughput so they can’t plan correctly, and because they don’t plan the amount of work, their throughput stays inconsistent. That is where Velocity can help. The Velocity report gives visibility into the amount of work that was delivered each sprint over time. It also exposes the delta between the work that was planned and the work that was completed or completed late. Teams should use velocity as a guide for determining how well the team estimates and meets its planned commitments. It’s important to note that tracking velocity over time is not about achieving a higher number per iteration and should not be used as a method of comparison between teams! It should be used to improve planning. If the trend is steady, teams can use their average as a baseline for their next sprint commitment.

Velocity report
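As a simple illustration of using past velocity as a planning guide (made-up numbers, not an Azure Boards API):

# Illustrative sketch only: each tuple is (planned, completed) story points for a past sprint.
past_sprints = [(32, 30), (35, 28), (30, 31), (34, 29), (33, 30)]

average_velocity = sum(completed for _, completed in past_sprints) / len(past_sprints)
average_slip = sum(planned - completed for planned, completed in past_sprints) / len(past_sprints)

print(f"Average velocity: {average_velocity:.1f} points per sprint")   # 29.6
print(f"Average over-commitment: {average_slip:.1f} points")           # 3.2
# A steady average is a reasonable ceiling for the next sprint's commitment;
# the point is better planning, not comparing teams.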

This report replaces the old velocity chart and enhances it:

  • It’s now possible to monitor velocity for all backlog levels by simply swapping the selected backlog in the top-right corner.
  • You can also configure the Velocity metric, for example using Sum of Story Points if your team estimates user stories.

Learn more about Velocity.

Cumulative flow diagram (CFD)

As the name suggests, the cumulative flow diagram (CFD) helps teams track how well work flows through their process. The stacked area chart shows, at each time interval, the number of items in each column. As time goes by, the chart shows the flow of items through the process. We call it “cumulative” because we’re not measuring the incremental change from interval to interval – we’re always counting every item in each stage, regardless of whether it was in that stage during the last measurement. Seeing the column trends is very valuable. For example, flat lines in multiple columns may indicate that work is taking longer than planned. A bulge in a line might indicate that work has built up in one column and isn’t moving through. More on how to read a CFD can be found here. The lead metric is WIP (work in progress). A high WIP count might result in context switching and can be a sign that the team is overcommitted or that resources aren’t allocated accordingly.

CFD report
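To make the cumulative counting concrete, here is a small sketch with invented numbers (not how Azure Boards computes the report):

# Illustrative sketch only: each row is one day's count of items sitting in each column.
columns = ["To Do", "Doing", "In PR", "Done"]
daily_counts = [
    [12, 3, 1,  4],
    [10, 5, 2,  5],
    [ 9, 6, 2,  7],
    [ 8, 5, 3, 10],
]

# The CFD stacks these counts per day; WIP is everything between the first and
# last columns (here "Doing" + "In PR"), i.e. work that is started but not finished.
wip_per_day = [sum(day[1:-1]) for day in daily_counts]
average_wip = sum(wip_per_day) / len(wip_per_day)

print(f"WIP per day: {wip_per_day}")                   # [4, 7, 8, 8]
print(f"Average work in progress: {average_wip:.1f}")  # 6.8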

This report replaces the old CFD chart and enhances it:

  • Control the time periods: pick from a list or customize your own.
  • Add and remove board columns to focus on the part of the flow your team controls.
  • Pick and choose the swimlanes.

Share your feedback with us

Read more about the new Azure Boards reports in the documentation, or watch this live demo.

The new reports preview is turned on by default. If you experience any issues, please post on the Developer Community. You can also temporarily go back to the previous charts using Preview features: toggle off “New Boards reports” as shown in the image below.

The post Get insights into your team’s health with Azure Boards Reports appeared first on Azure DevOps Blog.

High Availability Add-On updates for Red Hat Enterprise Linux on Azure


High availability is crucial to mission-critical production environments. The Red Hat Enterprise Linux High Availability Add-On provides reliability and availability to critical production services that use it. Today, we’re sharing performance improvements and image updates around the High Availability Add-On for Red Hat Enterprise Linux (RHEL) on Azure.

Pacemaker

Pacemaker is a robust and powerful open-source resource manager used in highly available compute clusters. It is a key part of the High Availability Add-On for RHEL.

Pacemaker has been updated with performance improvements in the Azure Fencing Agent to significantly decrease Azure failover time, which greatly reduces customer downtime. This update is available to all RHEL 7.4+ users using either the Pay-As-You-Go images or Bring-Your-Own-Subscription images from the Azure Marketplace.

New pay-as-you-go RHEL images with the High Availability Add-On

We now have RHEL Pay-As-You-Go (PAYG) images with the High Availability Add-On available in the Azure Marketplace. These RHEL images have additional access to the High Availability Add-On repositories. Pricing details for these images are available in the pricing calculator.

The following RHEL HA PAYG images are now available in the Marketplace for all Azure regions, including US Government Cloud:

New pay-as-you-go RHEL for SAP images with the High Availability Add-On

We also have RHEL images that include both SAP packages and the High Availability Add-On available in the Marketplace. These images come with access to SAP repositories as well as 4 years of support per standard Red Hat policies. Pricing details for these images are available in the pricing calculator.

The following RHEL for SAP with HA and Update Services images are available in the Marketplace for all Azure regions, including US Government Cloud:

Refer to the Certified and Supported SAP HANA Hardware Directory to see the list of SAP-certified Azure VM sizes.

You can also get a full listing of RHEL images on Azure, including the RHEL with HA and RHEL for SAP with HA images, with the following Azure CLI command:

az vm image list --publisher redhat --all

Support

All the RHEL with HA and RHEL for SAP with HA images on Azure are fully supported by the Red Hat and Microsoft integrated support team.

See the support site here and the Red Hat support site here.

Full details on the Red Hat Enterprise Linux support lifecycle are available here.

Next steps

Disaster recovery of Azure disk encryption (V2) enabled virtual machines


Choosing Azure for your applications and services allows you to take advantage of a wide array of security tools and capabilities. These tools and capabilities help make it possible to create secure solutions on Azure. Among these capabilities is Azure disk encryption, designed to help protect and safeguard your data to meet your organizational security and compliance commitments. It uses the industry-standard BitLocker Drive Encryption for Windows and DM-Crypt for Linux to provide volume encryption for OS and data disks. The solution is integrated with Azure Key Vault to help you control and manage disk encryption keys and secrets, and ensures that all data on virtual machine (VM) disks is encrypted both in transit and at rest while in Azure Storage.

Beyond securing your applications, it is important to have a disaster recovery plan in place to keep your mission-critical applications up and running when planned and unplanned outages occur. Azure Site Recovery helps orchestrate replication, failover, and recovery of applications running on Azure Virtual Machines so that they are available from a secondary region if you have any outages in the primary region.

Azure Site Recovery now supports disaster recovery of Azure disk encryption (V2) enabled virtual machines without an Azure Active Directory application. While enabling replication of your VM for disaster recovery, all the required disk encryption keys and secrets are copied from the source region to the target region in the user context. If the user managing disaster recovery does not have the appropriate permissions, they can hand over a ready-to-use script to the security administrator to copy the keys and secrets and proceed with the configuration.
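The generated script itself isn’t reproduced here, but as a rough sketch of the kind of copy operation involved, this is roughly what moving secrets between two Key Vaults looks like with the Azure SDK for Python. The vault URLs are placeholders, and a real Azure disk encryption setup may also involve key encryption keys that this simplified sketch does not handle:

# Rough sketch only: copy secrets from a source-region Key Vault to a target-region
# Key Vault. This is NOT the script Azure Site Recovery generates; vault URLs are
# placeholders and key encryption keys (KEKs) are ignored for simplicity.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
source = SecretClient("https://source-region-vault.vault.azure.net", credential)
target = SecretClient("https://target-region-vault.vault.azure.net", credential)

for properties in source.list_properties_of_secrets():
    secret = source.get_secret(properties.name)
    # Recreate each secret in the target vault so failed-over VMs can be unlocked.
    target.set_secret(secret.name, secret.value)
    print(f"Copied secret '{secret.name}' to the target vault")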

Configure disaster recovery for Azure disk encryption (V2) enabled virtual machines

This feature currently supports only Windows VMs using managed disks. The support for Linux VMs using managed disks will be available in the coming weeks. This feature is available in all Azure regions where Azure Site Recovery is available. Configure disaster recovery for Azure disk encryption enabled virtual machines using Azure Site Recovery today and become both secure and protected from outages.
