
Hot patching SQL Server Engine in Azure SQL Database


In the world of cloud database services, few things are more important to customers than having uninterrupted access to their data. In industries like online gaming and financial services that experience high transaction rates, even the smallest interruptions can potentially impact the end-user’s experience. Azure SQL Database is evergreen, meaning that it always has the latest version of the SQL Engine, but maintaining this evergreen state requires periodic updates to the service that can take the database offline for a second. For this reason, our engineering team is continuously working on innovative technology improvements that reduce workload interruption.

Today’s post, in collaboration with the Visual C++ Compiler team, covers how we patch SQL Server Engine without impacting workload at all.

Figure 1 – This is what hot patching looks like under the covers. If you’re interested in the low-level details, see our technical blog post.

The challenge

The SQL Engine we are running in Azure SQL Database is the very latest version of the same engine customers run on their own servers, except we manage and update it. To update SQL Server or the underlying infrastructure (i.e., Azure Service Fabric or the operating system), we must stop the SQL Server process. If that process hosts the primary database replica, we move the replica to another machine, requiring a failover.

During a failover, the database may be offline for a second and still meet our 99.995 percent SLA. However, failover of the primary replica impacts workload because it aborts in-flight queries and transactions. We built features such as resumable index (re)build and accelerated database recovery to address these situations, but not all running operations are automatically resumable. It may be expensive to restart complex queries or transactions that were aborted due to an upgrade. So even though failovers are quick, we want to avoid them.

SQL Server and the overall Azure platform invest significant engineering effort into platform availability and reliability. In SQL Database, we have multiple replicas of every database. During upgrade, we ensure that hot standbys are available to take over immediately.

We’ve worked closely with the broader Azure and Service Fabric teams to minimize the number of failovers. When we first decide to fail over a database for upgrade, we apply updates to all components in the stack at the same time: OS, Service Fabric, and SQL Server. We have automatic scheduling that avoids deploying during an Azure region’s core business hours. Just before failover, we attempt to drain active transactions to avoid aborting them. We even utilize database workload patterns to perform failover at the best time for the workload.

Even with all that, we don’t get away from the fact that to update SQL Engine to a new version, we must restart the process and failover the database’s primary replica at least once. Or do we?

Hot patching and results

Hot patching is modifying in-memory code in a running process without restarting the process. In our case, it gives us the capability to modify C++ code in SQL Engine without restarting sqlservr.exe. Since we don’t restart, we don’t failover the primary replica and interrupt the workload. We don't even need to pause SQL Server activity while we patch. Hot patching is unnoticed by the user workload, other than the patch payload, of course!

Hot patching does not replace traditional, restarting upgrades – it complements them. Hot patching currently has limitations that make it unsuitable when there are a large number of changes, such as when a major new feature is introduced. But it is perfect for smaller, targeted changes. More than 80 percent of typical SQL bug fixes are hot patchable. Benefits of hot patching include:

  • Reduced workload disruption - No restart means no database failover and no workload impact.
  • Faster bug fixes - Previously, we weighed the urgency of a bug fix vs. impact on customer workloads from deploying it. Sometimes we would deem a bug fix not important enough for worldwide rollout because of the workload impact. With hot patching, we can now deploy bug fixes worldwide right away.
  • Features available sooner - Even with the 500,000+ functional tests that we run several times per day and thorough testing of every new feature, sometimes we discover problems after a new feature has been made available to customers. In such cases, we may have to disable the feature or delay go-live until the next scheduled full upgrade. With hot patching, we can fix the problem and make the feature available sooner.

We did the first hot patch in production in 2018. Since then, we have hot patched millions of SQL Servers every month. Hot patching increases SQL Database ship velocity by 50 percent, while at the same time improving availability.

How hot patching works

For the technically interested, see our technical blog post for a detailed explanation of how hot patching works under the covers. Start reading at section three.

Closing words and next steps

With the capability in place, we are now working to improve the tooling and remove limitations to make more changes hot patchable with quick turnaround. For now, hot patching is only available in Azure SQL Database, but some day it may also come to SQL Server. Let us know via SQLDBArchitects@microsoft.com if you would be interested in that.

Please leave comments and questions below or contact us on the email above if you would like to see more in-depth coverage of cool technology we work on.


ASP.NET Core and Blazor updates in .NET Core 3.0

Today we are thrilled to announce the release of .NET Core 3.0! .NET Core 3.0 is ready for production use, and is loaded with lots of great new features for building amazing web apps with ASP.NET Core and Blazor.

Some of the big new features in this release of ASP.NET Core include:

  • Build rich interactive client-side web apps using C# instead of JavaScript with Blazor.
  • Create high-performance backend services with gRPC.
  • SignalR now has support for automatic reconnection and client-to-server streaming.
  • Generate strongly typed client code for Web APIs with OpenAPI documents.
  • Endpoint routing integrated through the framework.
  • HTTP/2 now enabled by default in Kestrel.
  • Authentication support for Web APIs and single-page apps integrated with IdentityServer.
  • Support for certificate and Kerberos authentication.
  • Integrates with the new System.Text.Json serializer.
  • New generic host sets up common hosting services like dependency injection (DI), configuration, and logging.
  • New Worker Service template for building long-running services.
  • New EventCounters created for requests per second, total requests, current requests, and failed requests.
  • Startup errors now reported to the Windows Event Log when hosted in IIS.
  • Request pipeline integrated with System.IO.Pipelines.
  • Performance improvements across the entire stack.

You can find all the details about what’s new in ASP.NET Core in .NET Core 3.0 in the What’s new in ASP.NET Core 3.0 topic.

See the .NET Core 3.0 release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0, install the .NET Core 3.0 SDK.

If you’re on Windows using Visual Studio, install Visual Studio 2019 16.3, which includes .NET Core 3.0.

Note: .NET Core 3.0 requires Visual Studio 2019 16.3 or later.

There is also a Blazor WebAssembly preview update available with this release. This update to Blazor WebAssembly still has a Preview 9 version, but carries an updated build number. Blazor WebAssembly is still in preview and is not part of the .NET Core 3.0 release.

To install the latest Blazor WebAssembly template run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview9.19465.2

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0, follow the migration steps in the ASP.NET Core docs.

See the full list of breaking changes in ASP.NET Core 3.0.

To upgrade an existing ASP.NET Core 3.0 RC1 project to 3.0:

  • Update all Microsoft.AspNetCore.* and Microsoft.Extensions.* package references to 3.0.0
  • Update all Microsoft.AspNetCore.Blazor.* package references to 3.0.0-preview9.19465.2

That’s it! You should now be all set to use .NET Core 3.0!

Join us at .NET Conf!

Please join us at .NET Conf to learn all about the new features in .NET Core 3.0 and to celebrate the release with us! .NET Conf is a live streaming event open to everyone, and features talks from many talented speakers from the .NET team and the .NET community. Check out the schedule and attend a local event near you. Or join the Virtual Attendee Party for the chance to win prizes!

Give feedback

We hope you enjoy the new features in this release of ASP.NET Core and Blazor in .NET Core 3.0! We are eager to hear about your experiences with this latest .NET Core release. Let us know what you think by filing issues on GitHub.

Thanks for using ASP.NET Core and Blazor!


Get Up and Running with the Maps SDK for Unity


The Maps SDK is a map control for Unity that allows us to utilize Bing Maps 3D data in Unity-based mixed reality experiences. Here is a quick start guide on setting up the Maps SDK control and demoing the packaged samples.

Before you begin working in Unity, you need to install the required software, obtain a Bing Maps Key and create a UnityID.

Preparing Unity and the Maps SDK

Software Installation:

  1. Windows 10 Fall Creators Update or newer
  2. Unity 3D (supported versions: https://github.com/microsoft/MapsSDK-Unity/wiki/Unity-Support-Matrix). You can also use Unity Hub to target specific Unity editor versions for specific releases.
  3. Visual Studio 2017+

You can obtain your Bing Maps Key by following the instructions in the Getting a Bing Maps Key page. You can create your UnityID by registering with Unity.

Once you have completed the required installations, obtained a Bing Maps Key and created your UnityID, you will need to download the Unity Asset Package for the Maps SDK.

You are now ready to set up the Unity Project.

  1. Open Unity3D and use your UnityID to log in.
  2. Select the "New" Icon to create a new Unity Project.
  3. Once your Unity Project loads, select from the menu bar Assets > Import Package > Custom Package and select the Unity Asset Package for the Maps SDK you downloaded earlier.
    Maps SDK Unity - Screenshot Custom Package
  4. You will be prompted to import the package. Click Import.
    Maps SDK for Unity - Screenshot Import Unity Package
  5. The Maps SDK Unity Package should now be fully loaded into your project.

In the Unity Editor, find the Project Explorer. Here you will see all the files inside of your project. By expanding the Assets folder, you will see all the contents of the package necessary to get running with the Maps SDK within Unity.

Maps SDK for Unity - Screenshot Project Explorer Asset Folder Expanded

Creating the Unity Scene and Map

To begin, you will have to create a Unity Scene. Go to File > New Scene. This should create an untitled scene. Inside the Hierarchy, you should see only two game objects: Main Camera and Directional Light.

Maps SDK for Unity - Screenshot Creating New Scene

To create the map, you must create an empty 'GameObject'. Right-click anywhere inside the hierarchy and select 'Create Empty', or click GameObject > Create Empty from the top menu bar.

Maps SDK for Unity - Screenshot Create Empty Selected

You should now see a new 'GameObject' in your hierarchy. Feel free to rename this GameObject by right-clicking it in the hierarchy and selecting Rename.

Maps SDK for Unity - Screenshot GameObject

The next step is to add a MapRenderer component to the empty 'GameObject'. Do so by selecting the GameObject. This will bring up the GameObject's components in the Unity Inspector window. In the inspector, click Add Component > Scripts > Microsoft.Maps.Unity > Map Renderer.

Maps SDK for Unity - Screenshot Add Map Renderer

The Unity Inspector window should now display the Map Renderer. The Map Renderer will have several fields and options. You can read more about configuring the MapRenderer here. For now, under API Settings, input your Bing Maps Key to enable the map.

Maps SDK for Unity - Screenshot Inspector Window

Once the key is validated, you should now see Bing Maps overlaid on your GameObject.

Maps SDK for Unity - Screenshot Bing Maps Overlaid on Game Object

Now you are ready to begin adding other GameObjects, labels and animations to the map! Make sure to visit the Maps SDK Wiki for the full SDK reference and additional tutorials.
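
You can also drive the map from script rather than the Inspector. Here is a minimal sketch, assuming the MapRenderer component exposes Center and ZoomLevel properties as in the SDK samples (check the Maps SDK Wiki for the authoritative API):

using Microsoft.Geospatial;
using Microsoft.Maps.Unity;
using UnityEngine;

// Attach to the GameObject that hosts the MapRenderer component.
public class MapSetup : MonoBehaviour
{
    void Start()
    {
        var map = GetComponent<MapRenderer>();
        map.Center = new LatLon(47.6062, -122.3321); // example coordinates (Seattle)
        map.ZoomLevel = 15;
    }
}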

Demoing the Sample Projects

In the Project Explorer, expand the Assets folder. Inside, open the Microsoft.Maps.Unity.Examples folder. You should now see three sample folders:

  1. City Tour Example
  2. Map Pin Example
  3. Weather Cube Example

Maps SDK for Unity - Assets Folder

Simply open the scene inside the sample folder to load the desired sample. Keep in mind that you will need to provide your Bing Maps key in the Map Renderer Component in the 'Map' GameObject.

For more detailed information about the Maps SDK sample projects, visit here.

- Bing Maps Team

Announcing .NET Core 3.0


We’re excited to announce the release of .NET Core 3.0. It includes many improvements, including adding Windows Forms and WPF, adding new JSON APIs, support for ARM64, and improving performance across the board. C# 8 is also part of this release, which includes nullable reference types, async streams, and more patterns. F# 4.7 is included, and focused on relaxing syntax and targeting .NET Standard 2.0. You can start updating existing projects to target .NET Core 3.0 today. The release is compatible with previous versions, making updating easy.

You can download .NET Core 3.0 for Windows, macOS, and Linux.

Visual Studio 2019 16.3 was also released today and is a required update to use .NET Core 3.0 with Visual Studio.

ASP.NET Core 3.0 and EF Core 3.0 are also releasing today.

Thank you to everyone that contributed to .NET Core 3.0! Hundreds of people were involved in making this release happen, including major contributions from the community.

Release notes:

What you should know about 3.0

There are some key improvements and guidance that are important to draw attention to before we go into a deep dive on all the new features in .NET Core 3.0. Here’s the quick punch list.

  • .NET Core 3.0 is already battle-tested by being hosted for months at dot.net and on Bing.com. Many other Microsoft teams will soon be deploying large workloads on .NET Core 3.0 in production.
  • Performance is greatly improved across many components and is described in detail at Performance Improvements in .NET Core 3.0.
  • C# 8 adds async streams, range/index, more patterns, and nullable reference types. Nullable enables you to directly target the flaws in code that lead to NullReferenceException. The lowest layer of the framework libraries has been annotated, so that you know when to expect null.
  • F# 4.7 focuses on making some things easier with implicit yield expressions and some syntax relaxations. It also includes support for LangVersion, and ships with nameof and opening of static classes in preview. The F# Core Library now also targets .NET Standard 2.0. You can read more at Announcing F# 4.7.
  • .NET Standard 2.1 increases the set of types you can use in code that can be used with both .NET Core and Xamarin. .NET Standard 2.1 includes types that have been added to .NET Core since 2.1.
  • Windows Desktop apps are now supported with .NET Core, for both Windows Forms and WPF (and open source). The WPF designer is part of Visual Studio 2019 16.3. The Windows Forms designer is still in preview and available as a VSIX download.
  • .NET Core apps now have executables by default. In past releases, apps needed to be launched via the dotnet command, like dotnet myapp.dll. Apps can now be launched with an app-specific executable, like myapp or ./myapp, depending on the operating system.
  • High performance JSON APIs have been added, for reader/writer, object model and serialization scenarios. These APIs were built from scratch on top of Span<T> and use UTF8 under the covers instead of UTF16 (like string). These APIs minimize allocations, resulting in faster performance, and much less work for the garbage collector. See The future of JSON in .NET Core 3.0.
  • The garbage collector uses less memory by default, often a lot less. This improvement is very beneficial for scenarios where many applications are hosted on the same server. The garbage collector has also been updated to make better use of large numbers of cores, on machines with >64 cores.
  • .NET Core has been hardened for Docker to enable .NET applications to work predictably and efficiently in containers. The garbage collector and thread pool have been updated to work much better when a container has been configured for limited memory or CPU. .NET Core docker images are smaller, particularly the SDK image.
  • Raspberry Pi and ARM chips are now supported to enable IoT development, including with the remote Visual Studio debugger. You can deploy apps that listen to sensors, and print messages or images on a display, all using the new GPIO APIs. ASP.NET can be used to expose data as an API or as a site that enables configuring an IoT device.
  • .NET Core 3.0 is a ‘current’ release and will be superseded by .NET Core 3.1, targeted for November 2019. .NET Core 3.1 will be a long-term supported (LTS) release (supported for at least 3 years). We recommend that you adopt .NET Core 3.0 and then adopt 3.1. It’ll be very easy to upgrade.
  • .NET Core 3.0 will be available with RHEL 8 in the Red Hat Application Streams, after several years of collaboration with Red Hat.
  • Visual Studio 2019 16.3 is a required update for Visual Studio users on Windows that want to use .NET Core 3.0.
  • Visual Studio for Mac 8.3 is a required update for Visual Studio for Mac users that want to use .NET Core 3.0.
  • Visual Studio Code users should just always use the latest version of the C# extension to ensure that the newest scenarios work, including targeting .NET Core 3.0.
  • Azure Websites deployment of .NET Core 3.0 is currently ongoing.

Platform support

.NET Core 3.0 is supported on the following operating systems:

  • Alpine: 3.9+
  • Debian: 9+
  • openSUSE: 42.3+
  • Fedora: 26+
  • Ubuntu: 16.04+
  • RHEL: 6+
  • SLES: 12+
  • macOS: 10.13+
  • Windows Client: 7, 8.1, 10 (1607+)
  • Windows Server: 2012 R2 SP1+

Note: Windows Forms and WPF apps only work on Windows.

Chip support follows:

  • x64 on Windows, macOS, and Linux
  • x86 on Windows
  • ARM32 on Windows and Linux
  • ARM64 on Linux (kernel 4.14+)

Note: Please ensure that .NET Core 3.0 ARM64 deployments use Linux kernel 4.14 version or later. For example, Ubuntu 18.04 satisfies this requirement, but 16.04 does not.

WPF and Windows Forms

You can build WPF and Windows Forms apps with .NET Core 3.0, on Windows. We’ve had a strong compatibility goal from the start of the project, to make it easy to migrate desktop applications from .NET Framework to .NET Core. We’ve heard feedback from many developers who have already successfully ported their app to .NET Core 3.0 that the process is straightforward. To a large degree, we took WPF and Windows Forms as-is and got them working on .NET Core. The engineering project was actually very different from that, but that’s a good way to think about the result.

The following image shows a .NET Core Windows Forms app:

Visual Studio 2019 16.3 has support for creating WPF apps that target .NET Core. This includes new templates and an updated XAML designer. The designer is similar to the existing XAML designer (that targets .NET Framework), however, you may notice some differences in experience. The big technical difference is that the new designer is hosted in its own process, because we didn’t want two versions of .NET (.NET Framework and .NET Core) in the Visual Studio process. This means that some aspects of the designer, like designer extensions, cannot work in the same way.

The following image shows a WPF app being displayed in the new designer:

The Windows Forms designer is still in preview, and available as a separate download. It will be added to Visual Studio as part of a later release. The designer currently includes support for the most commonly used controls and low-level functionality. We’ll keep improving the designer with monthly updates. We don’t recommend porting your Windows Forms applications to .NET Core just yet, particularly if you rely on the designer. Please do experiment with the designer preview, and give us feedback.

You can also create and build desktop applications from the command line using the .NET CLI.

For example, you can quickly create a new Windows Forms app:

dotnet new winforms -o myapp
cd myapp
dotnet run

You can try WPF using the same flow:

dotnet new wpf -o mywpfapp
cd mywpfapp
dotnet run

We made Windows Forms and WPF open source, back in December 2018. It’s been great to see the community and the Windows Forms and WPF teams working together to improve those UI frameworks. In the case of WPF, we started out with a very small amount of code in the GitHub repo. At this point, almost all of WPF has been published to GitHub, and a few more components will straggle in over time. Like other .NET Core projects, these new repos are part of the .NET Foundation and licensed with the MIT license.

The System.Windows.Forms.DataVisualization package (which includes the chart control) is also available for .NET Core. You can now include this control in your .NET Core WinForms applications. The source for the chart control is available at dotnet/winforms-datavisualization, on GitHub. The control was migrated to ease porting to .NET Core 3, but isn’t a component we expect to update significantly.

Windows Native Interop

Windows offers a rich native API, in the form of flat C APIs, COM, and WinRT. We’ve had support for P/Invoke since .NET Core 1.0, and have added the ability to CoCreate COM APIs, activate WinRT APIs, and expose managed code as COM components as part of the .NET Core 3.0 release. We have had many requests for these capabilities, so we know that they will get a lot of use.

Late last year, we announced that we had managed to automate Excel from .NET Core. That was a fun moment. Under the covers, this demo is using COM interop features like NoPIA, object equivalence and custom marshallers. You can now try this and other demos yourself at extension samples.

Managed C++ and WinRT interop have partial support with .NET Core 3.0 and will be included with .NET Core 3.1.

Nullable reference types

C# 8.0 introduces nullable reference types and non-nullable reference types that enable you to make important statements about the properties for reference type variables:

  • A reference is not supposed to be null. When variables aren’t supposed to be null, the compiler enforces rules that ensure it is safe to dereference them without first checking for null.
  • A reference may be null. When variables may be null, the compiler enforces different rules to ensure that you’ve correctly checked for a null reference.

This new feature provides significant benefits over the handling of reference variables in earlier versions of C# where the design intent couldn’t be determined from the variable declaration. With the addition of nullable reference types, you can declare your intent more clearly, and the compiler both helps you do that correctly and discover bugs in your code.
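
For example, here is a minimal sketch of the checks the compiler performs:

using System;

#nullable enable

class NullableExample
{
    static void Greet(string? name) // name is declared as possibly null
    {
        // Console.WriteLine(name.Length); // warning: possible null dereference
        if (name != null)
        {
            Console.WriteLine(name.Length); // safe: the compiler knows name isn't null here
        }
    }
}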

See This is how you get rid of null reference exceptions forever, Try out Nullable Reference Types and Nullable reference types to learn more.

Default implementations of interface members

Today, once you publish an interface, it’s game over for changing it: you can’t add members to it without breaking all the existing implementers of it.

With C# 8.0, you can provide a body for an interface member. As a result, if a class that implements the interface doesn’t implement that member (perhaps because it wasn’t there yet when they wrote the code), then the calling code will just get the default implementation instead.
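
Here is a minimal sketch of what such an interface can look like, assuming a simple ILogger:

using System;

interface ILogger
{
    void Log(string message);

    // A member added later, with a default implementation:
    // existing implementers keep compiling and get this behavior.
    void Log(Exception ex) => Log(ex.Message);
}

class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
    // No Log(Exception) needed; callers get the default implementation.
}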

In this example, the ConsoleLogger class doesn’t have to implement the Log(Exception) overload of ILogger, because it is declared with a default implementation. Now you can add new members to existing public interfaces as long as you provide a default implementation for existing implementors to use.

Async streams

You can now foreach over an async stream of data using IAsyncEnumerable<T>. This new interface is exactly what you’d expect: an asynchronous version of IEnumerable<T>. The language lets you await foreach over these streams to consume their elements. On the production side, you yield return items to produce an async stream. It might sound a bit complicated, but it is incredibly easy in practice.

The following example demonstrates both production and consumption of async streams. The foreach statement is async, and the method itself uses yield return to produce an async stream for callers. This pattern, using yield return, is the recommended model for producing async streams.
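
A sketch of that shape (names are illustrative):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class AsyncStreamExample
{
    // Produces an async stream.
    static async IAsyncEnumerable<int> GetResultsAsync()
    {
        for (int i = 0; i < 50; i += 7)
        {
            await Task.Delay(10); // simulate asynchronous work per element
            yield return i;
        }
    }

    // Consumes one async stream (the foreach is async) and itself
    // uses yield return to produce a filtered async stream for callers.
    static async IAsyncEnumerable<int> GetBigResultsAsync()
    {
        await foreach (int result in GetResultsAsync())
        {
            if (result > 20)
                yield return result;
        }
    }

    static async Task Main()
    {
        await foreach (int big in GetBigResultsAsync())
            Console.WriteLine(big);
    }
}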

In addition to being able to await foreach, you can also create async iterators, e.g. an iterator that returns an IAsyncEnumerable/IAsyncEnumerator that you can both await and yield return in. For objects that need to be disposed, you can use IAsyncDisposable, which various framework types implement, such as Stream and Timer.

Index and Range

We’ve created new syntax and types that you can use to describe indexers, for array element access or for any other type that exposes direct data access. This includes support for a single value, the usual definition of an index, or two values, which describe a range.

Index is a new type that describes an array index. You can create an Index from an int that counts from the beginning, or with a prefix ^ operator that counts from the end. You can see both cases in the following example:
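
A minimal sketch of both cases:

using System;

class IndexExample
{
    static void Main()
    {
        Index i1 = 3;   // counts from the beginning
        Index i2 = ^4;  // counts from the end

        string[] words = { "The", "quick", "brown", "fox", "jumped", "over" };
        Console.WriteLine(words[i1]); // fox
        Console.WriteLine(words[i2]); // brown (fourth element from the end)
    }
}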

Range is similar, consisting of two Index values, one for the start and one for the end, and can be written with an x..y range expression. You can then index with a Range in order to produce a slice of the underlying data, as demonstrated in the following example:
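
One possible sketch:

using System;

class RangeExample
{
    static void Main()
    {
        string[] words = { "The", "quick", "brown", "fox", "jumped", "over" };

        Range r = 1..4;            // from index 1 up to, but not including, index 4
        string[] slice = words[r]; // { "quick", "brown", "fox" }
        Console.WriteLine(string.Join(" ", slice));

        Console.WriteLine(string.Join(" ", words[^3..])); // the last three elements
    }
}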

Using Declarations

Are you tired of using statements that require indenting your code? No more! You can now write the following code, which attaches a using declaration to the scope of the current statement block and then disposes the object at the end of it.
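
An illustrative sketch:

using System.IO;

class UsingExample
{
    static void AppendLine(string message)
    {
        using var file = new StreamWriter("log.txt", append: true);
        file.WriteLine(message);
        // file is disposed here, at the end of the enclosing block,
        // with no extra braces or indentation.
    }
}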

Switch Expressions

Anyone who uses C# probably loves the idea of a switch statement, but not the syntax. C# 8 introduces switch expressions, which enable the following:

  • terser syntax
  • returns a value since it is an expression
  • fully integrated with pattern matching
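
Here is a sketch of such an expression, assuming a simple Point type (the explanation follows):

using System;

class Point
{
    public int X { get; set; }
    public int Y { get; set; }
}

class SwitchExample
{
    static string Display(object o) => o switch
    {
        Point { X: 0, Y: 0 }         => "origin",
        Point { X: var x, Y: var y } => $"({x}, {y})",
        _                            => "unknown"
    };
}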

The switch keyword is “infix”, meaning the keyword sits between the tested value (that’s o in the first example) and the list of cases, much like expression lambdas.

The first example uses the lambda syntax for methods, which integrates well with switch expressions but isn’t required.

There are two patterns at play in this example. o first matches with the Point type pattern and then with the property pattern inside the {curly braces}. The _ describes the discard pattern, which is the same as default for switch statements.

You can go one step further, and rely on tuple deconstruction and parameter position, as you can see in the following example:
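
A sketch in that spirit, using tuple patterns:

class TupleSwitchExample
{
    static string RockPaperScissors(string first, string second) => (first, second) switch
    {
        ("rock", "paper")     => "paper wins",
        ("rock", "scissors")  => "rock wins",
        ("paper", "scissors") => "scissors wins",
        (_, _)                => "tie or unhandled combination" // discard covers the rest
    };
}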

In this example, you can see you do not need to define a variable or explicit type for each of the cases. Instead, the compiler can match the tuple being tested against the tuples defined for each of the cases.

All of these patterns enable you to write declarative code that captures your intent instead of procedural code that implements tests for it. The compiler becomes responsible for implementing that boring procedural code and is guaranteed to always do it correctly.

There will still be cases where switch statements will be a better choice than switch expressions, and patterns can be used with both syntax styles.

Introducing a fast JSON API

.NET Core 3.0 includes a new family of JSON APIs that enable reader/writer scenarios, random access with a document object model (DOM) and a serializer. You are likely familiar with using Json.NET. The new APIs are intended to satisfy many of the same scenarios, but with less memory and faster execution.

You can see the initial motivation and description of the plan in The future of JSON in .NET Core 3.0. This includes James Newton-King, the author of Json.NET, explaining why a new API was created, as opposed to extending Json.NET. In short, we wanted to build a new JSON API that took advantage of all the new performance capabilities in .NET Core, and delivered performance in line with that. It wasn’t possible to do that in an existing codebase like Json.NET while maintaining compatibility.

Let’s take a quick look at the new API, layer by layer.

Utf8JsonReader

System.Text.Json.Utf8JsonReader is a high-performance, low allocation, forward-only reader for UTF-8 encoded JSON text, read from a ReadOnlySpan<byte>. The Utf8JsonReader is a foundational, low-level type, that can be leveraged to build custom parsers and deserializers. Reading through a JSON payload using the new Utf8JsonReader is 2x faster than using the reader from Json.NET. It does not allocate until you need to actualize JSON tokens as (UTF16) strings.
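
A minimal sketch of walking tokens with the reader:

using System;
using System.Text;
using System.Text.Json;

class ReaderExample
{
    static void Main()
    {
        byte[] utf8 = Encoding.UTF8.GetBytes("{\"name\":\"temp\",\"value\":42}");
        var reader = new Utf8JsonReader(utf8);

        while (reader.Read())
        {
            if (reader.TokenType == JsonTokenType.PropertyName)
                Console.WriteLine(reader.GetString()); // name, value
        }
    }
}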

Utf8JsonWriter

System.Text.Json.Utf8JsonWriter provides a high-performance, non-cached, forward-only way to write UTF-8 encoded JSON text from common .NET types like String, Int32, and DateTime. Like the reader, the writer is a foundational, low-level type, that can be leveraged to build custom serializers. Writing a JSON payload using the new Utf8JsonWriter is 30-80% faster than using the writer from Json.NET and does not allocate.
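
A short sketch:

using System;
using System.IO;
using System.Text;
using System.Text.Json;

class WriterExample
{
    static void Main()
    {
        using var stream = new MemoryStream();
        using (var writer = new Utf8JsonWriter(stream))
        {
            writer.WriteStartObject();
            writer.WriteString("name", "temp");
            writer.WriteNumber("value", 42);
            writer.WriteEndObject();
        } // disposing the writer flushes it

        Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray())); // {"name":"temp","value":42}
    }
}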

JsonDocument

System.Text.Json.JsonDocument provides the ability to parse JSON data and build a read-only Document Object Model (DOM) that can be queried to support random access and enumeration. It is built on top of the Utf8JsonReader. The JSON elements that compose the data can be accessed via the JsonElement type which is exposed by the JsonDocument as a property called RootElement. The JsonElement contains the JSON array and object enumerators along with APIs to convert JSON text to common .NET types. Parsing a typical JSON payload and accessing all its members using the JsonDocument is 2-3x faster than Json.NET with very few allocations for data that is reasonably sized (i.e. < 1 MB).
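
For example:

using System;
using System.Text.Json;

class DocumentExample
{
    static void Main()
    {
        string json = "{\"name\":\"temp\",\"values\":[1,2,3]}";

        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement root = doc.RootElement;

        Console.WriteLine(root.GetProperty("name").GetString()); // temp
        foreach (JsonElement item in root.GetProperty("values").EnumerateArray())
            Console.WriteLine(item.GetInt32()); // 1, 2, 3
    }
}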

JSON Serializer

System.Text.Json.JsonSerializer layers on top of the high-performance Utf8JsonReader and Utf8JsonWriter. It deserializes objects from JSON and serializes objects to JSON. Memory allocations are kept minimal, and it includes support for reading and writing JSON with Stream asynchronously.
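
A minimal round-trip sketch (the WeatherForecast type is illustrative):

using System;
using System.Text.Json;

class SerializerExample
{
    class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
    }

    static void Main()
    {
        var forecast = new WeatherForecast { Date = DateTime.Now, TemperatureC = 25 };

        string json = JsonSerializer.Serialize(forecast);
        Console.WriteLine(json);

        WeatherForecast roundTripped = JsonSerializer.Deserialize<WeatherForecast>(json);
        Console.WriteLine(roundTripped.TemperatureC); // 25
    }
}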

See the documentation for information and samples.

Introducing the new SqlClient

SqlClient is the data provider you use to access Microsoft SQL Server and Azure SQL Database, either through one of the popular .NET O/RMs, like EF Core or Dapper, or directly using the ADO.NET APIs. It will now be released and updated as the Microsoft.Data.SqlClient NuGet package, and supported for both .NET Framework and .NET Core applications. By using NuGet, it will be easier for the SQL team to provide updates to both .NET Framework and .NET Core users.
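
A minimal ADO.NET-style sketch with the new package (the connection string and table name are placeholders):

using System;
using Microsoft.Data.SqlClient;

class SqlClientExample
{
    static void Main()
    {
        // Placeholder connection string; substitute your own server and credentials.
        var connectionString = "Server=myserver;Database=mydb;Integrated Security=true";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection);
        Console.WriteLine((int)command.ExecuteScalar());
    }
}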

ARM and IoT Support

We added support for Linux ARM64 this release, after having added support for ARM32 on Linux and Windows in .NET Core 2.1 and 2.2, respectively. While some IoT workloads take advantage of our existing x64 capabilities, many users had been asking for ARM support. That is now in place, and we are working with customers who are planning large deployments.

Many IoT deployments using .NET are edge devices, and entirely network-oriented. Other scenarios require direct access to hardware. In this release, we added the capability to use serial ports on Linux and take advantage of digital pins on devices like the Raspberry Pi. The pins use a variety of protocols. We added support for GPIO, PWM, I2C, and SPI, to enable reading sensor data, interacting with radios and writing text and images to displays, and many other scenarios.

This functionality is available as part of the following packages:

As part of providing support for GPIO (and friends), we took a look at what was already available. We found APIs for C# and also Python. In both cases, the APIs were wrappers over native libraries, which were often licensed as GPL. We didn’t see a path forward with that approach. Instead, we built a 100% C# solution to implement these protocols. This means that our APIs will work anywhere .NET Core is supported, can be debugged with a C# debugger (via sourcelink), and supports multiple underlying Linux drivers (sysfs, libgpiod, and board-specific). All of the code is licensed as MIT. We see this approach as a major improvement for .NET developers compared to what has existed.

See dotnet/iot to learn more. The best places to start are samples or devices. We have built a few experiments while adding support for GPIO. One of them was validating that we could control an Arduino from a Pi through a serial port connection. That was surprisingly easy. We also spent a lot of time playing with LED matrices, as you can see in this RGB LED Matrix sample. We expect to share more of these experiments over time.

.NET Core runtime roll-forward policy update

The .NET Core runtime, actually the runtime binder, now enables major-version roll-forward as an opt-in policy. The runtime binder already enables roll-forward on patch and minor versions as a default policy. We decided to expose a broader set of policies, which we expected would be important for various scenarios, but did not change the default roll-forward behavior.

There is a new property called RollForward, which accepts the following values:

  • LatestPatch — Rolls forward to the highest patch version. This disables the Minor policy.
  • Minor — Rolls forward to the lowest higher minor version, if the requested minor version is missing. If the requested minor version is present, then the LatestPatch policy is used. This is the default policy.
  • Major — Rolls forward to lowest higher major version, and lowest minor version, if the requested major version is missing. If the requested major version is present, then the Minor policy is used.
  • LatestMinor — Rolls forward to highest minor version, even if the requested minor version is present.
  • LatestMajor — Rolls forward to highest major and highest minor version, even if requested major is present.
  • Disable — Do not roll forward. Only bind to the specified version. This policy is not recommended for general use since it disables the ability to roll forward to the latest patches. It is only recommended for testing.
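
For example, to opt in to major-version roll-forward, one option is to set the property in the project file (it can also be set in runtimeconfig.json or via the DOTNET_ROLL_FORWARD environment variable):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <RollForward>Major</RollForward>
  </PropertyGroup>
</Project>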

See Runtime Binding Behavior and dotnet/core-setup #5691 for more information.

Docker and cgroup Limits

Many developers are packaging and running their application with containers. A key scenario is limiting a container’s resources such as CPU or memory. We implemented support for memory limits back in 2017. Unfortunately, we found that the implementation wasn’t aggressive enough to reliably stay under the configured limits, and applications were still being OOM killed when memory limits were set (particularly <500MB). We have fixed that in .NET Core 3.0. We strongly recommend that .NET Core Docker users upgrade to .NET Core 3.0 due to this improvement.

The Docker resource limits feature is built on top of cgroups, which is a Linux kernel feature. From a runtime perspective, we need to target cgroup primitives.

You can limit the available memory for a container with the docker run -m argument, as shown in the following example that creates an Alpine-based container with a 4MB memory limit (and then prints the memory limit):

C:\>docker run -m 4mb --rm alpine cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4194304

We also made changes to better support CPU limits (--cpus). This includes changing the way the runtime rounds up or down for decimal CPU values. In the case where --cpus is set to a value close (enough) to a smaller integer (for example, 1.499999999), the runtime would previously round that value down (in this case, to 1). As a result, the runtime would take advantage of fewer CPUs than requested, leading to CPU underutilization. By rounding up the value, the runtime augments the pressure on the OS thread scheduler, but even in the worst case scenario (--cpus=1.000000001 — previously rounded down to 1, now rounded to 2), we have not observed any overutilization of the CPU leading to performance degradation.

The next step was ensuring that the thread pool honors CPU limits. Part of the algorithm of the thread pool is computing CPU busy time, which is, in part, a function of available CPUs. By taking CPU limits into account when computing CPU busy time, we avoid various heuristics of the thread pool competing with each other: one trying to allocate more threads to increase the CPU busy time, and the other trying to allocate fewer threads because adding more threads doesn’t improve the throughput.

Making GC Heap Sizes Smaller by default

While working on improving support for docker memory limits, we were inspired to make more general GC policy updates to improve memory usage for a broader set of applications (even when not running in a container). The changes better align the generation 0 allocation budget with modern processor cache sizes and cache hierarchy.

Damian Edwards on our team noticed that the memory usage of the ASP.NET benchmarks was cut in half with no negative effect on other performance metrics. That’s a staggering improvement! As he says, these are the new defaults, with no change required to his (or your) code (other than adopting .NET Core 3.0).

The memory savings that we saw with the ASP.NET benchmarks may or may not be representative of what you’ll see with your application. We’d like to hear how these changes reduce memory usage for your application.

Better support for many proc machines

Based on .NET’s Windows heritage, the GC needed to implement the Windows concept of processor groups to support machines with 64+ processors. This implementation was made in .NET Framework, 5-10 years ago. With .NET Core, we made the choice initially for the Linux PAL to emulate that same concept, even though it doesn’t exist in Linux. We have since abandoned this concept in the GC and transitioned it exclusively to the Windows PAL.

The GC now exposes a configuration switch, GCHeapAffinitizeRanges, to specify affinity masks on machines with 64+ processors. Maoni Stephens wrote about this change in Making CPU configuration better for GC on machines with > 64 CPUs.

GC Large page support

Large Pages or Huge Pages is a feature where the operating system is able to establish memory regions larger than the native page size (often 4K) to improve performance of the application requesting these large pages.

When a virtual-to-physical address translation occurs, a cache called the Translation lookaside buffer (TLB) is first consulted (often in parallel) to check if a physical translation for the virtual address being accessed is available, to avoid doing a potentially expensive page-table walk. Each large-page translation uses a single translation buffer inside the CPU. The size of this buffer is typically three orders of magnitude larger than the native page size; this increases the efficiency of the translation buffer, which can increase performance for frequently accessed memory. This win can be even more significant in a virtual machine, which has a two-layer TLB.

The GC can now be configured with the GCLargePages opt-in feature to choose to allocate large pages on Windows. Using large pages reduces TLB misses and can therefore potentially increase application performance in general; however, the feature has its own set of limitations that should be considered. Bing has experimented with this feature and seen performance improvements.

.NET Core Version APIs

We have improved the .NET Core version APIs in .NET Core 3.0. They now return the version information you would expect. These changes, while objectively better, are technically breaking and may break applications that rely on existing version APIs for various information.

You can now get access to the following version information:
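
A sketch of code that prints most of it follows; the CoreCLR/CoreFX build and hash lines in the output below come from assembly informational-version attributes, whose retrieval is omitted here.

using System;
using System.Runtime.InteropServices;

class VersionInfo
{
    static void Main()
    {
        Console.WriteLine($"Environment.Version: {Environment.Version}");
        Console.WriteLine($"RuntimeInformation.FrameworkDescription: {RuntimeInformation.FrameworkDescription}");
        Console.WriteLine($"Environment.OSVersion: {Environment.OSVersion}");
        Console.WriteLine($"RuntimeInformation.OSDescription: {RuntimeInformation.OSDescription}");
        Console.WriteLine($"RuntimeInformation.OSArchitecture: {RuntimeInformation.OSArchitecture}");
        Console.WriteLine($"Environment.ProcessorCount: {Environment.ProcessorCount}");
    }
}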

C:\git\testapps\versioninfo>dotnet run
**.NET Core info**
Environment.Version: 3.0.0
RuntimeInformation.FrameworkDescription: .NET Core 3.0.0
CoreCLR Build: 3.0.0
CoreCLR Hash: ac25be694a5385a6a1496db40de932df0689b742
CoreFX Build: 3.0.0
CoreFX Hash: 1bb52e6a3db7f3673a3825f3677b9f27b9af99aa

**Environment info**
Environment.OSVersion: Microsoft Windows NT 6.2.9200.0
RuntimeInformation.OSDescription: Microsoft Windows 10.0.18970
RuntimeInformation.OSArchitecture: X64
Environment.ProcessorCount: 8

Event Pipe improvements

Event Pipe now supports multiple sessions. This means that you can consume events with EventListener in-proc and simultaneously have out-of-process event pipe clients.

New Perf Counters added:

  • % Time in GC
  • Gen 0 Heap Size
  • Gen 1 Heap Size
  • Gen 2 Heap Size
  • LOH Heap Size
  • Allocation Rate
  • Number of assemblies loaded
  • Number of ThreadPool Threads
  • Monitor Lock Contention Rate
  • ThreadPool Work Items Queue
  • ThreadPool Completed Work Items Rate

Profiler attach is now implemented using the same Event Pipe infrastructure.

See Playing with counters from David Fowler to get an idea of what you can do with event pipe to perform your own performance investigations or just monitor application status.

See dotnet-counters to install the dotnet-counters tool.
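
For example, to install the tool and watch the counters of a running process (the process ID is illustrative):

dotnet tool install --global dotnet-counters
dotnet-counters monitor --process-id 1234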

HTTP/2 Support

We now have support for HTTP/2 in HttpClient. The new protocol is a requirement for some APIs, like gRPC and Apple Push Notification Service. We expect more services to require HTTP/2 in the future. ASP.NET also has support for HTTP/2.

Note: the preferred HTTP protocol version will be negotiated via TLS/ALPN and HTTP/2 will only be used if the server selects to use it.
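
A minimal sketch of requesting HTTP/2 from HttpClient (the URL is a placeholder; per the note above, the server must still select HTTP/2 during negotiation):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Http2Example
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com")
        {
            Version = new Version(2, 0) // ask for HTTP/2; falls back if not negotiated
        };

        using HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(response.Version); // 2.0 if the server selected HTTP/2
    }
}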

Tiered Compilation

Tiered compilation was added as an opt-in feature in .NET Core 2.1. It’s a feature that enables the runtime to more adaptively use the Just-In-Time (JIT) compiler to get better performance, both at startup and to maximize throughput. It is enabled by default with .NET Core 3.0. We made a lot of improvements to the feature over the last year, including testing it with a variety of workloads, including websites, PowerShell Core, and Windows desktop apps. The performance is a lot better, which is what allowed us to turn it on by default.

IEEE Floating-point improvements

Floating point APIs have been updated to comply with IEEE 754-2008 revision. The goal of the .NET Core floating point project is to expose all “required” operations and ensure that they are behaviorally compliant with the IEEE spec.

Parsing and formatting fixes:

  • Correctly parse and round inputs of any length.
  • Correctly parse and format negative zero.
  • Correctly parse Infinity and NaN by performing a case-insensitive check and allowing an optional preceding + where applicable.

New Math APIs:

  • BitIncrement/BitDecrement — corresponds to the nextUp and nextDown IEEE operations. They return the smallest floating-point number that compares greater or lesser than the input (respectively). For example, Math.BitIncrement(0.0) would return double.Epsilon.
  • MaxMagnitude/MinMagnitude — corresponds to the maxNumMag and minNumMag IEEE operations, they return the value that is greater or lesser in magnitude of the two inputs (respectively). For example, Math.MaxMagnitude(2.0, -3.0) would return -3.0.
  • ILogB — corresponds to the logB IEEE operation which returns an integral value, it returns the integral base-2 log of the input parameter. This is effectively the same as floor(log2(x)), but done with minimal rounding error.
  • ScaleB — corresponds to the scaleB IEEE operation which takes an integral value, it returns effectively x * pow(2, n), but is done with minimal rounding error.
  • Log2 — corresponds to the log2 IEEE operation, it returns the base-2 logarithm. It minimizes rounding error.
  • FusedMultiplyAdd — corresponds to the fma IEEE operation, it performs a fused multiply add. That is, it does (x * y) + z as a single operation, there-by minimizing the rounding error. An example would be FusedMultiplyAdd(1e308, 2.0, -1e308) which returns 1e308. The regular (1e308 * 2.0) - 1e308 returns double.PositiveInfinity.
  • CopySign — corresponds to the copySign IEEE operation, it returns the value of x, but with the sign of y.
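
The examples called out above, as a small runnable sketch:

using System;

class IeeeMathExample
{
    static void Main()
    {
        Console.WriteLine(Math.BitIncrement(0.0) == double.Epsilon);  // True
        Console.WriteLine(Math.MaxMagnitude(2.0, -3.0));              // -3
        Console.WriteLine(Math.FusedMultiplyAdd(1e308, 2.0, -1e308)); // 1E+308
        Console.WriteLine((1e308 * 2.0) - 1e308);                     // Infinity (no fusing, the multiply overflows)
    }
}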

.NET Platform Dependent Intrinsics

We’ve added APIs that allow access to certain performance-oriented CPU instructions, such as the SIMD or Bit Manipulation instruction sets. These instructions can help achieve big performance improvements in certain scenarios, such as processing data efficiently in parallel. In addition to exposing the APIs for your programs to use, we have begun using these instructions to accelerate the .NET libraries too.

The following CoreCLR PRs demonstrate a few of the intrinsics, either via implementation or use:

For more information, take a look at .NET Platform Dependent Intrinsics, which defines an approach for defining this hardware infrastructure, allowing Microsoft, chip vendors or any other company or individual to define hardware/chip APIs that should be exposed to .NET code.

TLS 1.3 and OpenSSL 1.1.1 now supported on Linux

.NET Core can now take advantage of TLS 1.3 support in OpenSSL 1.1.1. There are multiple benefits of TLS 1.3, per the OpenSSL team:

  • Improved connection times due to a reduction in the number of round trips required between the client and server
  • Improved security due to the removal of various obsolete and insecure cryptographic algorithms and encryption of more of the connection handshake

.NET Core 3.0 is capable of utilizing OpenSSL 1.1.1, OpenSSL 1.1.0, or OpenSSL 1.0.2 (whatever the best version found is, on a Linux system). When OpenSSL 1.1.1 is available, the SslStream and HttpClient types will use TLS 1.3 when using SslProtocols.None (system default protocols), assuming both the client and server support TLS 1.3.

.NET Core will support TLS 1.3 on Windows and macOS — we expect automatically — when support becomes available.

Cryptography

We added support for AES-GCM and AES-CCM ciphers, implemented via System.Security.Cryptography.AesGcm and System.Security.Cryptography.AesCcm. These algorithms are both Authenticated Encryption with Associated Data (AEAD) algorithms, and the first Authenticated Encryption (AE) algorithms added to .NET Core.
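
A minimal encrypt/decrypt sketch with AesGcm:

using System;
using System.Security.Cryptography;
using System.Text;

class AesGcmExample
{
    static void Main()
    {
        byte[] key = new byte[32];   // 256-bit key
        byte[] nonce = new byte[12]; // 96-bit nonce; must be unique per encryption
        RandomNumberGenerator.Fill(key);
        RandomNumberGenerator.Fill(nonce);

        byte[] plaintext = Encoding.UTF8.GetBytes("hello");
        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[16];   // authentication tag

        using var aesGcm = new AesGcm(key);
        aesGcm.Encrypt(nonce, plaintext, ciphertext, tag);

        byte[] decrypted = new byte[ciphertext.Length];
        aesGcm.Decrypt(nonce, ciphertext, tag, decrypted); // throws if the tag doesn't verify
        Console.WriteLine(Encoding.UTF8.GetString(decrypted)); // hello
    }
}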

.NET Core 3.0 now supports the import and export of asymmetric public and private keys from standard formats, without needing to use an X.509 certificate.

All key types (RSA, DSA, ECDsa, ECDiffieHellman) support the X.509 SubjectPublicKeyInfo format for public keys, and the PKCS#8 PrivateKeyInfo and PKCS#8 EncryptedPrivateKeyInfo formats for private keys. RSA additionally supports PKCS#1 RSAPublicKey and PKCS#1 RSAPrivateKey. The export methods all produce DER-encoded binary data, and the import methods expect the same; if a key is stored in the text-friendly PEM format the caller will need to base64-decode the content before calling an import method.
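
For example, round-tripping an RSA private key through the PKCS#8 format:

using System;
using System.Security.Cryptography;

class KeyExportExample
{
    static void Main()
    {
        using RSA original = RSA.Create(2048);
        byte[] pkcs8 = original.ExportPkcs8PrivateKey(); // DER-encoded PKCS#8 PrivateKeyInfo

        using RSA imported = RSA.Create();
        imported.ImportPkcs8PrivateKey(pkcs8, out int bytesRead);
        Console.WriteLine(bytesRead == pkcs8.Length); // True
    }
}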

PKCS#8 files can be inspected with the System.Security.Cryptography.Pkcs.Pkcs8PrivateKeyInfo class.

PFX/PKCS#12 files can be inspected and manipulated with System.Security.Cryptography.Pkcs.Pkcs12Info and System.Security.Cryptography.Pkcs.Pkcs12Builder, respectively.

New Japanese Era (Reiwa)

On May 1st, 2019, Japan started a new era called Reiwa. Software that has support for Japanese calendars, like .NET Core, must be updated to accommodate Reiwa. .NET Core and .NET Framework have been updated and correctly handle Japanese date formatting and parsing with the new era.

.NET relies on operating system or other updates to correctly process Reiwa dates. If you or your customers are using Windows, download the latest updates for your Windows version. If running macOS or Linux, download and install ICU version 64.2, which has support for the new Japanese era.

The Handling a new era in the Japanese calendar in .NET blog post has more information about .NET support for the new Japanese era.

Assembly Load Context Improvements

Enhancements to AssemblyLoadContext:

  • Enable naming contexts
  • Added the ability to enumerate ALCs
  • Added the ability to enumerate assemblies within an ALC
  • Made the type concrete – so instantiation is easier (no requirement for custom types for simple scenarios)

See dotnet/corefx #34791 for more details. The appwithalc sample demonstrates these new capabilities.

By using AssemblyDependencyResolver along with a custom AssemblyLoadContext, an application can load plugins so that each plugin’s dependencies are loaded from the correct location, and one plugin’s dependencies will not conflict with another. The AppWithPlugin sample includes plugins that have conflicting dependencies and plugins that rely on satellite assemblies or native libraries.

Assembly Unloadability

Assembly unloadability is a new capability of AssemblyLoadContext. This new feature is largely transparent from an API perspective, exposed with just a few new APIs. It enables a loader context to be unloaded, releasing all memory for instantiated types, static fields and for the assembly itself. An application should be able to load and unload assemblies via this mechanism forever without experiencing a memory leak.

We expect this new capability to be used for the following scenarios:

  • Plugin scenarios where dynamic plugin loading and unloading is required.
  • Dynamically compiling, running and then flushing code. Useful for web sites, scripting engines, etc.
  • Loading assemblies for introspection (like ReflectionOnlyLoad), although MetadataLoadContext will be a better choice in many cases.
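
A minimal load-then-unload sketch (the plugin path is a placeholder):

using System.Reflection;
using System.Runtime.Loader;

class UnloadExample
{
    static void LoadAndUnload()
    {
        // Only collectible contexts can be unloaded.
        var alc = new AssemblyLoadContext("Plugins", isCollectible: true);
        Assembly plugin = alc.LoadFromAssemblyPath(@"C:\plugins\MyPlugin.dll"); // placeholder path

        // ... use types from the plugin ...

        alc.Unload(); // memory is reclaimed once nothing references the context's types
    }
}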

Assembly Metadata Reading with MetadataLoadContext

We added MetadataLoadContext, which enables reading assembly metadata without affecting the caller’s application domain. Assemblies are read as data, including assemblies built for different architectures and platforms than the current runtime environment. MetadataLoadContext overlaps with the ReflectionOnlyLoad type, which is only available in the .NET Framework.

MetadataLoadContext is available in the System.Reflection.MetadataLoadContext package. It is a .NET Standard 2.0 package.

Scenarios for MetadataLoadContext include design-time features, build-time tooling, and runtime light-up features that need to inspect a set of assemblies as data and have all file locks and memory freed after inspection is performed.
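
A minimal inspection sketch (the inspected assembly path is a placeholder):

using System;
using System.Reflection;

class MetadataExample
{
    static void Main()
    {
        // The resolver needs paths to the inspected assembly and a core assembly.
        string[] paths = { typeof(object).Assembly.Location, @"C:\tools\SomeLibrary.dll" };
        var resolver = new PathAssemblyResolver(paths);

        using var mlc = new MetadataLoadContext(resolver);
        Assembly assembly = mlc.LoadFromAssemblyPath(@"C:\tools\SomeLibrary.dll");

        foreach (Type type in assembly.GetTypes())
            Console.WriteLine(type.FullName); // inspected as data; no code runs
    }
}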

Native Hosting sample

The team posted a Native Hosting sample. It demonstrates a best practice approach for hosting .NET Core in a native application.

As part of .NET Core 3.0, we now expose general functionality to .NET Core native hosts that was previously only available to .NET Core managed applications through the officially provided .NET Core hosts. The functionality is primarily related to assembly loading. This functionality should make it easier to produce native hosts that can take advantage of the full feature set of .NET Core.

Other API Improvements

We optimized Span<T>, Memory<T> and related types that were introduced in .NET Core 2.1. Common operations such as span construction, slicing, parsing, and formatting now perform better. Additionally, types like String have seen under-the-cover improvements to make them more efficient when used as keys with Dictionary<TKey, TValue> and other collections. No code changes are required to enjoy these improvements.

The following improvements are also new:

  • Brotli support built-in to HttpClient
  • ThreadPool.UnsafeQueueWorkItem(IThreadPoolWorkItem)
  • Unsafe.Unbox
  • CancellationToken.Unregister
  • Complex arithmetic operators
  • Socket APIs for TCP keep alive
  • StringBuilder.GetChunks
  • IPEndPoint parsing
  • RandomNumberGenerator.GetInt32
  • System.Buffers.SequenceReader

Applications now have native executables by default

.NET Core applications are now built with native executables. This is new for framework-dependent applications. Until now, only self-contained applications had executables.

You can expect the same things with these executables as you would other native executables, such as:

  • You can double click on the executable to start the application.
  • You can launch the application from a command prompt, using myapp.exe, on Windows, and ./myapp, on Linux and macOS.

The executable that is generated as part of the build will match your operating system and CPU. For example, if you are on a Linux x64 machine, the executable will only work on that kind of machine, not on a Windows machine and not on a Linux ARM machine. That’s because the executables are native code (just like C++). If you want to target another machine type, you need to publish with a runtime argument. You can continue to launch applications with the dotnet command, and not use native executables, if you prefer.

Optimize your .NET Core apps with ReadyToRun images

You can improve the startup time of your .NET Core application by compiling your application assemblies as ReadyToRun (R2R) format. R2R is a form of ahead-of-time (AOT) compilation. It is a publish-time, opt-in feature in .NET Core 3.0.

R2R binaries improve startup performance by reducing the amount of work the JIT needs to do as your application is loading. The binaries contain similar native code as what the JIT would produce, giving the JIT a bit of a vacation when performance matters most (at startup). R2R binaries are larger because they contain both intermediate language (IL) code, which is still needed for some scenarios, and the native version of the same code, to improve startup.

To enable the ReadyToRun compilation:

  • Set the PublishReadyToRun property to true.
  • Publish using an explicit RuntimeIdentifier.

Note: When the application assemblies get compiled, the native code produced is platform and architecture specific (which is why you have to specify a valid RuntimeIdentifier when publishing).

Here’s an example:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishReadyToRun>true</PublishReadyToRun>
  </PropertyGroup>
</Project>

And publish using the following command:

dotnet publish -r win-x64 -c Release

Note: The RuntimeIdentifier can be set to another operating system or chip. It can also be set in the project file.

Assembly linking

The .NET Core 3.0 SDK comes with a tool that can reduce the size of apps by analyzing IL and trimming unused assemblies. It is another publish-time, opt-in feature in .NET Core 3.0.

With .NET Core, it has always been possible to publish self-contained apps that include everything needed to run your code, without requiring .NET to be installed on the deployment target. In some cases, the app only requires a small subset of the framework to function and could potentially be made much smaller by including only the used libraries.

We use the IL linker to scan the IL of your application to detect which code is actually required, and then trim unused framework libraries. This can significantly reduce the size of some apps. Typically, small tool-like console apps benefit the most as they tend to use fairly small subsets of the framework and are usually more amenable to trimming.

To use the linker:

  • Set the PublishTrimmed property to true.
  • Publish using an explicit RuntimeIdentifier.

Here’s an example:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>
</Project>

And publish using the following command:

dotnet publish -r win-x64 -c Release

Note: The RuntimeIdentifier can be set to another operating system or chip. It can also be set in the project file.

The publish output will include a subset of the framework libraries, depending on what the application code calls. For a helloworld app, the linker reduces the size from ~68MB to ~28MB.

Applications or frameworks (including ASP.NET Core and WPF) that use reflection or related dynamic features will often break when trimmed, because the linker doesn’t know about this dynamic behavior and usually can’t determine which framework types will be required for reflection at run time. To trim such apps, you need to tell the linker about any types needed by reflection in your code, and in any packages or frameworks that you depend on. Be sure to test your apps after trimming. We are working on improving this experience for .NET 5.
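
One way to do that, sketched below assuming an SDK-style project (System.Security stands in for an assembly that is only reached via reflection), is to root entire assemblies with TrimmerRootAssembly in the project file:

<ItemGroup>
  <!-- keep this assembly even though the linker finds no static reference to it -->
  <TrimmerRootAssembly Include="System.Security" />
</ItemGroup>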

For more information about the IL Linker, see the documentation, or visit the mono/linker repo.

Note: In previous versions of .NET Core, ILLink.Tasks was shipped as an external NuGet package and provided much of the same functionality. It is no longer supported – please update to the .NET Core 3.0 SDK and try the new experience!

The linker and ReadyToRun compiler can be used for the same application. In general, the linker makes your application smaller, and then the ready-to-run compiler will make it a bit larger again, but with a significant performance win. It is worth testing in various configurations to understand the impact of each option.

Publishing single-file executables

You can now publish a single-file executable with dotnet publish. This form of single EXE is effectively a self-extracting executable. It contains all dependencies, including native dependencies, as resources. At startup, it copies all dependencies to a temp directory, and loads them from there. It only needs to unpack dependencies once. After that, startup is fast, without any penalty.

You can enable this publishing option by adding the PublishSingleFile property to your project file or by adding a switch on the command line.

To produce a self-contained single EXE application, in this case for 64-bit Windows:

dotnet publish -r win10-x64 /p:PublishSingleFile=true

Note: The RuntimeIdentifier can be set to another operating system or chip. It can also be set in the project file.
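
If you prefer the project-file route, here's a minimal sketch that sets both properties, so a plain dotnet publish produces the single file:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <RuntimeIdentifier>win10-x64</RuntimeIdentifier>
    <PublishSingleFile>true</PublishSingleFile>
  </PropertyGroup>
</Project>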

See Single file bundler for more information.

Assembly trimmer, ahead-of-time compilation (via crossgen) and single file bundling are all new features in .NET Core 3.0 that can be used together or separately.

We expect that some of you will prefer single exe provided by an ahead-of-time compiler, as opposed to the self-extracting-executable approach that we are providing in .NET Core 3.0. The ahead-of-time compiler approach will be provided as part of the .NET 5 release.

dotnet build now copies dependencies

dotnet build now copies NuGet dependencies for your application from the NuGet cache to your build output folder during the build operation. Until this release, those dependencies were only copied as part of dotnet publish. This change allows you to xcopy your build output to different machines.

There are some operations, like linking and Razor page publishing, that still require publishing.

.NET Core Tools — local installation

.NET Core tools have been updated to allow local installation. Local tools have advantages over global tools, which were added in .NET Core 2.1.

Local installation enables the following:

  • Limit the scope in which a tool can be used.
  • Always use a specific version of the tool, which might differ from a globally-installed tool or another local installation. This is based on the version in the local tools manifest file.
  • Launch the tool with dotnet, as in dotnet mytool.
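
Here's a minimal sketch of the local-tool flow (dotnetsay is just a sample tool used for illustration):

dotnet new tool-manifest        # creates .config/dotnet-tools.json
dotnet tool install dotnetsay   # records the tool and version in the manifest
dotnet dotnetsay                # runs the locally-installed tool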

Note: See Local Tools Early Preview Documentation for more information.

.NET Core SDK installers will now Upgrade in Place

The .NET Core SDK MSI installers for Windows will start upgrading patch versions in place. This will reduce the number of SDKs that are installed on both developer and production machines.

The upgrade policy will specifically target .NET Core SDK feature bands. Feature bands are defined in groups of hundreds in the patch section of the version number. For example, 3.0.101 and 3.0.201 are versions in two different feature bands while 3.0.101 and 3.0.199 are in the same feature band.

This means when .NET Core SDK 3.0.101 becomes available and is installed, .NET Core SDK 3.0.100 will be removed from the machine if it exists. When .NET Core SDK 3.0.200 becomes available and is installed on the same machine, .NET Core SDK 3.0.101 will not be removed. In that situation, .NET Core SDK 3.0.200 will still be used by default, but .NET Core SDK 3.0.101 (or higher .1xx versions) will still be usable if it is configured for use via global.json.

This approach aligns with the behavior of global.json, which allows roll forward across patch versions, but not feature bands of the SDK. Thus, upgrading via the SDK installer will not result in errors due to a missing SDK. Feature bands also align with side by side Visual Studio installations for those users that install SDKs for Visual Studio use.

For more information, please check out:

.NET Core SDK Size Improvements

The .NET Core SDK is significantly smaller with .NET Core 3.0. The primary reason is that we changed the way we construct the SDK, by moving to purpose-built “packs” of various kinds (reference assemblies, frameworks, templates). In previous versions (including .NET Core 2.2), we constructed the SDK from NuGet packages, which included many artifacts that were not required and wasted a lot of space.

.NET Core 3.0 SDK Size (size change in brackets)

Operating System    Installer Size (change)    On-disk Size (change)
Windows             164MB (-440KB; 0%)         441MB (-968MB; -68.7%)
Linux               115MB (-55MB; -32%)        332MB (-1068MB; -76.2%)
macOS               118MB (-51MB; -30%)        337MB (-1063MB; -75.9%)

The size improvements for Linux and macOS are dramatic. The improvement for Windows is smaller because we have added WPF and Windows Forms as part of .NET Core 3.0. It’s amazing that we added WPF and Windows Forms in 3.0 and the installer is still (a little bit) smaller.

You can see the same benefit with .NET Core SDK Docker images (here, limited to x64 Debian and Alpine).

Distro    2.2 Size    3.0 Size
Debian    1.74GB      706MB
Alpine    1.48GB      422MB

You can see how we calculated these file sizes in .NET Core 3.0 SDK Size Improvements. Detailed instructions are provided so that you can run the same tests in your own environment.

Docker Publishing Update

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:

  • Syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat.
  • Use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

On the .NET team, we are now publishing all .NET Core images to MCR. As you can see from the links (if you click on them), we continue to have “home pages” on Docker Hub. We intend for that to continue indefinitely. MCR does not offer such pages, but relies on public registries, like Docker Hub, to provide users with image-related information.

The links to our old repos, such as microsoft/dotnet and microsoft/dotnet-nightly, now forward to the new locations. The images that existed at those locations still exist and will not be deleted.

We will continue servicing the floating tags in the old repos for the supported life of the various .NET Core versions. For example, 2.1-sdk, 2.2-runtime, and latest are examples of floating tags that will be serviced. A three-part version tag like 2.1.2-sdk will not be serviced, which was already the case. We will only be supporting .NET Core 3.0 images in MCR.

For example, the correct tag string to pull the 3.0 SDK image now looks like the following:

mcr.microsoft.com/dotnet/core/sdk:3.0

The new MCR string is used both with docker pull and in Dockerfile FROM statements.
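
For example (a quick sketch of both usages):

docker pull mcr.microsoft.com/dotnet/core/sdk:3.0

and in a Dockerfile:

FROM mcr.microsoft.com/dotnet/core/sdk:3.0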

See .NET Core Images now available via Microsoft Container Registry for more information.

SDK Docker Images Contain PowerShell Core

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. PowerShell Core is a cross-platform (Windows, Linux, and macOS) automation and configuration tool/framework that works well with your existing tools and is optimized for dealing with structured data (e.g. JSON, CSV, XML, etc.), REST APIs, and object models. It includes a command-line shell, an associated scripting language and a framework for processing cmdlets.

You can try out PowerShell Core, as part of the .NET Core SDK container image, by running the following Docker command:

docker run --rm mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh -c Write-Host "Hello Powershell"

There are two main scenarios that having PowerShell inside the .NET Core SDK container image enables, which were not otherwise possible:

Example syntax for launching PowerShell for a (volume-mounted) containerized build:

  • docker run -it -v c:\myrepo:/myrepo -w /myrepo mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh build.ps1
  • docker run -it -v c:\myrepo:/myrepo -w /myrepo mcr.microsoft.com/dotnet/core/sdk:3.0 ./build.ps1

For the second example to work, on Linux, the .ps1 file needs to have the following pattern, and needs to be formatted with Unix (LF) not Windows (CRLF) line endings:

#!/usr/bin/env pwsh
Write-Host "test"

If you are new to PowerShell and would like to learn more, we recommend reviewing the getting started documentation.

Note: PowerShell Core is now available as part of .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.

Red Hat Support

In April 2015, we announced that .NET Core would be coming to Red Hat Enterprise Linux. Through an excellent engineering partnership with Red Hat, .NET Core 1.0 appeared as a component available in the Red Hat Software Collections in June 2016. Working with Red Hat engineers, we have learned (and continue to learn!) much about releasing software to the Linux community.

Over the last four years, Red Hat has shipped many .NET Core updates and significant releases, such as 2.1 and 2.2, on the same day as Microsoft. With .NET Core 2.2, Red Hat expanded their .NET Core offerings to include OpenShift platforms. With the release of RHEL 8, we are excited to have .NET Core 2.1 and, soon, 3.0 available in the Red Hat Application Streams.

Closing

.NET Core 3.0 is a major new release of .NET Core, and includes a vast set of improvements. We recommend that you start adopting .NET Core 3.0 as soon as you can. It greatly improves .NET Core in many ways, like the massive reduction in size of the SDK, and by greatly improving support for key scenarios like containers and Windows desktop applications. There are also many small improvements that were not included in this post, that you are sure to benefit from over time.

Please share your feedback with us, either in the coming days, weeks or months. We hope you enjoy it. We had a lot of fun making it for you.

If you still want to read more, the following recent posts are recommended reading:

The post Announcing .NET Core 3.0 appeared first on .NET Blog.

.NET Core Support and More in Visual Studio 2019 version 16.3 – Update Now!


As we continue to deliver on our mission of any developer, any app, any platform, it’s always an exciting time on the Visual Studio team when we get to launch major features.  Today we’ve released Visual Studio 2019 version 16.3 which contains support for the release of .NET Core 3.0, significant C++ improvements, and great updates for Python developers as well as TypeScript 3.6 support. You can download version 16.3 on visualstudio.com or update from the Visual Studio installer.

We are also releasing the first preview of Visual Studio 2019 version 16.4 which can be downloaded from visualstudio.com. For additional information on what’s in Preview 1, check out the release notes.

So, grab your favorite fall beverage, click the update button in the Visual Studio Installer or download the latest version, and while the update commences, peruse this overview of what’s new and awesome in this release.

.NET Core 3.0

Visual Studio version 16.3 includes support for .NET Core 3.0.  Why is .NET Core 3.0 exciting?  Here’s what Scott Hanselman has to say:

“.NET Core is open source and cross-platform.  You can use .NET Core to run server applications on Windows, Mac, a dozen Linuxes, iPhone, IoT devices, and more! .NET Core is open source, cross-platform, and fast as heck. And it’s out today. Fully supported. Open source, yes, but fully supported with the full weight of Microsoft.

Together with .NET Core 3.0, C# 8.0 is out today!  It’s also open source and is the language that many of you will use to make your applications.  Visual Studio 16.3 supports both C# 8.0 and .NET Core 3.0, and provides tooling support for all new .NET Core 3.0 features.  This includes support for building desktop applications with Windows Forms and WPF, client-side web applications with Blazor and back-end microservices using gRPC.

While .NET Core 3.0 is cross-platform, you can also create platform-specific applications!  This means your apps can “light up” with operating system-specific features.  For example, if you want to talk to a light sensor on a Raspberry Pi with .NET Core, you can!

Taking this to obvious next steps, you take (if you want) a 15-year-old existing Windows Forms or WPF app and swap out its “engine” for all new .NET Core 3.0 and reap the benefits. It’s a brain transplant that can make your application faster, easier to deploy, and easier to maintain but it will still be a Windows app using your existing code.

You might think because .NET Core 3.0 includes support for Windows Forms and WPF that it might be heavier or take up more space.  In fact, this support exists in optional NuGet packages.  Your .NET Core apps are smaller than ever (and will get even tighter in future releases) and run amazingly well in containers/Docker and in the cloud where density is needed.”

There are so many exciting features in .NET Core 3.0. Head over to the .NET Blog to read all of the details.

Note: if you are working with .NET Core 3.0, you will need to use Visual Studio version 16.3 or greater.

.NET Core Desktop Application Support

.NET Core 3.0 includes full support of Windows Forms at run-time and today we are happy to announce the first preview version of the Windows Forms Designer for .NET Core projects.

We are in the very early days of the designer, so it’s available as a Visual Studio extension (“VSIX”). Once you install the .NET Core Designer, Visual Studio will automatically pick the correct designer based on the target framework of your application. This preview of the designer supports a subset of controls, but more will be added every month in further preview versions. That’s why we don’t recommend porting your Windows Forms applications to .NET Core yet if you need to use the designer on a regular basis.

Please reach out with your suggestions, issues, and feature requests. We appreciate your engagement!

Download .NET Core Windows Forms Designer Preview 1 

.NET Applications in Containers

Developers building Azure Functions (v2) can now add Docker container support (Linux only) to their C# projects. This can be done by right-clicking the project name in Solution Explorer and selecting Add > Docker Support. In addition to adding a Dockerfile to your project, the debug target will be set to “Docker” which means when you debug your Function app it will happen inside the running container.

.NET Applications in Containers

Also, be sure to check out the Visual Studio Container Tools Extensions (Preview) for a glimpse of even better tooling coming in Visual Studio 2019 version 16.4 Preview 2.

.NET Productivity

Since C# 8.0 and .NET Core 3.0 are out today, Visual Studio tooling is updated to make you more productive when using these new tools.  Here’s a taste of the dozens of refactorings and happiness features we’ve added.

You can wrap chains of fluent calls with a refactoring.  To try this out, place your cursor on a call chain and press Ctrl + . to open the Quick Actions and Refactorings menu.

Wrap Chains of Fluent Calls with Quick Actions and Refactoring

Now you are also able to rename a file when renaming an interface, enum, or class.  To do so, just place the cursor in the class name and type Ctrl + R, R to open the Rename dialog and check the Rename file box.

You can easily rename a file when renaming an interface, enum, or class.
.NET in version 16.4 Preview 1

If you are a developer wanting to try the cutting-edge tools in .NET, check out the features in Visual Studio 2019 version 16.4 Preview 1.  It includes new .NET Core 3.0 app publishing options:  Ready to Run (Crossgen), Linking, and SingleExe (make tiny .NET Core 3.0 apps) as well as new templates. Again, the release notes contain a larger list of features.

C++

Visual Studio 2019 version 16.3 brings new productivity features to all C++ developers and enhancements to the C++ cross-platform development experience.

Beyond those two aspects (which we’ll dive into next), those of you following our C++ Standard conformance efforts will be glad to hear that in the C++ Standard Library (STL), several new preview features are available under the /std:c++latest switch, including C++ Concepts! Concepts are predicates that can be used to express a generic algorithm’s expectations on its template arguments.

C++ Productivity

There are several improvements for C++ developers to be excited about. For example, you can toggle line comments using the keyboard shortcut Ctrl + K, Ctrl + / to easily set aside code you don’t want to compile just yet.

Set Aside Code to Compile for Later

The IntelliSense completion list is now more powerful than ever with a built-in filter that considers type qualifiers. For example, if you type after const std::vector, the list will now filter out functions that would illegally modify it, such as push_back.

IntelliSense built-in filter that considers type qualifiers

Next, a new default semantic colorization scheme allows you to better understand your code at a glance. You will notice new colors in the following areas: functions, local variables, escape characters, keyword – control (if/else/for/return), string escape characters, and macros. There is also an option to differentiate between global and member functions and variables. The screenshots below illustrate new colorization for the blue and dark themes of Visual Studio:

The colorization for the blue and dark themes of Visual Studio

Lastly, we turned IntelliCode on by default for C++ developers for AI-powered IntelliSense, added a way to configure the Call Stack window to hide or show template arguments for improved readability, and added some new CppCoreCheck rules to Visual Studio Code Analysis, including a new ‘Enum Rules’ rule set and additional const, enum, and type rules.

C++ Cross-Platform

Switching gears from productivity to cross-platform development, we made several user experience improvements. First of all, for CMake projects, you can now install missing 3rd party libraries that your application depends on straight from the IDE, using Vcpkg, our cross-platform C++ library manager. You will need to have Vcpkg installed on your machine, have run ‘vcpkg integrate install’ to set it up, and have a vcpkg toolchain file in your CMake project to take advantage of this feature. When you activate this feature, Vcpkg will download your library from source, compile it for you, and make it available for use for your future builds. This quick action will also install the package’s upstream dependencies for you.

Install Missing 3rd Party Libraries

Next, the CMake Settings Editor has been updated with better settings descriptions and links to documentation so it is easier than ever to configure your project. Below is a screenshot of the new experience:

CMake Settings Editor has been updated

There were a few more improvements to the cross-platform development experience. This includes environment variable support for configuring debug targets and custom tasks in launch.vs.json and tasks.vs.json. In addition, remote header copies for Linux projects now run in parallel for improved performance. Visual Studio’s native support for WSL also supports parallel builds for MSBuild-based Linux projects. Lastly, you can now specify a list of local build outputs to deploy to a remote system with Linux Makefile projects.

Python

With this release you will enjoy a revamped testing experience for your Python projects. Not only is there now support for the popular pytest framework, but support for the unittest framework has also been improved to provide you with a more seamless testing experience. Let’s walk through some of those improvements, from configuring and executing tests, to debugging, and finally code coverage.

Configuring and Executing Tests

Let’s look at how you do this for Python projects, and then for the Open Folder scenario.

To enable the testing experience within Visual Studio for Python projects, right-click on the project name and select the ‘Properties’ option. This option opens the project designer, which allows you to configure tests by going to the ‘Test’ tab. From the ‘Test’ tab, simply click the ’Test Framework’ dropdown box to select the testing framework you wish to use, as you can see in this screenshot:

Test Framework dropdown box for configuring testing.

Pressing CTRL+S initiates test discovery for the testing framework you have selected, whether that is pytest or unittest.

For Open Folder scenarios, the testing experience relies on the PythonSettings.json file for configuration. This file is located within your ‘local settings’ folder as shown here:

Configure the PythonSettings.json file for Open Folder scenarios.
Code Coverage for Tests

Below you can see how Code Coverage is supported for unittest and pytest in both project mode and open folder scenarios:

Code Coverage supported for unittest and pytest in both project mode and open folder scenarios.

To enable Code Coverage for your currently opened project/folder, you must install the Python package, coverage, into your active virtual environment. Then, you can analyze Code Coverage by going to the Test Explorer and selecting Analyze Code Coverage for All Tests.

Read our Python documentation for further details on making the most of the new testing experience.

Version 16.4:  Our Next Servicing Baseline

When version 16.4 moves to the release channel later this year, it will be the second “servicing baseline” for Visual Studio 2019. We introduced servicing baselines with Visual Studio 2019 to provide large organizations increased flexibility over when they adopt the new features in minor version updates included in the Enterprise and Professional editions. Unlike versions 16.1, 16.2, and 16.3, which receive servicing fixes only until the next minor update is released, we offer fixes for servicing baselines for an extended period. We will service version 16.4 for 12 months after the next servicing baseline is declared.

As version 16.0 is the first servicing baseline, it will continue to receive servicing fixes for one year after version 16.4 releases later this year. Full details can be found at Visual Studio Product Lifecycle and Servicing.

Update now and let us know what you think

If the above summary got you as excited as we are, head on over to visualstudio.microsoft.com/downloads to get the latest releases. As always, you can continue to use the Report a Problem tool in Visual Studio or head over to the Visual Studio Developer Community to track issues or suggest a feature. We continue to make many tweaks and improvements along the way to address your feedback, and rest assured that we will continue doing so in releases going forward.

The post .NET Core Support and More in Visual Studio 2019 version 16.3 – Update Now! appeared first on The Visual Studio Blog.

Announcing F# 4.7


We’re excited to announce general availability of F# 4.7 in conjunction with the .NET Core 3.0 release! In this post, I’ll show you how to get started, explain everything in F# 4.7 and give you a sneak peek at what we’re doing for the next version of F#.

F# 4.7 is another incremental release of F# with a focus on infrastructural changes to the compiler and core library and some relaxations on previously onerous syntax requirements.

F# 4.7 was developed entirely via an open RFC (requests for comments) process. The F# community has offered very detailed feedback in discussions for this version of the language. You can view all RFCs that correspond with this release here:

Get started

First, install either the .NET Core 3.0 SDK or the latest update of Visual Studio 2019.

If you are a Visual Studio user, you will get an appropriate .NET Core installed by default. Once you have installed either .NET Core or Visual Studio 2019, you can use F# 4.7 with Visual Studio, Visual Studio for Mac, or Visual Studio Code with Ionide.

FSharp.Core now targets .NET Standard 2.0

Starting with FSharp.Core 4.7.0 and F# 4.7, we’re officially dropping support for .NET Standard 1.6. Now that FSharp.Core targets .NET Standard 2.0, you can enjoy a few new goodies on .NET Core:

  • Simpler dependencies, especially if using a tool like Paket
  • FromConverter and ToConverter static methods on FSharpFunc<'T, 'TResult>
  • Implicit conversions between FSharpFunc<'T, 'TResult> and Converter<'T, 'TResult>
  • The FuncConvert.ToFSharpFunc<'T> method
  • Access to the MatchFailureException type
  • The WebExtensions namespace for working with older web APIs in an F#-friendly way

Additionally, the FSharp.Core API surface area has expanded to better support parallel and sequential asynchronous computations:

  • Async.Parallel has an optional maxDegreeOfParallelism parameter so you can tune the degree of parallelism used
  • Async.Sequential to allow sequential processing of async computations
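
Here's a small sketch of both additions (the names are illustrative):

let computations = [ for i in 1 .. 10 -> async { return i * i } ]

// run at most two computations at a time
let parallelResults =
    Async.Parallel(computations, maxDegreeOfParallelism = 2)
    |> Async.RunSynchronously

// run the computations one after another
let sequentialResults =
    Async.Sequential computations
    |> Async.RunSynchronously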

Thanks to Fraser Waters for contributing the new FSharp.Core additions.

Support for LangVersion

F# 4.7 introduces the ability to tune your effective language version with your compiler. We’re incredibly excited about this feature, because it allows us to deliver preview features alongside released features for any given compiler release.

If you’re interested in trying out preview features and giving feedback early, it’s very easy to get started. Just set the following property in your project file:
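
A minimal sketch of the property in an SDK-style project file:

<PropertyGroup>
    <LangVersion>preview</LangVersion>
</PropertyGroup>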

Once you save the project file, the compiler will now give you access to all preview features that shipped with that compiler.

When using F# in preview versions of .NET Core and/or Visual Studio, the language version will be set to preview by default.

The lowest-supported language version is F# 4.6. We do not plan on retrofitting language version support for F# 4.5 and lower.

Implicit yields

In the spirit of making things easier, F# 4.7 introduces implicit yields for lists, arrays, sequences, and any Computation Expression that defines the Yield, Combine, Delay, and Zero members.

A longstanding issue with learning F# has been the need to always specify the yield keyword in F# sequence expressions. Now you can delete all the yield keywords, since they’re implicit!

This makes F# sequence expressions align with list and array expressions.

But that’s not all! Prior to F# 4.7, even with lists and arrays, if you wanted to conditionally generate values it was a requirement to specify yield everywhere, even if you only had one place you did it. All the yield keywords can now be removed:
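
Here's a sketch (includeThree is an illustrative parameter):

// Before F# 4.7, one conditional element forced 'yield' on every element:
//   [ yield 1; yield 2; if includeThree then yield 3 ]
// Now the yields are implicit:
let values includeThree =
    [ 1
      2
      if includeThree then 3 ]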

This feature was inspired by Fable programs that use F# list expressions as HTML templating DSLs.

Syntax relaxations

There are two major relaxations for F# syntax added in F# 4.7. Both should make F# code easier to write, especially for beginners.

No more required double underscore

Prior to F# 4.7, if you wanted to specify member declarations and you didn’t want to name the ‘this’ identifier on F# objects, you had to use a double underscore. Now, you can specify just a single underscore, which previous language versions would reject:
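
For example (MyType is illustrative):

type MyType() =
    // previously this had to be written as 'member __.SayHello()'
    member _.SayHello() = printfn "hello"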

This same rule has been relaxed for C-style for loops where the indexer is not meaningful:
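
For example (a sketch):

// previously the loop variable had to be named even when unused
for _ = 1 to 10 do
    printfn "hello"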

Thanks to Gustavo Leon for contributing this feature.

Indentation relaxations for parameters passed to constructors and static methods

Another annoyance with previous F# compilers was a requirement to indent parameters to constructors or static methods. This was due to an old rule in the compiler where the first parameter determined the level of indentation required for the rest of the parameters. This is now relaxed:

Preview features

As I mentioned previously, F# 4.7 introduces the concept of an effective language version for the compiler. In the spirit of shipping previews as early as possible, we’ve included two new preview features: nameof and opening of static classes.

Nameof

The nameof function has been of the most-requested feature to add to F#. It’s very convenient when you want to log the names of things (like parameters or classes) and have the name change as you’d expect if you refactor those symbols to use different names over time. We’re still not 100% resolute on the design of it, but the core functionality is good enough that we’d love people to try it out and give us feedback. Here’s a little taste of what you can do with it:
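
For example (a sketch; logIfNegative and its parameter are illustrative):

let logIfNegative (value: int) =
    if value < 0 then
        // the string "value" stays correct if the parameter is renamed
        printfn "%s was negative" (nameof value)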

You can also contribute to its design by proposing changes to the corresponding RFC.

Open static classes

Much like nameof, opening of static classes has been requested a lot. Not only does it allow better usage of C# APIs that assume the ability to open static classes, it can also improve F# DSLs. However, we’re also not 100% resolute on its overall design. Here’s a little taste of what it’s like:
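
For example (a sketch using System.Math as the static class being opened):

open System.Math

// no 'Math.' qualifier needed after the open
let smaller = Min(1.0, 2.0)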


You can also contribute to its design by proposing changes to the corresponding RFC.

F# Interactive for .NET Core Preview

Starting with F# 4.7 and .NET Core 3, you can now use F# interactive (FSI) from .NET Core! Just open a command line and type dotnet fsi to get started.

The FSI experience for .NET Core is now a very, very stable preview. There are still some quirks with dependency resolution when pulling in packages and their transitive references. We’re addressing these by adding #r “nuget:package-name” support for FSI, and we’re hoping that you’ll transition away from manually referencing third-party .dlls and instead using packages as the unit of reference for FSI.

This package management support is still only available in nightly builds of the compiler. It will become available for general usage in forthcoming support for Jupyter Notebooks via the .NET Kernel and in the first preview of .NET 5.

Updates to F# tools for Visual Studio

The Visual Studio 2019 update 16.3 release corresponds with F# 4.7 and .NET Core 3. In this release, we’ve made tooltips a bit nicer and fixed some longstanding issues in the compiler and tools that affect your experience in Visual Studio. We also spent a lot of time doing more infrastructural work to make the F# integration with Roslyn significantly more stable than it was in the past.

Record definition tooltips use a more canonical formatting:

Anonymous Records also do the same:

And record value output in FSI also uses a more canonical form:

Properties with explicit get/set modifiers will also reflect those modifiers in tooltips:

Looking back at the past year or so of F# evolution

The past year (plus a few months) has seen a lot of additions to the F# language and tools. We’ve shipped:

  • F# 4.5, F# 4.6, and now F# 4.7 with 14 new language features between the three of them
  • 6 updates to the Visual Studio tools for F#
  • Massive performance improvements to F# tooling for larger codebases
  • 2 preview features for the next version of F#
  • A revamped versioning scheme for FSharp.Core
  • A new home for F# OSS development under the .NET Foundation

It’s been quite a rush, and despite the sheer number of updates and fundamental shifts to F#, we’re planning on ramping up these efforts!

Looking ahead towards F# 5 and .NET 5

As .NET undergoes a monumental shift towards .NET 5, F# will also feature a bit of a shift. While F# is a general-purpose language – the functional programming language for .NET – it also has a strong heritage of being used for “analytical” workloads: processing data, doing numerical work, data science and machine learning, etc. We feel that F# is positioned extremely well to continue this path, and we intend on emphasizing features that can align with these workloads more.

One of the concrete things we’ll focus on is making F# a first-class language for Jupyter Notebooks via the .NET Kernel. We’ll also emphasize language features that make it easier to work with collections of data.

I like to think of these things as being “in addition to” everything F# is focused on so far: first-class .NET support, excellent tooling, wonderful features that make general purpose F# programming great, and now an influx of work aligned with “analytical” programming. We’re incredibly excited about the work ahead of us, and we hope you’ll also contribute in the way you see best.

Cheers, and happy hacking!

The post Announcing F# 4.7 appeared first on .NET Blog.

Announcing Entity Framework Core 3.0 and Entity Framework 6.3 General Availability


We are extremely excited to announce the general availability of EF Core 3.0 and EF 6.3 on nuget.org.

The final versions of .NET Core 3.0 and ASP.NET Core 3.0 are also available now.

How to get EF Core 3.0

EF Core 3.0 is distributed exclusively as a set of NuGet packages. For example, to add the SQL Server provider to your project, you can use the following command using the dotnet tool:

dotnet add package Microsoft.EntityFrameworkCore.SqlServer --version 3.0.0

When upgrading applications that target older versions of ASP.NET Core to 3.0, you also have to add the EF Core packages as an explicit dependency.

Also starting in 3.0, the dotnet ef command-line tool is no longer included in the .NET Core SDK. Before you can execute EF Core migration or scaffolding commands, you’ll have to install this package as either a global or local tool. To install the final version of our 3.0.0 tool as a global tool, use the following command:

dotnet tool install --global dotnet-ef --version 3.0.0

It’s possible to use this new version of dotnet ef with projects that use older versions of the EF Core runtime. However, older versions of the tool will not work with EF Core 3.0.

What’s new in EF Core 3.0

Including major features, minor enhancements, and bug fixes, EF Core 3.0 contains more than 600 product improvements. Here are some of the most important ones:

LINQ overhaul

We rearchitected our LINQ provider to enable translating more query patterns into SQL, generating efficient queries in more cases, and preventing inefficient queries from going undetected. The new LINQ provider is the foundation over which we’ll be able to offer new query capabilities and performance improvements in future releases, without breaking existing applications and data providers.

Restricted client evaluation

The most important design change has to do with how we handle LINQ expressions that cannot be converted to parameters or translated to SQL.

In previous versions, EF Core identified what portions of a query could be translated to SQL, and executed the rest of the query on the client. This type of client-side execution is desirable in some situations, but in many other cases it can result in inefficient queries.

For example, if EF Core 2.2 couldn’t translate a predicate in a Where() call, it executed an SQL statement without a filter, transferred all the rows from the database, and then filtered them in-memory:

var specialCustomers = 
  context.Customers
    .Where(c => c.Name.StartsWith(n) && IsSpecialCustomer(c));

That may be acceptable if the database contains a small number of rows but can result in significant performance issues or even application failure if the database contains a large number of rows.

In EF Core 3.0, we’ve restricted client evaluation to only happen on the top-level projection (essentially, the last call to Select()). When EF Core 3.0 detects expressions that can’t be translated anywhere else in the query, it throws a runtime exception.

To evaluate a predicate condition on the client as in the previous example, developers now need to explicitly switch evaluation of the query to LINQ to Objects:

var specialCustomers =
  context.Customers
    .Where(c => c.Name.StartsWith(n)) 
    .AsEnumerable() // switches to LINQ to Objects
    .Where(c => IsSpecialCustomer(c));

See the breaking changes documentation for more details about how this can affect existing applications.

Single SQL statement per LINQ query

Another aspect of the design that changed significantly in 3.0 is that we now always generate a single SQL statement per LINQ query. In previous versions, we used to generate multiple SQL statements in certain cases, like to translate Include() calls on collection navigation properties and to translate queries that followed certain patterns with subqueries. Although this was in some cases convenient, and for Include() it even helped avoid sending redundant data over the wire, the implementation was complex, it resulted in some extremely inefficient behaviors (N+1 queries), and there were situations in which the data returned across multiple queries could be inconsistent.

Similarly to client evaluation, if EF Core 3.0 can’t translate a LINQ query into a single SQL statement, it throws a runtime exception. But we made EF Core capable of translating many of the common patterns that used to generate multiple queries to a single query with JOINs.
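
For example, a query with an Include() over a collection navigation, which used to produce a separate SQL statement, now translates to a single statement with a JOIN (a sketch; Customers and Orders are illustrative):

var customersWithOrders =
    context.Customers
        .Include(c => c.Orders)
        .ToList();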

Cosmos DB support

The Cosmos DB provider for EF Core enables developers familiar with the EF programing model to easily target Azure Cosmos DB as an application database. The goal is to make some of the advantages of Cosmos DB, like global distribution, “always on” availability, elastic scalability, and low latency, even more accessible to .NET developers. The provider enables most EF Core features, like automatic change tracking, LINQ, and value conversions, against the SQL API in Cosmos DB.
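
As a minimal sketch, configuring a context to use the provider looks like this (the endpoint, key, and database name below are placeholders):

public class OrderContext : DbContext
{
  public DbSet<Order> Orders { get; set; }

  protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options.UseCosmos(
      "https://<account>.documents.azure.com:443/", // placeholder endpoint
      "<account-key>",                              // placeholder key
      databaseName: "OrdersDb");
}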

See the Cosmos DB provider documentation for more details.

C# 8.0 support

EF Core 3.0 takes advantage of a couple of the new features in C# 8.0:

Asynchronous streams

Asynchronous query results are now exposed using the new standard IAsyncEnumerable<T> interface and can be consumed using await foreach.

var orders = 
  from o in context.Orders
  where o.Status == OrderStatus.Pending
  select o;

await foreach(var o in orders)
{
  Process(o);
}

See the asynchronous streams in the C# documentation for more details.

Nullable reference types

When this new feature is enabled in your code, EF Core examines the nullability of reference type properties and applies it to corresponding columns and relationships in the database: properties of non-nullable references types are treated as if they had the [Required] data annotation attribute.

For example, in the following class, properties marked as of type string? will be configured as optional, whereas string will be configured as required:

public class Customer
{
  public int Id { get; set; }
  public string FirstName { get; set; }
  public string LastName { get; set; }
  public string? MiddleName { get; set; }
}

See nullable reference types in the C# documentation for more details.

Interception of database operations

The new interception API in EF Core 3.0 allows providing custom logic to be invoked automatically whenever low-level database operations occur as part of the normal operation of EF Core. For example, when opening connections, committing transactions, or executing commands.

Similarly to the interception features that existed in EF 6, interceptors allow you to intercept operations before or after they happen. When you intercept them before they happen, you can bypass execution and supply alternate results from the interception logic.

For example, to manipulate command text, you can create an IDbCommandInterceptor:

public class HintCommandInterceptor : DbCommandInterceptor
{
  public override InterceptionResult<DbDataReader> ReaderExecuting(
    DbCommand command,
    CommandEventData eventData,
    InterceptionResult<DbDataReader> result)
  {
    // Manipulate the command text, etc. here...
    command.CommandText += " OPTION (OPTIMIZE FOR UNKNOWN)";
    return result;
  }
}

And register it with your DbContext:

// ExampleContext stands in for your application's DbContext type
services.AddDbContext<ExampleContext>(b => b
  .UseSqlServer(connectionString)
  .AddInterceptors(new HintCommandInterceptor()));

Reverse engineering of database views

Query types, which represent data that can be read from the database but not updated, have been renamed to keyless entity types. As they are an excellent fit for mapping database views in most scenarios, EF Core now automatically creates keyless entity types when reverse engineering database views.

For example, using the dotnet ef command-line tool you can type:

dotnet ef dbcontext scaffold "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer

And the tool will now automatically scaffold types for views and tables without keys:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
  modelBuilder.Entity<Names>(entity =>
  {
    entity.HasNoKey();
    entity.ToView("Names");
  });

  modelBuilder.Entity<Things>(entity =>
  {
    entity.HasNoKey();
  });
}

Dependent entities sharing a table with principal are now optional

Starting with EF Core 3.0, if OrderDetails is owned by Order or explicitly mapped to the same table, it will be possible to add an Order without an OrderDetails, and all of the OrderDetails properties, except the primary key, will be mapped to nullable columns.

When querying, EF Core will set OrderDetails to null if any of its required properties doesn’t have a value, or if it has no required properties besides the primary key and all properties are null.

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public OrderDetails Details { get; set; }
}

[Owned]
public class OrderDetails
{
    public int Id { get; set; }
    public string ShippingAddress { get; set; }
}

What’s new in EF 6.3

We understand that many existing applications use previous versions of EF, and that porting them to EF Core only to take advantage of .NET Core can require a significant effort. For that reason, we decided to port the newest version of EF 6 to run on .NET Core 3.0. The developer community also contributed to this release with several bug fixes and enhancements.

Here are some of the most notable improvements:

  • Support for .NET Core 3.0
    • The EF 6.3 runtime package now targets .NET Standard 2.1 in addition to .NET Framework 4.0 and 4.5.
    • The migration commands have been rewritten to execute out of process and work with SDK-style projects.
  • Support for SQL Server hierarchyid
  • Improved compatibility with Roslyn and NuGet PackageReference
  • Added the ef6.exe utility for enabling, adding, scripting, and applying migrations from assemblies. This replaces migrate.exe

There are certain limitations when using EF 6.3 in .NET Core. For example:

  • Data providers need to be also ported to .NET Core. We only ported the SQL Server provider, which is included in the EF 6.3 package.
  • Spatial support won’t be enabled with SQL Server because the spatial types aren’t enabled to work with .NET Core.
  • There’s currently no support for using the EF designer directly on .NET Core or .NET Standard projects.

For more details on the EF 6.3 release, and a workaround to the latter limitation, see What’s new in EF 6.3 in the product’s documentation.

What’s next: EF Core 3.1

The EF team is now focused on the EF Core 3.1 release, which is planned for later this year, and on making sure that the documentation for EF Core 3.0 is complete.

EF Core 3.1 will be a long-term support (LTS) release, which means it will be supported for at least 3 years. Hence the focus is on stabilizing and fixing bugs rather than adding new features and risky changes. We recommend that you adopt .NET Core 3.0 today and then adopt 3.1 when it becomes available. There won’t be breaking changes between these two releases.

The full set of issues fixed in 3.1 can be seen in our issue tracker. Here are some worth mentioning:

  • Fixes and improvements for issues recently found in the Cosmos DB provider
  • Fixes and improvements for issues recently found in the new LINQ implementation
  • Lots of regressions tests added for issues verified as fixed in 3.0
  • Test stability improvements
  • Code cleanup

The first preview of EF Core 3.1 will be available very soon.

Thank you

If you either sent code contributions or feedback for any of our preview releases, thanks a lot! You helped make EF Core 3.0 and EF 6.3 significantly better!

We hope everyone will now enjoy the results.

The post Announcing Entity Framework Core 3.0 and Entity Framework 6.3 General Availability appeared first on .NET Blog.

Joining the .NET Foundation Maturity Model Pilot


The .NET Foundation is starting a new pilot program to increase quality and user confidence in open source projects, using a new project maturity model. We’ve been working with the Technical Review Action Group at the Foundation to help shape the program. We’re happy to see the pilot being launched and that the .NET Team is participating in the project. For us, this includes the underlying .NET platform, and also the packages we release.

We get to talk with larger organizations frequently, both from the private and public sectors, about open source. On one end of the spectrum, we see enthusiastic adopters of open source and on the other, an “open source isn’t safe for our business” approach. We also see organizations at all points between, and listen to their feedback about their practices using (or not using) open source and why. There are merits for each pattern we see. A big part of our contribution to the pilot was generalizing the underlying reasons for those approaches, and validating that the new maturity model will provide benefit to these organizations, and make adoption of open source safer and easier for them.

This new pilot program is similar to programs already in place at other foundations, like Cloud Native Computing Foundation (CNCF) and Apache Foundation. It is great to see the .NET Foundation expanding its role and taking on some of the same kind of charter as other communities use. The track record at these foundations speaks for itself, so it makes sense to emulate their approach.

The .NET Foundation is proposing three new programs:

.NET Foundation Project Maturity Model

These programs should be great additions to the .NET ecosystem and solve challenges that need to be addressed. We’re interested in helping with each of these programs. For the project forge, in particular, we have at least one lab project that we’d be happy to donate to the Foundation as significant starter code for a new project, run by new maintainers.

The .NET Foundation Technical Action Group has set an ambitious plan to improve the .NET ecosystem, with these three new programs, and the guidance and structure that go along with them. We will do our part in supporting these programs and the Technical Action Group. We’re looking forward to seeing these programs develop in our larger ecosystem.

The post Joining the .NET Foundation Maturity Model Pilot appeared first on .NET Blog.


Setting HTTP header attributes to enable Azure authentication/authorization using HTTPRepl

Posted on behalf of Ahmed Metwally

The HTTP Read-Eval-Print Loop (REPL) is a lightweight, cross-platform command-line tool that’s supported everywhere .NET Core is supported. It’s used for making HTTP requests to test ASP.NET Core web APIs and view their results. You can use the HTTPRepl to navigate and interrogate any API in the same manner that you would navigate a set of folders on a file system. If the service that you are testing has a swagger.json file, specifying that file to HTTPRepl will enable auto-completion.

To install the HTTP REPL, run the following command:

>dotnet tool install -g Microsoft.dotnet-httprepl


For more information on how to use HTTPRepl, read Angelos’ post on the ASP.NET blog. As we continue to improve the tool, we look to add new commands to facilitate the use of HTTPRepl with different types of secure API services. As of this release, HTTPRepl supports authentication and authorization schemes achievable through header manipulation, like basic, bearer token, and digest authentication. For example, to use a bearer token to authenticate to a service, use the command “set header”. Set the “Authorization” header to the bearer token value using the following command:

>set header Authorization "bearer <token_value>"


And replace <token_value> with your authorization bearer token for the service. Don’t forget to use the quotation marks to wrap the word bearer along with the <token_value> in the same literal string. Otherwise, the tool will treat them as two different values and will fail to set the header properly. To ensure that the header in the HTTP request is being formatted as expected, enable echoing using the “echo on” command.

Using the “set header” command, you can leverage HTTPRepl to test and navigate any secure REST API service, including your Azure-hosted API services or the Azure Management API. To access a secure service hosted on Azure, you need a bearer token. Use the Azure CLI to get an access token for the required Azure subscription:

>az login


Copy your subscription ID from the Azure portal and paste it in the “az account set” command:

>az account set --subscription "<subscription ID>" 

>az account get-access-token 

{ 
  "accessToken": "<access_token_will_be_displayed_here>", 
  "expiresOn": "<expiry date/time will be displayed here>", 
  "subscription": "<subscription ID>", 
  "tenant": "<tenant ID>", 
  "tokenType": "Bearer" 
} 


Copy the text that appears in place of <access_token_will_be_displayed_here>. This is your access token. Finally, run HTTPRepl:

>httprepl
(disconnected)~ connect https://management.azure.com
Using a base address of https://management.azure.com/
Unable to find a swagger definition
https://management.azure.com/~ set header Authorization "bearer <paste_token_here>"
https://management.azure.com/~ cd subscriptions
https://management.azure.com/subscriptions/~ cd <subscription_ID>


For example, to search for a list of your Azure app services, issue the “get” command for the list of sites through the Microsoft web provider:

  https://management.azure.com/subscriptions/<subscription_ID>/~ get providers/Microsoft.Web/sites?api-version=2016-08-01
  HTTP/1.1 200 OK
  Cache-Control: no-cache
  Content-Length: 35948
  Content-Type: application/json; charset=utf-8
  Date: Thu, 19 Sep 2019 23:04:03 GMT
  Expires: -1
  Pragma: no-cache
  Strict-Transport-Security: max-age=31536000; includeSubDomains
  X-Content-Type-Options: nosniff
  x-ms-correlation-request-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  x-ms-original-request-ids: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx;xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  x-ms-ratelimit-remaining-subscription-reads: 11999
  x-ms-request-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  x-ms-routing-request-id: WESTUS:xxxxxxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx
  {
    "value": [
<list of azure resources>
    ]
  }
  https://management.azure.com/subscriptions/<subscription_ID>/~

You can use the full list of Azure REST APIs to browse and manage services in your Azure subscriptions. For more details on how HTTPRepl works, please check the ASPNET blog. To use HTTPRepl, download and install the global tool from the .NET Core CLI.

Give us feedback

It’s not HTTPie, it’s not curl, but it’s also not Postman. It’s something that you run and stays running, and it’s aware of its current context. We find this experience valuable, but ultimately what matters the most is what you think. Please let us know your opinion by leaving comments below or on GitHub.


Ahmed Metwally, Sr. Program Manager, .NET dev tools, @ahmedMsft. Ahmed is a Program Manager on the .NET tooling team focused on improving web development for .NET developers.

The post Setting HTTP header attributes to enable Azure authentication/authorization using HTTPRepl appeared first on ASP.NET Blog.

Visual Studio 2019 for Mac version 8.3


Today, we’re releasing version 8.3 of Visual Studio 2019 for Mac – our .NET IDE, built natively for macOS. This release is predominantly driven by your feedback: delivering a faster and more reliable ASP.NET Core web developer experience, reducing the time between coding and testing Xamarin UI changes, and including a few “delighters” to make your experience even better. 

These are the 3 top requests we’ve focused on in this release: 

ASP.NET Core developers will find this release really exciting. In addition to the items mentioned above, we’ve made the following improvements for your daily coding lives: 

Mobile developers using .NET and Xamarin also have new features to look forward to: 

Finally, v8.3 also includes several “delighters” across the product such as tab pinning, support for launchSettings.json in .NET Core projects, and an easier way to get started with your preferred keyboard shortcuts.  

In this post, we’ll cover a few of the highlights mentioned above. To learn about all the changes in this release, be sure to check out the release notes.

Support for .NET Core 3.0, .NET Standard 2.1, and C# 8.0 

This release officially supports .NET Core 3.0, .NET Standard 2.1, and C# 8.0. Whether you install the IDE for the first time or update from a previous release, the .NET Core 3.0 SDK will be installed automatically for you. You can create, build, run, debug, and publish .NET Core 3.0 applications.

When you’re editing C# 8.0 files in Visual Studio for Mac, you’ll have access to new C# 8.0 features like readonly members and switch expressions. All the existing editor functionality, such as IntelliSense and Quick Fixes, will also continue to work. For more info on what’s new in C# 8.0, head over to the docs to read about  What’s new in C# 8.0. 
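
For example, here's a quick sketch of a C# 8.0 switch expression (the method is illustrative):

public static string Sign(int n) => n switch
{
    0 => "zero",
    var x when x > 0 => "positive",
    _ => "negative"
};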

NuGet solution-level package management 

We’ve also added support for NuGet solution-level package management functionality, a top requested item on the Developer Community site. As the number of projects grows within a solution, it becomes harder to keep the same version of packages across the projects. With the improvements we made in this area, it’s now easier to consolidate to a single version of packages across the solution.

NuGet package management dialog, showing package consolidation in Visual Studio for Mac

Multi-Targeting 

When building modern .NET libraries, it’s common for library authors to target a variety of platforms and devices. .NET Standard is the best solution for adding support for multiple platforms, but sometimes it’s necessary to use APIs in .NET frameworks that don’t support .NET Standard. In that case, the best solution is to use multi-targeting to build for multiple .NET frameworks. Recently, we included support for working on projects that support multi-targeting, another highly requested feature. When editing code in one of these projects, you can use a Target Framework drop down at the top of the editor window to focus your editing experience on a specific target framework. 


Dependencies are also now displayed broken down by target framework, and when running your project you can choose the target framework against which to debug.  

All web editors, now updated 

A major focus of the Visual Studio for Mac v8.3 release is optimizing the ASP.NET Core developer workflow. We’ve heard from hundreds of .NET Core developers and focused our efforts on addressing the community’s feedback. In this release, we’re introducing new web editors based on the same editors (and code) as Visual Studio on Windows, and support for managing NuGet packages across multiple projects at the solution level.  

 Since the initial release of Visual Studio 2019 for Mac in April, we’ve been working to update all the editors within the IDE. In v8.1, we introduced the new C# editor. v8.2 brought the new XAML editor to Visual Studio for Mac. In v8.3, we’re updating all the web editors! The new web editors are based on the same native UI as the C# and XAML editors and provide all the advanced features recently introduced to Visual Studio for Mac, such as multi-caret editing, RTL support, and native input support. In addition to these high-level editor features, the new web experience is also powered by the same core as Visual Studio on Windows, so you can expect the same language service features that make Visual Studio such a productive IDE. These language services provide vital features, such as IntelliSense as well as code formatting, syntax highlighting, and navigation support.   

The new editors support a variety of web files, including HTML, CSHTML, JS, JSON, and CSS. This also brings support for a common request: IntelliSense and syntax highlighting for languages embedded in .cshtml files, including JavaScript, C#, and CSS! This means you get all the features appropriate to the file type you are working in, so you will see advanced IntelliSense in JS, CSHTML, and more. We have also improved support for LESS and Sass files. The web experience in Visual Studio for Mac has never been better! 

Typing CSS into a .cshtml file in Visual Studio for Mac, showing suggestions for CSS properties while typing. 

ASP.NET Core: File Nesting support 

We’ve also added automatic file nesting for ASP.NET Core projects. The auto file nesting rules applied are the same as what you find in Visual Studio on Windows. With file nesting enabled, you can focus better on the files that you edit most frequently; generated files and less frequently edited files are nested under other related files. Check out the screenshot of the Solution Pad showing the nesting behavior. 

The solution pad open, showing an Index.cshtml.cs file nested underneath an Index.cshtml file


Debugging ASP.NET Core apps on multiple web browsers 

Finally, for ASP.NET Core development, we’ve added one more popular feature request – support for targeting multiple web browsers. Now, when debugging an ASP.NET Core app, you can pick the browser in which you want to run your app. This makes it a lot easier to make sure you’ve got just the right experience in each browser your app supports. 

Drop down menu in Visual Studio for Mac showing target web browser selection  

XAML Hot Reload for Xamarin.Forms Preview 

We are making XAML Hot Reload for Xamarin.Forms available in this release as a preview. XAML Hot Reload enables you to rapidly iterate on your Xamarin.Forms UI without needing to build and deploy. When debugging your app, you can now edit your XAML and hit save to see the changes immediately reflected in the running app. This works on all valid deployment targets, including simulators, emulators, and physical devices. To get started, check out the XAML Hot Reload documentation. 

Editing .xaml files in VS for Mac, seeing the UI update automatically in the simulator while editing.

Android 10, Xcode 11, and iOS 13 Support for Xamarin 

With Visual Studio for Mac version 8.3, Xamarin developers can take advantage of the latest-and-greatest features from both Google and Apple.  

Android 10 introduces a variety of new features such as dark theme, gestural navigation, and optimizations for foldable devices. iOS 13 provides the next generation of existing features like SiriKit and ARKit, while also introducing new features such as Dark Mode and Sign In with Apple. To learn more about how you can use these new features in your apps, head over to our Android 10 with Xamarin and Introduction to iOS 13 documentation pages. 

New “Delighters” for all developers 

A common request we’ve heard from developers who use both Windows and macOS is to support more of the same keyboard shortcuts as Visual Studio on Windows. Visual Studio for Mac has long had support for configurable “Key Bindings” that allow you to select from a set of pre-defined profiles and customize shortcuts to your liking. You can configure these from the Preferences > Key Bindings screen. 

In this release, we’ve added a new prompt on first launch of the IDE, to make it easier for you to customize the IDE to work the way you want. 

Selecting Visual Studio for Mac keyboard shortcuts, with a list of shortcuts for Visual Studio (Windows), Visual Studio Code, or Xcode

Another helpful improvement added into this release is support for document pinning. Now, you can take any document in the IDE, right-click on it, and choose to “pin” it to remain open on the left-hand side of all your document tabs. 

Selecting to pin a tab, and then unpinning it


Download today 

Download the Visual Studio 2019 for Mac v8.3 release today, or if you have it installed already, update to the latest release using the Stable channel. 

If you run into any issues with the v8.3 release, please use the Help > Report a Problem menu in the IDE to let us know about it. You can also provide suggestions for future improvements by using the Provide a Suggestion menu. 

report a problem context menu

Finally, make sure to follow us on Twitter at @VisualStudioMac to stay up to date on the latest Visual Studio for Mac news and let us know what your experience has been like. We look forward to hearing from you! 

The post Visual Studio 2019 for Mac version 8.3 appeared first on The Visual Studio Blog.

The Future of C++/CLI and .NET Core 3


.NET Core 3.0 will be available soon and we have received a lot of questions about what that means for the future of C++/CLI. First, we would like to let everyone know that we are committed to supporting C++/CLI for .NET Core to enable easy interop between C++ codebases and .NET technologies such as WPF and Windows Forms. This support isn’t going to be ready when .NET Core 3.0 first ships, but it will be available in .NET Core 3.1, which ships with Visual Studio 2019 16.4 (see the roadmap).

C++/CLI will have full IDE support for targeting .NET Core 3.1 and higher. This support will include projects, IntelliSense, and mixed-mode debugging (IJW) on Windows. We don’t currently have plans for C++/CLI for targeting macOS or Linux. Additionally, compiling with “/clr:pure” and “/clr:safe” won’t be supported for .NET Core.

The first public previews for C++/CLI are right around the corner. Visual Studio 2019 16.4 Preview 1 includes an updated compiler with “/clr:netcore” if you want to try it out, with full IDE support coming in a subsequent preview. Keep an eye on the C++ Team Blog for more info; it’s coming soon! As always, let us know if you have any questions. Feedback and suggestions can be posted on Developer Community.

The post The Future of C++/CLI and .NET Core 3 appeared first on C++ Team Blog.

Announcing free C#, .NET, and ASP.NET for beginners video courses and tutorials


If you've been thinking about learning C#, now is the time to jump in! I've been working on this project for months and I'm happy to announce http://dot.net/videos 

There are nearly a hundred short videos (with more to come!) that will teach you topics like C# 101, .NET, making desktop apps, making ASP.NET web apps, learning containers and Docker, or even starting with Machine Learning. There's a ton of great, slow-paced beginner videos. Most are less than 10 minutes long and all are organized into playlists on YouTube!

If you are getting started, I'd recommend starting with these three series in this order - C#, .NET, then ASP.NET. After that, pick the topics that make you the happiest.

Lots of .NET learning videos and tutorials up on YouTube, free!

If you don't have access to YouTube where you are, all these videos are also on Channel 9 *and* can be downloaded locally via RSS feed! https://channel9.msdn.com/Browse/Series


If you like these, let me know what other topics you'd like us to cover! We are just getting started and already have intermediate and advanced C# classes in the works!


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.

Introducing cost-effective incremental snapshots of Azure managed disks in preview


The preview of incremental snapshots of Azure managed disks is now available. Incremental snapshots are a cost-effective point-in-time backup of managed disks. Unlike current snapshots, which are billed for the full size, incremental snapshots are billed only for the delta changes to a disk since the last snapshot. They are always stored on the most cost-effective storage, standard HDD, irrespective of the storage type of the parent disk. Additionally, for increased reliability, they are stored on zone-redundant storage (ZRS) by default in regions that support ZRS. They cannot be stored on premium storage. If you are using current snapshots on premium storage to scale up virtual machine deployments, we recommend using custom images on standard storage in Shared Image Gallery instead; this helps you achieve greater scale at lower cost. 

Incremental snapshots provide a differential capability available only in Azure managed disks. It enables customers and independent software vendors (ISVs) to build backup and disaster recovery solutions for managed disks. It allows you to get the changes between two snapshots of the same disk and copy only the changed data across regions, reducing the time and cost of backup and disaster recovery. For example, you can download the first incremental snapshot as a base blob in another region. For subsequent incremental snapshots, you copy only the changes since the last snapshot to the base blob. After copying the changes, you can take snapshots of the base blob that represent your point-in-time backup of the disk in another region. You can restore your disk either from the base blob or from a snapshot on the base blob in another region.


Incremental snapshots inherit all the compelling capabilities of current snapshots. They have a lifetime independent of their parent managed disks, making them available even when the parent managed disk is deleted. Moreover, they are accessible instantaneously, meaning you can read the underlying VHD of an incremental snapshot or restore a disk from it as soon as it is created.

You can create incremental snapshots by setting the new incremental property to true.

az snapshot create \
-g yourResourceGroupName \
-n yourSnapshotName \
-l westcentralus \
--source subscriptions/yourSubscriptionId/resourceGroups/yourResourceGroupName/providers/Microsoft.Compute/disks/yourDiskName \
--incremental
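
If you prefer .NET, the same call looks roughly like the following sketch using the Microsoft.Azure.Management.Compute SDK; it assumes an SDK version that exposes the 2019-03-01 API (where the Incremental property lives), an already-authenticated ComputeManagementClient, and placeholder resource names:

using System.Threading.Tasks;
using Microsoft.Azure.Management.Compute;
using Microsoft.Azure.Management.Compute.Models;

// A minimal sketch; model and property names may differ slightly across SDK versions.
static async Task<Snapshot> CreateIncrementalSnapshotAsync(ComputeManagementClient computeClient)
{
    var snapshot = new Snapshot
    {
        Location = "westcentralus",
        Incremental = true, // the new property that makes the snapshot incremental
        CreationData = new CreationData
        {
            CreateOption = "Copy", // copy from an existing managed disk
            SourceResourceId = "/subscriptions/yourSubscriptionId/resourceGroups/yourResourceGroupName/providers/Microsoft.Compute/disks/yourDiskName"
        }
    };

    return await computeClient.Snapshots.CreateOrUpdateAsync("yourResourceGroupName", "yourSnapshotName", snapshot);
}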

You can identify incremental snapshots of the same disk by using the SourceResourceId and SourceUniqueId properties of snapshots. SourceResourceId is the Azure Resource Manager (ARM) resource ID of the parent disk. SourceUniqueId is the value inherited from the UniqueId property of the disk. If you delete a disk and then create a disk with the same name, the value of the UniqueId property changes.

az snapshot show \
-g yourResourceGroupName \
-n yourSnapshotName \
--query "[creationData.sourceResourceId]" -o tsv

az snapshot show \
-g yourResourceGroupName \
-n yourSnapshotName \
--query "[creationData.sourceUniqueId]" -o tsv

Availability and pricing

You can now create incremental snapshots and generate SAS URIs for reading the underlying data in the West Central US region via the Azure Compute REST API version 2019-03-01. You can also use the latest Azure PowerShell SDK, .NET SDK, and CLI to perform these operations. The differential capability is supported via prerelease versions of the .NET, Python, and C++ Storage SDKs only. Please email AzureDisks@microsoft.com to get access to these SDKs. We are going to add support for other SDKs and other regions soon.

The per-GB pricing of incremental snapshots is the same as for current full snapshots. Visit the managed disk pricing page for more details about snapshot pricing.

Getting started

  1. Please email AzureDisks@microsoft.com to get access to the preview. 
  2. Create an incremental snapshot using CLI.
  3. Create an incremental snapshot using PowerShell.

New Azure Blueprint enables SWIFT CSP compliance on Azure


This morning at the SIBOS conference in London, we announced our new Azure Blueprint, introduced in conjunction with recent efforts to enable SWIFT connectivity in the cloud. It supports our joint customers in compliance monitoring and auditing of SWIFT infrastructure for cloud-native payments, as described on the Official Microsoft Blog. 

SWIFT is the world’s leading provider of secure financial messaging services used and trusted by more than 11,000 financial institutions in more than 200 countries and territories. Today, enterprises and banks conduct these transactions by sending payment messages over the highly secure SWIFT network which leverages on-premises installations of SWIFT technology. SWIFT Cloud Connect creates a bank-like wire transfer experience with the added operational, security, and intelligence benefits the Microsoft Cloud offers.

Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. Azure Blueprints allow customers to set up governed Azure environments that can scale to support production implementations for large-scale migrations. Azure Blueprints include mappings for key compliance standards such as ISO 27001, NIST SP 800-53, PCI-DSS, UK Official, IRS 1075, and UK NHS. 

The new SWIFT blueprint maps Azure built-in policies to the SWIFT CSP security controls framework, enabling financial services organizations to create and monitor secure, compliant SWIFT infrastructure environments with agility.

The Azure blueprint includes mappings to:

  • Account management. Helps with the review of accounts that may not comply with an organization’s account management requirements.
  • Separation of duties. Helps in maintaining an appropriate number of Azure subscription owners.
  • Least privilege. Audits accounts that should be prioritized for review.
  • Remote access. Helps with monitoring and control of remote access.
  • Audit review, analysis, and reporting. Helps ensure that events are logged and enforces deployment of the Log Analytics agent on Azure virtual machines.
  • Least functionality. Helps monitor virtual machines where an application white list is recommended but has not yet been configured.
  • Identification and authentication. Helps restrict and control privileged access.
  • Vulnerability scanning. Helps with the management of information system vulnerabilities.
  • Denial of service protection. Audits if the Azure DDoS Protection standard tier is enabled.
  • Boundary protection. Helps with the management and control of the system boundary.
  • Transmission confidentiality and integrity. Helps protect the confidentiality and integrity of transmitted information.
  • Flaw remediation. Helps with the management of information system flaws.
  • Malicious code protection. Helps with the management of endpoint protection, including malicious code protection.
  • Information system monitoring. Helps with monitoring a system by auditing and enforcing logging across Azure resources.

We are committed to helping our customers leverage Azure in a secure and compliant manner. Over the next few months, we will release new built-in blueprints for HITRUST, FedRAMP, and Center for Internet Security (CIS) Benchmark. If you have suggestions for new or existing compliance blueprints, please share them via the Azure Governance Feedback Forum.

Learn more about the SWIFT CSP blueprint in our documentation.

12 TB VMs, Expanded SAP partnership on Blockchain, Azure Monitor for SAP Solutions


A few months back, at SAP’s SAPPHIRE NOW event, we announced the availability of Azure Mv2 Virtual Machines (VMs) with up to 6 TB of memory for SAP HANA. We also reiterated our commitment to making Microsoft Azure the best cloud for SAP HANA. I’m glad to share that Azure Mv2 VMs with 12 TB of memory will become generally available and production certified in the coming weeks, in US West 2, US East, US East 2, Europe North, Europe West and Southeast Asia regions. In addition, over the last few months, we have expanded regional availability for M-series VMs, offering up to 4 TB, in Brazil, France, Germany, South Africa and Switzerland. Today, SAP HANA certified VMs are available in 34 Azure regions, enabling customers to seamlessly address global growth, run SAP applications closer to their customers and meet local regulatory needs.

Learn how you can leverage Azure Mv2 VMs for SAP HANA by watching this video.


Running mission-critical SAP applications requires continuous monitoring to ensure system performance and availability. Today, we are launching the private preview of Azure Monitor for SAP Solutions, an Azure Marketplace offering that monitors SAP HANA infrastructure through the Azure portal. Customers can combine data from Azure Monitor for SAP Solutions with existing Azure Monitor data and create a unified dashboard for all their Azure infrastructure telemetry. You can sign up by contacting your Microsoft account team.

We continue to co-innovate with SAP to help accelerate our customers’ digital transformation journey. At SAPPHIRE NOW, we announced several such co-innovations with SAP. First, we announced general availability of SAP Data Custodian, a governance, risk and compliance offering from SAP, which leverages Azure’s deep investments in security and compliance features such as Customer Lockbox.

Second, we announced general availability of Azure IoT integration with SAP Leonardo IoT, offering customers the ability to contextualize and enrich their IoT data with SAP business data to drive new business outcomes. Third, we shared that SAP’s Data Intelligence solution leverages Azure Cognitive Services Containers to offer intelligence services such as face, speech, and text recognition. Lastly, we announced the integration of Azure Active Directory with SAP Cloud Platform Identity Authentication Service (SAP IAS) for a seamless single sign-on and user provisioning experience across SAP and non-SAP applications. Azure AD integration with SAP IAS for seamless SSO is generally available, and the user provisioning integration is now in public preview. Azure AD integration with SAP SuccessFactors for simplified user provisioning will become available soon.

Another place I am excited to deepen our partnership is in blockchain. SAP has long been an industry leader in solutions for supply chain, logistics, and life sciences. These industries are digitally transforming with the help of blockchain, which adds trust and transparency to these applications, and enables large consortiums to transact in a trusted manner. Today, I am excited to announce that SAP’s blockchain-integrated application portfolio will be able to connect to Azure blockchain service. This will enable our joint customers to bring the trust and transparency of blockchain to important business processes like material traceability, fraud prevention, and collaboration in life sciences.

Together with SAP, we are offering a trusted path to digital transformation with our best-in-class SAP certified infrastructure, business process and application innovation services, and a seamless set of offerings. As a result, we are helping SAP customers across the globe, such as Carlsberg and CONA Services, migrate their large-scale, mission-critical SAP applications to Azure. Here are a few additional customers benefiting from migrating their SAP applications to Azure:

Al Jomaih and Shell Lubricating Oil Company: JOSLOC, the joint venture between Al Jomaih Holding and Shell Lubricating Oil Company, migrated their mission critical SAP ERP to Azure, offering them enhanced business continuity and reduced IT complexity and effort, while saving costs. Migrating SAP to Azure has enabled the joint venture to prepare for their upgrade to SAP S/4HANA in 2020.

TraXall France: TraXall France provides vehicle fleet management services for upwards of 40,000 managed vehicles. TraXall chose Microsoft Azure to run their SAP S/4HANA due to the simplified infrastructure management and business agility, and to meet compliance requirements such as GDPR.

Zuellig Pharma: Amid a five-year modernization initiative, Singapore-based Zuellig Pharma wanted to migrate their SAP solution from IBM DB2 to SAP HANA. Zuellig Pharma now runs its SAP ERP on HANA with 1 million daily transactions and 12 TB of production workloads at a 40 percent savings compared to their previous hosting provider.

If you’re attending SAP TechEd in Las Vegas, stop by at the Microsoft booth #601 or attend one of the Microsoft Azure sessions to learn more about these announcements and to see these product offerings in action.

To learn more about how migrating SAP to Azure can help you accelerate your digital transformation, visit our website at https://azure.com/sap.


Enhance your security posture with Microsoft Azure Sentinel—now generally available

Windows 10 SDK Preview Build 18985 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18985 or greater). The Preview SDK Build 18985 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK is now formally supported only by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_18985_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. Otherwise, if the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP). (See the sketch after this list.)
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).
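
The detection order in the first bullet amounts to something like the following C# sketch (illustrative only, not the tool’s actual code):

using System.IO;
using System.Text;

static class McEncoding
{
    // Mirrors the documented order: UTF-8 BOM, then UTF-16LE BOM, then -u, then CP_ACP.
    public static Encoding Detect(string path, bool utf16Flag /* the -u parameter */)
    {
        byte[] head = new byte[3];
        int read;
        using (FileStream fs = File.OpenRead(path))
            read = fs.Read(head, 0, 3);

        if (read >= 3 && head[0] == 0xEF && head[1] == 0xBB && head[2] == 0xBF)
            return Encoding.UTF8;      // UTF-8 BOM
        if (read >= 2 && head[0] == 0xFF && head[1] == 0xFE)
            return Encoding.Unicode;   // UTF-16LE BOM
        if (utf16Flag)
            return Encoding.Unicode;   // -u forces UTF-16LE
        return Encoding.Default;       // current code page (CP_ACP)
    }
}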

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

Breaking Changes

Removal of api-ms-win-net-isolation-l1-1-0.lib

In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 
 
namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSessionOptions {
    bool CloseModelOnSessionCreation { get; set; }
  }
}
namespace Windows.ApplicationModel {
  public sealed class AppInfo {
    public static AppInfo Current { get; }
    Package Package { get; }
    public static AppInfo GetFromAppUserModelId(string appUserModelId);
    public static AppInfo GetFromAppUserModelIdForUser(User user, string appUserModelId);
  }
  public interface IAppInfoStatics
  public sealed class Package {
    StorageFolder EffectiveExternalLocation { get; }
    string EffectiveExternalPath { get; }
    string EffectivePath { get; }
    string InstalledPath { get; }
    bool IsStub { get; }
    StorageFolder MachineExternalLocation { get; }
    string MachineExternalPath { get; }
    string MutablePath { get; }
    StorageFolder UserExternalLocation { get; }
    string UserExternalPath { get; }
    IVectorView<AppListEntry> GetAppListEntries();
    RandomAccessStreamReference GetLogoAsRandomAccessStreamReference(Size size);
  }
}
namespace Windows.ApplicationModel.AppService {
  public enum AppServiceConnectionStatus {
    AuthenticationError = 8,
    DisabledByPolicy = 10,
    NetworkNotAvailable = 9,
    WebServiceUnavailable = 11,
  }
  public enum AppServiceResponseStatus {
    AppUnavailable = 6,
    AuthenticationError = 7,
    DisabledByPolicy = 9,
    NetworkNotAvailable = 8,
    WebServiceUnavailable = 10,
  }
  public enum StatelessAppServiceResponseStatus {
    AuthenticationError = 11,
    DisabledByPolicy = 13,
    NetworkNotAvailable = 12,
    WebServiceUnavailable = 14,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class BackgroundTaskBuilder {
    void SetTaskEntryPointClsid(Guid TaskEntryPoint);
  }
  public sealed class BluetoothLEAdvertisementPublisherTrigger : IBackgroundTrigger {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedFormat { get; set; }
  }
  public sealed class BluetoothLEAdvertisementWatcherTrigger : IBackgroundTrigger {
    bool AllowExtendedAdvertisements { get; set; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ActivationSignalDetectionConfiguration
  public enum ActivationSignalDetectionTrainingDataFormat
  public sealed class ActivationSignalDetector
  public enum ActivationSignalDetectorKind
  public enum ActivationSignalDetectorPowerState
  public sealed class ConversationalAgentDetectorManager
  public sealed class DetectionConfigurationAvailabilityChangedEventArgs
  public enum DetectionConfigurationAvailabilityChangeKind
  public sealed class DetectionConfigurationAvailabilityInfo
  public enum DetectionConfigurationTrainingStatus
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackage {
    event TypedEventHandler<DataPackage, object> ShareCanceled;
  }
}
namespace Windows.Devices.Bluetooth {
  public sealed class BluetoothAdapter {
    bool IsExtendedAdvertisingSupported { get; }
    uint MaxAdvertisementDataLength { get; }
  }
}
namespace Windows.Devices.Bluetooth.Advertisement {
  public sealed class BluetoothLEAdvertisementPublisher {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedAdvertisement { get; set; }
  }
  public sealed class BluetoothLEAdvertisementPublisherStatusChangedEventArgs {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
  public sealed class BluetoothLEAdvertisementReceivedEventArgs {
    BluetoothAddressType BluetoothAddressType { get; }
    bool IsAnonymous { get; }
    bool IsConnectable { get; }
    bool IsDirected { get; }
    bool IsScannable { get; }
    bool IsScanResponse { get; }
    IReference<short> TransmitPowerLevelInDBm { get; }
  }
  public enum BluetoothLEAdvertisementType {
    Extended = 5,
  }
  public sealed class BluetoothLEAdvertisementWatcher {
    bool AllowExtendedAdvertisements { get; set; }
  }
  public enum BluetoothLEScanningMode {
    None = 2,
  }
}
namespace Windows.Devices.Bluetooth.Background {
  public sealed class BluetoothLEAdvertisementPublisherTriggerDetails {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
}
namespace Windows.Devices.Display {
  public sealed class DisplayMonitor {
    bool IsDolbyVisionSupportedInHdrMode { get; }
  }
}
namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Globalization {
  public sealed class Language {
    string AbbreviatedName { get; }
    public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPixelFormat {
    SamplerFeedbackMinMipOpaque = 189,
    SamplerFeedbackMipRegionUsedOpaque = 190,
  }
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicFrame {
    HolographicFrameId Id { get; }
  }
  public struct HolographicFrameId
  public sealed class HolographicFrameRenderingReport
  public sealed class HolographicFrameScanoutMonitor : IClosable
  public sealed class HolographicFrameScanoutReport
  public sealed class HolographicSpace {
    HolographicFrameScanoutMonitor CreateFrameScanoutMonitor(uint maxQueuedReports);
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IIterable<Package> FindProvisionedPackages();
    PackageStubPreference GetPackageStubPreference(string packageFamilyName);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByNameAsync(string name, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, DeploymentOptions deploymentOptions);
    void SetPackageStubPreference(string packageFamilyName, PackageStubPreference useStub);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageStubPreference
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
  public enum StubPackageOption
}
namespace Windows.Media.Audio {
  public sealed class AudioPlaybackConnection : IClosable
  public sealed class AudioPlaybackConnectionOpenResult
  public enum AudioPlaybackConnectionOpenResultStatus
  public enum AudioPlaybackConnectionState
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Owe = 12,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class NetworkOperatorTetheringAccessPointConfiguration {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableNoConnectionsTimeout();
    public static IAsyncAction DisableNoConnectionsTimeoutAsync();
    public static void EnableNoConnectionsTimeout();
    public static IAsyncAction EnableNoConnectionsTimeoutAsync();
    public static bool IsNoConnectionsTimeoutEnabled();
  }
  public enum TetheringWiFiBand
}
namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
  public sealed class RawNotification {
    IBuffer ContentBytes { get; }
  }
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Security.Isolation {
  public sealed class IsolatedWindowsEnvironment
  public enum IsolatedWindowsEnvironmentActivator
  public enum IsolatedWindowsEnvironmentAllowedClipboardFormats : uint
  public enum IsolatedWindowsEnvironmentAvailablePrinters : uint
  public enum IsolatedWindowsEnvironmentClipboardCopyPasteDirections : uint
  public struct IsolatedWindowsEnvironmentContract
  public struct IsolatedWindowsEnvironmentCreateProgress
  public sealed class IsolatedWindowsEnvironmentCreateResult
  public enum IsolatedWindowsEnvironmentCreateStatus
  public sealed class IsolatedWindowsEnvironmentFile
  public static class IsolatedWindowsEnvironmentHost
  public enum IsolatedWindowsEnvironmentHostError
  public sealed class IsolatedWindowsEnvironmentLaunchFileResult
  public enum IsolatedWindowsEnvironmentLaunchFileStatus
  public sealed class IsolatedWindowsEnvironmentOptions
  public static class IsolatedWindowsEnvironmentOwnerRegistration
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationData
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationResult
  public enum IsolatedWindowsEnvironmentOwnerRegistrationStatus
  public sealed class IsolatedWindowsEnvironmentProcess
  public enum IsolatedWindowsEnvironmentProcessState
  public enum IsolatedWindowsEnvironmentProgressState
  public sealed class IsolatedWindowsEnvironmentShareFolderRequestOptions
  public sealed class IsolatedWindowsEnvironmentShareFolderResult
  public enum IsolatedWindowsEnvironmentShareFolderStatus
  public sealed class IsolatedWindowsEnvironmentStartProcessResult
  public enum IsolatedWindowsEnvironmentStartProcessStatus
  public sealed class IsolatedWindowsEnvironmentTelemetryParameters
  public static class IsolatedWindowsHostMessenger
  public delegate void MessageReceivedCallback(Guid receiverId, IVectorView<object> message);
}
namespace Windows.Storage {
  public static class KnownFolders {
    public static IAsyncOperation<StorageFolder> GetFolderAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessForUserAsync(User user, KnownFolderId folderId);
  }
  public enum KnownFoldersAccessStatus
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public sealed class StorageProviderFileTypeInfo
  public sealed class StorageProviderSyncRootInfo {
    IVector<StorageProviderFileTypeInfo> FallbackFileTypeInfo { get; }
  }
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text {
  public enum RichEditMathMode
  public sealed class RichEditTextDocument : ITextDocument {
    void GetMath(out string value);
    void SetMath(string value);
    void SetMathMode(RichEditMathMode mode);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    bool CriticalInputMismatch { get; set; }
    bool TemporaryInputMismatch { get; set; }
    void ApplyApplicationUserModelID(string value);
  }
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}
namespace Windows.UI.Xaml.Controls {
  public class HandwritingView : Control {
    UIElement HostUIElement { get; set; }
    public static DependencyProperty HostUIElementProperty { get; }
    CoreInputDeviceTypes InputDeviceTypes { get; set; }
    bool IsSwitchToKeyboardButtonVisible { get; set; }
    public static DependencyProperty IsSwitchToKeyboardButtonVisibleProperty { get; }
    double MinimumColorDifference { get; set; }
    public static DependencyProperty MinimumColorDifferenceProperty { get; }
    bool PreventAutomaticDismissal { get; set; }
    public static DependencyProperty PreventAutomaticDismissalProperty { get; }
    bool ShouldInjectEnterKey { get; set; }
    public static DependencyProperty ShouldInjectEnterKeyProperty { get; }
    event TypedEventHandler<HandwritingView, HandwritingViewCandidatesChangedEventArgs> CandidatesChanged;
    event TypedEventHandler<HandwritingView, HandwritingViewContentSizeChangingEventArgs> ContentSizeChanging;
    void SelectCandidate(uint index);
    void SetTrayDisplayMode(HandwritingViewTrayDisplayMode displayMode);
  }
  public sealed class HandwritingViewCandidatesChangedEventArgs
  public sealed class HandwritingViewContentSizeChangingEventArgs
  public enum HandwritingViewTrayDisplayMode
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlEventIndex {
    HandwritingView_ContentSizeChanging = 321,
  }
  public enum XamlPropertyIndex {
    HandwritingView_HostUIElement = 2395,
    HandwritingView_IsSwitchToKeyboardButtonVisible = 2393,
    HandwritingView_MinimumColorDifference = 2396,
    HandwritingView_PreventAutomaticDismissal = 2397,
    HandwritingView_ShouldInjectEnterKey = 2398,
  }
}

The post Windows 10 SDK Preview Build 18985 available now! appeared first on Windows Developer Blog.

Tracepoints: Debug with less clutter


Have you ever accidentally shipped a log statement to production? Are you tired of cleaning up log statements while debugging? The tool to solve your problems has been here all along!

Do you use log statements to debug?

Let’s be honest: we have all done it at some point. Whether it be Debug.WriteLine(), console.log(), print(), etc., logging output to the console is a common practice that gives what some might call “immediate feedback”. But what seems like a simple and enjoyable approach to debugging quickly turns into a lot of cleanup work, because the log statements are now littered through your code. After all, no one wants to see your log statements shipped to production.

Do you find code cleanup tedious?

If so, then tracepoints are a great tool you can use in Visual Studio. This feature allows you to log desired information without modifying your code, and it is set in a similar fashion to breakpoints. When you are done debugging, simply click on a tracepoint to remove it.

The solution has been here all along

Tracepoints are not a new feature. In fact, they have existed in Visual Studio since 2005, but we feel that many developers do not know about this capability. In this post, we will go over what tracepoints can do, how to use them, and why they are a feature worth using.
For an even more thorough explanation of tracepoints, see our docs page: https://docs.microsoft.com/en-us/visualstudio/debugger/using-tracepoints?view=vs-2019.

Let’s look at an example

The following program is a for loop with a counter variable increasing by one each time the loop iterates. Let’s say we wanted to print out the value of counter for each iteration of the for loop. One solution is to use a log statement such as Debug.WriteLine(counter) to print out the values. Let’s see what that would look like:
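
A minimal C# version of that program, with the log statement added (the loop bound is illustrative):

using System.Diagnostics;

class Program
{
    static void Main()
    {
        int counter = 0;
        for (int i = 0; i < 10; i++)
        {
            counter++;                  // counter increases by one each iteration
            Debug.WriteLine(counter);   // the log statement we'd have to clean up later
        }
    }
}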

While that certainly accomplishes this simple task, it required us to modify our code and means we must delete the statement later so that it is not shipped to production. You will also need to delete log statements periodically even before shipping, as you add newer ones, so that the Output window in Visual Studio is not cluttered with irrelevant information. Furthermore, there is no conditional logic controlling when these statements print, such as printing the counter variable only when it is an odd number. Adding conditions would require more code, further complicating the debugging process and creating more cleanup for later. We believe there is a better way to handle these situations.

Tracepoints to the rescue

The GIF below demonstrates how to initialize a tracepoint.

Notice that when you add a message in the “Show a message in the Output window” field under the Actions menu, you are not modifying your original code in any way: you do not need to add print statements or functions such as Debug.WriteLine() in the middle of your code just to see information in Visual Studio’s Output window. The message can embed expressions in curly braces, such as {counter}, which are evaluated each time the tracepoint is hit. This gets you the information you want in the Output window without compromising the readability of your code. Furthermore, when you are done debugging, simply click on the tracepoint once to delete it. Simple as that. If you forget to delete a tracepoint, don’t fret about extraneous output showing up in production: tracepoints only exist locally on your machine.

You can add conditions too

What about those cases earlier when we wanted conditions? Let’s say we wanted every other count, or the value of counter during a specific iteration of the for loop. Well, it turns out we can add conditions too, in a similar fashion to conditional breakpoints.


There are three condition types:

  • Conditional Expression: The output message is displayed only under certain conditions, such as “counter >= 5”.
  • Hit Count: The output message is displayed only after the line the tracepoint is set on has executed a pre-specified number of times.
  • Filter: The tracepoint is activated only on specified devices, processes, or threads.

Adding these conditions will not modify your original code, and unlike breakpoints, tracepoints do not stop the program and require you to repeatedly step into or over it (as long as the “Continue code” box under Actions is checked).
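
For contrast, here is what the “every other count” case looks like when the condition has to live in your code instead of in a tracepoint (an illustrative sketch):

using System.Diagnostics;

class Program
{
    static void Main()
    {
        int counter = 0;
        for (int i = 0; i < 10; i++)
        {
            counter++;
            if (counter % 2 == 1)           // the condition is baked into the code...
                Debug.WriteLine(counter);   // ...and is one more statement to clean up later
        }
    }
}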

Tips and tricks

Currently tracepoint messages go to Visual Studio’s Output window. It is easy to lose track of the messages amongst the many other things that get sent to the same window.

  • If you right-click within the Output window, you can turn off classes of messages such as Exception Messages, Step Filtering Messages, Process Exit Messages, etc., by clicking on them. Turning off classes of messages you don’t want makes it easier to focus on your tracepoint output.
  • If your current task requires you keep all the classes of messages on, another trick to make it easier to find your output is to prefix your action’s message with a unique phrase like “AA”. Once you start debugging your program you can use the CTRL-F command in the Output window to search for the prefix you set and it will take you straight to your output message (see image below).

  • To temporarily disable a tracepoint without deleting it, Shift + left-click the tracepoint.
  • To view, disable, and/or delete all the tracepoints and breakpoints in your current file at once, go to Debug > Windows > Breakpoints to access the Breakpoints window.

When logging might be useful

In some cases, a language’s log statement, such as Debug.WriteLine() in C#, may be a better choice than a tracepoint. For example, if you want output that persists in the debugger beyond the current debug session, Debug.WriteLine() might be the right option, since tracepoints do not persist beyond a single (or possibly a few) debug sessions. Another consideration is efficiency: tracepoints are less efficient at debug time, so if they are too slow for your needs, try a log statement instead. Lastly, tracepoints have limitations in what data they can collect because they can only virtually execute function evaluations. Despite these restrictions, we still feel tracepoints are a great tool to have in your debugging toolkit.

Wrapping up

In conclusion, tracepoints are a great way to keep your code clean while debugging. You will not need to modify your original code or remove statements later. If you want conditions, you can add those as well without needing to continuously stop and step through your program. We hope you enjoy using tracepoints and that they streamline your workflow! For more information on tracepoints, please check out our docs page: https://docs.microsoft.com/en-us/visualstudio/debugger/using-tracepoints?view=vs-2019

If you have any feedback, please feel free to reach out to us. We would love to hear from you!

The post Tracepoints: Debug with less clutter appeared first on The Visual Studio Blog.

ML.NET and Model Builder at .NET Conf 2019 (Machine Learning for .NET)


We are excited today to announce updates to Model Builder and improvements in ML.NET. You can learn more in the “What’s new in ML.NET?” session at .NET Conf.

ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS) for .NET developers.

ML.NET offers Model Builder (a simple UI tool) and the ML.NET CLI to make it super easy to build custom ML models using AutoML.

Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom AI into their applications by creating custom machine learning models for common scenarios like Sentiment Analysis, Recommendation, Image Classification, and more!
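
To give a flavor of the API, here is a sketch of training a sentiment classifier with ML.NET; the file path, column layout, and choice of trainer are illustrative assumptions:

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

public class SentimentData
{
    [LoadColumn(0)] public string Text { get; set; }
    [LoadColumn(1)] public bool Label { get; set; }
}

public class SentimentPrediction
{
    [ColumnName("PredictedLabel")] public bool Prediction { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Load a tab-separated training file (the path and layout are placeholders).
        IDataView data = mlContext.Data.LoadFromTextFile<SentimentData>("sentiment.tsv", hasHeader: true);

        // Featurize the text column, then train a binary classifier on the "Label" column.
        var pipeline = mlContext.Transforms.Text.FeaturizeText("Features", nameof(SentimentData.Text))
            .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

        ITransformer model = pipeline.Fit(data);

        // Make a single prediction with the trained model.
        var engine = mlContext.Model.CreatePredictionEngine<SentimentData, SentimentPrediction>(model);
        SentimentPrediction result = engine.Predict(new SentimentData { Text = "I love this!" });
        Console.WriteLine(result.Prediction);
    }
}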

Following are the key highlights:

Model Builder updates

This release of Model Builder adds support for a new scenario and addresses many customer-reported issues.

Model Builder screenshot 1

Feature engineering: In previous versions of Model Builder, after selecting your dataset, either from a file or from SQL Server, you could only choose the column to predict (the Label); every other column in the dataset was automatically used to make the prediction (the Features). If there were columns you didn’t want to include, you had to manipulate your dataset outside of Model Builder and then upload the modified dataset. With this release, you can select which columns to use as Features directly in Model Builder.

Feature engineering in Model Builder

Model consumption made easy!: In previous versions of Model Builder, there were numerous steps you had to take after Model Builder’s code and model generation in order to consume the trained model in your app, including adding a reference to the generated library project, setting the model’s Copy to Output Directory property to “Copy if newer,” and adding the Microsoft.ML NuGet package to your app.

This has all been simplified and automated, so now all you have to do is copy + paste the code from the Next Steps in Model Builder, and then you can run your app and start making predictions!
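
The pasted code boils down to a few lines like these (ModelInput, ModelOutput, and ConsumeModel are the classes Model Builder generates; the property names here are illustrative):

// ModelInput, ModelOutput, and ConsumeModel come from the Model Builder-generated project.
var input = new ModelInput { Text = "This was a great experience!" };
ModelOutput prediction = ConsumeModel.Predict(input);
Console.WriteLine(prediction.Prediction);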

Address customer feedback: This release also addresses many customer-reported issues around installation errors, usability, stability, and more. Learn more here.

ML.NET updates

This is a short summary of the features and enhancements added to ML.NET over the last few months.

Deep learning with ML.NET

Documentation updates

We have been working hard to add more documentation across tutorials, how-to guides, and more for Model Builder, the CLI, and the ML.NET framework. We have also simplified the table of contents for the ML.NET docs so that you can more easily discover the content.

Documentation updates

New learn series for ML.NET

To help users get started with the basics of Machine Learning and ML.NET, we have created a set of learning videos. Please watch the series here.

ML.NET video series

Broad range of samples to learn from

We have added many scenarios for a variety of use cases with Machine Learning. You can learn and customize these samples for your scenario. Please find more samples on the ML.NET Samples GitHub repo.

ML.NET Samples

Try ML.NET and Model Builder today!

We are excited to release these updates for you, and we look forward to seeing what you will build with ML.NET. If you have any questions or feedback, you can ask them here for ML.NET and Model Builder.

Thanks and happy coding with ML.NET!

The ML.NET Team.

The post ML.NET and Model Builder at .NET Conf 2019 (Machine Learning for .NET) appeared first on .NET Blog.

New disk support capabilities in Azure Storage Explorer


The release of Storage Explorer 1.10.0 brings many exciting updates and new features that we hope can help you be more productive and efficient when working with your Azure Storage Accounts. If you’ve never used Storage Explorer before, make sure to head to our product page, and download it for your favorite operating system. In this post, we’ll go over the newly added support for virtual machine (VM) disk management that was added in the 1.10.0 release.

Easily backup and restore VMs with disk support

Managed disks simplify Azure VM creation and maintenance compared with managing page blobs, blob containers, and storage accounts directly. Today, Azure managed disks are the default storage option for Azure IaaS VMs. Recently, we introduced the Direct Upload API, which allows you to upload data from on-premises without staging it in a storage account. Azure Storage Explorer further simplifies these tasks by providing performant upload and download capabilities for creating and accessing managed disks. Here are two example scenarios showing how the new features benefit customers like you:

We learned it is common to migrate VMs from on-premises to Azure. With Storage Explorer, you can conveniently perform this task by following the steps in the documentation.

Figure 1: Upload a VHD using Storage Explorer

A gif showing an upload of a VHD using Storage Explorer.

Backup and restore operations are also very common practices in customers’ disaster recovery strategies. A typical scenario is rolling back VMs to the last known good version by restoring disks from snapshots after a regional outage or an application upgrade failure.

The workflow is now simplified with managed disks support in Storage Explorer. In the 1.10.0 release you can snapshot a disk just like any other blob to back up the current version. In upcoming releases, we will fully support creating disks from snapshots to complete the end-to-end scenario.

Figure 2: Capturing snapshots of VHDs from an Azure VM

A gif showing the capture of snapshots of VHDs from an Azure VM.

Next steps

Download Storage Explorer 1.10.0 today and start efficiently managing your VMs and disks. If you have any feedback, please make sure to open a new issue on our GitHub repo. If you are experiencing difficulties using the product, please open a support ticket following these instructions.
