
Manage Azure HDInsight clusters using .NET, Python, or Java


We are pleased to announce the general availability of the new Azure HDInsight management SDKs for .NET, Python, and Java.

Highlights of this release

  • More languages: In addition to .NET, you can now easily manage your HDInsight clusters using Python or Java.
  • Manage HDInsight clusters: The SDK provides several useful operations to manage your HDInsight clusters, including the ability to create clusters, delete clusters, scale clusters, list existing clusters, get cluster details, update cluster tags, execute script actions, and more (see the sketch after this list).
  • Monitor HDInsight clusters: Manage your HDInsight cluster's integration with Azure Monitor logs. HDInsight clusters can emit metrics into queryable tables in a Log Analytics workspace so you can monitor all of your clusters in one place. Use the SDK to enable, disable, or view the status of Azure Monitor Logs integration on a cluster.
  • Script actions: Use the SDK to execute, delete, list, and view details for script actions on your HDInsight clusters.  Script actions allow you to run scripts as Ambari operations to configure and customize your cluster.
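
As a rough illustration of what these operations look like in .NET, here is a minimal sketch. It assumes the Microsoft.Azure.Management.HDInsight package and a pre-acquired service principal token; the subscription, resource group, and cluster names are placeholders, and exact overload shapes may vary by SDK version:

using Microsoft.Azure.Management.HDInsight;
using Microsoft.Azure.Management.HDInsight.Models;
using Microsoft.Rest;

class ClusterAdmin
{
    static void Main()
    {
        // Token acquisition (e.g., via a service principal) is omitted here.
        TokenCredentials credentials = GetTokenCredentials();
        var client = new HDInsightManagementClient(credentials)
        {
            SubscriptionId = "<subscription-id>"
        };

        // List existing clusters in the subscription.
        foreach (Cluster cluster in client.Clusters.List())
        {
            System.Console.WriteLine(cluster.Name);
        }

        // Scale a cluster's worker node count.
        client.Clusters.Resize("<resource-group>", "<cluster-name>", 4);
    }

    static TokenCredentials GetTokenCredentials() =>
        throw new System.NotImplementedException("token acquisition omitted");
}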

Getting started

You can learn how to get started with the HDInsight management SDK in the language of your choice in the documentation for each language.

Reference documentation

We also provide reference documentation that you can use to learn about all available functions in the HDInsight Management SDK.

Try HDInsight now

We hope you will take full advantage of the HDInsight management SDKs for .NET, Python, and Java, and we are excited to see what you will build with Azure HDInsight. Read the developer guide and follow the quickstart guide to learn more about implementing your pipelines and architectures on Azure HDInsight. Stay up to date on the latest Azure HDInsight news and features by following #AzureHDInsight and @AzureHDInsight on Twitter. For questions and feedback, reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 36 public regions and Azure Government and National Clouds. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.


Put IoT in action to overcome public building safety challenges


Transparency is an important part of public safety projects. Officials must know what is happening in real time, as well as be able to collaborate with others involved in the project. To create a collaborative and transparent environment, security officials are reinventing their approach to public safety, especially regarding the protection of public buildings. While keeping public buildings safe, smart, and secure is a top priority, it’s also a constant challenge.

Recent advancements in cloud computing, intelligent edge, artificial intelligence, and data analytics create many new opportunities for Internet of Things (IoT) devices to improve public safety. However, these technologies must work together as a seamless framework, and device makers often run into challenges with hardware, software, networks, security, and platform management.

In this post, we’ll explore how IoT initiatives are solving these top public safety concerns. To learn more, be sure to attend Microsoft’s upcoming IoT in Action webinar, IoT and the New Safety Net.


Safe: Using IoT for visitor lifecycle management

With cyber criminals able to clone a radio frequency identification (RFID) badge in seconds, security officials are increasingly turning to IoT technology to prevent unauthorized access to government and other public buildings. Instead of viewing physical security and cybersecurity as separate issues, SoloInsight, a Microsoft partner, uses a multi-layered, behavior-based approach that secures both physical and cyber assets, including check-in, PACS, elevators, parking, loading docks, machines, workstations, applications, websites, documents, and payments.

By using SoloInsight’s Cloudgate, built on Microsoft Azure, organizations can use self-service kiosks in low-traffic areas to provide a higher level of security than previously possible. A supply chain management and logistics solutions company uses the platform to capture a picture of each employee entering the building. If an employee’s face is not authenticated, they cannot enter the building. A traditional system would allow entry if the person had possession of an easily cloned proxy card.

Smart: Improving operations and creating a positive user experience

Because IoT devices collect a wide range of data, security officials can use the information to gain real-time insights and protect data. Self-serve kiosks can remember previous visitors and employees and immediately grant access to the building, which increases satisfaction. Additionally, IoT visitor management systems can deactivate access to the network and data when a person physically leaves the building. By using systems that connect both the physical and logical components, you can ensure unauthorized personnel are not using someone else’s credentials.

Building management can also use data to make decisions that drive efficiency, such as predicting visitor and employee traffic patterns. For example, housekeeping and maintenance work can be scheduled at times when most employees in a section of the building are not at work. When devices are part of an overall IoT platform, such as Microsoft Azure, data collected from devices can easily be stored in the cloud, used with artificial intelligence to predict what will likely happen in the future, and off-loaded from the cloud to IoT devices.

Secure: Collecting and protecting data

As IoT devices receive and store information, they often handle operational and personally identifiable information (PII) data. This makes it essential for platforms to securely store and manage all data collected. The Microsoft Azure Sphere platform brings the promise of a secured, connected future to microcontroller unit (MCU) devices everywhere and includes three components that work together to lock down device security: the Azure Sphere MCU, the built-in Azure Sphere OS, and the turnkey cloud security service.

One of the biggest challenges with IoT technology is the number of devices and access points, which often include mobile devices. Security officials need the ability to monitor the health of all IoT devices in real time and remove compromised devices from the network. Given the sensitive nature of PII, security officials find that they require a platform with strict privacy controls, authorization levels, and compliance tools. With IoT devices built on a secure platform that uses the latest technology, public buildings can transform into smart buildings while continuing to provide both physical and logical safety.

Coming April 25, 2019: Make public safety collaborative with IoT

Discover how collaborative IoT can improve public safety by registering for the IoT in Action webinar, IoT and the New Safety Net. Get insights from industry experts and Microsoft partner SoloInsight around how transparent frameworks create secure buildings.

Microsoft open sources Data Accelerator, an easy-to-configure pipeline for streaming at scale


This blog post was co-authored by Dinesh Chandnani, Principal Group Engineering Manager, Microsoft.

Standing up a data pipeline for the first time can be a challenge, and decisions you make at the start of a project can limit your choices long after the initial deployment has been rolled out. Often what is needed is a playground in which to learn about and evaluate the available options and capabilities in the solution space. To that end, we are excited to announce that an internal Microsoft project known as Data Accelerator is now being open sourced.

Data Accelerator started in 2017 as a large-scale data processing project in Microsoft’s Developer Division that eventually landed on streaming on Apache Spark for reasons of scale and speed. The pipeline today operates at Microsoft scale.

Some of the reasons we think it will have value to the wider community:

  • Fast Dev-Test loop: Events can be sampled to support local execution of queries, short circuiting the wait and delay of submitting your job to the cluster for it to fail seven minutes later due to a misplaced semicolon.
  • One-box deployment for local testing and discovery: Learn before you commit to a prototype.
  • Designer-based rules and query building: Stand up an end-to-end ETL pipeline without writing any code, or dig right into the details.
  • Time-windowing, reference data, and output capabilities added to SQL-Spark syntax: Keyword extensions to SQL-Spark syntax avoid the complexity and error-prone management of these common tasks.

The Developer Division of Microsoft is using Data Accelerator in production every day and will continue to make improvements in the toolchain over time, but we recognize the toolset could do many more things given the need. We hope that by opening this project up, some of you will find Data Accelerator even more helpful.

To learn more about the open sourcing of Data Accelerator visit the announcement on the Open Source blog.

Start developing on Windows 10 May 2019 Update today


We’re excited to announce the Windows 10 May 2019 Update (build 18362) in the Release Preview Windows Insider ring. You may also know this as Windows 10, version 1903. The Windows 10 SDK for the May 2019 Update is now available as well with a go-live license today!

New APIs and Features for developers

Every update of Windows 10 is loaded with new APIs, but don’t worry: Windows Dev Center has a full list of what’s new for developers. Here are a few of my favorites.

  1. XAML Islands v1: This first version comes with a lot of improvements over the preview release in 1809. Some highlights: airspace issues in popups are resolved, the XAML content matches the host’s DPI awareness, Narrator works with the XAML content, islands are allowed in multiple top-level windows on one thread, MRT localization and resource loading are supported, and keyboard accelerators work across frameworks. Windows Community Toolkit v6, being released in June, will include wrappers for WPF and WinForms.
  2. Windows Subsystem for Linux: While the team did a great 1903 recap blog, here is a recap on how you can now access Linux files from within Windows, and there are better command line options!
    1. You can now use wslconfig.exe commands in wsl.exe
    2. There are some new commands in wsl.exe (see the examples after this list):
      • --user, -u : Run a distro as a specified user
      • --import : Import a distro into WSL from a tarball
      • --export : Export a distro from WSL to a tarball
      • --terminate, -t : Terminate a running distribution
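
For example, these commands export a (hypothetical) Ubuntu distro to a tarball, import it under a new name, run the default distro as root, and then terminate the imported distro; the paths and distro names are placeholders:

wsl --export Ubuntu C:\backups\ubuntu.tar
wsl --import UbuntuCopy C:\WSL\UbuntuCopy C:\backups\ubuntu.tar
wsl --user root
wsl --terminate UbuntuCopy
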
  3. Windows UI Library 2.1: WinUI is open source and everyone can check out the WinUI GitHub repo to file issues, discuss new features, and even contribute code. In WinUI 2.1, we’ve added new controls such as an animated visual player, menu bar enhancements, teaching tip, and item repeater, plus much more. We’ve also addressed many accessibility, visual, and functional issues reported by developers. We encourage everyone to use WinUI in their UWP apps – it’s the best way to get the latest Fluent design and controls, and it’s backward-compatible to the Windows 10 Anniversary Update.

Update your dev environment in two simple steps today

The first step is to update your system to the Windows 10 May 2019 Update by using the Release Preview ring. The Insider team has a great blog post with the exact details on getting onto the Release Preview ring. Once you do that, just go into Visual Studio 2017 or 2019 and grab the new SDK and you’re good to go. Once 1903 reaches general availability, the SDK will become the default SDK inside Visual Studio.

Today with Windows Insider Release Preview for Windows 10 1903 Update:

  1. Run the installer or go to https://www.visualstudio.com/downloads/ and download it
  2. Go to “Individual Components”
  3. Go to “SDKs, libraries, and frameworks” section
  4. Check “Windows 10 SDK (10.0.18362)”
  5. Click “Install”

Screen Preview of the new Windows 10 May 2019 Update

When Windows 10 May 2019 is fully released:

  1. Run the Visual Studio installer or go to https://www.visualstudio.com/downloads/ and download it
  2. Select “Universal Windows Platform development” under Workloads, Windows 10 SDK (10.0.18362) will be included by default
  3. Click “Install”

More useful tips

Do you want tools for C++ desktop or game development for UWP? Be sure one of these two is selected:

  • C++ Universal Windows Platform tools in the UWP Workload section
  • Desktop development with C++ Workload and the Windows SDK 10 (10.0.18362)
  • If you want the Universal Windows Platform tools, select the Universal Windows Platform tools workload

Once your system is updated and your app is recompiled and tested, submit it to Dev Center.

Your take on the Windows 10 May 2019 Update

Tell us what crazy things you’ve been working on with the new update by tweeting @ClintRutkas or @WindowsDev.

Known issue that could affect you

There is a known issue that affects the following scenario when upgrading to or performing a clean installation of Windows 10, version 1903. When input–output memory management unit (IOMMU) is running on VMware Hypervisor, any guest client or server virtual machine (VM) that uses IOMMU may stop working. Typical use scenarios include when Virtualization-based Security (VBS) and Windows Credential Guard are enabled for a guest VM.

We are working to provide a solution as soon as possible. You must integrate the solution into the customer image before deployment.


Announcing the .NET Framework 4.8


We are thrilled to announce the release of the .NET Framework 4.8 today. It’s included in the Windows 10 May 2019 Update. .NET Framework 4.8 is also available on Windows 7+ and Windows Server 2008 R2+.

You can install .NET Framework 4.8 from our .NET download site. For building applications that target .NET Framework 4.8, you can download the .NET Framework 4.8 Developer Pack. If you just want the .NET Framework 4.8 runtime, use the runtime installer from the same download site.

The .NET Framework 4.8 includes an updated toolset as well as improvements in several areas:

  • [Runtime] JIT and NGEN Improvements
  • [BCL] Updated ZLib
  • [BCL] Reducing FIPS Impact on Cryptography
  • [WinForms] Accessibility Enhancements
  • [WCF] Service Behavior Enhancements
  • [WPF] High DPI Enhancements, UIAutomation Improvements

You can see the complete list of improvements in the .NET Framework 4.8 release notes. The reference sources have also been updated for .NET 4.8.

Supported Windows Versions

Windows Client versions: Windows 10 version 1903, Windows 10 version 1809, Windows 10 version 1803, Windows 10 version 1709, Windows 10 version 1703, Windows 10 version 1607, Windows 8.1, Windows 7 SP1
Windows Server versions: Windows Server 2019, Windows Server version 1803, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2 SP1

New Features in .NET Framework 4.8

Runtime – JIT improvements

The JIT in .NET 4.8 is based on .NET Core 2.1.  All bug fixes and many code generation-based performance optimizations from .NET Core 2.1 are now available in the .NET Framework.

Runtime – NGEN improvements

NGEN images in the .NET Framework no longer contain writable & executable sections. This reduces the surface area available to attacks that attempt to execute arbitrary code by modifying memory that will be executed.

While there will still be writable & executable data in memory at runtime, this change removes those mapped from NGEN images, allowing them to run in restricted environments that don’t permit executable/writable sections in images.

Runtime – Antimalware Scanning for All Assemblies

In previous versions of .NET Framework, Windows Defender or third-party antimalware software would automatically scan all assemblies loaded from disk for malware. However, assemblies loaded from elsewhere, such as by using Assembly.Load(byte[]), would not be scanned and could potentially carry viruses undetected.

.NET Framework 4.8 on Windows 10 triggers scans for those assemblies by Windows Defender and many other antimalware solutions that implement the Antimalware Scan Interface. We expect that this will make it harder for malware to disguise itself in .NET programs.
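
For context, this is the load path in question – a minimal sketch with a hypothetical plugin path; on .NET Framework 4.8 this byte-array load is scanned before the code runs:

// Loading from a byte array used to bypass antimalware scanning;
// on .NET Framework 4.8 (Windows 10) it is scanned via AMSI first.
byte[] rawAssembly = System.IO.File.ReadAllBytes(@"C:\plugins\SomePlugin.dll"); // hypothetical path
System.Reflection.Assembly plugin = System.Reflection.Assembly.Load(rawAssembly);
System.Console.WriteLine(plugin.FullName);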

BCL – Updated ZLib

Starting with .NET Framework 4.5, we used the native version of ZLib (an external compression library) from http://zlib.net in clrcompression.dll in order to provide an implementation of the deflate algorithm. In .NET Framework 4.8 we updated clrcompression.dll to use ZLib version 1.2.11, which includes several key improvements and fixes.

BCL – Reducing FIPS Impact on Cryptography

.NET Framework 2.0+ have cryptographic provider classes such as SHA256Managed, which throw a CryptographicException when the system cryptographic libraries are configured in “FIPS mode”. These exceptions are thrown because the managed versions have not undergone FIPS (Federal Information Processing Standards) 140-2 certification (JIT and NGEN image generation would both invalidate the certificate), unlike the system cryptographic libraries. Few developers have their development machines in “FIPS mode”, which results in these exceptions being raised in production (or on customer systems). The “FIPS mode” setting was also used by .NET Framework to block cryptographic algorithms which were not considered an approved algorithm by the FIPS rules.

For applications built for .NET Framework 4.8, these exceptions will no longer be thrown (by default). Instead, the SHA256Managed class (and the other managed cryptography classes) will redirect the cryptographic operations to a system cryptography library. This policy change effectively removes a potentially confusing difference between developer environments and the production environments in which the code runs and makes native components and managed components operate under the same cryptographic policy.

Applications targeting .NET Framework 4.8 will automatically switch to the newer, relaxed policy and will no longer see exceptions being thrown from MD5Cng, MD5CryptoServiceProvider, RC2CryptoServiceProvider, RIPEMD160Managed, and RijndaelManaged when in “FIPS mode”. Applications which depend on the exceptions from previous versions can return to the previous behavior by setting the AppContext switch “Switch.System.Security.Cryptography.UseLegacyFipsThrow” to “true”.
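
A sketch of that opt-out in App.config, using the AppContextSwitchOverrides element with the switch named above:

<configuration>
  <runtime>
    <!-- Restore the pre-4.8 behavior of throwing in "FIPS mode" -->
    <AppContextSwitchOverrides value="Switch.System.Security.Cryptography.UseLegacyFipsThrow=true" />
  </runtime>
</configuration>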

Windows Forms – Accessibility Enhancements

In .NET Framework 4.8, WinForms adds three new features to enable developers to write more accessible applications. The added features are intended to make communication of application data to visually impaired users more robust: we’ve added support for ToolTips when a user navigates via the keyboard, and we’ve added LiveRegions and Notification Events to many commonly used controls.

To enable these features your application needs to have the following AppContextSwitches enabled in the App.config file:
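
The snippet from the original post is not reproduced here; the following is a reconstruction, assuming the switches follow the Switch.UseLegacyAccessibilityFeatures naming pattern referenced later in this post:

<configuration>
  <runtime>
    <!-- Assumed switch names; opt in by setting the legacy switches to false -->
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures=false;Switch.UseLegacyAccessibilityFeatures.2=false;Switch.UseLegacyAccessibilityFeatures.3=false" />
  </runtime>
</configuration>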

Windows Forms – UIA LiveRegions Support in Labels and StatusStrips

UIA Live Regions allow application developers to notify screen readers of a text change on a control that is located apart from the location where the user is working. An example of where this comes in handy is a StatusStrip that shows a connection status: if the connection is dropped and the status changes, the developer might want to notify the screen reader of this change. Windows Forms has implemented UIA LiveRegions for both the Label control and the StatusStrip control.

Example use of the LiveRegion in a Label Control:
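
A minimal sketch (the LiveSetting property and AutomationLiveSetting enum are assumptions based on the WinForms accessibility APIs; label1 is a Label placed on the form):

public partial class Form1 : System.Windows.Forms.Form
{
    public Form1()
    {
        InitializeComponent();
        // Mark the label as a polite live region so screen readers announce
        // text changes even when focus is elsewhere.
        label1.LiveSetting = System.Windows.Forms.Automation.AutomationLiveSetting.Polite;
    }

    private void ReportReady()
    {
        label1.Text = "Ready"; // Narrator announces the new text
    }
}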

Narrator will now announce "Ready" regardless of where the user is interacting with the application.
You can also implement your UserControl as a live region:
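
A sketch, assuming the IAutomationLiveRegion interface and the AccessibleObject.RaiseLiveRegionChanged method:

public class StatusUserControl : System.Windows.Forms.UserControl,
    System.Windows.Forms.Automation.IAutomationLiveRegion
{
    public System.Windows.Forms.Automation.AutomationLiveSetting LiveSetting { get; set; }
        = System.Windows.Forms.Automation.AutomationLiveSetting.Polite;

    protected override void OnTextChanged(System.EventArgs e)
    {
        base.OnTextChanged(e);
        // Notify UIA clients that the live region's content changed.
        AccessibilityObject.RaiseLiveRegionChanged();
    }
}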

Windows Forms – UIA Notification Events

In Windows 10 Fall Creators Update Windows introduced a new method of having an application notify Narrator that content has changed, and Narrator should announce the change. The UIA Notification event provides a way for your app to raise a UIA event which leads to Narrator simply making an announcement based on text you supply with the event, without the need to have a corresponding control in the UI. In some scenarios, this could be a straightforward way to dramatically improve the accessibility of your app.  For more information about UIA Notification Events, see this blog post.

An example of where a Notification might come in handy is notifying the user of the progress of a process that may take some time.

An example of raising the Notification event:
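
A sketch using AccessibleObject.RaiseAutomationNotification inside a Form; the button handler and message text are illustrative:

private void startButton_Click(object sender, System.EventArgs e)
{
    // Ask Narrator to announce the supplied text; no dedicated control needed.
    AccessibilityObject.RaiseAutomationNotification(
        System.Windows.Forms.Automation.AutomationNotificationKind.Other,
        System.Windows.Forms.Automation.AutomationNotificationProcessing.ImportantAll,
        "Scan started. This may take several minutes.");
}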

Windows Forms – ToolTips on keyboard access

Currently a control tooltip can only be triggered to pop up by moving a mouse pointer into the control. This new feature enables a keyboard user to trigger a control’s tooltip by focusing the control using a Tab key or arrow keys with or without modifier keys. This particular accessibility enhancement requires an additional AppContextSwitch as seen in the following example:

  1. Create a new WinForms application
  2. Add the following XML to the App.config file
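
The XML from the original post is omitted here; a reconstruction, under the assumption that the WinForms keyboard-ToolTip switch is Switch.System.Windows.Forms.UseLegacyToolTipDisplay (the switch name is an assumption):

<configuration>
  <runtime>
    <!-- The tooltip switch name is an assumption -->
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures=false;Switch.UseLegacyAccessibilityFeatures.2=false;Switch.UseLegacyAccessibilityFeatures.3=false;Switch.System.Windows.Forms.UseLegacyToolTipDisplay=false" />
  </runtime>
</configuration>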

  3. Add several buttons and a ToolTip control to the application’s form.
  4. Set tooltips for the buttons.
  5. Run the application and navigate between the buttons using a keyboard.

Windows Forms – DataGridView control accessible hierarchy changes

Currently the accessible hierarchy (UI Automation tree) shows the editing box tree element as a child of the currently edited cell, but not as a root child element of the DataGridView. The hierarchy tree update can be observed using the Inspect tool.

WCF – ServiceHealthBehavior

Health endpoints have many benefits and are widely used by orchestration tools to manage the service based on the service health status. Health checks can also be used by monitoring tools to track and alert on the availability and performance of the service, where they serve as early problem indicators.

ServiceHealthBehavior is a WCF service behavior that extends IServiceBehavior.  When added to the ServiceDescription.Behaviors collection, it will enable the following:

  • Return service health status with HTTP response codes: One can specify in the query string the HTTP status code for a HTTP/GET health probe request.
  • Publication of service health: Service-specific details, including service state, throttle counts, and capacity, are displayed using an HTTP/GET request with the “?health” query string. Knowing and easily having access to the information displayed is important when troubleshooting a misbehaving WCF service.

Config ServiceHealthBehavior:

There are two ways to expose the health endpoint and publish WCF service health information: by using code or by using a configuration file.

  1. Enable health endpoint using code 
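
A sketch of the code-based approach (the service type and address are hypothetical):

var host = new System.ServiceModel.ServiceHost(typeof(Service1),
    new System.Uri("https://contoso:81/Service1"));

// Add ServiceHealthBehavior to the service description if not present.
var healthBehavior = host.Description.Behaviors
    .Find<System.ServiceModel.Description.ServiceHealthBehavior>();
if (healthBehavior == null)
{
    healthBehavior = new System.ServiceModel.Description.ServiceHealthBehavior();
    host.Description.Behaviors.Add(healthBehavior);
}
host.Open();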

  2. Enable health endpoint using config
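
And a sketch of the equivalent configuration (the serviceHealth element name is an assumption):

<behaviors>
  <serviceBehaviors>
    <behavior name="healthServiceBehavior">
      <!-- Element name is an assumption -->
      <serviceHealth httpsGetEnabled="true" />
    </behavior>
  </serviceBehaviors>
</behaviors>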


Return service health status with HTTP response codes:

Health status can be queried by query parameters (OnServiceFailure, OnDispatcherFailure, OnListenerFailure, OnThrottlePercentExceeded). HTTP response code (200 – 599) can be specified for each query parameter. If the HTTP response code is omitted for a query parameter, a 503 HTTP response code is used by default.

Query parameters and examples:

  1. OnServiceFailure:
  • Example: by querying https://contoso:81/Service1?health&OnServiceFailure=450, a 450 HTTP response status code is returned when ServiceHost.State is greater than CommunicationState.Opened.
  2. OnDispatcherFailure:
  • Example: by querying https://contoso:81/Service1?health&OnDispatcherFailure=455, a 455 HTTP response status code is returned when the state of any of the channel dispatchers is greater than CommunicationState.Opened.
  3. OnListenerFailure:
  • Example: by querying https://contoso:81/Service1?health&OnListenerFailure=465, a 465 HTTP response status code is returned when the state of any of the channel listeners is greater than CommunicationState.Opened.
  4. OnThrottlePercentExceeded: Specifies the percentage {1 – 100} that triggers the response and its HTTP response code {200 – 599}.
  • Example: by querying https://contoso:81/Service1?health&OnThrottlePercentExceeded=70:350,95:500, a 500 HTTP response code is returned when the throttle percentage is equal to or larger than 95%; 350 is returned when the percentage is equal to or larger than 70% and less than 95%; otherwise, 200 is returned.

Publication of service health:

After enabling the health endpoint, the service health status can be displayed either in HTML (by specifying the query string https://contoso:81/Service1?health) or in XML (by specifying the query string https://contoso:81/Service1?health&Xml). https://contoso:81/Service1?health&NoContent returns an empty HTML page.

Note:

It’s best practice to always limit access to the service health endpoint. You can restrict access by using the following mechanisms:

  1. Use a different port for the health endpoint than what’s used for the other services, and use a firewall rule to control access.
  2. Add the desirable authentication and authorization to the health endpoint binding.

WPF – Screen narrators no longer announce elements with Collapsed or Hidden visibility

Elements with Collapsed or Hidden visibility are no longer announced by screen readers. User interfaces containing elements with a Visibility of Collapsed or Hidden can be misrepresented by screen readers if such elements are announced to the user. In .NET Framework 4.8, WPF no longer includes Collapsed or Hidden elements in the Control View of the UIAutomation tree, so screen readers can no longer announce these elements.

WPF – SelectionTextBrush Property for use with Non-Adorner Based Text Selection

In .NET Framework 4.7.2, WPF added the ability to draw TextBox and PasswordBox text selection without using the adorner layer (see here). The foreground color of the selected text in this scenario was dictated by SystemColors.HighlightTextBrush.

In the .NET Framework 4.8 we are adding a new property, SelectionTextBrush, that allows developers to select the specific brush for the selected text when using non-adorner based text selection.

This property works only on TextBoxBase derived controls and PasswordBox in WPF applications with non-adorner based text selection enabled. It does not work on RichTextBox. If non-adorner based text selection is not enabled, this property is ignored.

To use this property, simply add it to your XAML code and use the appropriate brush or binding.
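
For example (an illustrative TextBox; SelectionOpacity is optional):

<TextBox SelectionBrush="Red"
         SelectionTextBrush="White"
         SelectionOpacity="0.5"
         Text="Selected text will draw white on red." />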

The resulting text selection will look like this:

You can combine the use of SelectionBrush and SelectionTextBrush to generate any color combination of background and foreground that you deem appropriate.

WPF – High DPI Enhancements

WPF has added support for Per-Monitor V2 DPI Awareness and Mixed-Mode DPI scaling in .NET 4.8. Additional information about these Windows concepts is available here.

The latest developer guide for per-monitor application development in WPF states that only pure-WPF applications are expected to work seamlessly in a high-DPI WPF application, and that hosted HWNDs and Windows Forms controls are not fully supported.

.NET 4.8 improves support for hosted HWNDs and Windows Forms interoperation in high-DPI WPF applications on platforms that support Mixed-Mode DPI scaling (Windows 10 v1803). When hosted HWNDs or Windows Forms controls are created as Mixed-Mode DPI-scaled windows (as described in the "Mixed-Mode DPI Scaling and DPI-aware APIs" documentation, by calling the SetThreadDpiHostingBehavior and SetThreadDpiAwarenessContext APIs), it will be possible to host such content in a Per-Monitor V2 WPF application and have it be sized and scaled appropriately. Such hosted content will not be rendered at the native DPI – instead, the OS will scale the hosted content to the appropriate size.

The support for Per-Monitor V2 DPI awareness mode also allows WPF controls to be hosted (i.e., parented) under a native window in a high-DPI application. Per-Monitor V2 DPI awareness support is available on Windows 10 v1607 (Anniversary Update). Windows adds support for child HWNDs to receive DPI change notifications when Per-Monitor V2 DPI awareness mode is enabled via the application manifest.

This support is leveraged by WPF to ensure that controls hosted under a native window can respond to DPI changes and update themselves. For example, a WPF control hosted in a Windows Forms or Win32 application that is manifested as Per-Monitor V2 will now be able to respond correctly to DPI changes and update itself.

Note that Windows supports Mixed-Mode DPI scaling on Windows 10 v1803, whereas Per-Monitor V2 is supported on v1607 onwards.

To try out these features, the following application manifest and AppContext flags must be enabled:

  1. Enable Per-Monitor DPI in your application
      • Turn on Per-Monitor V2 in your app.manifest
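
A sketch of the relevant app.manifest fragment (this is the standard Per-Monitor V2 declaration):

<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <!-- Fallback for pre-1607 systems; Per-Monitor V2 where supported -->
    <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
    <dpiAwareness xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">PerMonitorV2</dpiAwareness>
  </windowsSettings>
</application>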

  2. Turn on High DPI support in WPF
    • Target .NET Framework 4.6.2 or greater, and
  3. Set the AppContext switch in your App.config
    • Set Switch.System.Windows.DoNotUsePresentationDpiCapabilityTier2OrGreater=false in App.config to enable the Per-Monitor V2 and Mixed-Mode DPI support introduced in .NET 4.8.

The runtime section in the final App.Config might look like this:
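
A reconstruction, using the switch named above:

<runtime>
  <AppContextSwitchOverrides value="Switch.System.Windows.DoNotUsePresentationDpiCapabilityTier2OrGreater=false" />
</runtime>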

AppContext switches can also be set in registry. You can refer to the AppContext Class for additional documentation.

WPF – Support for UIAutomation ControllerFor property

UIAutomation’s ControllerFor property returns an array of automation elements that are manipulated by the automation element that supports this property. This property is commonly used for Auto-suggest accessibility. ControllerFor is used when an automation element affects one or more segments of the application UI or the desktop. Otherwise, it is hard to associate the impact of the control operation with UI elements. This feature adds the ability for controls to provide a value for ControllerFor property.

A new virtual method has been added to AutomationPeer:

To provide a value for the ControllerFor property, simply override this method and return a list of AutomationPeers for the controls being manipulated by this AutomationPeer:
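
A sketch, assuming the new virtual is named GetControlledPeersCore; the auto-suggest owner control and its results list are hypothetical:

using System.Collections.Generic;
using System.Windows;
using System.Windows.Automation.Peers;
using System.Windows.Controls;

public class AutoSuggestBoxAutomationPeer : FrameworkElementAutomationPeer
{
    private readonly ListBox _resultsList; // the list this control manipulates

    public AutoSuggestBoxAutomationPeer(FrameworkElement owner, ListBox resultsList)
        : base(owner)
    {
        _resultsList = resultsList;
    }

    // UIA exposes the returned peers through the ControllerFor property.
    protected override IList<AutomationPeer> GetControlledPeersCore()
    {
        return new List<AutomationPeer>
        {
            UIElementAutomationPeer.CreatePeerForElement(_resultsList)
        };
    }
}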

WPF – Tooltips on keyboard access

Currently tooltips only display when a user hovers the mouse cursor over a control. In .NET Framework 4.8, WPF is adding a feature that enables tooltips to show on keyboard focus, as well as via a keyboard shortcut.

To enable this feature, an application needs to target .NET Framework 4.8 or opt in via the AppContext switches "Switch.UseLegacyAccessibilityFeatures.3" and "Switch.UseLegacyToolTipDisplay" (set to false).

Sample App.config file:
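
A reconstruction, setting the two switches named above to false:

<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures.3=false;Switch.UseLegacyToolTipDisplay=false" />
  </runtime>
</configuration>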

Once enabled, all controls containing a tooltip will start to display it once the control receives keyboard focus. The tooltip can be dismissed over time or when keyboard focus changes. Users can also dismiss the tooltip manually via a new keyboard shortcut Ctrl + Shift + F10. Once the tooltip has been dismissed it can be displayed again via the same keyboard shortcut.

Note: RibbonToolTips on Ribbon controls won’t show on keyboard focus – they will only show via the keyboard shortcut.

WPF – Added Support for SizeOfSet and PositionInSet UIAutomation properties

Windows 10 introduced new UIAutomation properties SizeOfSet and PositionInSet which are used by applications to describe the count of items in a set. UIAutomation client applications such as screen readers can then query an application for these properties and announce an accurate representation of the application’s UI.

This feature adds support for WPF applications to expose these two properties to UIAutomation. This can be accomplished in two ways:

      1. DependencyProperties 

New DependencyProperties SizeOfSet and PositionInSet have been added to the System.Windows.Automation.AutomationProperties namespace. A developer can set their values via XAML:
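
For example (an illustrative button in a three-item set):

<Button AutomationProperties.SizeOfSet="3"
        AutomationProperties.PositionInSet="1"
        Content="First of three" />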

    2. AutomationPeer virtual methods 

Virtual methods GetSizeOfSetCore and GetPositionInSetCore have also been added to the AutomationPeer class. A developer can provide values for SizeOfSet and PositionInSet by overriding these methods:
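
A sketch with illustrative hard-coded values:

using System.Windows;
using System.Windows.Automation.Peers;

public class MyItemAutomationPeer : FrameworkElementAutomationPeer
{
    public MyItemAutomationPeer(FrameworkElement owner) : base(owner) { }

    // Total number of items in the containing set.
    protected override int GetSizeOfSetCore() => 3;

    // This element's 1-based position within the set.
    protected override int GetPositionInSetCore() => 1;
}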

Automatic values

Items in ItemsControls will provide a value for these properties automatically, without additional action from the developer. If an ItemsControl is grouped, each group is counted as a separate set, with each item inside a group providing its position inside that group as well as the size of the group. Automatic values are not affected by virtualization: even if an item is not realized, it is still counted towards the total size of the set and affects the position in the set of its sibling items.

Automatic values are only provided if the developer is targeting .NET Framework 4.8 or has set the AppContext switch "Switch.UseLegacyAccessibilityFeatures.3" to false – for example via the App.config file:
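
A reconstruction:

<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.UseLegacyAccessibilityFeatures.3=false" />
  </runtime>
</configuration>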


Closing

Please try out these improvements in the .NET Framework 4.8 and share your feedback in the comments below or via GitHub.

Thank you!

 


Updated Razor support in Visual Studio Code, now with Blazor support


Today we are pleased to announce improved Razor tooling support in Visual Studio Code with the latest C# extension. This latest release includes improved Razor diagnostics and support for tag helpers and Blazor apps.

Get Started

To use this preview of Razor support in Visual Studio Code, install the following:

  • Visual Studio Code
  • The latest C# extension for Visual Studio Code

To try out Visual Studio Code with Blazor apps, also install:

  • .NET Core 3.0 (Preview 4 or later)
  • The latest Blazor CLI templates:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview4-19216-03
    

What’s new in this release?

Improved diagnostics

We’ve improved the Razor diagnostics in Visual Studio Code for a variety of scenarios, including floating @ characters:

Floating @ character

Missing end braces:

Missing end brace

And missing end tags in code blocks:

Missing end tag

Tag helpers

Tag helper completions are now supported in ASP.NET Core projects:

Tag helper completion

As well as completions for tag helper attribute names and values:

Tag helper attribute completion

Blazor

Visual Studio Code now works with Blazor apps too!

You get completions for components and component parameters:

Component completions

Also data-binding, event handlers and lots of other Blazor goodies!

Blazor todos

Limitations and known issues

This is an alpha release of the Razor tooling for Visual Studio Code, so there are a number of limitations and known issues:

  • Razor editing is currently only supported in ASP.NET Core and Blazor projects (no support for ASP.NET projects)
  • Limited support for colorization

Note that if you need to disable the Razor tooling:

  • Open the Visual Studio Code User Settings: File -> Preferences -> Settings
  • Search for “razor”
  • Check the “Razor: Disabled” checkbox

Feedback

Please let us know what you think about this latest update to the Razor tooling support in Visual Studio Code by reporting issues in the Razor.VSCode repo. When reporting Razor tooling related issues, please use the “Report a Razor Issue” command in Visual Studio Code to capture all of the relevant logs and diagnostic information. Just run the command and then follow the instructions.

Thanks for trying out Razor in Visual Studio Code!


Blazor now in official preview!


With this newest Blazor release we’re pleased to announce that Blazor is now in official preview! Blazor is no longer experimental and we are committing to ship it as a supported web UI framework including support for running client-side in the browser on WebAssembly.

A little over a year ago we started the Blazor experimental project with the goal of building a client web UI framework based on .NET and WebAssembly. At the time Blazor was little more than a prototype and there were lots of open questions about the viability of running .NET in the browser. Since then we’ve shipped nine experimental Blazor releases addressing a variety of concerns including component model, data binding, event handling, routing, layouts, app size, hosting models, debugging, and tooling. We’re now at the point where we think Blazor is ready to take its next step.

Blazor icon

Simplifying the naming and versioning

For a while, we’ve used the terminology Razor Components in some cases, and Blazor in other cases. This has proven to be confusing, so following a lot of community feedback, we’ve decided to drop the name ASP.NET Core Razor Components, and return to the name Server-side Blazor instead.

This emphasizes that Blazor is a single client app model with multiple hosting models:

  • Server-side Blazor runs on the server via SignalR
  • Client-side Blazor runs client-side on WebAssembly

… but either way, it’s the same programming model. The same Blazor components can be hosted in both environments.

Also, since Blazor is now part of .NET Core, the client-side Blazor package versions now align with the .NET Core 3.0 versions. For example, the version number of all the preview packages we are shipping today is 3.0.0-preview4-19216-03. We no longer use separate 0.x version numbers for client-side Blazor packages.

What will ship when

  • Server-side Blazor will ship as part of .NET Core 3.0. This was already announced last October.
  • Client-side Blazor won’t ship as part of the initial .NET Core 3.0 release, but we are now announcing it is committed to ship as part of a future .NET Core release (and hence is no longer an “experiment”).

With each preview release of .NET Core 3.0, we will continue to ship preview releases of both server and client-side Blazor.

Today’s preview release

New features in this preview release:

  • Templates updated to use the .razor file extension
  • _Imports.razor
  • Scope components with @using
  • New component item template
  • New Blazor icons
  • Blazor support in Visual Studio Code

Check out the ASP.NET Core 3.0 Preview 4 announcement for details on these improvements. See also the Blazor release notes for additional details on this preview release.

Get the Blazor preview

To get started with the Blazor preview install the following:

  1. .NET Core 3.0 Preview 4 SDK (3.0.100-preview4-011223)
  2. The Blazor templates on the command-line:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview4-19216-03
    
  3. Visual Studio 2019 Preview with the ASP.NET and web development workload selected as well as the latest Blazor extension from the Visual Studio Marketplace, or Visual Studio Code with the latest C# extension (now with Blazor support!).

You can find getting started instructions, docs, and tutorials for Blazor at our new Blazor home page at https://blazor.net.

Blazor home page

Upgrade to the Blazor preview:

To upgrade your existing Blazor apps to the new Blazor preview first make sure you’ve installed the prerequisites listed above then follow these steps:

  • Update all Microsoft.AspNetCore.Blazor.* package references to 3.0.0-preview4-19216-03.
  • Remove any package reference to Microsoft.AspNetCore.Components.Server.
  • Remove any DotNetCliToolReference to Microsoft.AspNetCore.Blazor.Cli and replace with a package reference to Microsoft.AspNetCore.Blazor.DevServer.
  • In client Blazor projects add the <RazorLangVersion>3.0</RazorLangVersion> property.
  • Rename all _ViewImports.cshtml files to _Imports.razor.
  • Rename all remaining .cshtml files to .razor.
  • Rename components.webassembly.js to blazor.webassembly.js
  • Remove any use of the Microsoft.AspNetCore.Components.Services namespace and replace with Microsoft.AspNetCore.Components as required.
  • Update server projects to use endpoint routing:
// Replace this:
app.UseMvc(routes =>
{
    routes.MapRoute(name: "default", template: "{controller}/{action}/{id?}");
});

// With this:
app.UseRouting();

app.UseEndpoints(routes =>
{
    routes.MapDefaultControllerRoute();
});
  • Run dotnet clean on the solution to clear out old Razor declarations.

Blazor community page is now Awesome Blazor

As part of updating the Blazor site, we’ve decided to retire the Blazor community page and instead direct folks to the community-driven Awesome Blazor site. Thank you Adrien Torris for maintaining this truly “awesome” list of Blazor resources!

Try out preview Blazor UI offerings from Telerik and DevExpress

Blazor benefits from an active and supportive community that has contributed all sorts of sample apps, components, and libraries to the Blazor ecosystem. Recently popular component vendors like Telerik and DevExpress have joined in the fun and shipped previews of Blazor UI components. We encourage you to give these Blazor UI offerings a try and let them know what you think.

Give feedback

We hope you enjoy this latest preview release of Blazor. As with previous releases, your feedback is important to us. If you run into issues or have questions while trying out Blazor, file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you’ve tried out Blazor for a while please let us know what you think by taking our in-product survey. Click the survey link shown on the app home page when running one of the Blazor project templates:

Blazor survey

Thanks for trying out Blazor!


ASP.NET Core updates in .NET Core 3.0 Preview 4


.NET Core 3.0 Preview 4 is now available and it includes a bunch of new updates to ASP.NET Core.

Here’s the list of what’s new in this preview:

  • Razor Components renamed back to server-side Blazor
  • Client-side Blazor on WebAssembly now in official preview
  • Resolve components based on @using
  • _Imports.razor
  • New component item template
  • Reconnection to the same server
  • Stateful reconnection after prerendering
  • Render stateful interactive components from Razor pages and views
  • Detect when the app is prerendering
  • Configure the SignalR client for server-side Blazor apps
  • Improved SignalR reconnect features
  • Additional options for MVC service registration
  • Endpoint routing updates
  • New template for gRPC
  • Design-time build for gRPC
  • New Worker SDK

Please see the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 4, install the .NET Core 3.0 Preview 4 SDK.

If you’re on Windows using Visual Studio, you also need to install the latest preview of Visual Studio 2019.

If you’re using Visual Studio Code, check out the improved Razor tooling and Blazor support in the C# extension.

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 Preview 4, follow the migration steps in the ASP.NET Core docs.

Please also see the full list of breaking changes in ASP.NET Core 3.0.

To upgrade an existing ASP.NET Core 3.0 Preview 3 project to Preview 4:

  • Update Microsoft.AspNetCore.* package references to 3.0.0-preview4-19216-03
  • In Razor Components apps (i.e. server-side Blazor apps) rename _ViewImports.cshtml to _Imports.razor for Razor imports that should apply to Razor components.
  • In Razor Component apps, in your Index.cshtml file, change the <script> tag that references components.server.js so that it references blazor.server.js instead.
  • Remove any use of the _RazorComponentInclude property in your project file and rename any component files using the .cshtml file extension to use the .razor file extension instead.
  • Remove package references to Microsoft.AspNetCore.Components.Server.
  • Replace calls to AddRazorComponents in Startup.ConfigureServices with AddServerSideBlazor.
  • Replace calls to MapComponentHub<TComponent> with MapBlazorHub.
  • Remove any use of the Microsoft.AspNetCore.Components.Services namespace and replace with Microsoft.AspNetCore.Components as required.
  • In Razor Component apps, replace the {*clientPath} route in the host Razor Page with “/” and add a call to MapFallbackToPage in UseEndpoints.
  • Update any call to UseRouting in your Startup.Configure method to move the route mapping logic into a call to UseEndpoints at the point where you want the endpoints to be executed.

Before:

app.UseRouting(routes =>
{
    routes.MapRazorPages();
});

app.UseCookiePolicy();

app.UseAuthorization();

After:

app.UseRouting();

app.UseCookiePolicy();

app.UseAuthorization();

app.UseEndpoints(routes =>
{
    routes.MapRazorPages();
    routes.MapFallbackToPage();
});

Razor Components renamed back to server-side Blazor

For a while, we’ve used the terminology Razor Components in some cases, and Blazor in other cases. This has proven to be confusing, so following a lot of community feedback, we’ve decided to drop the name ASP.NET Core Razor Components, and return to the name Server-side Blazor instead.

This emphasizes that Blazor is a single client app model with multiple hosting models:

  • Server-side Blazor runs on the server via SignalR
  • Client-side Blazor runs client-side on WebAssembly

… but either way, it’s the same programming model. The same Blazor components can be hosted in both environments.

In this preview of the .NET Core SDK we renamed the “Razor Components” template back to “Blazor (server-side)” and updated the related APIs accordingly. In Visual Studio the template will still show up as “Razor Components” when using Visual Studio 2019 16.1.0 Preview 1, but it will start showing up as “Blazor (server-side)” in a subsequent preview. We’ve also updated the template to use the new super cool flaming purple Blazor icon.

Blazor (server-side) template

Client-side Blazor on WebAssembly now in official preview

We’re also thrilled to announce that client-side Blazor on WebAssembly is now in official preview! Blazor is no longer experimental and we are committing to ship it as a supported web UI framework including support for running client-side in the browser on WebAssembly.

  • Server-side Blazor will ship as part of .NET Core 3.0. This was already announced last October.
  • Client-side Blazor won’t ship as part of the initial .NET Core 3.0 release, but we are now announcing it is committed to ship as part of a future .NET Core release (and hence is no longer an “experiment”).

With each preview release of .NET Core 3.0, we will continue to ship preview releases of both server and client-side Blazor.

Resolve components based on @using

Components in referenced assemblies are now always in scope and can be specified using their full type name including the namespace. You no longer need to import components from component libraries using the @addTagHelper directive.

For example, you can add a Counter component to the Index page like this:

<BlazorWebApp1.Pages.Counter />

Use the @using directive to bring component namespaces into scope just like you would in C# code:

@using BlazorWebApp1.Pages

<Counter />

_Imports.razor

Use _Imports.razor files to import Razor directives across multiple Razor component files (.razor) in a hierarchical fashion.

For example, the following _Imports.razor file applies a layout and adds using statements for all Razor components in the same folder and in any subfolders:

@layout MainLayout
@using Microsoft.AspNetCore.Components
@using BlazorApp1.Data

This is similar to how you can use _ViewImports.cshtml with Razor views and pages, but applied specifically to Razor component files.

New component item template

You can now add components to Blazor apps using the new Razor Component item template:

dotnet new razorcomponent -n MyComponent1

Reconnection to the same server

Server-side Blazor apps require an active SignalR connection to the server to function. In this preview, the app will now attempt to reconnect to the server. As long as the state for that client is still in memory, the client session will resume without losing any state.

When the client detects that the connection has been lost, a default UI is displayed to the user while the client attempts to reconnect:

Attempting reconnect

If reconnection failed the user is given the option to retry:

Reconnect failed

To customize this UI, define an element with components-reconnect-modal as its ID (a sketch follows this list). The client will update this element with one of the following CSS classes based on the state of the connection:

  • components-reconnect-show: Show the UI to indicate the connection was lost and the client is attempting to reconnect.
  • components-reconnect-hide: The client has an active connection – hide the UI.
  • components-reconnect-failed: Reconnection failed. To attempt reconnection again call window.Blazor.reconnect().
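
A minimal sketch of such an element in the host page (the message text is arbitrary):

<div id="components-reconnect-modal">
    Connection lost. Attempting to reconnect to the server...
</div>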

Stateful reconnection after prerendering

Server-side Blazor apps are set up by default to prerender the UI on the server before the client connection back to the server is established. This is set up in the _Host.cshtml Razor page:

<body>
    <app>@(await Html.RenderComponentAsync<App>())</app>

    <script src="_framework/blazor.server.js"></script>
</body>

In this preview the client will now reconnect back to the server to the same state that was used to prerender the app. If the app state is still in memory it doesn’t need to be rerendered once the SignalR connection is established.

Render stateful interactive components from Razor pages and views

You can now add stateful interactive components to a Razor page or view. When the page or view renders, the component will be prerendered with it. The app will then reconnect to the component state once the client connection has been established, as long as it is still in memory.

For example, the following Razor page renders a Counter component with an initial count that is specified using a form:

<h1>My Razor Page</h1>
<form>
    <input type="number" asp-for="InitialCount" />
    <button type="submit">Set initial count</button>
</form>

@(await Html.RenderComponentAsync<Counter>(new { InitialCount = InitialCount }))

@functions {
    [BindProperty(SupportsGet=true)]
    public int InitialCount { get; set; }
}

Interactive component on Razor page

Detect when the app is prerendering

While a Blazor app is prerendering, certain actions (like calling into JavaScript) are not possible because a connection with the browser has not yet been established. Components may need to render differently when being prerendered.

To delay JavaScript interop calls until after the connection with the browser has been established you can now use the OnAfterRenderAsync component lifecycle event. This event will only be called after the app has been fully rendered and the client connection established.

To conditionally render different content based on whether the app is currently being prerendered or not, use the IsConnected property on the IComponentContext service. This property will only return true if there is an active connection with the client.
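
A sketch of a component that branches on this property (it assumes IComponentContext is injectable from the Microsoft.AspNetCore.Components namespace):

@using Microsoft.AspNetCore.Components
@inject IComponentContext ComponentContext

@if (ComponentContext.IsConnected)
{
    <p>Connected – interactive features are available.</p>
}
else
{
    <p>Prerendering – JavaScript interop is not available yet.</p>
}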

Configure the SignalR client for server-side Blazor apps

Sometimes you need to configure the SignalR client used by server-side Blazor apps. For example, you might want to configure logging on the SignalR client to diagnose a connection issue.

To configure the SignalR client for server-side Blazor apps, add an autostart="false" attribute on the script tag for the blazor.server.js script, and then call Blazor.start passing in a config object that specifies the SignalR builder:

<script src="_framework/blazor.server.js" autostart="false"></script>
<script>
    Blazor.start({
        configureSignalR: function (builder) {
            builder.configureLogging(2); // LogLevel.Information
        }
    });
</script>

Improved SignalR connection lifetime handling

Preview 4 will improve the developer experience for handling SignalR disconnection and reconnection. Automatic reconnects can be enabled by calling the withAutomaticReconnect method on HubConnectionBuilder:

const connection = new signalR.HubConnectionBuilder()
    .withUrl("/chatHub")
    .withAutomaticReconnect()
    .build();

Without any parameters, withAutomaticReconnect() configures the client to try to reconnect, waiting 0, 2, 10, and 30 seconds respectively before each attempt.

In order to configure a non-default number of reconnect attempts before failure, or to change the reconnect timing, withAutomaticReconnect accepts an array of numbers representing the delay in milliseconds to wait before starting each reconnect attempt.

const connection = new signalR.HubConnectionBuilder()
    .withUrl("/chatHub")
    .withAutomaticReconnect([0, 0, 2000, 5000]) // defaults to [0, 2000, 10000, 30000]
    .build();

Improved disconnect & reconnect handling opportunities

Before starting any reconnect attempts, the HubConnection will transition to the Reconnecting state and fire its onreconnecting callback. This provides an opportunity to warn users that the connection has been lost, disable UI elements, and mitigate confusing user scenarios that might occur due to the disconnected state.

connection.onreconnecting((error) => {
  console.assert(connection.state === signalR.HubConnectionState.Reconnecting);

  document.getElementById("messageInput").disabled = true;

  const li = document.createElement("li");
  li.textContent = `Connection lost due to error "${error}". Reconnecting.`;
  document.getElementById("messagesList").appendChild(li);
});

If the client successfully reconnects within its first four attempts, the HubConnection will transition back to the Connected state and fire onreconnected callbacks. This gives developers a good opportunity to inform users the connection has been reestablished.

connection.onreconnected((connectionId) => {
  console.assert(connection.state === signalR.HubConnectionState.Connected);

  document.getElementById("messageInput").disabled = false;

  const li = document.createElement("li");
  li.textContent = `Connection reestablished. Connected with connectionId "${connectionId}".`;
  document.getElementById("messagesList").appendChild(li);
});

If the client doesn’t successfully reconnect within its first four attempts, the HubConnection will transition to the Disconnected state and fire its onclosed callbacks. This is a good opportunity to inform users the connection has been permanently lost and recommend refreshing the page.

connection.onclose((error) => {
  console.assert(connection.state === signalR.HubConnectionState.Disconnected);

  document.getElementById("messageInput").disabled = true;

  const li = document.createElement("li");
  li.textContent = `Connection closed due to error "${error}". Try refreshing this page to restart the connection.`;
  document.getElementById("messagesList").appendChild(li);
})

Additional options for MVC service registration

We’re adding some new options for registering MVC’s various features inside ConfigureServices.

What’s changing

We’re adding three new top-level extension methods related to MVC features on IServiceCollection. Along with this change, we are updating our templates to use these new methods instead of AddMvc().

AddMvc() is not being removed and will continue to behave as it does today.

public void ConfigureServices(IServiceCollection services)
{
    // Adds support for controllers and API-related features - but not views or pages.
    //
    // Used by the API template.
    services.AddControllers();
}
public void ConfigureServices(IServiceCollection services)
{
    // Adds support for controllers, API-related features, and views - but not pages.
    //
    // Used by the Web Application (MVC) template.
    services.AddControllersWithViews();
}
public void ConfigureServices(IServiceCollection services)
{
    // Adds support for Razor Pages and minimal controller support.
    //
    // Used by the Web Application template.
    services.AddRazorPages();
}

These new methods can also be combined. This example is equivalent to the current AddMvc().

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddRazorPages();
}

These methods return an IMvcBuilder that can be chained to access any of the methods that are available today from the builder returned by AddMvc().
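For example, here is a sketch that chains JSON options configuration onto the new registration (the specific option shown is illustrative):

public void ConfigureServices(IServiceCollection services)
{
    // The builder returned by AddControllers() supports the same chained
    // configuration methods as the builder returned by AddMvc().
    services.AddControllers()
        .AddJsonOptions(options =>
        {
            options.JsonSerializerOptions.WriteIndented = true;
        });
}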

We recommend using whichever option feels best based on your needs.

Motivations

We wanted to provide some more options that represent how users use the product. In particular we’ve received strong feedback from users who want an API-focused flavor of MVC without the overhead of being able to serve views and pages. We tried to provide an experience for this in the past through the AddMvcCore() method, but that approach hasn’t been very successful. Users who tried AddMvcCore() have been surprised by how much they need to know to use it successfully, and as a result we haven’t promoted its usage. We hope that AddControllers() will better satisfy this scenario.

In addition to the AddControllers() experience, we’re also attempting to create options that feel right for other scenarios. We’ve heard requests for this in the past, but not as strongly as the requests for an API-focused profile. Your feedback about whether AddMvc() could be improved upon, and how, will be valuable.

What’s in AddControllers()

AddControllers() includes support for:

  • Controllers
  • Model Binding
  • API Explorer (OpenAPI integration)
  • Authorization [Authorize]
  • CORS [EnableCors]
  • Data Annotations validation [Required]
  • Formatter Mappings (translate a file-extension to a content-type)

All of these features are included because they fit under the API-focused banner, and they are very much pay-for-play. None of these features proactively interacts with the request pipeline; they are activated by attributes on your controller or model class. API Explorer is a slight exception: it is a piece of infrastructure used by OpenAPI libraries and does nothing without Swashbuckle or NSwag.

Some notable features AddMvc() includes but AddControllers() does not:

  • Antiforgery
  • Temp Data
  • Views
  • Pages
  • Tag Helpers
  • Memory Cache

These features are view-related and aren’t necessary in an API-focused profile of MVC.

What’s in AddControllersWithViews()

AddControllersWithViews() includes support for:

  • Controllers
  • Model Binding
  • API Explorer (OpenAPI integration)
  • Authorization [Authorize]
  • CORS [EnableCors]
  • Data Annotations validation [Required]
  • Formatter Mappings (translate a file-extension to a content-type)
  • Antiforgery
  • Temp Data
  • Views
  • Tag Helpers
  • Memory Cache

We wanted to position AddControllersWithViews() as a superset of AddControllers() for simplicity in explaining it. This feature set also happens to align with the ASP.NET Core 1.x release (before Razor Pages).

Some notable features AddMvc() includes but AddControllersWithViews() does not:

  • Pages

What’s in AddRazorPages()

AddRazorPages() includes support for:

  • Pages
  • Controllers
  • Model Binding
  • Authorization [Authorize]
  • Data Annotations validation [Required]
  • Antiforgery
  • Temp Data
  • Views
  • Tag Helpers
  • Memory Cache

For now this profile includes basic support for controllers, but excludes many of the API-focused features listed below. We’re interested in your feedback about what should be included by default in AddRazorPages().

Some notable features AddMvc() includes but AddRazorPages() does not:

  • API Explorer (OpenAPI integration)
  • CORS [EnableCors]
  • Formatter Mappings (translate a file-extension to a content-type)

Endpoint Routing updates

In ASP.NET Core 2.2 we introduced a new routing implementation called Endpoint Routing which replaces IRouter-based routing for ASP.NET Core MVC. In the upcoming 3.0 release Endpoint Routing will become central to the ASP.NET Core middleware programming model. Endpoint Routing is designed to support greater interoperability between frameworks that need routing (MVC, gRPC, SignalR, and more …) and middleware that want to understand the decisions made by routing (localization, authorization, CORS, and more …).

While it’s still possible to use the old UseMvc() or UseRouter() middleware in a 3.0 application, we recommend that every application migrate to Endpoint Routing if possible. We are taking steps to address compatibility bugs and fill in previously unsupported scenarios. We welcome your feedback about what features are missing or anything else that’s not great about routing in this preview release.

We’ll be uploading another post soon with a conceptual overview and cookbook for Endpoint Routing in 3.0.

Endpoint Routing overview

Endpoint Routing is made up of the pair of middleware created by app.UseRouting() and app.UseEndpoints(). app.UseRouting() marks the position in the middleware pipeline where a routing decision is made – where an endpoint is selected. app.UseEndpoints() marks the position in the middleware pipeline where the selected endpoint is executed. Middleware that run in between these can see the selected endpoint (if any) or can select a different endpoint.

If you’re familiar with routing from using MVC then most of what you have experienced so far will behave the same way. Endpoint Routing understands the same route template syntax and processes URLs in a very similar way to the in-the-box implementations of IRouter. Endpoint routing supports the [Route] and similar attributes inside MVC.
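For instance, a controller that uses attribute routing is matched by endpoint routing just as it was before (a sketch with hypothetical names):

[ApiController]
[Route("api/[controller]")]
public class WeatherController : ControllerBase
{
    [HttpGet("{city}")]
    public ActionResult<string> Get(string city) => $"Forecast for {city}";
}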

We expect most applications will only require changes to the Startup.cs file.

A typical Configure() method using Endpoint Routing has the following high-level structure:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Middleware that run before routing. Usually the following appear here:
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseDatabaseErrorPage();
    }
    else
    {
        app.UseExceptionHandler("/Error");
    }

    app.UseStaticFiles();

    // Runs matching. An endpoint is selected and set on the HttpContext if a match is found.
    app.UseRouting();

    // Middleware that run after routing occurs. Usually the following appear here:
    app.UseAuthentication();
    app.UseAuthorization();
    app.UseCors();
    // These middleware can take different actions based on the endpoint.

    // Executes the endpoint that was selected by routing.
    app.UseEndpoints(endpoints =>
    {
        // Mapping of endpoints goes here:
        endpoints.MapControllers();
        endpoints.MapRazorPages();
        endpoints.MapHub<MyChatHub>("/chat");
        endpoints.MapGrpcService<MyCalculatorService>();
    });

    // Middleware here will only run if nothing was matched.
}

MVC Controllers, Razor Pages, SignalR, gRPC, and more are added inside UseEndpoints() – they are now part of the same routing system.

New template for gRPC

The gRPC template has been simplified to a single project template. We no longer include a gRPC client as part of the template. For instructions on how to create a gRPC client, refer to the docs.

.
├── appsettings.Development.json
├── appsettings.json
├── grpc.csproj
├── Program.cs
├── Properties
│   └── launchSettings.json
├── Protos
│   └── greet.proto
├── Services
│   └── GreeterService.cs
└── Startup.cs

3 directories, 8 files

Design-time build for gRPC

Design-time build support for gRPC code-generation makes it easier to rapidly iterate on your gRPC services. Changes to your *.proto files no longer require you to build your project to re-run code generation.

Design time build
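Code generation is driven from the project file by the Grpc.Tools package; the template references the .proto file with an item along these lines (a sketch; the exact attributes in the shipped template may differ):

<ItemGroup>
  <Protobuf Include="Protos\greet.proto" GrpcServices="Server" />
</ItemGroup>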

Worker SDK

In Preview 3 we introduced the new Worker Service template. In Preview 4 we’ve further decoupled that template from Web by introducing its own SDK. If you create a new Worker Service your csproj will now look like the following:

<Project Sdk="Microsoft.NET.Sdk.Worker">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UserSecretsId>dotnet-WebApplication59-A2B1DB8D-0408-4583-80BA-1B32DAE36B97</UserSecretsId>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Extensions.Hosting" Version="3.0.0-preview4.19216.2" />
  </ItemGroup>
</Project>

We’ll have more to share on the new Worker SDK in a future post.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

The post ASP.NET Core updates in .NET Core 3.0 Preview 4 appeared first on ASP.NET Blog.


Announcing .NET Core 3 Preview 4


Today, we are announcing .NET Core 3.0 Preview 4. It includes a chart control for Windows Forms, HTTP/2 support, GC updates to use less memory, support for CPU limits with Docker, the addition of PowerShell in .NET Core SDK Docker container images, and other improvements. If you missed it, check out the improvements we released in .NET Core 3.0 Preview 3 just last month.

Download .NET Core 3 Preview 4 right now on Windows, macOS and Linux.

ASP.NET Core and Entity Framework Core updates are also being released today.

WinForms Chart control now available for .NET Core

We’ve been hearing that some developers were not able to migrate their existing .NET Framework applications to .NET Core because they had a dependency on the Chart control. We’ve fixed that for you!

The System.Windows.Forms.DataVisualization package (which includes the chart control) is now available on NuGet, for .NET Core. You can now include this control in your .NET Core WinForms applications!

Chart control in Visual Studio

We ported the System.Windows.Forms.DataVisualization library to .NET Core over the last few sprints. The source for the chart control is available at dotnet/winforms-datavisualization, on GitHub. The control was migrated to ease porting to .NET Core 3, but isn’t a component we intend to innovate in. For more advanced data visualization scenarios check out Power BI.

The best way to familiarize yourself with the Charting control is to take a look at our ChartSamples project. It contains all existing chart types and can guide you through every step.

Chart samples app

 

Enabling the Chart control in your .NET project

To use the Chart control in your WinForms Core project, add a reference to the System.Windows.Forms.DataVisualization NuGet package. You can do it by either searching for System.Windows.Forms.DataVisualization in the NuGet Package Manager (don’t forget to check the Include prerelease box) or by adding the following lines to your .csproj file.

<ItemGroup>
    <PackageReference Include="System.Windows.Forms.DataVisualization" Version="1.0.0-prerelease.19212.2"/>
</ItemGroup>

Note: The WinForms designer is currently under development and you won’t be able to configure the control from the designer just yet. For now you can either use a code-first approach or you can create and configure the control in a .NET Framework application using the designer and then port your project to .NET Core. Porting guidelines are available in the How to port desktop applications to .NET Core 3.0 post.
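Until designer support arrives, here is a minimal code-first sketch (the chart area name, series name, and data points are illustrative, and the method is assumed to live inside your Form class):

using System.Windows.Forms;
using System.Windows.Forms.DataVisualization.Charting;

// Call this from the Form's constructor, after InitializeComponent().
private void CreateChart()
{
    var chart = new Chart { Dock = DockStyle.Fill };
    chart.ChartAreas.Add(new ChartArea("MainArea"));

    var series = new Series("Sales") { ChartType = SeriesChartType.Column };
    series.Points.AddXY("Q1", 100);
    series.Points.AddXY("Q2", 150);
    chart.Series.Add(series);

    // The chart is added to the form like any other WinForms control.
    Controls.Add(chart);
}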

WPF

The WPF team published more components to dotnet/wpf between the Preview 3 and Preview 4 releases.

The newly published components are available as source in that repository.

The team published an engineering write-up on what they’ve been working on. You can expect them to publish more code to GitHub soon.

Improving .NET Core Version APIs

We have improved the .NET Core version APIs in .NET Core 3.0. They now return the version information you would expect. These changes, while objectively better, are technically breaking and may break applications that rely on version APIs for various information.

You can now get access to the following information via existing APIs:

C:\testapps\versioninfo>dotnet run
.NET Core version:
Environment.Version: 3.0.0
RuntimeInformation.FrameworkDescription: .NET Core 3.0.0-preview4.19113.15
CoreFX Build: 3.0.0-preview4.19113.15
CoreFX Hash: add4cacbfb7f7d3f5f07630d10b24e38da4ad027

The code to produce that output follows:

WriteLine(".NET Core version:");
  WriteLine($"Environment.Version: {Environment.Version}");
  WriteLine($"RuntimeInformation.FrameworkDescription: {RuntimeInformation.FrameworkDescription}");
  WriteLine($"CoreCLR Build: {((AssemblyInformationalVersionAttribute[])typeof(object).Assembly.GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute),false))[0].InformationalVersion.Split('+')[0]}");
  WriteLine($"CoreCLR Hash: {((AssemblyInformationalVersionAttribute[])typeof(object).Assembly.GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute), false))[0].InformationalVersion.Split('+')[1]}");
  WriteLine($"CoreFX Build: {((AssemblyInformationalVersionAttribute[])typeof(Uri).Assembly.GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute),false))[0].InformationalVersion.Split('+')[0]}");
  WriteLine($"CoreFX Hash: {((AssemblyInformationalVersionAttribute[])typeof(Uri).Assembly.GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute), false))[0].InformationalVersion.Split('+')[1]}");

Tiered Compilation (TC) Update

Tiered compilation (TC) is a runtime feature that is able to control the compilation speed and quality of the JIT to achieve various performance outcomes. It is enabled by default in .NET Core 3.0 builds.

The fundamental benefit and capability of TC is to enable (re-)jitting methods with lower-quality code that is faster to produce, or higher-quality code that is slower to produce, in order to increase the performance of an application as it goes through various stages of execution, from startup through steady-state. This contrasts with the non-TC approach, where every method is compiled a single way (the same as the high-quality tier), which is biased to steady-state over startup performance.

We are considering what the default TC configuration should be for the final 3.0 release. We have been investigating the performance impact (positive and/or negative) for a variety of application scenarios, with the goal of selecting a default that is good for all scenarios, and providing configuration switches to enable developers to opt apps into other configurations.

TC remains enabled in Preview 4, but we changed the functionality that is enabled by default. We are looking for feedback and additional data to help us decide if this new configuration is best, or if we need to make more changes. Our goal is to select the best overall default, and then provide one or more configuration switches to enable other opt-in behaviors.

There are two tiers, tier 0 and tier 1. At startup, tier 0 code can be one of the following:

  • Ahead-of-time compiled Ready to Run (R2R) code.
  • Tier 0 jitted code, produced by “Quick JIT”. Quick JIT applies fewer optimizations (similar to “minopts”) to compile code faster.

Both of these types of tier 0 code can be “upgraded” to tier 1 code, which is fully-optimized jitted code.

In Preview 4, R2R tiering is enabled by default and tier 0 jitted code (or Quick JIT) is disabled. This means that all jitted code is jitted as tier 1, by default. Tier 1 code is higher quality (executes faster), but takes longer to generate, so can increase startup time. In Preview 3, TC, including Quick JIT, was enabled.

To enable Quick JIT (tier 0 jitted code), set the following property in your project file:

<TieredCompilationQuickJit>true</TieredCompilationQuickJit>

To disable TC completely, set:

<TieredCompilation>false</TieredCompilation>

Please try out the various compilation modes, including the Preview 4 default, and give us feedback.

HTTP/2 Support

We now have support for HTTP/2 in HttpClient. The new protocol is a requirement for some APIs, like gRPC and Apple Push Notification Service. We expect more services to require HTTP/2 in the future.

ASP.NET also has support for HTTP/2; however, it is an independent implementation that is optimized for scale.

In Preview 4, HTTP/2 is not enabled by default, but can be enabled with one of the following methods:

  • Call AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2Support", true); in your application code
  • Set the DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP2SUPPORT environment variable to true

These configurations (either one) need to be set before using HttpClient if you intend to use HTTP/2.

Note: the preferred HTTP protocol version will be negotiated via TLS/ALPN and HTTP/2 will only be used if the server selects to use it.
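Here is a minimal sketch of opting in from code (the URL is a placeholder, and the server must select HTTP/2 during ALPN for the new protocol to actually be used):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Http2Demo
{
    static async Task Main()
    {
        // Opt in before creating any HttpClient (the environment variable works too).
        AppContext.SetSwitch("System.Net.Http.SocketsHttpHandler.Http2Support", true);

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/")
            {
                Version = new Version(2, 0) // prefer HTTP/2; the final protocol depends on ALPN
            };
            var response = await client.SendAsync(request);
            Console.WriteLine($"Protocol used: HTTP/{response.Version}");
        }
    }
}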

SDK Docker Images Contain PowerShell Core

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. PowerShell Core is a cross-platform (Windows, Linux, and macOS) automation and configuration tool/framework that works well with your existing tools and is optimized for dealing with structured data (e.g. JSON, CSV, XML, etc.), REST APIs, and object models. It includes a command-line shell, an associated scripting language and a framework for processing cmdlets.

You can try out PowerShell Core, as part of the .NET Core SDK container image, by running the following Docker command:

docker run --rm mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh -c Write-Host "Hello PowerShell"

There are two main scenarios that having PowerShell inside the .NET Core SDK container image enables, which were not otherwise possible.

Example syntax for launching PowerShell for a (volume-mounted) containerized build:

  • docker run -it -v c:\myrepo:/myrepo -w /myrepo mcr.microsoft.com/dotnet/core/sdk:3.0 pwsh build.ps1
  • docker run -it -v c:\myrepo:/myrepo -w /myrepo mcr.microsoft.com/dotnet/core/sdk:3.0 ./build.ps1

For the second example to work, on Linux, the .ps1 file needs to have the following pattern, and needs to be formatted with Unix (LF) not Windows (CRLF) line endings:

#!/usr/bin/env pwsh
Write-Host "test"

If you are new to PowerShell and would like to learn more, we recommend reviewing the getting started documentation.

Note: PowerShell Core is now available as part of .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.

Better support for Docker CPU (--cpus) limits

The Docker client allows limiting memory and CPU. We improved support for memory limits in Preview 3, and have now started improving CPU limits support.

Round up the value of the CPU limit

In the case where --cpus is set to a value close (enough) to a smaller integer (for example, 1.499999999), the runtime would previously round that value down (in this case, to 1). As a result, the runtime would take advantage of less CPU than requested, leading to CPU underutilization.

By rounding up the value, the runtime augments the pressure on the OS thread scheduler, but even in the worst-case scenario (--cpus=1.000000001, previously rounded down to 1, now rounded up to 2), we have not observed any overutilization of the CPU leading to performance degradation.

Thread pool honors CPU limits

The next step is ensuring that the thread pool honors CPU limits. Part of the algorithm of the thread pool is computing CPU busy time, which is, in part, a function of available CPUs. By taking CPU limits into account when computing CPU busy time, we avoid various heuristics of the thread pool competing with each other: one trying to allocate more threads to increase CPU busy time, and another trying to allocate fewer threads because adding more threads doesn’t improve throughput.

Making GC Heap Sizes Smaller by default

While working on improving support for docker memory limits as part of Preview 3, we were inspired to make more general GC policy updates to improve memory usage for a broader set of applications (even when not running in a container). The changes better align the generation 0 allocation budget with modern processor cache sizes and cache hierarchy.

Damian Edwards on our team noticed that the memory usage of the ASP.NET benchmarks was cut in half with no negative effect on other performance metrics. That’s a staggering improvement! As he says, these are the new defaults, with no change required to his (or your) code (other than adopting .NET Core 3.0).

The memory savings that we saw with the ASP.NET benchmarks may or may not be representative of what you’ll see with your application. We’d like to hear how these changes reduce memory usage for your application.

Better support for many proc machines

Based on .NET’s Windows heritage, the GC needed to implement the Windows concept of processor groups to support machines with > 64 processors. This implementation was made in .NET Framework, 5-10 years ago. With .NET Core, we made the choice initially for the Linux PAL to emulate that same concept, even though it doesn’t exist in Linux.

We have since abandoned this concept in the GC and transitioned it exclusively to the Windows PAL. We also now expose a configuration switch, GCHeapAffinitizeRanges, to specify affinity masks on machines with >64 processors. Maoni Stephens wrote about this change in Making CPU configuration better for GC on machines with > 64 CPUs.

Hardware Intrinsic API changes

The Avx2.ConvertToVector256* methods were changed to return a signed, rather than unsigned, type. This puts them in line with the Sse41.ConvertToVector128* methods and the corresponding native intrinsics. As an example, Vector256<ushort> ConvertToVector256UInt16(Vector128<byte>) is now Vector256<short> ConvertToVector256Int16(Vector128<byte>).
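A minimal sketch of the renamed API (assuming a processor and runtime where Avx2.IsSupported is true):

using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

class IntrinsicsDemo
{
    static void Main()
    {
        if (Avx2.IsSupported)
        {
            // Widens 16 bytes into 16 signed 16-bit integers; this overload was
            // previously named ConvertToVector256UInt16 and returned ushort elements.
            Vector128<byte> bytes = Vector128.Create((byte)42);
            Vector256<short> widened = Avx2.ConvertToVector256Int16(bytes);
            Console.WriteLine(widened);
        }
    }
}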

The Sse41/Avx.ConvertToVector128/256* methods were split into those that take a Vector128/256<T> and those that take a T*. As an example, ConvertToVector256Int16(Vector128<byte>) now also has a ConvertToVector256Int16(byte*) overload. This was done because the underlying instruction that takes an address does a partial vector read (rather than a full vector read or a scalar read), which meant we were not always able to emit the optimal instruction encoding when the user had to read from memory. This split allows the user to explicitly select the addressing form of the instruction when needed (such as when you don’t already have a Vector128<T>).

The FloatComparisonMode enum entries and the Sse/Sse2.Compare methods were renamed to clarify that the operation is ordered/unordered and not the inputs. They were also reordered to be more consistent across the SSE and AVX implementations. An example is that Sse.CompareEqualOrderedScalar is now Sse.CompareScalarOrderedEqual. Likewise, for the AVX versions, Avx.CompareScalar(left, right, FloatComparisonMode.OrderedEqualNonSignalling) is now Avx.CompareScalar(left, right, FloatComparisonMode.EqualOrderedNonSignalling).

The ARM64 intrinsics are not going to be considered stable for .NET Core 3.0. They were removed from the in-box assemblies and moved to a separate System.Runtime.Intrinsics.Experimental package that is available on our MyGet feed. This is a similar mechanism to what we did for the x86 intrinsics in .NET Core 2.1.

Assembly Load Context Improvements

Enhancements to AssemblyLoadContext:

  • Enable naming contexts
  • Added the ability to enumerate ALCs
  • Added the ability to enumerate assemblies within an ALC
  • Made the type concrete, so instantiation is easier (no requirement for custom types for simple scenarios)

See dotnet/corefx #34791 for more details.
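A minimal sketch of the new surface area (the context name is arbitrary):

using System;
using System.Runtime.Loader;

class AlcDemo
{
    static void Main()
    {
        // The type is now concrete, so a simple named context needs no subclass.
        var alc = new AssemblyLoadContext("my custom ALC");

        // Enumerate all load contexts and the assemblies loaded into each.
        foreach (var context in AssemblyLoadContext.All)
        {
            Console.WriteLine(context.Name);
            foreach (var assembly in context.Assemblies)
            {
                Console.WriteLine($"  {assembly.FullName}");
            }
        }
    }
}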

The appwithalc sample demonstrates these new capabilities. The output from that sample is displayed below.

Hello ALC(World)!

Enumerate over all ALCs:
Default

Enumerate over all assemblies in "Default" ALC:
System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e
appwithalc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
System.Runtime, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
System.Runtime.Loader, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
interfaces, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
System.Console, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
System.Runtime.Extensions, Version=4.2.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
System.Threading, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
System.Text.Encoding.Extensions, Version=4.1.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a

Load "library" assembly via new "my custom ALC" ALC
Foo: Hello

Enumerate over all assemblies in "my custom ALC" ALC:
library, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null

Enumerate over all ALCs:
Default
my custom ALC

Load "library" assembly via Assembly.LoadFile
Foo: Hello

Enumerate over all ALCs:
Default
my custom ALC
Assembly.LoadFile(C:\git\testapps\appwithalc\appwithalc\bin\Debug\netcoreapp3.0\library.dll)

Closing

Thanks for trying out .NET Core 3.0. Please continue to give us feedback, either in the comments or on GitHub. We are listening carefully and will continue to make changes based on your feedback.

Take a look at the .NET Core 3.0 Preview 1, Preview 2 and Preview 3 posts if you missed those. With this post, they describe the complete set of new capabilities that have been added so far with the .NET Core 3.0 release.

The post Announcing .NET Core 3 Preview 4 appeared first on .NET Blog.

Announcing Entity Framework Core 3.0 Preview 4


Today, we are making the fourth preview of Entity Framework Core 3.0 available on NuGet, alongside .NET Core 3.0 Preview 4 and ASP.NET Core 3.0 Preview 4. We encourage you to install this preview to test the new functionality and assess the impact of the included breaking changes.

What’s new in EF Core 3.0 Preview 4?

This preview includes more than 50 bug fixes and functional enhancements. You can query our issue tracker for the full list of issues fixed in Preview 4, as well as for the issues fixed in Preview 3, and Preview 2.

Some of the most important changes are:

LINQ queries are no longer evaluated on the client

The new LINQ implementation that we will introduce in an upcoming preview will not support automatic client evaluation, except on the last Select() operator in the query. In preparation for this change, in Preview 4 we have switched the existing client evaluation behavior to throw by default.

Although it should still be possible to restore the old behavior by re-configuring client evaluation to only cause a warning, we recommend that you test your application with the new default. If you see client evaluation exceptions, you can try modifying your queries to avoid client evaluation. If that doesn’t work, you can introduce calls to AsEnumerable() or ToList() to explicitly switch the processing of the query to the client in places where this is acceptable. If none of these options work, you can use raw SQL queries instead.
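For example, here is a sketch of that explicit switch (the Customers entity, its IsActive property, and the IsSpecial() helper are hypothetical):

// A query like db.Customers.Where(c => IsSpecial(c.Name)).ToList() now throws,
// because IsSpecial() has no SQL translation. Rewriting it makes the split explicit:
var special = db.Customers
    .Where(c => c.IsActive)          // translated to SQL and evaluated on the server
    .AsEnumerable()                  // everything below runs on the client
    .Where(c => IsSpecial(c.Name))
    .ToList();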

See the breaking change details to learn more about why we are making this change.

If you hit any case in which a LINQ expression isn’t translated to SQL and you get a client evaluation exception instead, but you know exactly what SQL translation you expect to get, we ask you to create an issue to help us improve EF Core.

The EF Core runtime is no longer part of the ASP.NET Core shared framework

Note that this change was actually introduced in Preview 1 but was not previously announced in this blog.

The main consequence of this change is that no matter what type of application you are building or what database your application uses, you always obtain EF Core by installing the NuGet package for the EF Core provider of your choice. On any operating system supported for .NET Core development, you can install the preview bits by installing a provider for EF Core 3.0 Preview 4.

For example, to install the SQLite provider, type this in the command line:

$ dotnet add package Microsoft.EntityFrameworkCore.Sqlite -v 3.0.0-preview4.19216.3

Or from the Package Manager Console in Visual Studio:

PM> Install-Package Microsoft.EntityFrameworkCore.Sqlite -Version 3.0.0-preview4.19216.3

See the breaking change details for more information.

The dotnet ef tool is no longer part of the .NET Core SDK

This change allows us to ship dotnet ef as a regular .NET CLI tool that can be installed as either a global or local tool. For example, to be able to manage migrations or scaffold a DbContext, install dotnet ef as a global tool by typing the following command:

$ dotnet tool install --global dotnet-ef --version 3.0.0-preview4.19216.3
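Once installed, the tool is invoked the same way as before. For example, adding a migration (with an illustrative name) looks like this:

$ dotnet ef migrations add InitialCreate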

For more information, see this breaking change’s details.

Dependent entities sharing tables with their principal entities can now be optional

This enables, for example, owned entities mapped to the same table as the owner to be optional, which is a highly requested improvement.

See issue #9005 for more details.

Key generation improvements for in-memory database

We have made several improvements to simplify using the in-memory database for unit testing. For example, each generated key property is now incremented independently; when the database is deleted, key generation is reset and starts at 1; and if a property in an entity contains a value larger than the last value returned by the generator, the generator is bumped to start at the next available value.

For more details, see issue #6872.

Separate methods for working with raw SQL queries as plain strings or interpolated strings

The existence of method overloads accepting SQL as plain strings or interpolated strings made it very hard to predict which version would be used and how parameters would be processed. To eliminate this source of confusion, we added the new methods FromSqlRaw, FromSqlInterpolated, ExecuteSqlRaw, and ExecuteSqlInterpolated. The existing FromSql and ExecuteSqlCommand methods are now obsolete.
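For example, here is a sketch using a hypothetical Customers entity; in both forms the values are still sent as SQL parameters:

var minAge = 18;

// Plain string with explicit placeholders:
var adults = db.Customers
    .FromSqlRaw("SELECT * FROM Customers WHERE Age >= {0}", minAge)
    .ToList();

// Interpolated string; interpolated values become parameters, not inline SQL:
var alsoAdults = db.Customers
    .FromSqlInterpolated($"SELECT * FROM Customers WHERE Age >= {minAge}")
    .ToList();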

See issue #10996 for more details.

Database connection no longer remains open until a TransactionScope is disposed

This makes EF Core work better with database providers that are optimized to work with System.Transactions, like SqlClient and Npgsql.

See this breaking change’s details for more information.

Code Analyzer that detects usage of EF Core internal APIs

EF Core exposes some of its internal APIs in public types. For example, all types under nested EF Core namespaces named Internal are considered internal APIs even if they are technically public types. In the past, this made it easy for application developers and provider writers to unintentionally use internal APIs in their code. With this new analyzer, using EF Core internal APIs produces a warning by default. For example:

EF1001: Microsoft.EntityFrameworkCore.Internal.MethodInfoExtensions is an internal API that supports the Entity Framework Core infrastructure and not subject to the same compatibility standards as public APIs. It may be changed or removed without notice in any release.

If the usage is intentional, the warning can be suppressed like any code analysis warning.

See issue #12104 for more details.

Improved performance of iterating over tracked entities using .Local

We have improved the implementation so that iterating over the contents of .Local is three times faster. You should still consider calling ToObservableCollection() if you are going to iterate multiple times.

See issue #14231 for more details.

The concept of query types has been renamed to entities without keys

Note that this change was included in preview 3 but wasn’t previously announced in this blog.

Query types were introduced in EF Core 2.1 to enable reading data from database tables and views that contained no unique keys. Having query types as a separate concept from entity types has proven to obscure their purpose and what makes them different. It also led us to have to introduce some undesired redundancy and unintended inconsistencies in our APIs. We believe the consolidation of the concepts and eventual removal of the query types API will help reduce the confusion.

See the breaking change details for more information.

New preview of the Cosmos DB provider for EF Core

Although the work is still far from complete, we have been making progress in the provider in successive previews of EF Core 3.0. For example, the new version takes advantage of the new Cosmos DB SDK, enables customizing the names of properties used in the storage, handles value conversions, uses a deterministic approach for key generation, and allows specifying the Azure region to use.

For full details, see the Cosmos DB provider task list on our issue tracker.

What’s next?

So far, in this release we have focused most of our efforts on the improvements that we believe could have the highest impact on existing applications and database providers trying to upgrade from previous versions. Implementing these “breaking changes” and architectural improvements early in the release gives us more time to gather feedback and react to unanticipated issues. It also means that many important features will be completed in later previews.

As detailed in our list of new features, in 3.0 we plan to deliver an improved LINQ implementation, Cosmos DB support, support for C# 8.0 features like nullable reference types and async collections, reverse engineering of database views, property bag entities, and a new version of EF6 that can run on .NET Core.

You can expect at least initial support for most of these features to arrive within the next couple of .NET Core 3.0 preview cycles.

We would like to take this opportunity to provide additional information on our progress towards those goals, and important adjustments we are making moving forward:

EF Core 3.0 will target .NET Standard 2.1

After investigating several options, we have concluded that making EF Core 3.0 target .NET Standard 2.1 is the best path to take advantage of new features in .NET and C#, like integration with IAsyncEnumerable<T>. This is consistent with similar decisions announced last year for ASP.NET Core 3.0 and C# 8.0, and enables us to move forward without introducing compatibility adapters that could hinder our ability to deliver great new functionality on .NET Core down the road.
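As a sketch of the kind of consumption this enables (the Blogs DbSet is hypothetical, and the final API shape may still change before RTM):

await foreach (var blog in db.Blogs.AsAsyncEnumerable())
{
    Console.WriteLine(blog.Url);
}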

Although Preview 4 still targets .NET Standard 2.0, the RTM version of EF Core 3.0 will not be compatible with .NET implementations that only support .NET Standard 2.0, like .NET Framework. Customers that need to run EF Core on .NET Framework should plan to continue using EF Core 2.1 or 2.2.

EF 6.3 will target .NET Standard 2.1, in addition to already supported .NET Framework versions

We found that the API surface offered by .NET Standard 2.1 is adequate for supporting EF6 without removing any runtime features. In addition to .NET Standard 2.1, the EF 6.3 NuGet package will as usual support .NET Framework 4.0 (without async support) and .NET Framework 4.5 and newer. For the time being, we are only going to test and support EF 6.3 running on .NET Framework and .NET Core 3.0, but targeting .NET Standard 2.1 opens the possibility to have EF 6.3 working on other .NET implementations, as long as they fully implement the standard and don’t have other runtime limitations.

Closing

The EF team would like to thank everyone for their feedback and their contributions to this preview. Once more, we encourage you to try out the latest bits and submit any new feedback to our issue tracker.

The post Announcing Entity Framework Core 3.0 Preview 4 appeared first on .NET Blog.

Accelerating Compute-Intensive Workloads with Intel® AVX-512


This guest post was authored by Junfeng Dong, John Morgan, and Li Tian from Intel Corporation.

Introduction

Last year we introduced Intel® Advanced Vector Extensions 512 (Intel® AVX-512) support in Microsoft* Visual Studio* 2017 through this VC++ blog post. In this follow-on post, we cover some examples to give you a taste of how Intel® AVX-512 provides performance benefits. These examples include calculating the average of an array, matrix vector multiplication, and the calculation of the Mandelbrot set. Microsoft Visual Studio 2017 version 15.5 or later (we recommend the latest) is required to compile these demos, and a computer with an Intel® processor which supports the Intel® AVX-512 instruction set is required to run them. See our ARK web site for a list of the processors.

Test System Configuration

Here is the hardware and software configuration we used for compiling and running these examples:

  • Processor: Intel® Core™ i9 7980XE CPU @ 2.60GHz, 18 cores/36 threads
  • Memory: 64GB
  • OS: Windows 10 Enterprise version 10.0.17134.112
  • Power option in Windows: Balanced
  • Microsoft Visual Studio 2017 version 15.8.3

Note: The performance of any modern processor depends on many factors, including the amount and type of memory and storage, the graphics and display system, the particular processor configuration, and how effectively the program takes advantage of it all. As such, the results we obtained may not exactly match results on other configurations.

Sample Walkthrough: Array Average

Let’s walk through a simple example of how to write some code using Intel® AVX-512 intrinsic functions. As the name implies, Intel® Advanced Vector Extensions (Intel® AVX) and Intel® AVX-512 instructions are designed to enhance calculations using computational vectors, which contain several elements of the same data type. We will use these AVX and AVX-512 intrinsic functions to speed up the calculation of the average of an array of floating point numbers.

First we start with a function that calculates the average of array “a” using scalar (non-vector) operations:

static const int length = 1024*8;
static float a[length];
float scalarAverage() {
  float sum = 0.0f;
  for (uint32_t j = 0; j < _countof(a); ++j) {
    sum += a[j];
  }
  return sum / _countof(a);
}

This routine sums all the elements of the array and then divides it by the number of elements. To “vectorize” this calculation we break the array into groups of elements or “vectors” that we can calculate simultaneously. For Intel® AVX we can fit 8 float elements in each vector, as each float is 32 bits and the vector is 256 bits, so 256/32 = 8 float elements. We sum all the groups into a vector of partial sums, and then add the elements of that vector to get the final sum. For simplicity, we assume that the number of elements is a multiple of 8 or 16 for AVX and AVX-512, respectively, in the sample code shown below. If the number of total elements doesn’t cleanly fit into these vectors, you would have to write a special case to handle the remainder.

Here is the new function:

float avxAverage () {
  __m256 sumx8 = _mm256_setzero_ps();
  for (uint32_t j = 0; j < _countof(a); j = j + 8) {
    sumx8 = _mm256_add_ps(sumx8, _mm256_loadu_ps(&(a[j])));
  }
  float sum = sumx8.m256_f32[0] + sumx8.m256_f32[1] +
              sumx8.m256_f32[2] + sumx8.m256_f32[3] +
              sumx8.m256_f32[4] + sumx8.m256_f32[5] +
              sumx8.m256_f32[6] + sumx8.m256_f32[7];
  return sum / _countof(a);
}

Here, _mm256_setzero_ps creates a vector with eight zero values, which is assigned to sumx8. Then, for each set of eight contiguous elements from the array, _mm256_loadu_ps loads them into a 256-bit vector which _mm256_add_ps adds to the corresponding elements in sumx8, making sumx8 a vector of eight subtotals. Finally, these subtotals are added to create the final total.

Compared to the scalar implementation, this single instruction, multiple data (SIMD) implementation executes fewer add instructions. For an array with n elements, a scalar implementation will execute n add instructions, but using Intel® AVX only (n/8 + 7) add instructions are needed. As a result, Intel® AVX can potentially be up to 8X faster than the scalar implementation.

Similarly, here is the code for the Intel® AVX-512 version:

float avx512AverageKernel() {
  __m512 sumx16 = _mm512_setzero_ps();
  for (uint32_t j = 0; j < _countof(a); j = j + 16) {
    sumx16 = _mm512_add_ps(sumx16, _mm512_loadu_ps(&(a[j])));
  }
  float sum = _mm512_reduce_add_ps(sumx16);
  return sum / _countof(a);
}

As you can see, this version is almost identical to the previous function except for the way the final sum is calculated. With 16 elements in the vector, we would need 16 array references and 15 additions to calculate the final sum if we were to do it the same way as the AVX version. Fortunately, AVX-512 provides the _mm512_reduce_add_ps intrinsic function to generate the same result, which makes the code much easier to read. This function adds the first eight elements to the rest, then adds the first four of that vector to the rest, then two and finally sums those to get the total with just four addition instructions. Using Intel® AVX-512 to find the average of an array with n elements requires the execution of (n/16+4) addition instructions which is about half of what was needed for Intel® AVX when n is large. As a result, Intel® AVX-512 can potentially be up to 2x faster than Intel® AVX.

For this example, we used only a few of the most basic Intel® AVX-512 intrinsic functions, but there are hundreds of vector intrinsic functions available which perform various operations on a selection of different data types in 128-bit, 256-bit, and 512-bit vectors with options such as masking and rounding. These functions may use instructions from various Intel® instruction set extensions, including Intel® AVX-512. For example, the intrinsic function _mm512_add_ps() is implemented using the Intel® AVX-512 vaddps instruction. You can use the Intel Software Developer Manuals to learn more about Intel® AVX-512 instructions, and the Intel Intrinsics Guide to find particular intrinsic functions. Click on a function entry in the Intrinsics Guide and it will expand to show more details, such as Synopsis, Description, and Operation.

These functions are declared using the immintrin.h header. Microsoft also offers the intrin.h header, which declares almost all Microsoft Visual C++* (MSVC) intrinsic functions, including the ones from immintrin.h. You can include either of these headers in your source file.

Sample Walkthrough: Matrix Vector Multiplication

Mathematical vector and matrix arithmetic involves lots of multiplications and additions. Consider, for example, the simple multiplication of a matrix and a vector.

A computer can do this simple calculation almost instantly, but multiplying very large vectors and matrices can take hundreds or thousands of multiplications and additions like this. Let’s see how we can “vectorize” those computations to make them faster. Let’s start with a scalar function which multiplies matrix t1 with vector t2:

static float *t1;   // matrix data: row * col elements, assumed allocated and initialized elsewhere
static float *t2;   // vector data: col elements, assumed allocated and initialized elsewhere
static float *out;
static const int row = 16;
static const int col = 4096;
static float *scalarMultiply()
{
  for (uint64_t i = 0; i < row; i++)
  {
    float sum = 0;
    for (uint64_t j = 0; j < col; j++)
      sum = sum + t1[i * col + j] * t2[j];
    out[i] = sum;
  }
  return out;
}

As with the previous example, we “vectorize” this calculation by breaking each row into vectors. For AVX-512, because each float is 32 bits and a vector is 512 bits, a vector can have 512/32 = 16 float elements. Note that this is a computational vector, which is different from the mathematical vector that is being multiplied. For each row in the matrix we load 16 columns as well as the corresponding 16 elements from the vector, multiply them together, and add the products to a 16-element accumulator. When the row is complete, we can sum the accumulator elements to get an element of the result. Note that we can do this with Intel® AVX or Intel® Streaming SIMD Extensions (Intel® SSE) as well, and the maximum vector sizes with those extensions are 8 (256/32) and 4 (128/32) elements respectively.

A version of the multiply routine using Intel® AVX-512 intrinsic functions is shown here:

static float *outx16;
static float *avx512Multiply()
{
  for (uint64_t i = 0; i < row; i++)
  {
    __m512 sumx16 = _mm512_set1_ps(0.0);
    for (uint64_t j = 0; j < col; j += 16)
    {
      __m512 a = _mm512_loadu_ps(&(t1[i * col + j]));
      __m512 b = _mm512_loadu_ps(&(t2[j]));
      sumx16 = _mm512_fmadd_ps(a, b, sumx16);
    }
    outx16[i] = _mm512_reduce_add_ps(sumx16);
  }
  return outx16;
}

You can see that many of the scalar expressions have been replaced by intrinsic function calls that perform the same operation on a vector.

We replace the initialization of sum to zero with a call to the _mm512_set1_ps() function, which creates a vector with 16 zero elements that we assign to sumx16. Inside the inner loop we load 16 elements of t1 and t2 into vector variables a and b respectively using _mm512_loadu_ps(). The _mm512_fmadd_ps() function adds the product of each element in a and b to the corresponding elements in sumx16.

At the end of the inner loop we have 16 partial sums in sumx16 rather than a single sum. To calculate the final result we must add these 16 elements together using the _mm512_reduce_add_ps() function that we used in the array average example.

Floating-Point vs. Real Numbers

At this point we should note that this vectorized version does not calculate exactly the same thing as the scalar version. If we were doing all of this computation using mathematical real numbers it wouldn’t matter what order we add the partial products, but that isn’t true of floating-point values. When floating-point values are added, the precise result may not be representable as a floating-point value. In that case the result must be rounded to one of the two closest values that can be represented. The difference between the precise result and the representable result is the rounding error.

When computing the sum of products like this the calculated result will differ from the precise result by the sum of all the rounding errors. Because the vectorized matrix multiply adds the partial products in a different order from the scalar version the rounding errors for each addition can also be different. Furthermore, the _mm512_fmadd_ps() function does not round the partial products before adding them to the partial sums, so only the addition adds a rounding error. If the rounding errors differ between the scalar and vectorized computations the result may also differ. However, this doesn’t mean that either version is wrong. It just shows the peculiarities of floating-point computation.

Mandelbrot Set

The Mandelbrot set is the set of all complex numbers z for which the sequence defined by the iteration

z(0) = z, z(n+1) = z(n)*z(n) + z, n=0,1,2, …

remains bounded. This means that there is a number B such that the absolute value of all iterates z(n) never gets larger than B. The calculation of the Mandelbrot set is often used to make colorful images of the points that are not in the set where each color indicates how many terms are needed to exceed the bound. It is widely used as a sample to demonstrate vector computation performance. The kernel code of calculating the Mandelbrot set where B is 2.0 is available here.

static int mandel(float c_re, float c_im, int count)
{
  float z_re = c_re, z_im = c_im;
  int i;
  for (i = 0; i < count; ++i)
  {
    if (z_re * z_re + z_im * z_im > 4.f)
      break;
    float new_re = z_re * z_re - z_im * z_im;
    float new_im = 2.f * z_re * z_im;
    z_re = c_re + new_re;
    z_im = c_im + new_im;
  }
  return i;
}

Of course, it is impossible to evaluate every term of an infinite series, so this function evaluates no more than the number of terms specified by count, and we assume that if the series hasn’t diverged by that point, it isn’t going to. The value returned indicates how many terms did not diverge, and it is typically used to select a color for that point. If the function returns count the point is assumed to be in the Mandelbrot set.

We can vectorize this by replacing each scalar operation with a vector equivalent similar to the way we vectorized matrix vector multiplication. But a complication arises with the following source line: “if (z_re * z_re + z_im * z_im > 4.0f) break;”. How do you vectorize a conditional break?

In this instance we know that once the series exceeds the bound all later terms will also exceed it, so we can unconditionally continue calculating all elements until all of them have exceeded the bound or we have reached the iteration limit. We can handle the condition by using a vector comparison to mask the elements that have exceeded the bound and use that to update the results for the remaining elements. Here is the code for a version using Intel® Advanced Vector Extensions 2 (Intel® AVX2) functions.

/* AVX2 Implementation */
__m256i avx2Mandel (__m256 c_re8, __m256 c_im8, uint32_t max_iterations) {
  __m256 z_re8 = c_re8;
  __m256 z_im8 = c_im8;
  __m256 four8 = _mm256_set1_ps(4.0f);
  __m256 two8 = _mm256_set1_ps(2.0f);
  __m256i result = _mm256_set1_epi32(0);
  __m256i one8 = _mm256_set1_epi32(1);
  for (auto i = 0; i < max_iterations; i++) {
    __m256 z_im8sq = _mm256_mul_ps(z_im8, z_im8);
    __m256 z_re8sq = _mm256_mul_ps(z_re8, z_re8);
    __m256 new_im8 = _mm256_mul_ps(z_re8, z_im8);
    __m256 z_abs8sq = _mm256_add_ps(z_re8sq, z_im8sq);
    __m256 new_re8 = _mm256_sub_ps(z_re8sq, z_im8sq);
    __m256 mi8 = _mm256_cmp_ps(z_abs8sq, four8, _CMP_LT_OQ);
    z_im8 = _mm256_fmadd_ps(two8, new_im8, c_im8);
    z_re8 = _mm256_add_ps(new_re8, c_re8);
    int mask = _mm256_movemask_ps(mi8);
    __m256i masked1 = _mm256_and_si256(_mm256_castps_si256(mi8), one8);
    if (0 == mask)
      break;
    result = _mm256_add_epi32(result, masked1);
  }
  return result;
}

The scalar function returns a value that represents how many iterations were calculated before the result diverged, so a vector function must return a vector with those same values. We compute that by first generating a vector mi8 that indicates which elements have not exceeded the bound. Each element of this vector is either all zero bits (if the test condition _CMP_LT_OQ is not true) or all one bits (if it is true). If that vector is all zero, then everything has diverged, and we can break out of the loop. Otherwise the vector value is reinterpreted as a vector of 32-bit integer values by _mm256_castps_si256, and then masked with a vector of 32-bit ones. That leaves us with a one value for every element that has not diverged and zeros for those that have. All that is left is to add that vector to the vector accumulator result.

The Intel® AVX-512 version of this function is similar, with one significant difference. The _mm256_cmp_ps function returns a vector value of type __m256. You might expect to use a _mm512_cmp_ps function that returns a vector of type __m512, but that function does not exist. Instead we use the _mm512_cmp_ps_mask function that returns a value of type __mmask16. This is a 16-bit value, where each bit represents one element of the vector. These values are held in a separate set of eight registers that are used for vectorized conditional execution. Where the Intel® AVX2 function had to calculate the values to be added to result explicitly, Intel® AVX-512 allows the mask to be applied directly to the addition with the _mm512_mask_add_epi32 function.

/* AVX512 Implementation */
__m512i avx512Mandel(__m512 c_re16, __m512 c_im16, uint32_t max_iterations) {
  __m512 z_re16 = c_re16;
  __m512 z_im16 = c_im16;
  __m512 four16 = _mm512_set1_ps(4.0f);
  __m512 two16 = _mm512_set1_ps(2.0f);
  __m512i one16 = _mm512_set1_epi32(1);
  __m512i result = _mm512_setzero_si512();
  for (auto i = 0; i < max_iterations; i++) {
    __m512 z_im16sq = _mm512_mul_ps(z_im16, z_im16);
    __m512 z_re16sq = _mm512_mul_ps(z_re16, z_re16);
    __m512 new_im16 = _mm512_mul_ps(z_re16, z_im16);
    __m512 z_abs16sq = _mm512_add_ps(z_re16sq, z_im16sq);
    __m512 new_re16 = _mm512_sub_ps(z_re16sq, z_im16sq);
    __mmask16 mask = _mm512_cmp_ps_mask(z_abs16sq, four16, _CMP_LT_OQ);
    z_im16 = _mm512_fmadd_ps(two16, new_im16, c_im16);
    z_re16 = _mm512_add_ps(new_re16, c_re16);
    if (0 == mask)
      break;
    result = _mm512_mask_add_epi32(result, mask, result, one16);
  }
  return result;
}

Each of the vectorized Mandelbrot calculations returns a vector instead of a scalar, and the value of each element is the same value that would have been returned by the original scalar function. You may have noticed that the returned value has a different type from the real and imaginary argument vectors. The arguments to the scalar function are type float, and the function returns an unsigned integer. The arguments to the vectorized functions are vector versions of float, and the return value is a vector that can hold 32-bit unsigned integers. If you need to vectorize a function that uses type double, there are vector types for holding elements of that type as well: __m128d, __m256d and __m512d. You may be wondering if there are vector types for other integer types such as signed char and unsigned short. There aren’t. Vectors of type __m128i, __m256i and __m512i are used for all integer elements regardless of size or signedness.

You can also convert or cast (reinterpret) vectors that hold elements of one type into a vector with elements of a different type. In the avx2Mandel function the _mm256_castps_si256 function is used to reinterpret the __m256 result of the comparison function as a __m256i integer element mask for updating the result vector.

Performance of Intel® AVX-512

We measured the run time of the Mandelbrot, matrix vector multiplication, and array average kernel functions with Intel® AVX/AVX2 and Intel® AVX-512 intrinsic functions to compare the performance. The source code is compiled with “/O2”. On our test platform, Mandelbrot with Intel® AVX-512 is 1.77x(1) faster than the Intel® AVX2 version. The sample code is available here. Array average (source code) is 1.91x(1) faster and matrix vector multiplication (source code) is 1.80x(1) faster compared to their AVX2 versions.

We previously stated that the performance achievable by Intel® AVX-512 should approximately double that of Intel® AVX2. We see that we don’t quite reach that number, and there are a few reasons why the speedup might not reach the expected value. One is that only the innermost loop of the calculation is sped up by the larger vector instructions, but the total execution time includes time spent executing outer loops and other overhead that did not speed up. Another potential reason is because the bandwidth of the memory system must be shared between all cores doing calculations, and this is a finite resource. When most of that bandwidth is being used, the processor can’t compute faster than the data becomes available.

Conclusions

We have presented several examples of how to vectorize array average and matrix vector multiplication as well as shown code for calculating the Mandelbrot set using Intel® AVX2 and Intel® AVX-512 functions. This code is a more realistic example of how to use Intel® AVX-512 than the sample code from our prior post. From data collected on our test platform, the Intel® AVX-512 code shows performance improvements between 77% and 91% when compared to Intel® AVX2.

Intel® AVX-512 fully utilizes Intel® hardware capabilities to improve performance by doubling the data that can be processed with a single instruction compared to Intel® AVX2. This capability can be used in artificial intelligence, deep learning, scientific simulations, financial analytics, 3D modeling, image/audio/video processing, and data compression. Use Intel® AVX-512 and unlock your application’s full potential.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development.  All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting https://www.intel.com/content/www/us/en/design/resource-design-center.html.

Intel, the Intel logo, Core are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others

© Intel Corporation.

§ (1) As provided in this document
§ Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
§ Configurations: Ran sample code on an Intel® desktop powered by Intel® Core™ i9 7980XE CPU @ 2.60GHz with 64 GB RAM running Windows 10 Enterprise version 10.0.17134.112
§ For more information go to  https://www.intel.com/content/www/us/en/benchmarks/benchmark.html

 

The post Accelerating Compute-Intensive Workloads with Intel® AVX-512 appeared first on C++ Team Blog.

Build Visual Studio templates with tags, for efficient user search and grouping


Visual Studio’s project templates enable you, the developer, to create multiple similar projects more efficiently by defining a common set of starter files. The project templates can be fully customized to meet the needs of a development team, or a group, and can be published to the Visual Studio Marketplace for others to download and use too! Once published, developers can install and access the template through Visual Studio’s New Project Dialog.

The newly designed New Project Dialog for Visual Studio 2019 was built to help developers get to their code faster. Using a search and filter focused experience, we are aiming to provide better discoverability for specific templates to start your application development. 

 

In this walkthrough, you will learn to 

  • Create a project template 
  • Add tags or filters to the project template
  • Deploy the template as an extension using the VSIX project template 

Before getting started, please make sure you have installed Visual Studio 2019 with Visual Studio SDK. 

 

Creating a project template 

There are a few ways you can create a project template, but in this walkthrough, we will create a C# project template using the New Project Dialog. 

  1. In Visual Studio, launch the New Project Dialog via File > New > Project
    (or use the keyboard shortcut CTRL + SHIFT + N).
  2. Filter the list by Project type: Extensions and select C# Project Template.
  3. Click Next then modify the Project name field and click Create.

 

Adding tags / filters to your project template 

Once you’ve created a project template, you can add tags or filters to it in the template’s .vstemplate XML file. 

  1. Add Visual Studio’s built-in tags as well as any custom tags to your project template using the <LanguageTag>, <PlatformTag>, and <ProjectTypeTag> elements under <TemplateData>. For example, see the sketch after this list.
  2. Save and close the .vstemplate XML file.
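
Below is a hypothetical sketch of what the <TemplateData> section of a .vstemplate file might look like with tags added. The three tag element names come from the step above; the tag values and surrounding metadata here are placeholders, so substitute your own:

<VSTemplate Version="3.0.0" Type="Project" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <Name>My Project Template</Name>
    <Description>A starter template with search tags.</Description>
    <ProjectType>CSharp</ProjectType>
    <!-- Tags that drive search and filtering in the New Project Dialog -->
    <LanguageTag>csharp</LanguageTag>
    <PlatformTag>windows</PlatformTag>
    <ProjectTypeTag>console</ProjectTypeTag>
  </TemplateData>
  <!-- TemplateContent omitted for brevity -->
</VSTemplate>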

 

Deploying the template as an extension using the VSIX project template 

Wrap your project template in a VSIX project template to deploy your template as an extension. 

  1. Create an Empty VSIX Project in the Solution created for the C# project template above.
    1. In the Solution Explorer, right click on the Solution and select Add > New Project.
    2. Type “vsix” in the search box and select Empty VSIX Project for C# (or VSIX Project if you are using earlier versions of Visual Studio).
    3. Click Next then modify the Project name field and click Create.
  2. Set the VSIX Project as a startup project.
    In the Solution Explorer, right click on the VSIX project and select Set as StartUp Project. Your Solution Explorer should now look something like this (with your VSIX project bolded):
  3. Add your project template as an asset to the VSIX project.
    1. Click on the Assets tab and select the New button.
    2. Set the Type field as Microsoft.VisualStudio.ProjectTemplate.
    3. Set the Source field as A project in current solution.
    4. Set the Project field as your project template.
    5. Click OK, then save and close the source.extension.vsixmanifest file.
  4. Run your code without invoking the debugger (CTRL + F5) 

That’s it! Your new project template will appear in the New Project dialog with the tags under your template’s description and filters enabled by those tags. You can also take it a step further and easily publish your project template to the Visual Studio Marketplace (and while you’re at it, also try out the little great things about Visual Studio 2019 and please let us know what you think)! Here is an example of one in an existing extension: Textmate Grammar Template.

Have suggestions? 

We are continuing to work on our tools and to do that, we could use your help! Please share your feedback/comments below, or through the Visual Studio Developer Community, or tweet at our team @VisualStudio. 

The post Build Visual Studio templates with tags, for efficient user search and grouping appeared first on The Visual Studio Blog.

C++17/20 Features and Fixes in Visual Studio 2019


Visual Studio 2019 version 16.0 is now available and is binary compatible with VS 2015/2017. In this first release of VS 2019, we’ve implemented more compiler and library features from the C++20 Working Paper, implemented more <charconv> overloads (C++17’s “final boss”), and fixed many correctness, performance, and throughput issues. Here’s a list of the C++17/20 compiler/library feature work and the library fixes. (As usual, many compiler bugs were also fixed, but they aren’t listed here; compiler fixes tend to be specific to certain arcane code patterns. We recently blogged about compiler optimization and build throughput improvements in VS 2019, and we maintain a documentation page about compiler conformance improvements in VS 2019.)

New Features:

  • Implemented P1164 from C++20 unconditionally. This changes std::filesystem::create_directory to check whether the target was already a directory on failure. Previously, all ERROR_ALREADY_EXISTS type errors were turned into success-but-directory-not-created codes.
  • The iterator debugging feature has been taught to properly unwrap std::move_iterator. For example, std::copy(std::move_iterator<std::vector<int>::iterator>, std::move_iterator<std::vector<int>::iterator>, int*) can now engage our memcpy fast path.
  • The standard library’s macroized keyword enforcement <xkeycheck.h> was fixed to emit the actual problem keyword detected rather than a generic message, look for C++20 keywords, and avoid tricking IntelliSense into saying random keywords were macros.
  • We added LWG 2221‘s operator<<(std::ostream, nullptr_t) for writing nullptrs to streams.
  • Parallel versions of is_sorted, is_sorted_until, is_partitioned, set_difference, set_intersection, is_heap, and is_heap_until were implemented by Miya Natsuhara.
  • P0883 “Fixing atomic initialization”, which changes std::atomic to value-initialize the contained T rather than default-initializing it, was implemented (also by Miya) when using Clang/LLVM with our standard library. This is currently disabled for C1XX as a workaround for a bug in constexpr processing.
  • Implemented the “spaceship” three-way comparison operator from P0515 “Consistent comparison”, with partial support for the C++20 <compare> header as specified in P0768 (specifically, the comparison category types and common_comparison_category type trait, but not the comparison algorithms which are undergoing some redesign in WG21). (Implemented by Cameron DaCamara in the compiler.)
  • Implemented the new C++20 P1008 rules for aggregates: a type with a user-declared constructor – even when defaulted or deleted so as not to be user-provided – is not an aggregate. (Implemented by Andrew Marino in the compiler.)
  • Implemented the remove_cvref and remove_cvref_t type traits from P0550, which are handy for stripping reference-ness and cv-qualification but without decaying functions and arrays to pointers (which std::decay and std::decay_t do).
  • C++17 <charconv> floating-point to_chars() has been improved: shortest chars_format::fixed is 60-80% faster (thanks to Ulf Adams at Google for suggesting long division), and shortest/precision chars_format::hex is complete (a minimal usage sketch follows this list). Further performance improvements for shortest fixed notation have been implemented and will ship in a future VS 2019 update, along with the decimal precision overloads that will complete the <charconv> implementation.
  • C++20 P0941 feature-test macros are now completely supported in the compiler and STL, including __has_cpp_attribute implemented by Phil Christensen. As a reminder, the feature-test macros are always active (i.e. defined or not defined, depending on the availability of the feature in question) regardless of the Standard mode option selected, because making them conditional on /std:c++latest would largely defeat their purpose.
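
For readers who haven’t used <charconv> yet, here is a minimal usage sketch of the floating-point to_chars() mentioned above. This is illustrative only, not code from this release:

#include <charconv>
#include <cstdio>

int main() {
    char buf[64];
    // Convert a double to its shortest fixed-notation representation.
    const std::to_chars_result res =
        std::to_chars(buf, buf + sizeof(buf), 3.14159, std::chars_format::fixed);
    if (res.ec == std::errc{}) {  // success
        *res.ptr = '\0';          // to_chars does not null-terminate
        std::puts(buf);           // prints 3.14159
    }
}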

Correctness Fixes:

  • std::allocator<void>, std::allocator::size_type, and std::allocator::difference_type have been un-deprecated.
  • A spurious static_cast not called for by the standard that accidentally suppressed C4244 narrowing warnings was removed from std::string. Attempting to call std::string::string(const wchar_t*, const wchar_t*) will now properly emit C4244 “narrowing a wchar_t into a char.”
  • Fixed std::filesystem::last_write_time failing when attempting to change a directory’s last write time.
  • std::filesystem::directory_entry‘s constructor was changed to store a failed result, rather than throwing an exception, when supplied a nonexistent target path.
  • std::filesystem::create_directory‘s 2-parameter version was changed to call the 1-parameter version, as the underlying CreateDirectoryExW function would perform copy_symlink when the existing_p was a symlink.
  • std::filesystem::directory_iterator no longer fails when encountering a broken symlink.
  • std::filesystem::space now accepts relative paths.
  • std::filesystem::path::lexically_relative is no longer confused by trailing slashes, reported as LWG 3096.
  • Worked around CreateSymbolicLinkW rejecting paths with forward slashes in std::filesystem::create_symlink.
  • Worked around the POSIX deletion mode delete function existing on Windows 10 LTSB 1609 but not actually being capable of deleting files.
  • std::boyer_moore_searcher and std::boyer_moore_horspool_searcher‘s copy constructors and copy assignment operators now actually copy things.
  • The parallel algorithms library now properly uses the real WaitOnAddress family on Windows 8 and later, rather than always using the Windows 7 and earlier fake versions.
  • std::system_category::message() now trims trailing whitespace from the returned message.
  • Some conditions that would cause std::linear_congruential_engine to trigger divide by 0 have been fixed.
  • The iterator unwrapping machinery we first exposed for programmer-user integration in VS 2017 15.8 (as described in https://devblogs.microsoft.com/cppblog/stl-features-and-fixes-in-vs-2017-15-8/ ) no longer unwraps iterators derived from standard library iterators. For example, a user that derives from std::vector<int>::iterator and tries to customize behavior now gets their customized behavior when calling standard library algorithms, rather than the behavior of a pointer.
  • The unordered container reserve function now actually reserves for N elements, as described in LWG 2156.
  • Many STL internal container functions have been made private for an improved IntelliSense experience. Additional fixes to mark members as private are expected in subsequent releases of Visual C++.
  • Times passed to the concurrency library that would overflow (e.g. condition_variable::wait_for(seconds::max())) are now properly dealt with instead of causing overflows that changed behavior on a seemingly random 29-day cycle (when uint32_t milliseconds accepted by underlying Win32 APIs overflowed).
  • Exception safety correctness problems wherein the node-based containers like list, map, and unordered_map would become corrupted were fixed. During a propagate_on_container_copy_assignment or propagate_on_container_move_assignment reassignment operation, we would free the container’s sentinel node with the old allocator, do the POCCA/POCMA assignment over the old allocator, and then try to acquire the sentinel node from the new allocator. If this allocation failed, the container is corrupted and can’t even be destroyed, as owning a sentinel node is a hard data structure invariant. This was fixed to allocate the new sentinel node from the source container’s allocator before destroying the existing sentinel node.
  • The containers were fixed to always copy/move/swap allocators according to propagate_on_container_copy_assignment, propagate_on_container_move_assignment, and propagate_on_container_swap, even for allocators declared is_always_equal.
  • std::basic_istream::read was fixed to not write into parts of the supplied buffer temporarily as part of \r\n => \n processing. This gives up some of the performance advantage we gained in VS 2017 15.8 for reads larger than 4k in size, but efficiency improvements from avoiding 3 virtual calls per character are still present.
  • std::bitset‘s constructor no longer reads the ones and zeroes in reverse order for large bitsets.
  • When implementing P0083 “Splicing Maps And Sets”, we managed to overlook the fact that the merge and extract members of the associative containers should have overloads that accept rvalue containers in addition to the overloads that accept lvalue containers. We’ve rectified this oversight by implementing the rvalue overloads.
  • With the advent of the Just My Code stepping feature, we no longer need to provide bespoke machinery for std::function and std::visit to achieve the same effect. Removing that machinery largely has no user-visible effects, except that the compiler will no longer produce diagnostics that indicate issues on line 15732480 or 16707566 of <type_traits> or <variant>.
  • The <ctime> header now correctly declares timespec and timespec_get in namespace std in addition to declaring them in the global namespace.
  • We’ve fixed a regression in pair‘s assignment operator introduced when implementing LWG 2729 “Missing SFINAE on std::pair::operator=“; it now correctly accepts types convertible to pair again.
  • Fixed a minor type traits bug, where add_const_t etc. is supposed to be a non-deduced context (i.e. it needs to be an alias for typename add_const<T>::type, not const T).

Header Inclusion Restructuring:

The standard library’s physical design was substantially overhauled to avoid including headers when they are not necessary. A large number of customers want to use standard library containers but don’t want to use the iostreams and locales. However, the C++ standard library has a circular dependency among components:

  • Systems that depend on <locale> facets want to use std::string as part of their underlying implementations.
  • std::string wants a stream insertion operator, which depends on std::ostream, which depends on <locale>.
Historically our standard library worked around this problem by introducing a lower level header <xstring>, which defined std::string, but none of the other contents of <string>. <xstring> would be included by both <locale> components, and by <string>, restoring the directed dependency graph. However, this approach had a number of problems:
  • #include <string> needed to drag in all of the iostreams machinery to provide the stream insertion operator, even though most translation units wouldn’t use the stream insertion operator.
  • If someone included only <ostream> they got std::basic_string and std::ostream, but they did not get std::basic_string‘s stream insertion operator, the std::string typedef, or the string literals. Customers found this extremely confusing. For example, if one tried to stream insert a std::basic_string after including only <ostream>, the compiler would print an incredibly long diagnostic saying operator<< couldn’t be found, listing 26 unrelated overloads. Also, attempts to use std::string_literals, std::to_string, or other <string> components, would fail, which is confusing when std::basic_string was otherwise available.

In VS 2019, we resolve the circular reference completely differently. The stream insertion operator now finds the necessary ostream components using argument-dependent lookup, allowing us to provide it in the same place as string. This restores appropriate layering (of std::string below <locale> components), and makes it possible to use <string> without dragging in the entire mass of iostreams machinery.

If you have lots of .cpp files that include string and do something simple, for example:

#include <stdio.h>
#include <string>

void f(const std::string& s) {
  puts(s.c_str());
}

In VS 2017 15.9 this program takes 244 milliseconds to compile on a 7980XE test machine (average of 5 runs), while in VS 2019 16.0 it takes only 178 milliseconds (or about 73% of the time).

Moreover, seemingly unrelated headers like <vector> were pulled into this mess. For example, vector wants to throw std::out_of_range, which derives from std::runtime_error, which has a constructor that takes a std::string. We already had out-of-line functions for all throw sites, so the spurious include of <stdexcept> in <vector> was unnecessary and has been removed. The following program used to take 177 milliseconds to compile in VS 2017 15.9, but now only needs 151 milliseconds (85% of the time):

#include <vector>

void f(std::vector<int>& v) {
  v.push_back(42);
}

The one downside of this change is that several programs that were getting away with not including the correct headers may need to add #includes. If you were saying std::out_of_range before, you may need to #include <stdexcept>. If you were using a stream insertion operator, you may now need to #include <ostream>. This way, only translation units actually using <stdexcept> or <ostream> components pay the throughput cost to compile them.
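
As a hypothetical illustration of the kind of one-line fix this may require (the function here is invented for the example):

#include <cstddef>
#include <stdexcept>  // now needed explicitly; <vector> no longer drags it in
#include <vector>

int at_or_throw(const std::vector<int>& v, std::size_t i) {
    if (i >= v.size())
        throw std::out_of_range("index out of range");  // requires <stdexcept> in VS 2019
    return v[i];
}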

Performance and Throughput Improvements:

  • if constexpr was applied in more places in the standard library for improved throughput and reduced code size in the copy family, permutations like reverse and rotate, and in the parallel algorithms library.
  • The STL now internally uses if constexpr to reduce compile times even in C++14 mode.
  • The runtime dynamic linking detection for the parallel algorithms library no longer uses an entire page to store the function pointer array, as marking this memory read-only was deemed no longer relevant for security purposes.
  • std::thread‘s constructor no longer waits for the thread to start, and no longer inserts so many layers of function calls between the underlying C library _beginthreadex and the supplied callable object. Previously std::thread put 6 functions between _beginthreadex and the supplied callable object, which has been reduced to only 3 (2 of which are just std::invoke). This also resolves an obscure timing bug where std::thread‘s constructor would hang if the system clock changed at the exact moment a std::thread was being created.
  • Fixed a performance regression in std::hash that we introduced when implementing std::hash<std::filesystem::path>.
  • Several places the standard library used to achieve correctness with catch blocks now use destructors instead. This results in better debugger interaction — exceptions you throw through the standard library in the affected locations will now show up as being thrown from their original throw site, rather than our rethrow. Not all standard library catch blocks have been eliminated; we expect the number of catch blocks to be reduced in subsequent releases of Visual C++.
  • Suboptimal codegen in std::bitset caused by a conditional throw inside a noexcept function was fixed by factoring out the throwing path.
  • The std::list and std::unordered_meow family use non-debugging iterators internally in more places.
  • Several std::list members were changed to reuse list nodes where possible rather than deallocating and reallocating them. For example, given a list<int> that already has a size of 3, a call to assign(4, 1729) will now overwrite the ints in the first 3 list nodes, and allocate one new list node with the value 1729, rather than deallocating all 3 list nodes and then allocating 4 new list nodes with the value 1729.
  • All locations the standard library was calling erase(begin(), end()) were changed to call clear() instead.
  • std::vector now initializes and erases elements more efficiently in certain cases.
  • <variant> has been refactored to make it more optimizer-friendly, resulting in smaller and faster generated code. Most notably, std::visit and the inliner have now become good friends.
  • We’ve applied clang-format to the STL’s headers for improved readability. (There were additional manual changes, e.g. adding braces to all control flow.)

Reporting Bugs:

Please let us know what you think about VS 2019. You can report bugs via the IDE’s Report A Problem and also via the web, at the Developer Community’s C++ tab.

Billy O’Neal, Casey Carter, and Stephan T. Lavavej

The post C++17/20 Features and Fixes in Visual Studio 2019 appeared first on C++ Team Blog.

Top Stories from the Microsoft DevOps Community – 2019.04.19


It’s a gorgeous long weekend here in England – it’s warm and the sun is shining, two things that don’t happen here all the time. I’m heading outside to enjoy it, and I hope you’ve got some nice weather, too. But if you’re stuck with the dreary end of winter, then I’ll leave you with some great articles from the community to keep you busy.

Publishing Static Content to Azure Blob Storage
It’s easy to create a powerful and scalable static site by putting Azure CDN in front of Azure Blob Storage. But how do you deploy that site? You could use AzCopy, but Jason N. Gaylord shows you a better way using Azure Pipelines.

What’s new in NDepend
The powerful NDepend analysis tool for .NET projects has long had integration into Visual Studio. But now they’ve added Azure DevOps integration: you can run NDepend as part of your Azure Pipelines build and show the results right in the dashboard. You can find the extension in the marketplace!

Why I don’t like PowerShell: the search for maintainable code
I love PowerShell, but Wouter de Kort isn’t convinced yet. He’s just joined a project that has sloppy PowerShell, and he’s on a mission to transform it into a modern software project. He’s refactored it, added unit tests and – of course – an Azure Pipelines build.

Terraform deployment with Azure DevOps
One of the cool things about IaaS automation is that you can stand up entire environments programmatically, giving you a safe and predictable way to build out your infrastructure. Florent Appointaire’s two-part series shows you how to use Azure Pipelines to automate Terraform deployments.

The ObjectSharp Podcast: Azure DevOps and Agile Tooling
Martin Woodward from the Azure DevOps team joins the ObjectSharp podcast and talks about his history at Microsoft, how Azure DevOps has become a suite of independent products, and the tools that development teams are using in their DevOps transformation.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

The post Top Stories from the Microsoft DevOps Community – 2019.04.19 appeared first on Azure DevOps Blog.

Upcoming Updates for .NET Framework 4.8



The .NET Framework 4.8 product release is now available. The .NET Framework 4.8 product will receive updates on the same cadence and the usual channels (Windows Update, WSUS, Catalog) as all .NET Framework and Windows cumulative updates. For Windows 10, .NET Framework 4.8 updates will now be delivered as independent updates, alongside the Windows cumulative updates.  In this post, I explain how updates to .NET Framework 4.8 are delivered so you are ready for them.

 

What is new about .NET Framework 4.8 updates?

Updates for .NET Framework 4.8 on Windows 10 versions 1607, 1703, 1709, 1803 and Server 2016 will now be delivered independently and side by side with Windows cumulative updates. The way we update .NET Framework in Windows 10 version 1809 and Server 2019 is not changing and is described below.

 

What does not change?

Updates to .NET Framework 4.8 on Windows 8.1, Server 2012 R2, Server 2012, Windows 7 SP1, and Server 2008 R2 SP1 will continue to be delivered outside of the Windows cumulative update through the existing .NET Framework updates.

The update for .NET Framework 4.8 will be included into the .NET Framework Rollup, for each operating system. Likewise, for Windows 10 version 1809, Server 2019, and future releases, .NET Framework 4.8 will be updated as part of the existing .NET Framework updates.

The delivery of updates to all previous versions of .NET Framework (i.e. .NET 4.7.2, 4.7.1, 4.7, 4.6.2, 4.6.1, 4.6, 4.5.2, and 3.5) across all supported operating systems does not change. There is no change in the way you acquire or install these updates.

 

What should I expect?

You can expect the following experiences:

Windows update customers:

  • If you rely on Windows Update to keep your machine up to date and have automatic updates enabled, you will not notice any difference.  Updates for both Windows and the .NET Framework 4.8 and previous versions will be silently installed, and as usual you may be prompted for a reboot after installation.
  • If you manage Windows Update manually, you will notice that updates for .NET Framework 4.8 will be available alongside Windows cumulative updates. Please continue to apply the latest updates to keep your system up to date.

System and IT Administrators:

  • System administrators relying on Windows Server Update Services (WSUS) and similar update management applications will observe a new update for .NET Framework 4.8 when checking for updates applicable to Windows 10 versions 1607, 1703, 1709, 1803 and Server 2016. For all other operating systems the update for .NET Framework 4.8 will continue to be included into the existing .NET Framework update.
  • The Classifications for Updates for .NET Framework 4.8 remain the same as for the Cumulative Update for Windows and continue to show under the “Windows” product category. Updates that deliver new security content will have the “Security Updates” classification, and updates that solely carry new quality updates will have either the “Updates” or “Critical Updates” classification, depending on their criticality.
  • System administrators that rely on the Microsoft Update Catalog will be able to access updates for .NET Framework 4.8 by searching for each release’s Knowledge Base (KB) update number.
  • You can use the update title to filter between the Windows Cumulative updates and .NET Framework updates. All other update artifacts remain the same.

 

.NET Update release vehicles across operating systems

Updates for .NET Framework 4.8 are delivered as described in the table below.

Frequently Asked Questions (FAQ)

  • What is the .NET Framework 4.8 product?

More information about the .NET Framework 4.8 product here: https://devblogs.microsoft.com/dotnet/announcing-the-net-framework-4-8/

  • If I don’t upgrade to .NET Framework 4.8 will anything change for how I receive Windows or .NET updates?

No. Updates for previous versions of .NET Framework and for Windows operating systems components remain the same.

  • Why are updates to all .NET Framework versions not delivered through a consistent single rollup vehicle across operating systems?

We aim to keep the .NET Framework update experience as smooth and consistent as possible across supported operating systems. Specifically for Windows 10 versions 1607, 1703, 1709, 1803 and Server 2016 systems (where .NET rollup updates did not exist), we chose to introduce the least possible change, and leave the experience untouched for updating previous versions of .NET Framework (i.e. .NET 4.7.2 and below). For .NET Framework 4.8 we are able to offer the same agility and flexibility as described in our recent post announcing cumulative updates for .NET Framework for Windows 10 October 2018 update across all operating systems.

  • I am an IT Administrator managing updates for my organization, how do I ensure that my deployments include all existing versions of .NET Framework?

As noted above, continue to rely on the same mechanisms for Windows and .NET Framework updates. Ensure that within your WSUS, SCCM or similar environment, you select updates that correspond to the “Windows” product, and continue to rely on the Classifications categories to select all applicable updates that align with your organization’s update criteria for security and non-security content. This will ensure you continue to receive updates for all .NET Framework versions.

  • I rely on downloading updates from the Microsoft Update Catalog to support my organization’s internet-disconnected scenarios. Do I need to do anything differently to update systems with .NET Framework 4.8?

Whether you depend directly on the Microsoft Update Catalog website or import updates from Catalog into your managed environments (e.g. WSUS, or SCCM), please continue to rely on the Knowledge Base (KB) number lookup functionality to access .NET Framework updates. For operating systems where .NET Framework rollup updates already existed, continue to search for and download the KBs for each target operating system. On operating systems where .NET Framework rollups did not previously exist (i.e. Windows 10 versions 1607, 1703, 1709, 1803 and Server 2016), search for the corresponding KB numbers that are specific to updates for .NET Framework 4.8.

  • Does anything change about the way updates to .NET Framework 3.5 get delivered once I upgrade to .NET Framework 4.8?

The .NET Framework 3.5 will continue to be delivered the same way (refer to the “.NET updates across Windows versions” table above).

For Windows 8.1 and previous operating systems, .NET Framework 3.5 updates are included in the .NET Framework Rollup.

For Windows 10 versions 1507, 1607, Server 2016, 1703, 1709, 1803, .NET Framework 3.5 updates are included in the Windows Cumulative updates.

For Windows 10 version 1809, Server 2019 and future versions, .NET Framework 3.5 updates are included in the .NET Framework Cumulative Update.

Please continue to install both .NET Framework 4.8 and Windows cumulative updates to be up to date for all .NET Framework versions.

The post Upcoming Updates for .NET Framework 4.8 appeared first on .NET Blog.


Azure.Source – Volume 79


Preview | Generally available | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries

 

Now in preview

Azure Container Registry now supports Singularity Image Format containers

We announced public preview support for storing Singularity Image Files (SIF) in Azure Container Registry, based on the OCI Distribution specification for container registries. The Singularity project defines a new secure SIF file format which enables untrusted users to run untrusted containers in a trusted way. The work done in collaboration with Sylabs enables customers using Singularity to leverage their investments in Azure Container Registry and other OCI compliant registries, without having to run and maintain another SIF distribution library.

Move your data from AWS S3 to Azure Storage using AzCopy

AzCopy v10 (Preview) now supports Amazon Web Services (AWS) S3 as a data source. Copy an entire AWS S3 bucket, or even multiple buckets, to Azure Blob Storage using AzCopy. Previously, if you wanted to migrate your data from AWS S3 to Azure Blob Storage, you had to stand up a client between the cloud providers to read the data from AWS and then put it in Azure Storage. We addressed this issue in the latest release of AzCopy using a scale-out technique, thanks to the new Blob API.

Animated GIF showing AzCopy moving a 50GB file
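
As a rough sketch, a single command along these lines performs the copy (the bucket, account, and container names are placeholders; AWS credentials are supplied through environment variables and the destination is authorized with a SAS token, per the AzCopy documentation):

azcopy copy "https://s3.amazonaws.com/mybucket/" "https://myaccount.blob.core.windows.net/mycontainer?<SAS>" --recursive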

Also in preview

 

Now generally available

Announcing general availability of Apache Hadoop 3.0 on Azure HDInsight

We announced the general availability of Apache Hadoop 3.0 on Azure HDInsight. Microsoft Azure is the first cloud provider to offer customers the benefit of the latest innovations in the most popular open source analytics projects, with unmatched scalability, flexibility, and security. With the general availability of Apache Hadoop 3.0 on Azure HDInsight, we are building upon existing capabilities with a number of key enhancements that further improve performance and security, and deepen support for the rich ecosystem of big data analytics applications.

Manage Azure HDInsight clusters using .NET, Python, or Java

We announced the general availability of the new Azure HDInsight management SDKs for .NET, Python, and Java. Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others.

Also generally available

 

News & updates

Announcing Azure Government Secret private preview and expansion of DoD IL5

We announced a significant milestone in serving our mission customers from cloud to edge with the initial availability of two new Azure Government Secret regions, now in private preview and pending accreditation. In addition, we expanded the scope of all Azure Government regions to enable DoD Impact Level 5 (IL5) data, providing a cost-effective option for L5 workloads with a broad range of available services.


Microsoft open sources Data Accelerator, an easy-to-configure pipeline for streaming at scale

We announced that an internal Microsoft project known as Data Accelerator is now being open sourced. Data Accelerator for Apache Spark simplifies streaming big data using Spark. Data Accelerator has been used for two years within Microsoft for processing streamed data across many internal deployments handling data volumes at Microsoft scale. Offering an easy to use platform to learn and evaluate your streaming needs and requirements, we are excited to share this project with the wider community as open source.

Microsoft driving standards for the token economy with the Token Taxonomy Framework

We announced that the Token Taxonomy Initiative (TTI) is a milestone in the maturity of the blockchain industry, which brings together some of the most important blockchain platforms from the Ethereum ecosystem, Hyperledger and IBM, Intel, R3, and Digital Asset in a joint effort to establish a common taxonomy for tokens.

New Bot Framework v4 Template for QnA Maker

The QnA Maker service lets you easily create and manage a knowledge base from your data, including FAQ pages, support URLs, PDFs, and doc files. You can test and publish your knowledge base and then connect it to a bot using a bot framework sample or template. With this update we have simplified the bot creation process by allowing you to easily create a bot from your knowledge base, without the need for any code or settings changes.

Screenshot of QnA Maker help bot on the QnA Maker page

Azure Updates

Learn about important Azure product updates, roadmap, and announcements. Subscribe to notifications to stay informed.

Technical content

Rewrite HTTP headers with Azure Application Gateway

We are pleased to share the capability to rewrite HTTP headers in Azure Application Gateway. With this, you can add, remove, or update HTTP request and response headers while the request and response packets move between the client and backend application. You can also add conditions to ensure that the headers you specify are rewritten only when the conditions are met. The capability also supports several server variables which help store additional information about the requests and responses, thereby enabling you to make powerful rewrite rules.

Machine Learning powered detections with Kusto query language in Azure Sentinel

As cyberattacks become more complex and harder to detect, the traditional correlation rules of a SIEM are not enough: they lack the full context of the attack and can only detect attacks that have been seen before. This can result in false negatives and gaps in the environment. In addition, correlation rules require significant maintenance and customization, since they may produce different results depending on the customer environment. Advanced machine learning capabilities built into Azure Sentinel can detect indicative behaviors of a threat and help security analysts learn the expected behavior in their enterprise. Here you will see three examples.

Screenshot of finding firewall traffic anomalies using Azure Monitor Logs query

.NET application migration using Azure App Services and Azure Container Services

Designed for developers and solution architects who need to understand how to move business critical apps to the cloud, this online workshop series gets you hands-on with a proven process for migrating an existing ASP.NET based application to a container-based application. Join us live for 90 minutes on Wednesdays and Fridays through May 3 to get expert guidance and to get your questions answered. At the end of this series you will have a good understanding of container concepts, Docker architecture and operations, Azure Container Services, Azure Kubernetes Service, and Azure SQL PaaS solutions.

Automated Machine Learning: how do teams work together on an AutoML project?

In this article from Medium, the author shows you an automated machine learning use case (published on GitHub) and, specifically, how a data scientist, a project manager, and a business lead can use automated machine learning to improve team collaboration and learning, and facilitate the successful implementation of data science initiatives.

Uploading your JSON data to Azure Cosmos DB for MongoDB API

If you have built an application and are currently storing the data in a static JSON file, you may want to consider the MongoDB API for Microsoft Azure Cosmos DB. You will have the document data storage you require for your application with the full management of Microsoft Azure with Cosmos DB along with the ability to scale out globally. This will permit you to create replication to regions where your customers are.

Search Like a Boss with Azure Graph Query

Frank Boucher shows how to install the Azure Graph Query extension, explains why you should definitely care about it, and runs a few simple queries across multiple Azure subscriptions.


Securing IoT Data Capture at its Source

What happens when devices only require your organization’s network for connectivity to pass through data or accept commands? Do those attempting to access the IoT devices only access the IoT devices, or do they attempt to access other parts of the network now connected to the newly installed IoT device? Enter the new realm of Shadow IT, in which “off-the-shelf” IoT devices are connected to company networks at the request of the business, without understanding the risks or notifying those who govern the networks themselves: the IT professionals.

How to develop an IoT strategy that yields desired ROI

In an earlier post, we discussed why and how to get started with IoT, recommending that companies shift their mindset, develop a business case, secure ongoing executive sponsorship and budget, and seize the early-mover advantage. This post covers the six elements of crafting an IoT strategy that will yield ongoing ROI.

Azure shows

Episode 275 - Azure Foundations | The Azure Podcast

Derek Martin, a Technology Solutions Principal (TSP) at Microsoft, talks about his approach to ensuring that customers get the foundational elements of Azure in place first before deploying anything else. He discusses why Microsoft is getting more opinionated, as a company, when advocating for best practices.

Getting started with Azure App Configuration | Azure Friday

Azure App Configuration is a service that enables you to centralize your application configuration. Built on the simple concept of key-value pairs, this service provides manageability, availability, and ease-of-use. You can use Azure App Configuration to store and retrieve settings for applications, microservices, platforms, and CI/CD pipelines.

Real-time ML Based Anomaly Detection in Azure Stream Analytics | Internet of Things Show

Azure Stream Analytics is a PaaS cloud offering on Microsoft Azure to help customers analyze IoT telemetry data in real-time. Stream Analytics now has embedded ML models for Anomaly Detection, which can be invoked with simple function calls. Learn how you can leverage this powerful feature set for your scenarios.

Using Ethereum Logic Apps to publish ledger data to Azure Search | Block Talk

In this episode we use the Ethereum Logic App connector to push contract data into an Azure Search index. This makes contract data available to a wide range of Enterprise applications via simple search queries.

DevOps for ASP.NET Developers Pt. 1 - What is DevOps? | On .NET

DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. Azure DevOps is everything you need to turn an idea into a working piece of software. In this first episode of the DevOps for ASP.NET Developers series, Abel and Jeremy introduce us to the benefits of DevOps.

 

How to deploy monitored Azure App Services with Azure DevOps | Azure Makers Series

Learn to use Azure DevOps to configure continuous build and release for your web apps. With Application Insights, you'll even be able to monitor everything in real-time—from IDE all the way to production.


How to use Azure Resource Manager | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how to use Azure Resource Manager templates to describe your infrastructure and deploy it.


How to browse your resources in the Azure Portal | Azure Portal Series

The Azure Portal enables you to view and navigate to all your resources more easily. In this video, learn how to go through your Azure resources across locations and subscriptions and customize your views.


How to export your resources to CSV using the Azure Portal | Azure Portal Series

The Azure Portal enables you to customize the information you'd like to export. In this video, learn how easy it is to export your files to CSV.


Udi Dahan on Microservices | Azure DevOps Podcast

This week Udi Dahan, founder of NServiceBus, CEO of Particular Software, and Microsoft Regional Director, joins the Azure DevOps Podcast to discuss microservices and some of the trends, challenges, and problems in the software industry today. Udi gives his advice and recommendations to developers and teams on how to go about making decisions around microservices while giving examples of common mistakes and problems he often sees. He also gives advice on those looking to move forward with an existing legacy system they are trying to modernize as well as those who are looking to build something entirely new.

Episode 7 - “Gaming” March Madness with Azure AI | AzureABILITY Podcast

Doyenne of Data Science Laura Edell visits the pod with new-word-crafter / AI-expert Anthony Franklin to talk about how to use Azure AI to "game" March Madness. During the episode we talk about all sorts of things related to Machine Learning and AI.

Events

Put IoT in action to overcome public building safety challenges

IoT brings transparency to public safety initiatives. Advances in sensors, edge computing, and data analytics give stakeholders a more comprehensive, more immediate view of events as they unfold. Faster, smarter reactions potentially enhance public safety. Given the nature of public safety projects, however, IoT needs collaborative frameworks that provide community members with the network they need to work toward common goals together. This brings challenges that span hardware, software, networks, security, and platform management. Innovative companies are solving these problems, however. Learn how SoloInsight and Microsoft have created secure, manageable and economical IoT solutions for public safety by registering for the IoT in Action webinar, IoT and the New Safety Net. Get insights from industry experts and Microsoft partner SoloInsight around how transparent frameworks create secure buildings.

Customers, partners, and industries

Azure resources to assess risk and compliance

This post walks through some common recommendations for various functions in Financial Services organizations. It is vital for customers in the Financial Services Industry to deliver innovation and value to their customers while adhering to strict security and regulatory requirements. Azure is uniquely positioned to help global FSI customers meet their regulatory requirements and we understand the complexities of trying to innovate fast and effectively, while also ensuring that key regulations and compliance necessities are not overlooked.

Deploying Grafana for production deployments on Azure

Grafana is one of the leading open source tools for visualizing time series metrics, and it has quickly become the visualization tool of choice for developers and operations teams monitoring server and application metrics. Grafana dashboards enable operations teams to quickly monitor and react to performance, availability, and overall health of the service. You can now also use it to monitor Azure services and applications by leveraging the Azure Monitor data source plugin, built by Grafana Labs.


Azure hybrid storage performance & rewrite HTTP headers with Application Gateway | Azure This Week - A Cloud Guru

In this Easter special of Azure This Week, Lars covers hybrid storage performance and a new app service migration assistant. Plus you can now rewrite HTTP headers with Application Gateway.


Azure Cost Management now generally available for Pay-As-You-Go customers!


We are excited to announce the general availability of Azure Cost Management features for all Pay-As-You-Go and Azure Government customers that will greatly enhance your ability to analyze and proactively manage your cloud costs. These features will allow you to analyze your cost data, configure budgets to drive accountability for cloud costs, and export pre-configured reports on a schedule to support deeper data analysis within your own systems. This release for Pay-As-You-Go customers also provides invoice reconciliation support in the Azure portal via a usage csv download of all charges applicable to your invoices.

New feature

Azure Usage Download for invoice reconciliation

As a part of this general availability for Pay-As-You-Go customers, we are now providing usage download capabilities in the Azure portal. This downloadable csv file can be used to reconcile your charges with your monthly invoice.

Azure Usage Download for invoice reconciliation

Your usage download file can also be accessed by a new API that is now available for developers. To learn more about developing on top of our APIs, including Usage Download, please visit our Azure REST API documentation.

Generally available features

The features below are now generally available for Pay-As-You-Go and Azure Government customers within the Azure portal. Log into the Azure portal and test them out today! If you are a Government customer, log into the Azure Government portal.

Cost analysis

This feature allows you to track costs over the course of the month and offers you a variety of ways to analyze your data. To learn more about how to use Cost Analysis, please visit our documentation, “Quickstart: Explore and analyze costs with Cost analysis.”

Cost analysis dashboard in Azure Cost Management

Budgets

Use budgets to proactively manage costs and drive accountability within your organization. To learn more about using Azure budgets please visit our documentation, “Tutorial: Create and manage Azure budgets.”

Budgets in Azure Cost Management

Exports

Export all of your cost data to an Azure storage account using our new exports feature. You can use this data in external systems and combine it with your own data to maximize your cost management capabilities. To learn more about using Azure exports please visit our documentation, “Tutorial: Create and manage exported data.”

Export of cost data to an Azure storage account using the export feature

GA data limitations

The GA release of the features identified above has a few limitations, described below. We expect to bring many of these capabilities to you soon, so stay tuned for announcements of future releases!

  • Feature support for Pay-As-You-Go customers is available for native Azure resources only. Resources available via the Azure Marketplace, including recurring charges, will be supported in upcoming releases.
  • Cost management data for Pay-As-You-Go customers is currently only available from September 2018 and later. Data prior to this date can be accessed via the Usage Details API.
  • Feature support for Azure Reserved Instances is not currently available for Pay-As-You-Go or Azure Government customers and will be incorporated into upcoming releases.
  • Feature support for the Power BI Content Pack is not currently available for Pay-As-You-Go customers and will be incorporated into upcoming releases.

Follow us on Twitter @AzureCostMgmt for exciting cost management updates.

Detecting threats targeting containers with Azure Security Center


More and more services are moving to the cloud and bringing their security challenges with them. In this blog post, we will focus on the security concerns of container environments.

In a previous blog post, Azure Security Center announced new features for container security, including Docker recommendations and compliance based on the CIS benchmark for containers. We’ll go over several security concerns in containerized environments, from the Docker level to the Kubernetes cluster level, and we will show how Azure Security Center can help you detect and mitigate threats in the environment as they’re occurring in real time.

Docker analytics

When it comes to Docker, a common access vector for attackers is a misconfigured daemon. By default, the Docker engine is accessible only via a UNIX socket. This setting guarantees that the Docker engine won’t be accessible remotely. However, in many cases remote management is required, so Docker also supports TCP sockets. Docker supports encrypted and authenticated remote communication, but running the daemon with a TCP socket without explicitly specifying the “tlsverify” flag in the daemon execution will enable anyone with network access to the Docker host to send unauthenticated API requests to the Docker engine.
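
As a rough sketch (the ports and certificate file names below are the conventional Docker defaults, not taken from this post), the difference between an exposed and a protected daemon comes down to how it is started:

# UNSAFE: unauthenticated remote access for anyone who can reach the host
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# Safer: remote API access requires mutually authenticated TLS
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376 \
    --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem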


Fig. 1 – Exposed Docker Daemon that is accessible over the network


A host that runs an exposed Docker daemon would be compromised very quickly. In Microsoft Threat Intelligence Center’s honeypots, scanners searching for exposed Docker daemons are seen frequently. Azure Security Center can detect and alert on such behavior.

 


Fig. 2 – Exposed Docker alert


Another security concern could be running your containers with higher privileges than they really need. A container with high privileges can access the host’s resources. Thus, a compromised privileged container may lead to a compromised host. Azure Security Center detects and alerts when a privileged container runs.

 


Fig. 3 – Privileged container alert


There are additional suspicious behaviors that Azure Security Center can detect including running an SSH server in the container and running malicious images.

Cluster level security

Usually running a single instance of Docker is not enough, and a container cluster is needed. Most people use Kubernetes for their container orchestration. A major concern in managing clusters is the possibility of privilege escalation and lateral movement inside the cluster. We will demonstrate several scenarios and show how Azure Security Center can help identify those malicious activities.

For the first demonstration, we’ll use a cluster without RBAC enabled.

In such a scenario (Fig. 4), the service account that is mounted by default to the pods has high cluster privileges. If one of the containers is compromised, an attacker can access the service account that is mounted to that container and use it for communicating with the API server.

 


Fig. 4 – Vulnerable web application container accesses the API Server


In our case, one of the containers in the cluster is running a web application with a remote code execution vulnerability, exposed to the Internet. There are many examples of vulnerabilities in web applications that allow remote code execution, including CVE-2018-7600.

We will use this RCE vulnerability to send a request to the API server from the compromised application that is running in the cluster. Since the service account has high privileges, we can perform any action in the cluster. In the following example, we retrieve the secrets from the cluster and save the output on the filesystem of the web application so we can access it later:

A snippet of command text which would allow an attacker to retrieve all secrets from a compromised cluster.

Fig. 5 – The payload sends a request to the API server


In Fig. 5, we send a request to the API server (at the IP 10.0.0.1) that lists all the secrets in the default namespace. We do this by using the service account token that is located at /var/run/secrets/kubernetes.io/serviceaccount/token on the compromised container.
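
A hypothetical reconstruction of such a request might look like the following (the output location is an assumption for illustration, not taken from the post):

# Illustrative only: list secrets in the default namespace using the
# service account token mounted into the compromised container.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -H "Authorization: Bearer $TOKEN" \
    https://10.0.0.1/api/v1/namespaces/default/secrets > /var/www/html/secrets.txt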

Now we can access the file secrets.txt that stores the secrets:

An image showing the information dump of the cluster's secrets.

 

Fig. 6 – Dump of the cluster’s secrets


We can also list, delete, and create new containers and change other cluster resources.

Azure Security Center can identify and alert on suspicious requests to the API server from Kubernetes nodes (auditd on the cluster’s nodes required):

 


Fig. 7 – Suspicious API request alert


One mitigation for this attack is to manage permissions in the cluster with RBAC. RBAC enables the user to grant different permissions to different accounts. By default, service accounts have no permissions to perform actions in the cluster.

However, even if RBAC is enabled, attackers can often still use such vulnerable containers for malicious purposes. A very convenient way to monitor and manage the cluster is through the Kubernetes Dashboard. The dashboard, itself a container, gets the default RBAC permissions, which likewise do not enable any significant action. In order to use the dashboard, many users grant permissions to the kubernetes-dashboard service account. In such cases attackers can perform actions in the cluster by using the dashboard container as a proxy instead of using the API server directly. The following payload retrieves the overview page of the default namespace from the Kubernetes dashboard, which contains information about the main resources in the namespace:

A snippet of command text which would allow an attacker to perform any actions in a cluster by using the dashboard container as a proxy for the API server.

 

Fig. 8 – Request to the dashboard

 

In Fig. 8, a request is sent from the compromised container to the dashboard’s cluster IP (10.0.182.140 in this case). Fig. 9 describes the attack vector when the dashboard is used.

 


Fig. 9 – Vulnerable container accesses the Kubernetes Dashboard


Azure Security Center can also identify and alert on suspicious requests to the dashboard container from Kubernetes nodes (auditd on the cluster’s nodes required).

 


Fig. 10 – Suspicious request to the dashboard alert


Even if specific permissions were not given to any container, attackers with access to a vulnerable container can still gain valuable information about the cluster. Every Kubernetes node runs the Kubernetes agent named Kubelet, which manages the containers that run on the specific node. Kubelet exposes a read-only API that does not require any authentication on port 10255. Anyone with network access to the node can query this API and get useful information about the node. Specifically, querying http://[NODE IP]:10255/pods/ will retrieve all the running pods on the node.

http://[NODE IP]:10255/spec/ will retrieve information about the node itself, such as CPU and memory consumption. Attackers can use this information to better understand the environment of the compromised container.

Lateral movement and privilege escalation are among the top security concerns in container clusters. Detecting abnormal behavior in the cluster can help you detect and mitigate those threats.

Get started with Azure Security Center

Learn more about Azure Security Center alerts and protection for containers. Start using the Standard tier of Azure Security Center to protect your containers for free today.

Azure SQL Data Warehouse reserved capacity and software plans now generally available


We’re excited to share more ways to optimize your Azure costs. Today we are announcing the general availability of Azure SQL Data Warehouse reserved capacity and software plans for Red Hat Enterprise Linux and SUSE.

Save up to 65 percent on your Azure SQL Data Warehouse workloads

Starting today, you can purchase reserved capacity for Azure SQL Data Warehouse and get up to a 65 percent discount over pay-as-you-go rates. Select from 1-year or 3-year pre-commit options.

Reserved capacity is purchased in increments of 100 cDWU (compute Data Warehouse Units). Multiple warehouses in the same region can draw from a single pool of reserved capacity. The fully elastic properties of the service remain, and usage beyond the reserved capacity is billed at pay-as-you-go rates. As always, storage is charged separately from compute and continues to be charged separately when you purchase reserved capacity.
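For example, a warehouse running at DW500c consumes 500 cDWU, so five 100 cDWU increments would cover it fully; if the warehouse temporarily scales to DW600c, the additional 100 cDWU is simply billed at pay-as-you-go rates.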

More flexibility with exchanges and refunds

We’ve made it easy to exchange your reserved capacity and make other changes, like region or term. You can also cancel the reserved capacity at any time and get a refund (terms apply).

Next steps

Purchase Azure SQL Data Warehouse reserved capacity through the Azure portal.

Save up to 18 percent on your Red Hat Enterprise Linux costs

You can now purchase plans for Red Hat Enterprise Linux (RHEL) and save up to 18 percent. Plans are available only for Red Hat Enterprise Linux virtual machines (VMs); the discount does not apply to Red Hat Enterprise Linux SAP HANA VMs or Red Hat Enterprise Linux SAP Business Apps VMs.

Red Hat plan discounts apply only to the VM size that you select at the time of purchase. RHEL plans can’t be refunded or exchanged after purchase.

Next steps

Purchase a RHEL plan through the Azure portal or refer to the documentation, “Prepay for Azure software plans.”

Save up to 64 percent on your SUSE Linux costs

With SUSE plans, you can save up to 64 percent on your SUSE software costs. SUSE plans get the auto-fit benefit, so you can scale your SUSE VM sizes up or down and the plan will continue to apply. SUSE plans are available for the following SUSE images:

  • SUSE Linux Enterprise Server for SAP Priority
  • SUSE Linux Enterprise Server for HPC Priority
  • SUSE Linux Enterprise Server for HPC Standard
  • SUSE Linux Enterprise Server Standard

SUSE plans can’t be refunded or exchanged after purchase.

Next steps

Purchase a SUSE plan through the Azure portal or refer to the documentation, “Prepay for SUSE software plans from Azure Reservations.”

New experience and API for purchasing reservations and software plans

We’re also excited to launch the new experience for purchasing reservations and software plans. You can now add multiple products to your cart and purchase them together. The experience also shows purchase recommendations for VM sizes that have had consistent usage over the last 30 days, helping you select the right VM size.

Each product in your cart is processed as a separate transaction. If one purchase fails, other products in the cart are not impacted and will still be purchased.

New purchase experience

You can now purchase Azure reservations and software plans using REST APIs. Visit the reservation API documentation to learn more.
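As a rough sketch of the flow under the Microsoft.Capacity resource provider (the request body in purchase.json is elided and the api-version shown is illustrative; check the reservation API documentation for current values):

    # Price the purchase first; the response includes a reservationOrderId
    curl -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d @purchase.json \
      "https://management.azure.com/providers/Microsoft.Capacity/calculatePrice?api-version=2019-04-01"

    # Then submit the order under the returned ID
    curl -X PUT -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
      -d @purchase.json \
      "https://management.azure.com/providers/Microsoft.Capacity/reservationOrders/<reservationOrderId>?api-version=2019-04-01"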

Enable only enterprise agreement admins to purchase reservations

This new feature enables organizations to centralize reservation purchases, instead of allowing all subscription owners to purchase reservations. Disabling the Add Reserved Instances setting on your enterprise agreement (EA) enrollment will allow only the EA admins who are owners of at least one Azure subscription to purchase reservations. If you purchased a reservation and you want to scope the reservation discount to a subscription where you don’t have owner access, then add a user to the reservation who has owner access to that subscription. The user can then update the reservation scope.
