
Introducing Azure Cost Management for partners


As a partner, you play a critical role in planning and managing long-term cloud implementations for your customers. While the cloud grants the flexibility to scale infrastructure to changing needs, controlling spend becomes challenging when cloud costs can fluctuate dramatically with demand. This is where Azure Cost Management comes in: it helps you track and control cloud costs, prevent overspending, and increase the predictability of your cloud spend.

Today we are announcing the general availability of Azure Cost Management for all Cloud Solution Provider (CSP) partners who have onboarded their customers to the new Microsoft Customer Agreement. With this update, partners and their customers can take advantage of the Azure Cost Management tools to manage cloud spend, similar to the cost management capabilities available to pay-as-you-go (PAYG) and enterprise customers today.

This is the first of a series of periodic updates to cost management support for partners, enabling partners to understand, analyze, dissect, and manage cost across all their customers and invoices.

With this update, CSPs can use Azure Cost Management to:

  • Understand invoiced costs and associate them with customers, subscriptions, resource groups, and services.
  • Get an intuitive view of Azure costs in cost analysis with capabilities to analyze costs by customer, subscription, resource group, resource, meter, service, and many other dimensions.
  • View resource costs that have Partner Earned Credit (PEC) applied in Cost Analysis.
  • Set up notifications and automation using programmatic budgets and alerts when costs exceed budgets.
  • Enable the Azure Resource Manager policy that provides customer access to Cost Management data. Customers can then view consumption cost data for their subscriptions using pay-as-you-go rates.

For more information, see Get started with Azure Cost Management as a partner.

Analyze costs by customer, subscription, tags, resource group or resource using cost analysis

Using cost analysis, partners can group and filter costs by customer, subscription, tags, resource group, resource, and reseller Microsoft Partner Network identifier (MPN ID), gaining increased visibility into costs for better cost control. Partners can also view and manage costs in the billing currency and in US dollars for billing scopes.

An image showing how you can group and filter costs in cost analysis.

Reconcile cost to an invoice

Partners can reconcile costs by invoice across their customers and their subscriptions to understand the pre-tax costs that contributed to the invoice.

An image showing how cost analysis can help analyze Azure spend to reconcile cost.

You can analyze Azure spend for the customers you support, along with their subscriptions and resources. With this enhanced visibility into your customers’ costs, you can use spending patterns to enforce cost control mechanisms, like budgets and alerts, to manage costs with continued and increased accountability.

Enable cost management at retail rates for your customers

With this update, a partner can also enable cost management features, initially at pay-as-you-go rates, for customers and resellers who have access to subscriptions in the customer’s tenant. If you decide to enable cost management for users with access to a subscription, they will have the same capabilities to analyze the services they consume and to set budgets to control costs, computed at pay-as-you-go prices for Azure consumed services. This is just the first of the updates: we have features planned for the first half of 2020 to enable cost management for customers at prices that partners can set by applying a markup on the pay-as-you-go prices.

Partners can set a policy to enable cost management for users with access to an Azure subscription to view costs at retail rates for a specific customer.

An image showing how partners can set a policy to view costs at retail rates for a specific customer.

If the policy is enabled for subscriptions in the customer’s tenant, users with role-based access control (RBAC) access to the subscription can now manage Azure consumption costs at retail prices.

An image showing how customers with RBAC access can manage Azure consumption at retail prices.

Set up programmatic budgets and alerts to automate and notify when costs exceed a threshold

As a partner, you can set up budgets and alerts to send notifications to specified email recipients when the cost threshold is exceeded. In the partner tenant, you can set up budgets for costs as invoiced to the partner. You can also set up monthly, quarterly, or annual budgets across all your customers, or for a specific customer, and filter by subscription, resource, reseller MPN ID, or resource group.

An image showing how you can set up budgets and alerts.
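
As a sketch of the programmatic route in C# (the scope, budget values, and api-version below are illustrative assumptions, not taken from this announcement), a budget with an email notification can be created by PUTting a budget resource to the Azure Consumption REST API:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class BudgetSketch
{
    static async Task Main()
    {
        // Placeholder billing scope; budgets can also target subscriptions or resource groups.
        var scope = "/providers/Microsoft.Billing/billingAccounts/{billingAccountId}";
        var url = $"https://management.azure.com{scope}" +
                  "/providers/Microsoft.Consumption/budgets/MonthlyBudget?api-version=2019-10-01";

        // A monthly cost budget of 5000 that emails when 80% of the amount is spent.
        var body = @"{
          ""properties"": {
            ""category"": ""Cost"",
            ""amount"": 5000,
            ""timeGrain"": ""Monthly"",
            ""timePeriod"": { ""startDate"": ""2019-12-01T00:00:00Z"" },
            ""notifications"": {
              ""Actual80Percent"": {
                ""enabled"": true,
                ""operator"": ""GreaterThan"",
                ""threshold"": 80,
                ""contactEmails"": [ ""finance@contoso.com"" ]
              }
            }
          }
        }";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<access-token>"); // acquired via Azure AD
        var response = await client.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode);
    }
}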

Any user with RBAC access to a subscription or resource group can also set up budgets and alerts for Azure consumption costs at retail rates in the customer tenant if the policy for cost visibility has been enabled for the customer.

An image showing how users can create budgets.

When a budget is created for a subscription or resource group in the customer tenant, you can also configure it to call an action group. The action group can perform a variety of different actions when your budget threshold is met. For more information about action groups, see Create and manage action groups in the Azure portal. For more information about using budget-based automation with action groups, see Manage costs with Azure budgets.

Everything that Azure Cost Management provides natively in the portal is also available through REST APIs, enabling automated cost management scenarios.
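
For example, continuing the C# sketch from the budgets section above (the CustomerName dimension and api-version are assumptions for illustration), month-to-date cost grouped by customer can be pulled with the Cost Management query API:

var query = @"{
  ""type"": ""ActualCost"",
  ""timeframe"": ""MonthToDate"",
  ""dataset"": {
    ""granularity"": ""None"",
    ""aggregation"": { ""totalCost"": { ""name"": ""Cost"", ""function"": ""Sum"" } },
    ""grouping"": [ { ""type"": ""Dimension"", ""name"": ""CustomerName"" } ]
  }
}";
var queryUrl = $"https://management.azure.com{scope}/providers/Microsoft.CostManagement/query?api-version=2019-10-01";
var result = await client.PostAsync(queryUrl, new StringContent(query, Encoding.UTF8, "application/json"));
Console.WriteLine(await result.Content.ReadAsStringAsync());   // rows of cost per customer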

Coming soon

  • We will be enabling cost recommendations and optimization suggestions for better savings and efficiency in managing Azure costs.
  • We will launch Azure Cost Management at retail rates for customers who are not on the Microsoft Customer Agreement and are supported by CSP partners.
  • Showback features that enable partners to charge a markup on consumption costs are also being planned for 2020.

Try Azure Cost Management for partners today! It is natively available in the Azure portal for all partners who have onboarded customers to the new Microsoft Customer Agreement.


Set Environment Variables for Debug, Launch, and Tools with CMake and Open Folder


There are many reasons why you may want to customize environment variables. Many build systems use environment variables to drive behavior; debug targets sometimes need to have PATH customized to ensure their dependencies are found; etc. Visual Studio has a mechanism to customize environment variables for debugging and building CMake projects and C++ Open Folder. In Visual Studio 2019 16.4 we made some changes to simplify this across Visual Studio’s JSON configuration files.

This post covers how to use this feature from the ground up with the new changes, since not everyone may be familiar with how and when to use it. However, for those who have used the feature before, here’s a quick summary of the changes:

  • Your existing configuration files will still work with these changes, but IntelliSense will recommend using the new syntax going forward.
  • Debug targets are now automatically launched with the environment you specify in CMakeSettings.json and CppProperties.json. Custom tasks have always had this behavior and still do.
  • Debug targets and custom tasks can have their environments customized using a new “env” tag in launch.vs.json and tasks.vs.json.
  • Configuration-specific variables defined in CppProperties.json are automatically picked up by debug targets and tasks without the need to set “inheritEnvironments”. CMakeSettings.json has always worked this way and still does.

Please try out the new preview and let us know what you think.

Customizing Environment Variables

There are now two ways to specify environment variables for CMake and C++ Open Folder. The first is to set up the overall build environment. This is done in CMakeSettings.json for CMake and CppProperties.json for C++ Open Folder. Environment variables can be global for the project or specific to an individual configuration (selected with the configuration dropdown menu). These environment variables will be passed to everything, including CMake builds, custom tasks, and debug targets.

Environment variables can also be used in any value in Visual Studio’s configuration JSON files using the syntax:

"${env.VARIABLE_NAME}"

Global and configuration-specific environment variables can be defined in “environments” blocks in both CMakeSettings.json and CppProperties.json. For example, the following CMakeSettings.json sets environment variables differently for the “Debug” and “Release” configurations:

{
  // These environment variables apply to all configurations.
  "environments": [
    {
      "DEBUG_LOGGING_LEVEL": "warning"
    }
  ],

  "configurations": [
    {
      // These environment variables apply only to the debug configuration.
      // Global variables can be overridden and new ones can be defined.
      "environments": [
        {
          "DEBUG_LOGGING_LEVEL": "info;trace",
          "ENABLE_TRACING": "true"
        }
      ],

      "name": "Debug",
      "generator": "Ninja",
      "configurationType": "Debug",
      "inheritEnvironments": [ "msvc_x64_x64" ],
      "buildRoot": "${projectDir}\out\build\${name}",
      "installRoot": "${projectDir}\out\install\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "-v",
      "ctestCommandArgs": "",
      "variables": []
    },

    {
      // Configurations do not need to override environment variables.
      // If none are defined, they will inherit the global ones.

      "name": "Release",
      "generator": "Ninja",
      "configurationType": "RelWithDebInfo",
      "buildRoot": "${projectDir}\out\build\${name}",
      "installRoot": "${projectDir}\out\install\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "-v",
      "ctestCommandArgs": "",
      "inheritEnvironments": [ "msvc_x64_x64" ],
      "variables": []
    }
  ]
}

The second way to specify environment variables is per individual debug target and custom task. This is done in launch.vs.json and tasks.vs.json, respectively. To do this, you add an “env” tag to individual targets and tasks that specifies the environment variables to customize. You can also unset a variable by setting it to null.

The following example sets environment variables for a debug target to enable some custom logging in launch.vs.json; the same “env” syntax can be applied to any task in tasks.vs.json:

{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "cmake\CMakeLists.txt",
      "projectTarget": "envtest.exe",
      "name": "envtest.exe",
      "env": {
        "DEBUG_LOGGING_LEVEL": "trace;info"
        "ENABLE_TRACING": "true"
      }
    }
  ]
}
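
For the tasks side, here is a minimal hypothetical tasks.vs.json sketch (the task itself is illustrative): it launches a command prompt that prints the environment, sets one variable for the task only, and unsets an inherited variable with null:

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskLabel": "Show environment",
      "appliesTo": "*",
      "type": "launch",
      "command": "${env.COMSPEC}",
      "args": [ "/c", "set" ],
      "env": {
        // Set for this task only; null unsets an inherited variable.
        "ENABLE_TRACING": "true",
        "DEBUG_LOGGING_LEVEL": null
      }
    }
  ]
}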

Keep in mind that if you want an environment variable to be set for all debug targets and tasks, it is better to do it globally in CMakeSettings.json or CppProperties.json.

Send us Feedback

Your feedback is a critical part of ensuring that we can deliver the best experience. We would love to know how Visual Studio 2019 version 16.4 is working for you. If you find any issues or have a suggestion, the best way to reach out to us is to Report a Problem.

The post Set Environment Variables for Debug, Launch, and Tools with CMake and Open Folder appeared first on C++ Team Blog.

gRPC vs HTTP APIs


ASP.NET Core now enables developers to build gRPC services. gRPC is an opinionated, contract-first remote procedure call framework with a focus on performance and developer productivity. gRPC integrates with ASP.NET Core 3.0, so you can use your existing ASP.NET Core logging, configuration, and authentication patterns to build new gRPC services.

This blog post compares gRPC to JSON-based HTTP APIs, discusses gRPC’s strengths and weaknesses, and describes when you might use gRPC to build your apps.

gRPC strengths

Developer productivity

With gRPC services, a client application can directly call methods on a server app on a different machine as if it were a local object. gRPC is based around the idea of defining a service by specifying the methods that can be called remotely, along with their parameters and return types. The server implements this interface and runs a gRPC server to handle client calls. On the client side, a strongly-typed gRPC client provides the same methods as the server.

gRPC achieves this through first-class support for code generation. The core file in gRPC development is the .proto file, which defines the contract of gRPC services and messages using the Protobuf interface definition language (IDL):

Greet.proto

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

Protobuf IDL is a language-neutral syntax, so it can be shared between gRPC services and clients implemented in different languages. gRPC frameworks use the .proto file to generate a service base class, messages, and a complete client. Here, the generated strongly-typed Greeter client is used to call the service:

Program.cs

var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);

var reply = await client.SayHelloAsync(new HelloRequest { Name = "World" });
Console.WriteLine("Greeting: " + reply.Message);

By sharing the .proto file between the server and client, messages and client code can be generated from end to end. Code generation of the client eliminates duplication of messages on the client and server, and creates a strongly-typed client for you. Not having to write a client saves significant development time in applications with many services.
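
On the server side, the generated base class is implemented to handle the call. A minimal sketch:

using System.Threading.Tasks;
using Grpc.Core;

public class GreeterService : Greeter.GreeterBase
{
    // Handles the unary SayHello call defined in Greet.proto.
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply { Message = $"Hello {request.Name}" });
    }
}

The service is then exposed by mapping it in ASP.NET Core routing with endpoints.MapGrpcService&lt;GreeterService&gt;().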

Performance

gRPC messages are serialized using Protobuf, an efficient binary message format. Protobuf serializes very quickly on the server and client. Protobuf serialization results in small message payloads, important in limited bandwidth scenarios like mobile apps.

gRPC requires HTTP/2, a major revision of HTTP that provides significant performance benefits over HTTP 1.x:

  • Binary framing and compression. The HTTP/2 protocol is compact and efficient both in sending and receiving.
  • Multiplexing of multiple HTTP/2 calls over a single TCP connection. Multiplexing eliminates head-of-line blocking at the application layer.

Real-time services

HTTP/2 provides a foundation for long-lived, real-time communication streams. gRPC provides first-class support for streaming through HTTP/2.

A gRPC service supports all streaming combinations (see the proto sketch after this list):

  • Unary (no streaming)
  • Server to client streaming
  • Client to server streaming
  • Bidirectional streaming
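
In Protobuf IDL, these combinations differ only in where the stream keyword appears. The service and RPC names here are hypothetical; the messages reuse Greet.proto’s:

service Example {
  rpc Unary (HelloRequest) returns (HelloReply);
  rpc ServerStreaming (HelloRequest) returns (stream HelloReply);
  rpc ClientStreaming (stream HelloRequest) returns (HelloReply);
  rpc Bidirectional (stream HelloRequest) returns (stream HelloReply);
}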

Note that the concept of broadcasting a message out to multiple connections doesn’t exist natively in gRPC. For example, in a chat room where new chat messages should be sent to all clients in the chat room, each gRPC call is required to individually stream new chat messages to the client. SignalR is a useful framework for this scenario. SignalR has the concept of persistent connections and built-in support for broadcasting messages.

Deadline/timeouts and cancellation

gRPC allows clients to specify how long they are willing to wait for an RPC to complete. The deadline is sent to the server, which can decide what action to take if the deadline is exceeded. For example, the server might cancel in-progress gRPC, HTTP, or database requests on timeout.

Propagating the deadline and cancellation through child gRPC calls helps enforce resource usage limits.
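
For example, with the generated client from the Program.cs sketch above, a deadline can be passed per call; if the server doesn’t finish in time, the call throws an RpcException:

try
{
    var reply = await client.SayHelloAsync(
        new HelloRequest { Name = "World" },
        deadline: DateTime.UtcNow.AddSeconds(5));
    Console.WriteLine(reply.Message);
}
catch (RpcException ex) when (ex.StatusCode == StatusCode.DeadlineExceeded)
{
    Console.WriteLine("Greeting timed out.");   // the deadline elapsed before the server replied
}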

gRPC weaknesses

Limited browser support

gRPC has excellent cross-platform support! gRPC implementations are available for every programming language in common usage today. However, one place you can’t call a gRPC service from is a browser. gRPC heavily uses HTTP/2 features, and no browser provides the level of control over web requests required to support a gRPC client. For example, browsers do not allow a caller to require that HTTP/2 be used, or provide access to underlying HTTP/2 frames.

gRPC-Web is an additional technology from the gRPC team that provides limited gRPC support in the browser. gRPC-Web consists of two parts: a JavaScript client that supports all modern browsers, and a gRPC-Web proxy on the server. The gRPC-Web client calls the proxy and the proxy will forward on the gRPC requests to the gRPC server.

Not all of gRPC’s features are supported by gRPC-Web. Client and bidirectional streaming aren’t supported, and there is limited support for server streaming.

Not human readable

HTTP API requests using JSON are sent as text and can be read and created by humans.

gRPC messages are encoded with Protobuf by default. While Protobuf is efficient to send and receive, its binary format isn’t human readable. Protobuf requires the message’s interface description, specified in the .proto file, in order to deserialize properly. Additional tooling is required to analyze Protobuf payloads on the wire and to compose requests by hand.

Features such as server reflection and the gRPC command line tool exist to assist with binary Protobuf messages. Also, Protobuf messages support conversion to and from JSON. The built-in JSON conversion provides an efficient way to convert Protobuf messages to and from human readable form when debugging.
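
As a sketch with the Google.Protobuf C# runtime, a message can be round-tripped through JSON in a couple of lines (using the HelloReply message from earlier):

using Google.Protobuf;

var reply = new HelloReply { Message = "Hello World" };
string json = JsonFormatter.Default.Format(reply);              // {"message":"Hello World"}
HelloReply parsed = JsonParser.Default.Parse<HelloReply>(json); // back to a strongly-typed message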

gRPC recommended scenarios

gRPC is well suited to the following scenarios:

  • Microservices – gRPC is designed for low latency and high throughput communication. gRPC is great for lightweight microservices where efficiency is critical.
  • Point-to-point real-time communication – gRPC has excellent support for bidirectional streaming. gRPC services can push messages in real-time without polling.
  • Polyglot environments – gRPC tooling supports all popular development languages, making gRPC a good choice for multi-language environments.
  • Network constrained environments – gRPC messages are serialized with Protobuf, a lightweight message format. A gRPC message is always smaller than an equivalent JSON message.

Conclusion

gRPC is a powerful new tool for ASP.NET Core developers. While gRPC is not a complete replacement for HTTP APIs, it offers improved productivity and performance benefits in some scenarios.

gRPC on ASP.NET Core is available now! If you are interested in learning more, check out the gRPC for ASP.NET Core documentation.

The post gRPC vs HTTP APIs appeared first on ASP.NET Blog.

Microsoft Ignite 2019 Bing Maps APIs session recordings available


The Bing Maps team was in Orlando, Florida, November 4th through the 8th for Microsoft Ignite. If you were not able to attend the event, the session recordings are now available for viewing on https://techcommunity.microsoft.com/.

Optimizing Workforce Itinerary Scheduling and Travel Time with Bing Maps

Modern workforce management requires state-of-the-art geospatial technologies and solutions. Do you have field staff that you need to assign to visit multiple customer locations? Do you want optimized routes for those visits? Save time and reduce costs by leveraging the recently released Bing Maps Multi-Itinerary Optimization API. View session recording.


Harness the power of Bing Maps Location Insights in your Enterprise Applications

In the modern workplace, location is key. Bing Maps helps your business take location-relevant data into account in day-to-day business decisions. Whether you are planning new sites, optimizing your offerings based on local situations, or providing your customers with highly relevant local experiences, Bing Maps has location data and APIs for you. In this session, you will learn about the Microsoft mapping offerings for Location Recognition, Local Search, and Local Insights. View session recording


Announcing Bing Maps Geospatial Analytics Platform Preview for Enterprise Business Planning

Need help determining where to expand, build, or deploy resources to meet customer demand? Don’t know how your business KPIs vary from location to location? The new Bing Maps Geospatial Analytics solution takes the difficulty out of wrangling location intelligence data and helps you arrive at location-based business decisions faster. In this session, you’ll learn how to use local business data, parcel info, demographics, and more to identify underserved locations for future business expansion or uncover key location insights for your business.
View session recording

For more information about the products shared at Microsoft Ignite, visit https://www.microsoft.com/en-us/maps/msevents and join our LinkedIn group for other news and updates.

- Bing Maps Team

Python in Visual Studio Code – November 2019 Release


We are pleased to announce that the November 2019 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.

In this release we focused mostly on product quality. We closed a total of 60 issues, 39 of them being bug fixes. However, we’re also pleased to deliver delightful features such as: 

  • Add imports “quick fix” when using the Python Language Server 
  • Altair plot support  
  • Line numbers in the Notebook Editor 

If you’re interested, you can check the full list of improvements in our changelog.

Add Imports “Quick Fix” when using the Python Language Server 

We’re excited to announce that we have brought the magic of automatic imports to Python developers in VS Code by way of an add imports quick fix. Automatic import functionality was one of the most requested features on our GitHub repo (GH21), and when you enable the Microsoft Python Language Server, you get this new functionality. To enable the Language Server, add the setting "python.jediEnabled": false to your settings.json file, as shown below.
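
A minimal settings.json sketch:

{
  // Turning Jedi off opts in to the Microsoft Python Language Server.
  "python.jediEnabled": false
}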

The add imports quick fix within VS Code is triggered via a code action lightbulb. To use the quick fix, begin typing a package name within the editor for which you do not have an import statement at the top of the file. If a code action is available for this package (i.e., you have a module installed within your environment with the name you’ve supplied), a yellow squiggle will appear. If you hover over that text, a code action lightbulb will appear, indicating that an import code action is available for the package. Selecting it shows a list of potential imports (again, based on what’s installed within your environment), allowing you to choose the package that you wish to import.

Example of auto import suggestion for path submodule

The add imports code action will also recognize some of the most popular abbreviations for the following Python packages: numpy as np, tensorflow as tf, pandas as pd, matplotlib.pyplot as plt, matplotlib as mpl, math as m, scipy.io as spio, and scipy as sp.

Example of auto-completion suggestion behavior

The import suggestion list is ordered such that all import statements that appear at the top of the list are package (or module) imports; those that appear lower in the list are import statements for additional modules and/or members (e.g. classes, objects, etc.) from specified packages. 

Import suggestion for sys module

Make sure you have linting enabled since this functionality is tied to the Language Server linting capability. You can enable linting by opening the Command Palette (View > Command Palette…), running the “Python: Enable Linting” command and selecting “On” in the drop-down menu. 

Altair plots support

The Notebook Editor and the Python Interactive window now both support rendering plots built with Altair,  a declarative statistical visualization library for Python.

Jupyter Notebook example displaying Altair support

Line Numbers in the Notebook Editor

Line numbers are now supported in the notebook editor. On selected code cells, you can toggle the line numbers by pressing the “L” key.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

  • Fixed running a unittest file so that it no longer executes only the first test (thanks Nikolay Kondratyev). (#4567)
  • Added commands translation for Farsi and Turkish (thanks Nikronic). (#8092)
  • Added command translations for Turkish (thanks alioguzhan). (#8320)
  • Place all plots on a white background regardless of theme. (#8000)

We are continuing to A/B test new features, so if you see something different that was not announced by the team, you may be part of the experiment! To see if you are part of an experiment, you can check the first lines in the Python extension output channel. If you wish to opt-out of A/B testing, you can open the user settings.json file (View > Command Palette… and run Preferences: Open Settings (JSON)) and set the “python.experiments.enabled” setting to false.

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – November 2019 Release appeared first on Python.

AI-assisted IntelliSense for your team’s codebase


Visual Studio IntelliCode uses machine learning to offer useful, contextually-rich code completion suggestions as you type, allowing you to learn APIs more quickly and code faster. Although IntelliCode’s base model was trained on over 3000 top open source C# GitHub repositories, it does not include all the custom types in your code base. To produce useful, high-fidelity, contextually-rich suggestions, the model needs to be tailored to unique types or domain-specific APIs that aren’t used in open source code. To make IntelliSense recommendations based on the wisdom of your team’s codebase, the model needs to train with your team’s code.

Earlier this year, we extended our ML model training capabilities beyond our initial GitHub-trained base model to enable you to personalize your IntelliCode completion suggestions by creating team models trained on your own code.

Team completions shared and automated easily!

Your team completions become part of your normal developer workflow just by associating a model with your repo. Anyone with access to your repository automatically gets team completions – no extra configuration steps are required!

Once you’re ready, you can keep your completions up-to-date with our new Azure DevOps task that can retrain your models on CI. When a change is made to your codebase, the model is automatically trained and shared with your team.

2 steps to team completions

Set up and share

Repository-associated models are automatically shared with others working in the same codebase, as long as users have enabled automatic acquisition of team models in Visual Studio. To enable automatic acquisition, go to Tools > Options > IntelliCode > Acquire team models for completion. Access to the repository is access to the model. When training, we collect some information about the checked-out commit where the training took place. Anyone who requests that model must have the same commit in their repository and be able to produce the same information that was collected during training to receive the team model.

See more details on how to acquire and share team completions here.

Automate

Once you’re happy with the team completions on your repo, you can set up automatic retraining as part of your continuous integration (CI) pipeline in Azure Pipelines. When code changes are pushed to your repository, the build task runs and your team completions are retrained and made available to the repo. In parallel, Visual Studio checks for updates to team completions and will update automatically.

Install the Visual Studio IntelliCode Team Model Training task from Visual Studio Marketplace to your Azure DevOps organization or Azure DevOps Server (formerly TFS).

See more details about how to configure and automate the build task here.

Tell us what you think!

We’d love to understand your current experience with IntelliCode and where we can improve. Try out sharing team completions and automating updates today and tell us what you think of the new experience. Please note that you’ll need to be on at least Visual Studio 2019 version 16.4 preview 3 to try out these updates to the IntelliCode team completions experience.

Please raise issues and comments through Visual Studio’s “Report a Problem” feature.

We’re interested to hear feedback about the recommendations themselves, the performance of the feature, or any capabilities you might be missing.

To keep up with the future of AI-assisted development, sign up for our Insiders newsletter.

The post AI-assisted IntelliSense for your team’s codebase appeared first on Visual Studio Blog.

Developing on Windows – Hello World


Hello (Dev) World!

My name is Avri 🙋🏾‍ and I’m a Program Manager here at Microsoft focused on improving the Windows developer experience! I’m a member of the engineering team here, where I get to collaborate with a bunch of other FANTASTIC people to create and improve Windows developer tools; and in this blog series, I’ll call out tons of ways to improve your end-to-end dev experience. Blogs in this series will include news on Windows Terminal, Windows Subsystem for Linux (WSL), Windows Performance, VS Code and Visual Studio, VMs and Containers, Developer PowerToys, and more!

So, what’s new in the Windows Developer World?

Using Windows Terminal and WSL to build web applications

If you’re anything like me, you’re a huge fan of multitasking. Well, with some of the new updates to Windows Terminal and WSL 2, finding time to do it all is now a dream come true. You can start by creating your web application in the new Windows Terminal. Let’s say you’re running a script but want to keep an eye on some tasks you have open in the background. You can change the opacity of your window with “Ctrl+Shift+scroll” and be the multitasking champ! As a fresh-out-of-college grad, I distinctly remember the days (every day) of working on more than one assignment at once. And now that I’m an FTE, the number of things I must do at one time has increased fourfold. Cool updates like this one to Windows Terminal help me stay on task(s).

In addition to being workload friendly, you can also run the Windows Subsystem for Linux (WSL) in Windows Terminal. And with the release of Windows Terminal v0.6, any WSL distribution installed will be automatically detected and added to the profiles.json file! My favorite distro is Ubuntu, and with WSL, you can use your favorite Linux commands on Windows!

The next generation of WSL, called WSL 2, has been released to Windows Insiders! WSL 2 has full system call compatibility and significantly faster file system performance. WSL 2 also works great with VS Code! From a WSL 2 prompt, navigate to your project folder, enter “code .”, and your project will open in VS Code using the WSL remote extension.

Here’s the cool thing about WSL2 and VS Code – the only thing happening on the Windows side is the UI, which means everything else (the debugger, interpreter, and installed extensions) are all running in WSL! For all files, you can open, edit, set break points and debug! As Craig eloquently states in the video, “you can get the full debugging experience inside of Windows, inside of VS Code, running on Linux.”

Check out the video here if you want to see how you can do even more with Windows Terminal and WSL2!

Updates to Windows Developer Tools 

Windows Terminal

Terminal Preview v0.6 release

In case you missed it, there have been many updates to the Windows Terminal (now available in the Microsoft Store and on GitHub)! New features include an updated tab UI, a new font, and more!

WSL2 – Windows Subsystem for Linux 2

We recently announced some stellar updates to WSL 2 as well. Check out this new blog on memory reclamation.

PowerToys – Windows system utilities to maximize productivity

The newest preview release of PowerToys contains three utilities – FancyZones, Windows Shortcut Guide, and PowerRename – with all the code for the project on GitHub. The repo also contains the information and tools you need to understand how the PowerToys’ utilities work together and how to create your own utilities. More information is available in the Windows Insider blog post on PowerToys.

Signing Off

It’s amazing what determined minds can make happen. The team is amped to ship all these great features, updates, and bug fixes to you all. Leave us some feedback below. I’m signing off for now but stay tuned and be sure to follow this blog series for all things dev on Windows!

Oh! And follow me on Twitter (@AvriNichole) for more news. ✌🏽

The post Developing on Windows – Hello World appeared first on Windows Developer Blog.

Microsoft cloud in Norway opens with availability of Microsoft Azure


Norway city skyline and waterfront.

Today, we’re announcing the availability of Microsoft Azure from our new cloud datacenter regions in Norway, marking a major milestone as we become the first global cloud provider to deliver enterprise-grade cloud services in-country. These new regions demonstrate our ongoing investment to help enable digital transformation and advance intelligent cloud and intelligent edge computing technologies across both commercial and public sectors.

DNB, Equinor, Lånekassen, and Posten are just a few of the customers and partners leveraging our cloud services to accelerate innovation and increase computing resources. This new offering of Microsoft Azure delivers scalable, highly available, and resilient cloud services to Norwegian companies and organizations while meeting data residency, security, and compliance needs.

Our President, Brad Smith, recently visited Norway to celebrate this important launch and to discuss how vital trust is for those we serve, not only to help bring forth innovation but to ensure our customers are protected.

“Our customers have entrusted us to protect, operate, and develop our platform in a way that keeps their data private and secure. This is an immense responsibility that we can’t just claim, but a responsibility that we must earn every single day.” - Brad Smith, President, Microsoft

Accelerating digital transformation in Norway

As we further our expansion commitment, we consider the demand for locally delivered cloud services and the opportunity for digital transformation in the market. Azure enables our customers and partners to increase their utilization of public cloud services and accelerate investments into private and hybrid cloud solutions. Norwegian organizations can now embrace these benefits to further innovation and build digital businesses at scale. Below are just a few of the customers and partners embracing Microsoft Azure in Norway.

The Norwegian banking industry is recognized for its rapid technology adoption, digitalizing the services that build the best products for customers. As Norway’s largest financial services group, DNB Group is a major operator in several industries, for which they also have a Nordic or international strategy. With Microsoft Azure, DNB will be able to migrate to the cloud in accordance with Norwegian data handling regulations to modernize, gain operational efficiency, and secure the best experience for its customers. 

“The possibility of data residency was a decisive factor in choosing Microsoft’s datacenter regions. Now we are looking forward to using the cloud to modernize and achieve efficiency and agility in order to ensure the best experience for our customers.” - Alf Otterstad, Executive Vice President, Group IT, DNB

Equinor, a broad energy company developing oil, gas, wind, and solar energy in more than 30 countries worldwide, has chosen Microsoft Azure to enable its digital transformation journey through a seven-year consumption and development agreement. With this strategic partnership, anchored in cloud-enabled innovation, and by moving its whole system portfolio to Azure, Equinor is aiming to achieve a more cost-efficient, safer, and more reliable operation. Equinor will utilize a variety of cloud services like machine learning and advanced analytics to improve performance, decrease costs, and increase safety. Through the partnership with Microsoft and leveraging capabilities within Azure, Equinor seeks to be a leader in the transformation of the energy industry worldwide and a growing force in renewables.

“Equinor’s ambition is to become a global digital leader within our industry. We have a long history of innovation and technology development. The strategic partnership will, through cloud services, involve development of the next-generation IT workplace, extended business application platforms, and mixed-reality solutions.” - Åshild Hanne Larsen, CIO and SVP, Corporate IT Equinor

Lånekassen, the Norwegian State Educational Loan Fund, has over 1.1 million customers, composed of former and current students. By moving to Azure, it seeks to develop new and transformative citizen services, based on cognitive and analytical technologies. Lånekassen’s purpose is to make education possible, and to provide the Norwegian workforce with relevant competences. It aims to strengthen student funding as well as maintain and increase the already high level of automatized customer services and application processes.

“It has been a priority for Lånekassen to focus on how we can utilize new technology to deliver an even better service for our students and manage our student financing schemes even more efficiently. As we move our core solutions into the cloud, it will give us increased opportunities to innovate. We have already had great success with using machine learning, and we are now looking forward to optimizing our operations further.” - Nina Schanke Funnemark, CEO, Lånekassen

Posten Norge AS has chosen to use the Microsoft Azure platform to meet ever-changing market demands by modernizing some of its existing applications estate and creating new services for its customers and partners. Posten’s next-generation logistics system will provide its workforce with new digital toolsets to deliver even better customer experiences.

“Posten’s vision is to make everyday life simpler and the world smaller. With this vision, we aim to simplify and increase the value of trade and communication for people and businesses in the Nordic region. With the opening of Norwegian datacenter regions, we hope to accelerate and fuel our vision further.” - Arne Erik Berntzen, CIO, Posten AS

Bringing the complete cloud to Norway

The new cloud regions in Norway connect with Microsoft’s 54 regions via our global network, one of the largest and most innovative on the planet, spanning more than 130,000 miles of terrestrial fiber and subsea cable systems to deliver services to customers. Microsoft brings the global cloud closer to home for Norwegian organizations and citizens through our transatlantic system Marea, the highest-capacity subsea cable to cross the Atlantic.

The new cloud regions in Norway are targeted to expand in 2020 with Office 365, one of the world’s leading cloud-based productivity solutions, and Dynamics 365 and Power Platform, the next generation of intelligent business applications and tools.

Learn more about the new cloud services in Norway and the availability of Azure regions and services across the globe.

Learn how to accelerate time to insight at the Analytics in Azure virtual event


The next wave of analytics is here with Azure Synapse Analytics! Simply unmatched and truly limitless, we’re excited about this launch and want to share the highlights with you. Please join us for the Analytics in Azure virtual event on Tuesday, December 10, 2019, from 10:00 AM to 11:00 AM Pacific Time. Be among the first to see how Azure Synapse can accelerate your organization’s time to insight. Sign up for the live stream today for reminders, agenda updates, and instructions to tune in live.

Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources at scale. Build end-to-end analytics solutions with a unified experience to ingest, prepare, manage, and serve data for immediate BI and machine learning needs, with blazing speed.

Join us for this virtual event and find out how to:

  • Query petabyte-scale data on demand from the data lake, or provision elastic compute resources for demanding workloads like data warehousing.
  • Build a modern data warehouse enhanced with streaming analytics, machine learning, BI, and AI capabilities.
  • Reduce project development time for machine learning, BI, and AI.
  • Easily optimize petabyte-scale workloads and automatically prioritize critical jobs.
  • Help safeguard data with Azure Active Directory integration, dynamic data masking, column-level and row-level security, and automated threat detection.

You’ll hear directly from Gayle Sheppard, Corporate Vice President of Microsoft Azure Data, and John Macintyre, Director of Product for Microsoft Azure Analytics, who will dive deep on Azure Synapse. Other Microsoft engineering and analytics experts will join the event with insights, demos, and answers to your questions in a live Q&A. Don’t miss your opportunity to ask how Azure Synapse can enable the growth and evolution of your business. 

There’s never been a better time to embrace technologies that allow you to unlock insights from all your data to stay competitive and fuel innovation with purpose. Register today for the Analytics in Azure virtual event on December 10th. We hope you can join us!

Finastra “did not expect the low RPO” of Azure Site Recovery DR


Today’s question-and-answer style post comes after I had the chance to sit down with Bryan Heymann, Head of Cloud Architecture at Finastra, to discuss his experience with Azure Site Recovery. Finastra builds and deploys technology on its open software architecture; our conversation focused on the organization’s journey to replace several disparate disaster recovery (DR) technologies with Azure Site Recovery. To learn more about achieving resilience in Azure, refer to this whitepaper.

You have been on Azure for a few years now – before we get too deep in DR, can you start with some context on the cloud transformation that you are going through at Finastra?

We think of our cloud journey across three horizons. Currently, we’re at “Horizon 0” – consolidating and migrating our core data centers to the cloud with a focus on embracing the latest technologies and reducing total cost of ownership (TCO). The workloads are a combination of production sites and internal employee sites.

Initially, we went through a 6-month review with a third party to identify our datacenter strategy, and decided to select a public cloud. Ultimately, we realized that Microsoft would be a solid partner to help us on our journey. We moved some of our solutions to the cloud and our footprint has organically grown from there.

All this is the enabler to go to future “horizons” to ensure we continuously keep pace with the latest technology. Most importantly we’re looking to move up the value chain – so instead of us worrying about standing a server up, patching a server, auditing a server, identity on a server… we’re now ingesting and deploying the right policies for the service (not the server) and taking advantage of the availability, security, and disaster recovery options.

Exciting to hear about the journey you've taken so far. I believe DR is a requirement across all of those horizons, right?

Disaster recovery is front and center for us. We work closely with our clients to regularly test. At this point we have executed more than 50 test failovers. Disaster recovery and backups are non-negotiable standards in our shared environment.

What were you most skeptical about when it came to DR in Azure? What was it that helped you become convinced that Azure Site Recovery was the right choice?

We used just about every tool in our data centers and always had mixed results. We thought that Azure Site Recovery might be the same, but I was glad that we were wrong. We have a strong success rate and even wrote special dashboards to track our recovery point objective (RPO) for a holistic view on our posture! We were skeptical that Site Recovery would be point and click capable, and whether it would be able to keep up with the amount of change we have, when failing over from the East coast to the West coast. Our first DR test in Azure, over two years ago now, was actually wildly successful. We did not expect the low RPO that we saw and were delighted. I think this speaks volumes to Azure’s network backbone and how you handle replication, to be that performant.

We hear that from a lot of customers. Great to get further validation from you! Could you ‘double click’ on your onboarding experience with Azure Site Recovery, up to that first DR drill?

There wasn't any heavy onboarding, which is a good thing as it really wasn’t needed. It was so intuitive and easy for our team to use. The documentation was very accurate. The point and click capabilities of Site Recovery and the documentation enabled us to onboard and go. It has all been in line with what we needed, without surprises.

What kind of workloads are you protecting with Azure Site Recovery?

All of our virtual machines (VMs) across North America are using Site Recovery, everything from our lending space, to our payment space, to our core space. These systems support thousands of our customers, and each of those have their own end customers which would number in the millions – Site Recovery is our default disaster recovery mechanism across the whole fleet.

Wow, that’s a lot of customers and some sensitive financial spaces so no wonder disaster recovery is such a high priority for your teams. We regularly hear prospective customers asking whether Azure Site Recovery supports Linux – I'd love to understand if you have Linux-based applications using Site Recovery, and what your experience has been with those?

Actually, it was our very first application for which we deployed Azure Site Recovery – and it’s all Linux. Linux support for Site Recovery has been fantastic. We failover every six months, without any issues. The ease of use and the amount of times we have tested now has significantly increased. We pressed on our normal RPOs to get them down to very, very aggressive levels. Some of our Linux based applications are complex, but Site Recovery has worked without any issues.

You touched upon DR drills – I'd love to understand what your drill experience has been like?

The experience has been seamless and simple. The application itself may have some configurations that need to be considered during DR drills, such as post-scripts, but those are hammered out quickly. We try to do drills every six months, but at least once every year.

Which features of Azure Site Recovery do you like the most?

I love that I can fail across regions. I also love the granular recovery point reporting. It allows us to see where we may or may not be seeing problems. I'm not sure we ever got that from any other tools, it’s very powerful and it’s graphical user interface based – and any Joe could do it, it's not hard to select a VM and replicate it to another region. I especially like the fact that we are only charged for storage in the far side region so, financially, there's not an impact of having warm standbys and still we are able to hit our RPO.

If you were to go through this journey all over again with Azure Site Recovery, is there anything that you would have done differently?

I would have liked to get our knowledge base and plans in place for a month longer before implementing it. It's just so easy that we were able to blow through most of it, but we did miss a couple of simple things early on which were easily fixed later on our journey. We found out quickly we didn't want standard hard drives, we wanted premium for example.

Looking forward, how do you plan to leverage Azure Site Recovery?

We recently used Azure Site Recovery to move a customer in our payment space from on premises to Azure – we will now get those machines on Site Recovery across Azure regions, we're not going to rebuild the entire platform. It's obviously the de facto to get us out, and it is the standard for regional disaster recovery for VMs there. There is no other product used.

People ask me what keeps me up at night, there are really two things. “Are we secure?” and “Can we recover?” – I call it the butterfly effect. When you come in each morning, are you confident that if you cratered a datacenter, you could come up in a different one? I can confidently answer that with yes. We could fail out to another region, with all our data. That's a pretty nice spot to be in, especially when you're sitting in a hyperscale cloud. I know that I have storage replication. I know that I own the network links. To allow somebody to run this stuff on our behalf was a mindset change, but it has really been a positive experience for us.

Bringing confidential computing to Kubernetes


Historically, data has been protected at rest through encryption in data stores, and in transit using network technologies. However, as soon as that data is processed in the CPU of a computer, it is decrypted into plain text. New confidential computing technologies are game changing because they protect data even while code is running on the CPU, using secure hardware enclaves. Today, we are announcing that we are bringing confidential computing to Kubernetes workloads.

Confidential computing with Azure

Azure is the first major cloud platform to support confidential computing, building on Intel® Software Guard Extensions (Intel SGX). Last year, we announced the preview of the DC-series of virtual machines that run on Intel® Xeon® processors and are confidential computing ready.

This confidential computing capability also provides an additional layer of protection, even from potentially malicious insiders at a cloud provider; it reduces the chances of data leaks and may help address some regulatory compliance needs.

Confidential computing enables several previously impossible use cases. Customers in regulated industries can now collaborate on sensitive partner or customer data to detect fraud scenarios without giving the other party visibility into that data. As another example, customers can perform mission-critical payment processing in secure enclaves.

How it works for Kubernetes

With confidential computing for Kubernetes, customers can now get this additional layer of data protection for their Kubernetes workloads, with code running on the CPU inside secure hardware enclaves. Use the Open Enclave SDK to build confidential computing into your code. Create a Kubernetes cluster on hardware that supports Intel SGX, such as the DC-series virtual machines running Ubuntu 16.04 or Ubuntu 18.04, and install the confidential computing device plugin into those virtual machines. The device plugin (running as a DaemonSet) surfaces Encrypted Page Cache (EPC) RAM as a schedulable resource for Kubernetes. Kubernetes users can then schedule pods and containers that use the Open Enclave SDK onto hardware that supports Trusted Execution Environments (TEEs).

The following pod specification demonstrates how you would schedule a pod to have access to a TEE by defining a limit on the specific EPC memory that is advertised to the Kubernetes scheduler by the device plugin available in preview.

How to schedule a pod to access a TEE
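
As a sketch of what such a pod specification looks like (the resource name and image below are assumptions based on the preview device plugin described above, and may differ):

apiVersion: v1
kind: Pod
metadata:
  name: oe-sample
spec:
  containers:
  - name: oe-sample
    image: oeciteam/sgx-test          # hypothetical image built with the Open Enclave SDK
    resources:
      limits:
        kubernetes.azure.com/sgx_epc_mem_in_MiB: 10   # EPC memory surfaced by the device plugin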

Now the pods in these clusters can run containers using secure enclaves and take advantage of confidential computing. There is no additional fee for running Kubernetes containers on top of the base DC-series cost.

The Open Enclave SDK was recently open sourced by Microsoft and made available to the Confidential Computing Consortium, under the Linux Foundation for standardization to create a single uniform API to use with a variety of hardware components and software runtimes across the industry landscape.

Try out confidential computing for Kubernetes with Azure today. Let us know what you think in our survey or on GitHub.

Azure high-performance computing at SC’19


HBv2 Virtual Machines for HPC, Azure’s most powerful yet, now in preview

Azure HBv2-series Virtual Machines (VMs) for high-performance computing (HPC) are now in preview in the South Central US region.

HBv2-series Virtual Machines are Azure’s most advanced HPC offering yet, featuring performance and Message Passing Interface scalability rivaling the most advanced supercomputers on the planet, and price and performance on par with on-premises HPC deployments.

HBv2 Virtual Machines are designed for a variety of real-world HPC applications, from fluid dynamics to finite element analysis, molecular dynamics, seismic processing & imaging, weather modeling, rendering, computational chemistry, and more.

Each HBv2 Virtual Machine features 120 AMD EPYC™ 7742 processor cores at 2.45 GHz (3.3 GHz boost), 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading. An HBv2 Virtual Machine also provides up to 340 GB per second of memory bandwidth, up to four teraflops of double-precision compute, and up to eight teraflops of single-precision compute.

Finally, an HBv2 Virtual Machine features 900 GB of low-latency, high-bandwidth block storage via NVMeDirect, and supports up to eight Azure Managed Disks.

200 Gigabit high data rate (HDR) InfiniBand comes to Azure

HBv2-series Virtual Machines feature one of the cloud’s first deployments of 200 gigabit per second HDR InfiniBand networking from Mellanox, which provides up to 8 times higher bandwidth and 16 times lower latencies than found elsewhere on the public cloud.

With HBv2 Virtual Machines, Azure is also introducing two new network features to support the highest sustained performance for tightly-coupled workloads. The first is adaptive routing, which helps optimize Message Passing Interface performance on congested networks. The second is support for dynamic connected transport (DCT) which provides reliable transport, and enhancements to scalable, asynchronous, and high-performance communication.

As with HB and HC Virtual Machines, HBv2 Virtual Machines support hardware-based offload for Message Passing Interface collectives.

Azure & Cray deliver cloud-based seismic imaging at 28,000 cores, 42 GB per second reads, and 62 GB per second write performance

Customers come to Azure for our ability to support their largest and most critical workloads. Energy companies have been among the first and most eager to embrace our advanced HPC capabilities, including for their core subsurface discovery workloads. Advances in subsurface computing support more accurate identification of energy resources, as well as safer extraction of these resources from challenging areas such as beneath thick deposits of salt in the Gulf of Mexico.

As part of our work with one of our strategic energy exploration customers, today we are sharing that Azure recently supported what is believed to be one of the largest cloud-based seismic processing workloads yet.

Powered by up to 468 Azure HB Virtual Machines totaling 28,080 first-generation AMD EPYC CPU cores and more than 123 terabytes per second of aggregate memory bandwidth, the customer was able to run imaging jobs utilizing a variety of pre-stack and post-stack migration, full-waveform inversion, and real-time migration techniques.

Seismic imaging is as much about data movement as it is about compute, however. To support this record-scale customer workload, Cray provided the supercomputing firm’s vaunted ClusterStor storage system. Announced earlier this year, Cray® ClusterStor™ in Azure is a dedicated Lustre filesystem solution that accelerates data processing for the largest and most complex HPC and AI jobs run on Azure, and can optionally be connected to Azure H-series Virtual Machines. Not only does Cray ClusterStor in Azure leverage the same technology that powers many of the fastest HPC filesystems on the planet, it is also among the most affordable on the cloud. Over a typical three-year reserved instance period, Cray ClusterStor in Azure can cost as little as one tenth of Lustre offerings found on other public clouds.

The combination of Azure HB-series Virtual Machines and Cray ClusterStor proved highly scalable, delivering an 11.5x improvement in time to solution as the pool of compute virtual machines was increased from 16 to 400.

The Cray ClusterStor in Azure storage solution, whose measured performance peaked at 42 GB per second for reads and 62 GB per second for writes, also delivered significant differentiation for the customer, driving a 66 percent improvement in application performance compared to an alternative high-performance network file system (NFS) approach.

Available now

Azure HBv2-series Virtual Machines are currently available in South Central US, with additional regions rolling out soon.

3 small business stories to celebrate National Entrepreneurship Month

Windows 10 SDK Preview Build 19023 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 19023 or greater). The Preview SDK Build 19023 contains bug fixes and under development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • To assist with script access to the SDK, the ISO can also be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_19023_1.iso (see the download sketch below).
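For example, a build script could fetch the ISO from that static URL directly. A minimal C# sketch, where the destination path is an arbitrary example:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class FetchSdkIso
{
    static async Task Main()
    {
        // Static URL taken from the note above; the local path is an arbitrary example.
        const string isoUrl = "https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_19023_1.iso";
        using (var http = new HttpClient())
        using (var response = await http.GetAsync(isoUrl, HttpCompletionOption.ResponseHeadersRead))
        {
            response.EnsureSuccessStatusCode();
            using (var source = await response.Content.ReadAsStreamAsync())
            using (var target = File.Create(@"C:\Temp\winsdk_19023.iso"))
            {
                // Stream to disk without buffering the entire ISO in memory.
                await source.CopyToAsync(target);
            }
        }
    }
}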

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. Otherwise, if the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

Windows SDK Flight NuGet Feed

We have stood up a NuGet feed for the flighted builds of the SDK. You can now test preliminary builds of the Windows 10 WinRT API Pack, as well as a microsoft.windows.sdk.headless.contracts NuGet package.

We use the following feed to flight our NuGet packages.

Microsoft.Windows.SDK.Contracts, which can be used to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

The Windows 10 WinRT API Pack enables you to add the latest Windows Runtime APIs support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

Microsoft.Windows.SDK.Headless.Contracts provides a subset of the Windows Runtime APIs for console apps, excluding the APIs associated with a graphical user interface. This NuGet package is used in conjunction with Windows ML container development. Check out the Getting Started guide for more information.
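As a rough illustration of how these packages are consumed, the sketch below calls a Windows Runtime type from a .NET Core 3.0 console app. It assumes the Microsoft.Windows.SDK.Contracts package has already been added to the project:

using System;
using Windows.Globalization;

class ContractsSample
{
    static void Main()
    {
        // Windows.Globalization.Language is a WinRT type projected into .NET
        // once the contracts package is referenced.
        var language = new Language("en-US");
        Console.WriteLine($"{language.LanguageTag}: {language.DisplayName}");
    }
}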

Breaking Changes

Removal of api-ms-win-net-isolation-l1-1-0.lib

In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

Removal of WUAPICommon.H and WUAPICommon.IDL

In this release we have moved the enum tagServerSelection from WUAPICommon.H to wuapi.h and removed the WUAPICommon.H header. If you would like to use the enum tagServerSelection, you will need to include wuapi.h or wuapi.idl.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 

namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSessionOptions {
    bool CloseModelOnSessionCreation { get; set; }
  }
}
namespace Windows.ApplicationModel {
  public sealed class AppInfo {
    public static AppInfo Current { get; }
    Package Package { get; }
    public static AppInfo GetFromAppUserModelId(string appUserModelId);
    public static AppInfo GetFromAppUserModelIdForUser(User user, string appUserModelId);
  }
  public interface IAppInfoStatics
  public sealed class Package {
    StorageFolder EffectiveExternalLocation { get; }
    string EffectiveExternalPath { get; }
    string EffectivePath { get; }
    string InstalledPath { get; }
    bool IsStub { get; }
    StorageFolder MachineExternalLocation { get; }
    string MachineExternalPath { get; }
    string MutablePath { get; }
    StorageFolder UserExternalLocation { get; }
    string UserExternalPath { get; }
    IVectorView<AppListEntry> GetAppListEntries();
    RandomAccessStreamReference GetLogoAsRandomAccessStreamReference(Size size);
  }
}
namespace Windows.ApplicationModel.AppService {
  public enum AppServiceConnectionStatus {
    AuthenticationError = 8,
    DisabledByPolicy = 10,
    NetworkNotAvailable = 9,
    WebServiceUnavailable = 11,
  }
  public enum AppServiceResponseStatus {
    AppUnavailable = 6,
    AuthenticationError = 7,
    DisabledByPolicy = 9,
    NetworkNotAvailable = 8,
    WebServiceUnavailable = 10,
  }
  public enum StatelessAppServiceResponseStatus {
    AuthenticationError = 11,
    DisabledByPolicy = 13,
    NetworkNotAvailable = 12,
    WebServiceUnavailable = 14,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class BackgroundTaskBuilder {
    void SetTaskEntryPointClsid(Guid TaskEntryPoint);
  }
  public sealed class BluetoothLEAdvertisementPublisherTrigger : IBackgroundTrigger {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedFormat { get; set; }
  }
  public sealed class BluetoothLEAdvertisementWatcherTrigger : IBackgroundTrigger {
    bool AllowExtendedAdvertisements { get; set; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ActivationSignalDetectionConfiguration
  public enum ActivationSignalDetectionTrainingDataFormat
  public sealed class ActivationSignalDetector
  public enum ActivationSignalDetectorKind
  public enum ActivationSignalDetectorPowerState
  public sealed class ConversationalAgentDetectorManager
  public sealed class DetectionConfigurationAvailabilityChangedEventArgs
  public enum DetectionConfigurationAvailabilityChangeKind
  public sealed class DetectionConfigurationAvailabilityInfo
  public enum DetectionConfigurationTrainingStatus
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackage {
    event TypedEventHandler<DataPackage, object> ShareCanceled;
  }
}
namespace Windows.Devices.Bluetooth {
  public sealed class BluetoothAdapter {
    bool IsExtendedAdvertisingSupported { get; }
    uint MaxAdvertisementDataLength { get; }
  }
}
namespace Windows.Devices.Bluetooth.Advertisement {
  public sealed class BluetoothLEAdvertisementPublisher {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedAdvertisement { get; set; }
  }
  public sealed class BluetoothLEAdvertisementPublisherStatusChangedEventArgs {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
 }
  public sealed class BluetoothLEAdvertisementReceivedEventArgs {
    BluetoothAddressType BluetoothAddressType { get; }
    bool IsAnonymous { get; }
    bool IsConnectable { get; }
    bool IsDirected { get; }
    bool IsScannable { get; }
    bool IsScanResponse { get; }
    IReference<short> TransmitPowerLevelInDBm { get; }
  }
  public enum BluetoothLEAdvertisementType {
    Extended = 5,
  }
  public sealed class BluetoothLEAdvertisementWatcher {
    bool AllowExtendedAdvertisements { get; set; }
 }
  public enum BluetoothLEScanningMode {
    None = 2,
  }
}
namespace Windows.Devices.Bluetooth.Background {
  public sealed class BluetoothLEAdvertisementPublisherTriggerDetails {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
}
namespace Windows.Devices.Display {
  public sealed class DisplayMonitor {
    bool IsDolbyVisionSupportedInHdrMode { get; }
  }
}
namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Globalization {
  public sealed class Language {
    string AbbreviatedName { get; }
    public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPixelFormat {
    SamplerFeedbackMinMipOpaque = 189,
    SamplerFeedbackMipRegionUsedOpaque = 190,
  }
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicFrame {
    HolographicFrameId Id { get; }
  }
  public struct HolographicFrameId
  public sealed class HolographicFrameRenderingReport
  public sealed class HolographicFrameScanoutMonitor : IClosable
  public sealed class HolographicFrameScanoutReport
  public sealed class HolographicSpace {
    HolographicFrameScanoutMonitor CreateFrameScanoutMonitor(uint maxQueuedReports);
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IVector<Package> FindProvisionedPackages();
    PackageStubPreference GetPackageStubPreference(string packageFamilyName);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, RegisterPackageOptions options);
    void SetPackageStubPreference(string packageFamilyName, PackageStubPreference useStub);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageStubPreference
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
  public enum StubPackageOption
}
namespace Windows.Media.Audio {
  public sealed class AudioPlaybackConnection : IClosable
  public sealed class AudioPlaybackConnectionOpenResult
  public enum AudioPlaybackConnectionOpenResultStatus
  public enum AudioPlaybackConnectionState
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
  public sealed class VideoDeviceController : IMediaDeviceController {
    PanelBasedOptimizationControl PanelBasedOptimizationControl { get; }
  }
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Owe = 12,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class NetworkOperatorTetheringAccessPointConfiguration {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableNoConnectionsTimeout();
    public static IAsyncAction DisableNoConnectionsTimeoutAsync();
    public static void EnableNoConnectionsTimeout();
    public static IAsyncAction EnableNoConnectionsTimeoutAsync();
    public static bool IsNoConnectionsTimeoutEnabled();
  }
  public enum TetheringWiFiBand
}
namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
  public sealed class RawNotification {
    IBuffer ContentBytes { get; }
  }
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Security.Isolation {
  public sealed class IsolatedWindowsEnvironment
  public enum IsolatedWindowsEnvironmentActivator
  public enum IsolatedWindowsEnvironmentAllowedClipboardFormats : uint
  public enum IsolatedWindowsEnvironmentAvailablePrinters : uint
  public enum IsolatedWindowsEnvironmentClipboardCopyPasteDirections : uint
  public struct IsolatedWindowsEnvironmentContract
  public struct IsolatedWindowsEnvironmentCreateProgress
  public sealed class IsolatedWindowsEnvironmentCreateResult
  public enum IsolatedWindowsEnvironmentCreateStatus
  public sealed class IsolatedWindowsEnvironmentFile
  public static class IsolatedWindowsEnvironmentHost
  public enum IsolatedWindowsEnvironmentHostError
  public sealed class IsolatedWindowsEnvironmentLaunchFileResult
  public enum IsolatedWindowsEnvironmentLaunchFileStatus
  public sealed class IsolatedWindowsEnvironmentOptions
  public static class IsolatedWindowsEnvironmentOwnerRegistration
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationData
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationResult
  public enum IsolatedWindowsEnvironmentOwnerRegistrationStatus
  public sealed class IsolatedWindowsEnvironmentProcess
  public enum IsolatedWindowsEnvironmentProcessState
  public enum IsolatedWindowsEnvironmentProgressState
  public sealed class IsolatedWindowsEnvironmentShareFolderRequestOptions
  public sealed class IsolatedWindowsEnvironmentShareFolderResult
  public enum IsolatedWindowsEnvironmentShareFolderStatus
  public sealed class IsolatedWindowsEnvironmentStartProcessResult
  public enum IsolatedWindowsEnvironmentStartProcessStatus
  public sealed class IsolatedWindowsEnvironmentTelemetryParameters
  public static class IsolatedWindowsHostMessenger
  public delegate void MessageReceivedCallback(Guid receiverId, IVectorView<object> message);
}
namespace Windows.Storage {
  public static class KnownFolders {
    public static IAsyncOperation<StorageFolder> GetFolderAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessForUserAsync(User user, KnownFolderId folderId);
  }
  public enum KnownFoldersAccessStatus
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public sealed class StorageProviderFileTypeInfo
  public sealed class StorageProviderSyncRootInfo {
    IVector<StorageProviderFileTypeInfo> FallbackFileTypeInfo { get; }
  }
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
 public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text {
  public enum RichEditMathMode
  public sealed class RichEditTextDocument : ITextDocument {
    void GetMath(out string value);
    void SetMath(string value);
    void SetMathMode(RichEditMathMode mode);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}
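To make one of the additions above concrete, here is a hedged sketch of the new KnownFolders access flow from the Windows.Storage namespace. The KnownFoldersAccessStatus member name used ("Allowed") is an assumption, since the listing declares the enum without its members:

using System;
using System.Threading.Tasks;
using Windows.Storage;

static class KnownFolderSample
{
    public static async Task<StorageFolder> GetDocumentsAsync()
    {
        // New in this SDK: request access to a known folder before retrieving it.
        // "Allowed" is an assumed member name; see the note above.
        KnownFoldersAccessStatus status =
            await KnownFolders.RequestAccessAsync(KnownFolderId.DocumentsLibrary);
        if (status != KnownFoldersAccessStatus.Allowed)
            throw new UnauthorizedAccessException("Documents library access was not granted.");

        return await KnownFolders.GetFolderAsync(KnownFolderId.DocumentsLibrary);
    }
}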

The post Windows 10 SDK Preview Build 19023 available now! appeared first on Windows Developer Blog.


.NET Core November 2019 Updates – 2.1.14, 2.2.8, and 3.0.1


Today, we are releasing the .NET Core November 2019 Update. These updates only contain non-security fixes. See the individual release notes for details on updated packages.

NOTE: If you are a Visual Studio user, there are MSBuild version requirements, so use only the .NET Core SDK supported for each Visual Studio version. The information needed to make this choice is on the download page. If you use other development environments, we recommend using the latest SDK release.

Getting the Update

The latest .NET Core updates are available on the .NET Core download page. This update will be included in a future update of Visual Studio.

See the .NET Core release notes ( 2.1.14 | 2.2.8 | 3.0.1 ) for details on the release, including issues fixed and affected packages.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

Note: Look at the “Full Tag Listing” under the Description in each repository to see the updated Docker image tags.

Note: You must re-pull base images to get updates. The Docker client does not pull updates automatically.

The post .NET Core November 2019 Updates – 2.1.14, 2.2.8, and 3.0.1 appeared first on .NET Blog.

.NET Framework November 2019 Preview of Quality Rollup


Today, we are releasing the November 2019 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements for .NET Framework for Windows 8.1, Server 2012 R2, Server 2012, Windows 7 SP1, Server 2008 R2 SP1 and Server 2008 SP2. Following this recent announcement, there are no optional non-security updates for Windows 10 as part of this release.

ASP.NET

  • ASP.NET will now emit a SameSite cookie header when HttpCookie.SameSite value is ‘None’ to accommodate upcoming changes to SameSite cookie handling in Chrome. As part of this change, FormsAuth and SessionState cookies will also be issued with SameSite = ‘Lax’ instead of the previous default of ‘None’, though these values can be overridden in web.config.
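For sites that need to opt specific cookies out of the new default, the SameSite value can be set per cookie in code (or via web.config). A minimal sketch for .NET Framework 4.7.2 and later, where the cookie name is a placeholder:

// Explicitly opting a cookie out of the new SameSite=Lax default.
var cookie = new System.Web.HttpCookie("legacy-sso")
{
    SameSite = System.Web.SameSiteMode.None, // now emitted explicitly as "SameSite=None"
    Secure = true                            // Chrome accepts SameSite=None only over HTTPS
};
System.Web.HttpContext.Current.Response.Cookies.Add(cookie);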

CLR1

  • Addresses an issue where some ClickOnce applications, or applications creating the default AppDomain with a restricted permission set, may observe application launch or runtime failures, or unexpected behaviors. The observable issue was that System.AppDomainSetup.TargetFrameworkName was null, causing quirk behaviors to revert to .NET Framework 4.0 defaults.

WCF2

  • Addresses a race condition in WCF TCP Port Sharing where a client that disconnects partway through the session establishment handshake could leave the WCF service unable to accept new connections.

  • When an IIS worker process hosts many WCF TCP web services using TCP Port Sharing and the worker process crashes under high CPU load, a race condition during TCP Port Sharing reinitialization can leave some endpoints unable to accept client connections. Added an AppSetting to enable retrying initialization when this happens.

WPF3

  • Addresses an issue where some Per-Monitor Aware WPF applications that host System-Aware or Unaware child windows, running on .NET Framework 4.8, may occasionally encounter a crash with the exception System.Collections.Generic.KeyNotFoundException.

SQL

  • Addresses an issue with SqlClient Bid traces where information wasn’t being printed due to incorrectly formatted strings.

1 Common Language Runtime (CLR)
2 Windows Communication Foundation (WCF)
3 Windows Presentation Foundation (WPF)

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Note: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please review the .NET Framework version applicability carefully, to ensure that you only install updates on systems where they apply.

The following table is for earlier Windows and Windows Server versions.

Product Version: Preview of Quality Rollup (Catalog KB)

Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2: Catalog 4524743
  • .NET Framework 3.5: Catalog 4514371
  • .NET Framework 4.5.2: Catalog 4514367
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4524420
  • .NET Framework 4.8: Catalog 4531181

Windows Server 2012: Catalog 4524742
  • .NET Framework 3.5: Catalog 4514370
  • .NET Framework 4.5.2: Catalog 4514368
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4524419
  • .NET Framework 4.8: Catalog 4531180

Windows 7 SP1 and Windows Server 2008 R2 SP1: Catalog 4524741
  • .NET Framework 3.5.1: Catalog 4507004
  • .NET Framework 4.5.2: Catalog 4507001
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4524421
  • .NET Framework 4.8: Catalog 4531182

Windows Server 2008: Catalog 4524744
  • .NET Framework 2.0, 3.0: Catalog 4507003
  • .NET Framework 4.5.2: Catalog 4507001
  • .NET Framework 4.6: Catalog 4524421

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework November 2019 Preview of Quality Rollup appeared first on .NET Blog.

How AI can supercharge content understanding for businesses


Organizations face challenges when it comes to extracting insights, finding meaning, and uncovering new opportunities in the vast troves of content at their disposal. In fact, 82 percent of organizations surveyed in the latest Harvard Business Review (HBR) Analytic Services report say that exploring and understanding their content in a timely manner is a significant challenge. This is exacerbated because content is not only spread over multiple systems but also in multiple formats such as PDF, JPEG, spreadsheets, and audio files.

The first wave of artificial intelligence (AI) was designed for narrow applications, training a single model to address a specific task such as handwriting recognition. What’s been challenging, however, is that these models individually can’t capture all the different attributes hidden in various types of content. This means developers must painfully stitch together disparate components to fully understand their content.

Instead, organizations need a solution that spans vision, speech, and language to fully unlock insights from all content types. We are heavily investing in this new category of AI, called knowledge mining, to enable organizations to maximize the value of their content.

Knowledge mining with Azure Cognitive Search

Organizations can take advantage of knowledge mining today with Azure Cognitive Search, easily gleaning insights from all their content through web applications, bots, and Power BI visualizations. With Azure Cognitive Search, they can not only benefit from the industry’s most comprehensive domain-specific models but also integrate their own custom models. What used to take months to accomplish can now be realized in mere hours, without needing data science expertise.

Azure Cognitive Search

Delivering real business impact

The same Harvard Business Review report describes how our customers across industries are benefiting from knowledge mining in ways that were previously unimaginable.

  •  Financial Services: “The return on investment (ROI) for knowledge mining at a small fund with one or two analysts is 30 percent to 58 percent. For much larger funds, with 50 or more analysts, it is over 500 percent.”—Subra Bose, CEO of Financial Fabric.
  •  Healthcare: “A reliable knowledge mining platform can drive roughly a third of the costs out of the medical claims process.” —Ram Swaminathan, CEO at BUDDI Health.
  •  Manufacturing: “Unlocking this potential will significantly change the way we do business with our customers and how we service their equipment.” —Chris van Ravenswaay, global business solutions manager for Howden.
  •  Legal: “AI tells you what is inside the contract. It also tells you what the relationship of the contract is with the outside world.” —Monish Darda, CTO of Icertis.

And we’re just getting started. You can expect even deeper integration and more great knowledge mining experiences built with Azure Cognitive Search as we continue this journey. I encourage you to take a look at Harvard Business Review’s survey and findings and hear their perspective on the landscape of knowledge mining.

Getting started

Change feed support now available in preview for Azure Blob Storage


Change feed support for Microsoft Azure Blob storage is now available in preview. Change feed provides a guaranteed, ordered, durable, read-only log of all the creation, modification, and deletion change events that occur to the blobs in your storage account. This log is stored as append blobs within your storage account, therefore you can manage the data retention and access control based on your requirements.

Change feed is the ideal solution for bulk handling of large volumes of blob changes in your storage account, as opposed to periodically listing and manually comparing for changes. It enables cost-efficient recording and processing by providing programmatic access such that event-driven applications can simply consume the change feed log and process change events from the last checkpoint.

Some scenarios that would benefit from consuming a blob change feed include:

  • Bulk processing a group of newly uploaded files for virus scanning, resizing, or backups.
  • Storing, auditing, and analyzing changes to your objects over any period of time for data management or compliance.
  • Combining data uploaded by various IoT sensors into a single collection for data transformation and insights.
  • Additional data movement by synchronizing with a cache, search engine, or data warehouse.

How to get started

To enroll in the preview, you will need to submit a request to register this feature for your subscription. After your request is approved (within a few days), any existing or new GPv2 or Blob storage accounts in West US 2 and West Central US can enable the change feed feature.

To submit a request, run the following PowerShell or Microsoft Azure CLI commands:

Register by using PowerShell
Register-AzProviderFeature -FeatureName Changefeed -ProviderNamespace Microsoft.Storage
Register-AzResourceProvider -ProviderNamespace Microsoft.Storage
 
Register by using Azure CLI
az feature register --namespace Microsoft.Storage --name Changefeed
az provider register --namespace 'Microsoft.Storage'

Once you’re registered and approved for the change feed preview, you can then turn it on for your storage accounts and start consuming the log. For more information, please see change feed support in Azure Blob Storage and process change feed in Azure Blob Storage. As with most previews, this feature should not be used for production workloads until it reaches general availability.
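Once enabled, the log can also be consumed from code. The sketch below assumes the .NET change feed client library (the Azure.Storage.Blobs.ChangeFeed package, itself in preview) and a placeholder connection string; treat the exact API shape as illustrative rather than authoritative:

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.ChangeFeed;

class ChangeFeedSample
{
    static void Main()
    {
        // The connection string below is a placeholder for your storage account.
        var serviceClient = new BlobServiceClient("<storage-account-connection-string>");
        BlobChangeFeedClient changeFeed = serviceClient.GetChangeFeedClient();

        // Events arrive as an ordered, durable log; persist a continuation token
        // to resume from the last checkpoint in an event-driven application.
        foreach (BlobChangeFeedEvent changeEvent in changeFeed.GetChanges())
        {
            Console.WriteLine($"{changeEvent.EventTime}: {changeEvent.EventType} on {changeEvent.Subject}");
        }
    }
}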

Cost

Change feed pricing is currently in preview and subject to change for general availability. Customers are charged for the blob change events captured by change feed as well as the data storage costs of the change feed log. See block blob pricing to learn more about pricing.

Build it, use it, and tell us about it

We will continue to improve our feature capabilities and would like to hear your feedback on change feed or other features by email at AzureStorageFeedback@microsoft.com. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post at the Azure Storage feedback forum.

Azure Backup support for SQL Server 2019 and Restore as files


As SQL Server 2019 continues to push the boundaries of availability, performance, and data intelligence, a centrally managed, enterprise-scale backup solution is imperative to ensure the protection of all that data. This is especially true if you are running SQL Server in the cloud to leverage the benefits of dynamic scale and don't want to continue using legacy backup methods that are tedious, infrastructure-heavy, and difficult to scale.

We are excited to share native backup support for SQL Server 2019 running in Azure Virtual Machines. This is a key addition to the general availability of Azure Backup for SQL Server Virtual Machines, announced earlier this year. Azure Backup is a zero-infrastructure solution that protects standalone SQL Server and SQL Server Always On configurations in Azure Virtual Machines without the need to deploy and manage any backup infrastructure. While it offers long-term retention and central monitoring capabilities to help IT admins govern and meet their compliance requirements, it lets SQL admins continue to exercise the power of self-service backup and restore for operational recoveries.

In addition to this, we are also sharing Azure Backup general availability for:

Backup SQL Server in Azure VM

Restore as files:

Adding to the list of enhancements is the key capability of Restore as Files: you can now restore anywhere by recovering the backed-up data as .bak files. Move these backup files across subscriptions, regions, or even to on-premises SQL Servers, and trigger database restore wherever you want. Besides aiding cross-subscription and cross-region restore scenarios, this feature helps users stay compliant by giving them greater control over storing and recovering backup data to any destination of their choice.

Restore options for SQL Server

 

Getting started:

Under the Restore operation, you will see a newly introduced option, Restore as files. Specify the destination server (this server should be a SQL Server Virtual Machine registered to the vault) and a path on that server. The service will dump all the .bak files specific to the recovery point you have chosen to this path. Typically, specifying a network share path, or the path of a mounted Azure file share, as the destination enables easier access to these files by other machines in the same network or with the same Azure file share mounted on them.

Once the restore operation is completed, you can move these files to any machine across subscriptions or locations and restore them as a database using SQL Server Management Studio. Learn more.

Restore as files

Additional resources
